
Cybersecurity Headlines: Stryker hospital tools safe, face models power AI scams, cybercrime up 245%
Mar 17, 2026 — Reports that hospital tools remained safe despite Stryker's system outages. Coverage of AI face models being hired to lend credibility to deepfake scams. A sharp rise in cybercrime since the Iran conflict began, with banks and fintech heavily targeted. Alerts about exploited server vulnerabilities and live-chat phishing stealing payment data.
Episode notes
Stryker Forced To Process Orders Manually
- Stryker's internal systems were disrupted, and staff processed orders manually, after thousands of company devices were remotely wiped via Microsoft Intune.
- Cisco Talos responders said the attackers likely gained high-level admin account access and used Intune's remote-wipe feature to reset the devices.
AI Face Models Power Large Deepfake Scam Operations
- Wired found recruiters hiring AI face models to build trust in deepfake romance and crypto scams, swapping the models' faces onto fake personas.
- Applicants sometimes record 100–150 calls per day at Southeast Asia scam compounds, with some coerced or trafficked into the work.
Cybercrime Surge Since Iran Conflict
- Akamai reports cybercrime activity rose 245% since the Iran war began, driven by botnet scanning, credential harvesting, and reconnaissance.
- Banking and fintech receive ~40% of malicious traffic, and many attacks route through proxies in Russia and China rather than originating in Iran.
