Sneaky 2FA, a new phishing-as-a-service kit, defeats two-factor authentication; a company scammed by invoice fraud is ordered to pay $190,000 even though the fraudulent email came from a legitimate account; and AI-powered romance scams exploit deepfake technology.
This is Cyber Security Today. I’m your host, Jim Love.
Sneaky 2FA Phishing Kit Defeats Two-Factor Authentication
A phishing kit called “Sneaky 2FA” is exposing critical vulnerabilities in two-factor authentication (2FA) defenses, making it a serious threat to Microsoft 365 users.
This Adversary-in-the-Middle kit doesn’t just steal credentials; it captures 2FA codes and session cookies in real time, giving attackers full account access without raising red flags. Victims are lured to fake login pages hosted on compromised WordPress sites. These pages look authentic, often prefilled with email addresses to lower suspicion, and they employ Cloudflare Turnstile to distinguish humans from bots, complicating analysis by researchers.
The attack kit’s code has been linked to W3LL Panel OV6, another sophisticated phishing tool, highlighting the modular, service-driven nature of modern cybercrime. What makes Sneaky 2FA stand out is its seamless operation: from luring users with realistic URLs to leveraging session cookies for immediate authentication bypass.
For enterprises, this attack underscores the limitations of traditional 2FA. Security teams should consider upgrading to phishing-resistant authentication methods like hardware security keys or WebAuthn. Monitoring for unusual account behavior, such as logins from unrecognized devices or geographies, can also help detect compromised accounts before further damage occurs.
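One way to act on that monitoring advice is to flag logins whose device-and-country combination has never been seen for that account. Here's a minimal sketch in Python, assuming a hypothetical login-event format (`user`, `device`, `country` fields) rather than any particular identity provider's API:

```python
# Hypothetical event format: each login is a dict with
# "user", "device", and "country" keys.

def flag_unusual_logins(history, new_events):
    """Return login events whose (device, country) pair is new for that user."""
    seen = {}  # user -> set of (device, country) pairs already observed
    for e in history:
        seen.setdefault(e["user"], set()).add((e["device"], e["country"]))
    flagged = []
    for e in new_events:
        pair = (e["device"], e["country"])
        known = seen.setdefault(e["user"], set())
        if pair not in known:
            flagged.append(e)  # never seen this combination for this user
        known.add(pair)        # learn it so repeats aren't re-flagged
    return flagged

history = [{"user": "alice", "device": "laptop-1", "country": "CA"}]
new = [
    {"user": "alice", "device": "laptop-1", "country": "CA"},
    {"user": "alice", "device": "unknown-device", "country": "RU"},
]
print(flag_unusual_logins(new_events=new, history=history))
```

A real deployment would feed this from your identity provider's sign-in logs and pair it with session-revocation, since Sneaky 2FA's stolen cookies stay valid until the session is killed.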
There’s a link to the full report from Sekoia.io in the show notes: https://blog.sekoia.io/sneaky-2fa-exposing-a-new-aitm-phishing-as-a-service/
Scammed Company Ordered to Pay $190,000 in Invoice Fraud Case
A Western Australian court has ruled that a company must pay for failing to properly verify a payment change, even though it was deceived by hackers.
In 2022, attackers compromised Mobius Group’s email system and sent fraudulent payment instructions to Inoteq Pty Ltd. Inoteq attempted to verify the change but relied on a single phone call, which didn’t connect, and on fake documentation provided by the scammers. By the time Mobius followed up, most of the $190,000 was already gone.
Judge Gary Massey’s ruling is a wake-up call for businesses. He noted that Inoteq’s verification process fell short of reasonable due diligence, stating, “A failed phone call should have prompted a more robust process.” This decision highlights the importance of redundancy in payment verification protocols.
False billing scams are surging. Australia reported nearly 40,000 cases in 2023, a stark rise compared to previous years.
Although this happened in Australia and the majority of our listeners are in Canada and the US, courts often look to other jurisdictions when there are no precedents in their own country.
And even without a lawsuit, the lesson here for businesses is clear: implement layered authentication for payment changes, require approvals from multiple parties, and document verification steps thoroughly. Additionally, updating contract terms to include secure payment protocols can help reduce exposure.
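Those layered controls can be expressed as a simple policy check. Here's a minimal sketch, using a hypothetical data model (verification records and an approver list, not any real payment system's API): a bank-detail change is accepted only if a callback on a known-good number actually connected and at least two distinct people signed off.

```python
# Hypothetical data model: a change request carries a list of
# verification attempts and a list of approver names.

def change_approved(change, required_approvals=2):
    """Accept a payment-detail change only with a successful callback
    verification and approvals from enough distinct people."""
    callback_ok = any(v["method"] == "callback" and v["reached"]
                      for v in change["verifications"])
    distinct_approvers = set(change["approvals"])
    return callback_ok and len(distinct_approvers) >= required_approvals

# The scenario from the Inoteq case: one failed phone call,
# one person signing off -- this request should be rejected.
risky = {
    "verifications": [{"method": "callback", "reached": False}],
    "approvals": ["ap.clerk"],
}
print(change_approved(risky))  # False
```

The point of the sketch is the shape of the rule, not the code itself: a failed call counts for nothing, and no single employee can push a banking change through alone.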
AI-Powered Romance Scams Exploit Deepfake Technology
AI-driven scams are now using cutting-edge tools to deceive victims—and the stakes are high.
A French woman recently lost $180,000 in a scam involving deepfake videos and an AI-generated voice mimicking actor Brad Pitt. Celebrity impersonations like this are rare, but they show how accessible AI has made sophisticated attacks.
Romance scams contributed to $1.3 billion in reported losses last year, according to the U.S. Federal Trade Commission. But most of these scams involve more mundane scenarios: fraudsters posing as relatives in emergencies or professionals in urgent need of financial help. AI tools enable these scammers to create believable interactions, from real-time voice synthesis to highly realistic fake video calls.
For law enforcement, this trend raises key challenges. The decentralized and cross-border nature of these scams complicates enforcement, while the rapid evolution of AI lowers the technical barriers for bad actors. Organizations should focus on educating employees and users about these risks, especially in industries like banking and social media where trust-based fraud is prevalent.
Even though these scams are not classically corporate in nature, compromised individuals who lose everything they have can represent a corporate threat. And as professionals, we may have an obligation to help inform those most at risk in our communities.
And for those worried about similar techniques working their way into the corporate world, consider implementing AI detection tools to flag suspicious videos or voices, and emphasize the importance of critical verification steps even in seemingly urgent situations. This growing use of AI should drive a re-evaluation of fraud detection tools and frameworks to keep pace with evolving threats.
That’s our show for today. You can reach me with tips, comments, or questions at cybersecuritytoday@itwc.ca. I’m Jim Love. Thanks for listening.