Attackers Use Voicemail Phishing Attacks To Steal WhatsApp Users’ Data


Russian hackers are using email spoofing and fake voice message notifications to steal personal information from WhatsApp users.

According to a report by email security company Armorblox, almost 28,000 emails were sent using this method from a domain associated with an entity called the ‘Center for Road Safety of the Moscow Region.’

The email was able to bypass Google’s and Microsoft’s email security checks because it appeared to come from a legitimate email domain.
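Standard email authentication checks (SPF, DKIM, DMARC) only verify that a message really originated from the domain it claims, not that the domain’s owner is trustworthy. As a rough illustration, the Python sketch below (the filename suspicious_voicemail.eml is hypothetical) reads an exported message and prints the authentication verdicts recorded by the receiving server; a message sent through a legitimately registered but abused domain can pass all of them.

```python
import email
from email import policy

# Load an exported copy of the suspicious message (hypothetical filename).
with open("suspicious_voicemail.eml", "rb") as fh:
    msg = email.message_from_binary_file(fh, policy=policy.default)

# Print the SPF/DKIM/DMARC verdicts the receiving mail server recorded, if any.
for header in msg.get_all("Authentication-Results") or []:
    print("Authentication-Results:", header)

# A message sent from a legitimately registered but abused domain can show
# spf=pass and dkim=pass above, which is why domain authentication alone
# did not flag this campaign. The visible sender is still worth comparing
# against any Reply-To address the attacker wants responses sent to.
print("From:", msg["From"])
print("Reply-To:", msg.get("Reply-To", "<none>"))
```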

Targeted WhatsApp users receive a fake email stating that they have a new voice message. Embedded in the message is a link that takes them to a page with a play button for the supposed voicemail.

Once they click play, users are presented with an “Are you a robot?” prompt. Confirming that they are not a robot triggers the JS/Kryptik trojan, which attempts to install malicious software on the victim’s computer.

Once the infostealer malware is installed, attackers can harvest credentials and other personal data stored in the victim’s browser.

To protect themselves from this attack, users are advised to follow three security steps: augment native email security with additional controls, watch out for social engineering cues, and follow multi-factor authentication and password management best practices.

For more information, read the original story in TechRepublic.
