How to defend your organization against deepfake content


IT departments should implement real-time audio and video verification capabilities, passive detection techniques, and better protection of high-priority officers and their communications to defend against AI-generated deepfake messaging, say American cyber intelligence agencies.

“The tools and techniques for manipulating authentic multimedia are not new, but the ease and scale with which cyber actors are using these techniques are. This creates a new set of challenges to national security,” said Candice Rockell Gerstner, a specialist in multimedia forensics at the U.S. National Security Agency (NSA). “Organizations and their employees need to learn to recognize deepfake tradecraft and techniques and have a plan in place to respond and minimize impact if they come under attack.”

In a report issued Tuesday, the NSA, the FBI and the U.S. Cybersecurity and Infrastructure Security Agency (CISA) say there are currently limited indications of significant use of synthetic media techniques by malicious state-sponsored actors. However, they warn that the growing availability and efficiency of synthetic media techniques to less capable malicious cyber actors mean deepfake content will likely increase in frequency and sophistication.

The term “deepfake” refers to multimedia that has either been synthetically created or manipulated using some form of machine or deep learning (artificial intelligence) technology.


Employees may be vulnerable to deepfake tradecraft and techniques, which can include fake online accounts used in social engineering attempts, fraudulent text and voice messages used to evade technical defenses, faked videos used to spread disinformation, and more, the report says.

Malicious actors may use deepfakes with manipulated audio and video to impersonate an organization’s executive officers and other high-ranking personnel for, among other things, convincing employees to transfer funds to bank accounts controlled by crooks, or to damage an organization’s reputation or share price.


There was a huge increase in deepfake images used as LinkedIn profile pictures last year, the report notes. In 2019, deepfake audio was used to steal the equivalent of US$243,000 from a U.K. company. And in May, an AI-generated image depicting an explosion near the Pentagon caused “general confusion and turmoil on the stock market,” the report adds.

The report says organizations should:
— implement identity verification capable of operating during real-time communications. This can include liveness testing to prove an image is being captured live, mandatory multi-factor authentication using a unique or one-time generated password or PIN for logging into video calls (a minimal PIN-check sketch appears after this list), and using known personal details or biometrics to ensure those entering sensitive communication channels or activities can prove their identity;

— use tools that look for compression artifacts, as well as those that can verify reflections and shadows, in video communications (a simple error level analysis sketch appears after this list);

— consider using open source tools found on GitHub, like Nvidia’s StyleGAN3 Synthetic Image Detection;

— look for physical properties in videos that would not be possible, such as feet not touching the ground;

— protect company-created media, such as promotional or training videos, from being copied by using tools like watermarks (a basic visible-watermark sketch appears after this list). Organizations should also partner with media, social media, career networking, and similar companies to learn more about how those companies are preserving the provenance of online content;

— train employees how to spot deepfake audio and video content. Training resources specific to deepfakes include the SANS Institute’s blog “Learn a New Survival Skill: Spotting Deepfakes” and the MIT Media Lab’s “Detect DeepFakes: How to counteract misinformation created by AI”.
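
To illustrate the first recommendation, here is a minimal sketch of issuing and checking a one-time PIN before admitting a participant to a sensitive video call. The in-memory store, the function names, and the five-minute validity window are assumptions for illustration only; a production deployment would use an established MFA product or a hardened secret store and deliver the PIN over a separate, already-verified channel.

```python
# Minimal sketch (assumed names, in-memory store) of a one-time PIN check
# used before admitting a participant to a sensitive video call.
import secrets
import time

_pins = {}  # participant_id -> (pin, expiry timestamp); a real system needs a secure, shared store

def issue_call_pin(participant_id: str, ttl_seconds: int = 300) -> str:
    """Generate a 6-digit one-time PIN; deliver it over a separate, pre-verified channel."""
    pin = f"{secrets.randbelow(10**6):06d}"
    _pins[participant_id] = (pin, time.time() + ttl_seconds)
    return pin

def verify_call_pin(participant_id: str, supplied_pin: str) -> bool:
    """Check the PIN once; it is consumed whether or not the attempt succeeds."""
    pin, expiry = _pins.pop(participant_id, (None, 0.0))
    return (
        pin is not None
        and time.time() < expiry
        and secrets.compare_digest(pin, supplied_pin)
    )
```

Because the PIN is removed from the store on the first verification attempt, an intercepted or replayed code cannot be reused against a later call.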
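
For the passive-detection item on compression artifacts, one simple heuristic is error level analysis (ELA): re-compress a frame as JPEG and compare it with the original, since regions spliced in from another source often recompress with a noticeably different error level. The sketch below uses Pillow; the file name and quality setting are placeholders, and ELA is only a coarse screening aid, not a substitute for the dedicated detection tools the report describes.

```python
# Rough error level analysis (ELA) sketch with Pillow; "frame.jpg" is a placeholder input.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)   # re-compress at a known quality
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    # The differences are usually faint, so stretch them to make inconsistent regions visible
    max_channel = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * (255 // max_channel)))

if __name__ == "__main__":
    error_level_analysis("frame.jpg").save("frame_ela.png")
```

An analyst would then inspect the output for areas whose error level differs sharply from their surroundings.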
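
For the watermarking recommendation, the sketch below stamps a simple visible watermark onto a still image, such as a frame exported from a promotional video, using Pillow. The file names and label text are placeholders, and a visible overlay is only a light deterrent; durable provenance marking would rely on dedicated watermarking or content-provenance tooling.

```python
# Minimal sketch: overlay a semi-transparent visible watermark on company media with Pillow.
# "promo_frame.png" and the label text are illustrative placeholders.
from PIL import Image, ImageDraw, ImageFont

def add_visible_watermark(in_path: str, out_path: str,
                          label: str = "(c) Example Corp, internal use") -> None:
    base = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))   # transparent layer to draw the text on
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()                        # swap in a TrueType font for real use
    margin = 10
    draw.text((margin, base.height - margin - 12), label,
              fill=(255, 255, 255, 128), font=font)        # white text at roughly 50% opacity
    Image.alpha_composite(base, overlay).convert("RGB").save(out_path)

if __name__ == "__main__":
    add_visible_watermark("promo_frame.png", "promo_frame_watermarked.png")
```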

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times.
