Tech Newsday

AI-generated deepfakes used to spread misinformation and hate speech

Artificial intelligence (AI) voice-cloning tools are becoming more widely available, making it easier for people to create deceptive “deepfake” videos that can cause real-world harm.

A recent example is a manipulated video of President Joe Biden. It combined AI-generated audio with a real clip of the president from a CNN live broadcast on January 25, in which he announced the United States' dispatch of tanks to Ukraine. The video was then edited to make it appear as if Biden were making disparaging remarks about transgender people.

This deepfake video was created using a technique known as “voice cloning.” It entails teaching an AI system to recognize a person’s voice and then using that knowledge to generate new audio that sounds like the person. This technology has both positive and negative implications. It can be used to make realistic-sounding voice assistants, but it can also be used to make fake audio and video content.
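The recognize-then-generate loop described above can be illustrated with a deliberately simplified toy. The sketch below stands in for real voice cloning: each "recording" is a pure tone, the "voiceprint" is just the dominant frequency found via an FFT, and "cloning" means synthesizing new audio at that learned frequency. All function names (`make_tone`, `learn_voiceprint`, `synthesize`) are hypothetical, and real systems model timbre, prosody, and phoneme articulation rather than a single frequency.

```python
import numpy as np

SAMPLE_RATE = 16_000  # 16 kHz, a common rate for speech audio


def make_tone(freq_hz, duration_s=1.0):
    """Stand-in for a recorded voice sample: a pure tone at freq_hz."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)


def learn_voiceprint(samples):
    """'Training' step: estimate the dominant frequency across samples.

    Real voice cloning learns a far richer model of the speaker;
    a single frequency is the simplest possible 'voiceprint'.
    """
    freqs = []
    for audio in samples:
        spectrum = np.abs(np.fft.rfft(audio))
        bins = np.fft.rfftfreq(len(audio), d=1 / SAMPLE_RATE)
        freqs.append(bins[np.argmax(spectrum)])
    return float(np.mean(freqs))


def synthesize(voiceprint_hz, duration_s=1.0):
    """'Generation' step: produce new audio matching the learned voiceprint."""
    return make_tone(voiceprint_hz, duration_s)


# Learn from three "recordings" of a 220 Hz voice, then generate new audio
# the model has never heard -- the essence of the cloning pipeline.
training = [make_tone(220.0) for _ in range(3)]
learned = learn_voiceprint(training)
cloned = synthesize(learned, duration_s=2.0)
print(round(learned))             # dominant frequency recovered from training
print(len(cloned) / SAMPLE_RATE)  # duration in seconds of the generated audio
```

The same two-phase structure, learn a model of the voice, then drive it with arbitrary new content, is what makes the technology equally useful for voice assistants and for fabricated audio.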

AI-generated deepfakes are a growing concern because they can spread misinformation and hate speech, and they are becoming harder to detect as the technology grows more sophisticated and accessible. This poses a significant challenge for media organizations and social media platforms already struggling to combat misinformation.

“Tools like this are going to basically add more fuel to fire,” said Hafiz Malik, a professor of electrical and computer engineering at the University of Michigan who focuses on multimedia forensics. “The monster is already on the loose.”

The sources for this piece include an article from the Associated Press (AP News).
