ElevenLabs announces safeguards to curb voice cloning abuse


Following an increase in the number of voice cloning misuse cases, ElevenLabs, the startup that provides voice cloning technology, has announced additional safeguards it will be introducing to its platform.

The safeguards include stopping free users from creating custom voices, launching a tool to detect AI-generated audio, and banning accounts reported for creating harmful content.

Motherboard first reported on the abuse of ElevenLabs' software when it discovered posters on 4chan sharing AI-generated voice clips that sounded like celebrities such as Emma Watson and Joe Rogan.

ElevenLabs tweeted shortly after the clips went viral on 4chan: “Crazy weekend—thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice cloning misuse cases.”

According to Motherboard, one 4chan user posted a voice that sounded exactly like Emma Watson reading a section of Hitler’s “Mein Kampf,” while another used a Ben Shapiro-esque voice to make “racist remarks” about US House Representative Alexandria Ocasio-Cortez.

ElevenLabs added that, while it can trace any generated audio back to a specific user, it was looking into additional safeguards. These include requiring payment information or “full ID identification” before performing voice cloning, as well as manually verifying each voice cloning request.

The sources for this piece include an article in Vice.


