Researchers launch Nightshade to protect artists’ copyright


A new tool called Nightshade, developed by a team led by Ben Zhao, a professor at the University of Chicago, aims to give artists a new way to protect their copyright in the age of AI.

Nightshade allows artists to inject invisible alterations into the pixels of their artwork before uploading it online. These alterations poison the training data of AI models, discouraging companies from replicating artists’ creations without consent or compensation.

Nightshade works by exploiting a vulnerability in generative AI models: poisoned images leave a lasting mark on the data sets that AI companies scrape for training. The tool is open source, and its potential impact grows as more users adopt and adapt it. There are concerns that malicious actors could misuse the technique to sabotage AI models, but experts argue that causing significant damage would require thousands of poisoned samples, a daunting task given that the most powerful models are trained on billions of images.
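The core idea of an invisible pixel alteration can be illustrated with a minimal NumPy sketch. This is a toy example only: it adds small, bounded random noise to an image, which is imperceptible to a human viewer. Nightshade’s actual technique is more sophisticated, optimizing perturbations to mislead a specific model during training; the function name `perturb_image` and the noise bound here are assumptions for illustration.

```python
import numpy as np

def perturb_image(pixels: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a small, bounded perturbation to an 8-bit image array.

    A toy illustration of invisible pixel alterations, NOT Nightshade's
    actual algorithm, which crafts perturbations to poison model training.
    """
    rng = np.random.default_rng(seed)
    # Draw per-pixel noise in [-epsilon, epsilon].
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    # Add the noise and clip back to the valid 8-bit pixel range.
    perturbed = np.clip(pixels.astype(float) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# A flat grey test image; after perturbation, no pixel moves by more
# than a couple of intensity levels, so the change is invisible.
image = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = perturb_image(image)
```

Because each pixel changes by at most a few intensity levels out of 255, the altered image looks identical to the original to a human, while the pixel values a scraper ingests are no longer the true ones.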

Nightshade is particularly promising against popular AI models like DALL-E, Midjourney, and Stable Diffusion, which have been used to generate images that are virtually indistinguishable from human-created works. When trained on enough poisoned images, these models produce distorted results, such as dogs turning into cats and cars morphing into cows. This is a step towards protecting artists’ intellectual property.

In addition to Nightshade, the team behind the tool has also developed Glaze, a tool that allows artists to mask their personal style, preventing it from being harvested by AI companies. Nightshade is set to be integrated into Glaze, allowing artists to decide whether they wish to employ this data-poisoning technique.

The sources for this piece include an article in TechnologyReview.
