Researchers Try To Find Solutions To Racist Text-based AI

In July 2020, OpenAI launched GPT-3, an artificial intelligence language model that sparked excitement about what computers can do with language but at times proved to be foul-mouthed and toxic.

Before the model was licensed to developers, OpenAI published a paper in May 2020 with tests showing that GPT-3 holds a generally low opinion of Black people and exhibits sexism and other forms of bias.

Despite these findings, OpenAI announced plans to commercialize the technology – a sharp contrast to the way OpenAI handled an earlier version of the model, GPT-2, in 2019.

Academic researchers have published several studies on how large language models can be misused and can have a negative impact on society.

In a recently published paper highlighting ways to reduce the toxicity of GPT-3, OpenAI disclosed tests showing that the base version of GPT-3 refers to some people as animals and associates white people with terms like “supremacy” and “superiority,” perpetuating long-held stereotypes and dehumanizing non-white people.
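
OpenAI has not published the exact probes behind these findings, but a word-association test of this kind can be sketched roughly as follows; the `generate` function, prompt template, and word list are assumptions for illustration, not OpenAI's code.

```python
# Minimal sketch of a word-association bias probe for a text generator.
# Assumptions: `generate` is a placeholder for any text-generation API, and the
# prompt template and word list are illustrative, not OpenAI's actual tests.
from collections import Counter

PROMPT = "The {group} man was very"
FLAGGED_WORDS = {"violent", "dangerous", "superior", "lazy"}  # illustrative only


def generate(prompt: str, n: int = 50) -> list[str]:
    """Placeholder: return n sampled completions for the prompt."""
    raise NotImplementedError("Wire this up to a real text-generation API.")


def association_counts(group: str) -> Counter:
    """Count how often flagged words appear in completions for one group."""
    counts = Counter()
    for completion in generate(PROMPT.format(group=group)):
        for word in completion.lower().split():
            cleaned = word.strip(".,!?")
            if cleaned in FLAGGED_WORDS:
                counts[cleaned] += 1
    return counts


# Comparing association_counts("Black") with association_counts("white")
# is the kind of contrast such tests report.
```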

Text generated by large language models is getting ever closer to text written by a human, yet the models still fail to understand things that require reasoning and that almost all people understand.

Several researchers have found that attempts to fine-tune models like GPT-3 to remove bias can end up harming marginalized people.

Researchers say the problem stems in part from annotators misjudging which language is toxic and which is not, producing labels that are prejudiced against people who use language differently than white people.

Researchers say this can lead to self-stigmatization and psychological harm, and can force people to code-switch.
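
As a rough illustration of the mechanism the researchers describe, the sketch below shows how a threshold-based toxicity filter built on such labels can silently remove one group's language more than another's; the `toxicity_score` function and the threshold are hypothetical stand-ins, not a specific published system.

```python
# Minimal sketch of how a threshold-based toxicity filter can skew a corpus.
# Assumptions: `toxicity_score` stands in for any classifier trained on human
# toxicity labels; the threshold and the idea of comparing dialect groups are
# illustrative, not a specific published method.

def toxicity_score(text: str) -> float:
    """Placeholder for a classifier score in [0, 1]; higher means 'more toxic'."""
    raise NotImplementedError("Wire this up to a real toxicity classifier.")


def filter_corpus(texts: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only texts scored below the threshold.

    If the classifier's training labels over-flag a dialect such as
    African-American English, those texts are silently dropped here,
    which is the data-level harm the researchers describe.
    """
    return [t for t in texts if toxicity_score(t) < threshold]


def removal_rate(texts: list[str], threshold: float = 0.5) -> float:
    """Fraction of a subcorpus the filter would drop; comparing this rate
    across dialect groups is one way to surface the bias described above."""
    if not texts:
        return 0.0
    dropped = sum(1 for t in texts if toxicity_score(t) >= threshold)
    return dropped / len(texts)
```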

Researcher Jesse Dodge says the best way to deal with bias and inequality is to improve the data used to train language models, rather than trying to remove bias after the fact.

He recommends better documenting the sources of training data and recognizing the limitations of text scraped from the web, which may over-represent people who can afford internet access and have the time to create a website or post comments.
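
Dodge describes a practice rather than a particular tool, but a provenance record along those lines might be as simple as the following sketch; the schema and example values are assumptions, not an established standard.

```python
# Minimal sketch of a provenance record for one training-data source, in the
# spirit of Dodge's recommendation to document where training data comes from.
# The field names and example values are assumptions, not a standard schema.
from dataclasses import dataclass, field


@dataclass
class DataSourceRecord:
    name: str                       # human-readable name of the source
    url: str                        # where the text was collected from
    collected_on: str               # date of collection (ISO 8601)
    license: str                    # usage terms, if known
    known_limitations: list[str] = field(default_factory=list)


example = DataSourceRecord(
    name="Public web forum crawl",
    url="https://example.org/forum",  # placeholder URL
    collected_on="2021-06-01",
    license="unknown",
    known_limitations=[
        "Over-represents people with internet access and time to post",
        "Moderation policy and demographic mix unknown",
    ],
)
```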

Microsoft researchers interviewed 12 tech workers deploying AI language technology and found that product teams had given little thought to how the algorithms could go wrong.

The researchers developed an interactive “playbook” that encourages people working on an AI language project to think about and design around potential failures of AI text technology in the early stages. It is being tested within Microsoft with the aim of making it a standard tool for product teams.

For more information, read the original story in Ars Technica.
