Google fights misinformation with LLMs

Google is using large language models (LLMs) to flag abuse of its products for its enforcement teams. The company says it can now build and train a model to find specific kinds of abuse in a matter of days, rather than weeks or months.

This is especially valuable for new and emerging abuse areas, because Google can quickly prototype a model that specializes in finding one specific type of abuse and automatically route the content it flags to the company's teams for enforcement.
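The article does not describe how such a prototype is built, but the general pattern it implies, prompting an LLM to act as a narrow classifier for one abuse type and routing positive hits to a review queue, can be sketched roughly as follows. Everything in this sketch (the abuse type, the prompt, the `call_llm` stub, and the routing step) is a hypothetical illustration, not Google's actual system.

```python
from dataclasses import dataclass

# One narrow abuse area this hypothetical prototype targets (illustrative only).
ABUSE_TYPE = "coordinated election misinformation"

PROMPT_TEMPLATE = (
    "You are a content-abuse classifier specialised in {abuse_type}.\n"
    "Answer with exactly one word, FLAG or PASS, for the following content:\n\n"
    "{content}"
)


@dataclass
class Verdict:
    content_id: str
    flagged: bool
    raw_response: str


def call_llm(prompt: str) -> str:
    """Placeholder for any hosted LLM API; the article does not say which models Google uses."""
    raise NotImplementedError("Wire this up to an LLM provider of your choice.")


def classify(content_id: str, content: str) -> Verdict:
    """Ask the LLM whether a single piece of content matches the targeted abuse type."""
    prompt = PROMPT_TEMPLATE.format(abuse_type=ABUSE_TYPE, content=content)
    response = call_llm(prompt).strip().upper()
    return Verdict(content_id=content_id, flagged=response.startswith("FLAG"), raw_response=response)


def route(verdict: Verdict) -> None:
    """Send flagged items to a human enforcement queue; leave everything else alone."""
    if verdict.flagged:
        print(f"Routing {verdict.content_id} to the enforcement queue for review")
```

The appeal of this pattern, if the article's framing is accurate, is speed: standing up a classifier for a new abuse area becomes mostly a matter of writing a new prompt rather than collecting labels and training a dedicated model from scratch, which is consistent with the days-instead-of-months claim above.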

The specific LLMs Google uses remain undisclosed, but the company's Senior Director of Trust and Safety, Amanda Storey, shared insights at the Fighting Misinformation Online Summit in Brussels on October 26, 2023. She emphasized Google's commitment to balancing access to information with user safety, underlining its responsibility to provide trustworthy content.

Google’s strategy revolves around protecting users from harm, delivering reliable information, and collaborating with experts and organizations to create a safer online environment. Its decision-making is guided by principles that prioritize user diversity, personal choice, and freedom of expression while limiting the spread of harmful content.

To achieve this, Google is continually evolving its tools, policies, and techniques, with a strong focus on harnessing artificial intelligence (AI) for abuse detection. Notably, it has developed a prototype that uses LLMs to rapidly identify and address content abuse at scale, an approach the company says shows promising results in proactively protecting users from emerging risks.

In addition to using LLMs to fight misinformation, Google is taking other steps to reduce the threat and promote trustworthy information in its generative AI products, including launching new tools, adapting its policies, and partnering with outside organizations.

On the partnership front, the company has committed $10 million to the Global Fact Checking Fund and has teamed up with think tanks, civil society organizations, and fact-checking networks to combat misinformation about the war in Ukraine.

The sources for this piece include an article in The Verge.
