The Biden administration has announced plans to solicit public comments on potential accountability measures for artificial intelligence (AI) systems.
The move comes amid growing concerns about AI's impact on national security and education. One AI program that has drawn U.S. lawmakers' attention is ChatGPT, which has become the fastest-growing consumer application in history, reaching more than 100 million monthly active users.
The National Telecommunications and Information Administration (NTIA) is seeking input in light of “growing regulatory interest” in an AI “accountability mechanism.” The agency wants to know whether measures could be put in place to provide assurance “that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”
“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.
President Joe Biden recently commented on AI’s potential dangers and emphasized the need for tech companies to ensure their products are safe before making them public.
ChatGPT, created by California-based OpenAI and backed by Microsoft Corp, has impressed some users with its rapid answers to questions while alarming others with its inaccuracies. The NTIA plans to draft a report to inform the Biden administration’s ongoing work to “ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities.”
Meanwhile, the Center for Artificial Intelligence and Digital Policy, a tech ethics group, has urged the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, calling the system “biased, deceptive, and a risk to privacy and public safety.”
The sources for this piece include an article from Reuters.