President Biden has signed an executive order to regulate artificial intelligence, bypassing the need for congressional approval. The move has drawn both support and criticism from the tech industry.
Clem Delangue, co-founder and CEO of Hugging Face, voiced concerns about setting strict thresholds for AI development, likening the approach to counting lines of code in software. Richard Socher, CEO of You.com, argued that regulation should instead focus on AI applications that pose risks to privacy, legality, and security, aspects he says the executive order does not adequately address.
Prominent figures in the AI community, such as Andrew Ng and Yann LeCun, argue that big tech companies are exaggerating AI risks to entrench their market dominance. They contend that regulation should target AI applications rather than stifle research and development.
The executive order introduces measures for accountability and transparency: developers of the most powerful AI systems must share safety test results with the government, and the National Institute of Standards and Technology (NIST) is tasked with developing standards for safe and trustworthy AI systems.
The order also builds on the White House's earlier 'Blueprint for an AI Bill of Rights,' which aims to protect against potential AI-related harms with an emphasis on privacy, equity, and worker support. The government's stated focus is responsible and ethical AI use across all sectors.
The executive order faces challenges due to its temporary nature and lack of long-term legislative backing: it sets the U.S. on a regulatory path, but a future administration could amend or revoke it. By contrast, the European Union is finalizing a more comprehensive AI regulation emphasizing transparency and accountability. On the international front, constructive dialogue with China is seen as essential for global AI governance.
Vice President Kamala Harris is expected to play a pivotal role in shaping AI policy, addressing both immediate equity concerns and longer-term existential risks.