Four artificial intelligence (AI) scientists have raised concerns after their work was cited in an open letter, co-signed by Elon Musk, that urges an urgent pause in AI development.
The letter, issued on March 22, has accumulated more than 3,000 signatures and calls for a six-month pause on the development of systems “more powerful” than the current GPT-4. It argues that AI systems with “human-competitive intelligence” pose serious risks to humanity, citing research from twelve experts, including university academics as well as current and former Google, DeepMind, and OpenAI personnel.
Following the letter’s publication, advocacy organizations in the United States and the European Union pressed lawmakers to rein in OpenAI’s research. OpenAI has yet to issue a statement in response to the letter. Critics of the letter have argued that the Future of Life Institute (FLI), which launched the letter and is funded largely by the Musk Foundation, prioritizes hypothetical apocalyptic AI scenarios over more immediate problems, such as bias being programmed into machines.
According to the open letter, advanced AI systems are becoming competitive with humans at general tasks, which carries major ramifications. Should machines, for example, be allowed to flood our communication channels with misinformation and propaganda? Should we automate away all occupations, including those that give us a sense of accomplishment? Should we develop nonhuman intelligence capable of outwitting, outnumbering, and eventually replacing us?
Should we risk losing control of our civilization? The letter contends that decisions on these questions should not be left solely to unelected technology executives. Powerful AI systems should be developed only once there is confidence that their impact will be positive and their risks manageable.
In view of these concerns, the letter advises all AI laboratories to pause the training of AI systems more powerful than GPT-4 for at least six months. This pause should be public and verifiable, and it should include all key parties. During the pause, AI laboratories and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.
These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. Moreover, AI research and development should focus on improving today’s cutting-edge systems’ accuracy, safety, interpretability, transparency, robustness, alignment, trustworthiness, and loyalty.
The sources for this piece include a report by Reuters.