Marc Warner, a member of the government's AI Council and CEO of Faculty AI, has argued that powerful artificial general intelligence (AGI) systems may need to be banned outright.
Warner stressed the importance of transparency, audits, and safety measures in the development of AGI. He distinguished AGI from narrow AI and voiced concern about its broad capabilities: because AGI systems aim to outperform human intelligence across many fields, he argued, they warrant regulation of their own.
Warner also raised concerns about the safety risks of AGI that surpasses human intelligence and questioned the scientific case for building such systems at all. He recommended limiting computing power and possibly prohibiting algorithms above a certain level of complexity or capability, insisting that these decisions should rest with governments rather than technology firms.
Warner also addressed concerns about bias in AI-driven hiring and facial recognition, as well as AI's potential to make cars and aircraft safer. Rather than heavy-handed regulation, he argued that fostering safe and ethical AI would give the UK a competitive advantage.
Emphasizing the importance of rigorous AI safety procedures, Warner likened them to the reliability standards expected of aviation engines, and reiterated that governments, not tech corporations, should be in charge of drafting AGI legislation.
This piece is based on reporting by the BBC.