The Trudeau government’s proposed law regulating high-impact artificial intelligence systems is “fundamentally flawed” and gives so much power to a cabinet minister that it’s “an affront to Parliament,” a Canadian privacy lawyer testified Tuesday.
The proposed Artificial Intelligence and Data Act (AIDA) fails to protect the public from significant risks and harms, and hinders innovation, Barry Sookman of the McCarthy Tétrault law firm told the House of Commons Industry Committee.
While it would set rules for how businesses should create and implement high-impact AI applications, Sookman complained that AIDA doesn’t define “high-impact,” doesn’t include guiding principles on how AI systems will be regulated, and will hinder private sector innovation.
Not only that, AIDA puts too much power in the hands of the Minister of Innovation to create the regulations governing the proposed act, Sookman said. And while it would create an AI Commissioner to enforce the legislation, that person would report to the Minister rather than being an independent officer of Parliament, as the federal privacy commissioner is.
For this last reason, AIDA “is in my view an affront to Parliamentary sovereignty,” Sookman said. “AIDA sets a dangerous precedent. What will be next? Fiat by regulation for quantum computing? Blockchain? The climate crisis or other threats? We have no idea.”
AIDA “paves the way for a bloated and unaccountable bureaucracy” within the Innovation department, Sookman maintained.
In short, he said, “in its current form AIDA is unintelligible.”
AIDA is part of Bill C-27, which also includes the proposed Consumer Privacy Protection Act (CPPA), an overhaul of the federal privacy legislation governing the private sector. Many critics have told the committee the two pieces of legislation are important enough that they should be split into separate bills. Sookman’s criticism has been the most pointed so far, and more may be coming: to date the committee has asked witnesses to focus on the CPPA, and it will turn its attention to AIDA later.
One problem is that, while the government has said it is open to amending AIDA, it hasn’t yet delivered precise wording for the changes, and a committee-set deadline is approaching.
Opposition MPs on the committee, and Sookman, worry that whatever final changes the government is willing to make won’t be delivered until after witnesses have testified.
The fight over what approach is best for overseeing AI systems comes as businesses rush to adopt generative AI systems like ChatGPT.
Many privacy experts fear that, unless some form of regulation arrives quickly, AI systems running biased algorithms will soon be allowed to deny people loans, insurance or jobs.
The European Union is close to finalizing the wording of an AI Act, but one news report says there are disagreements over how foundation models like the one behind ChatGPT would be regulated. One proposal, from France, Germany and Italy, is that foundation models would face lighter regulation, having only to publish some information, such as how the models are tested to ensure they are safe. Initially there would be no sanctions for companies that fail to publish such information.
Former B.C. and U.K. information and privacy commissioner Elizabeth Denham, who also testified Tuesday, said approvingly that the proposed EU law “reads like a product safety statute.”
In the absence of AI regulatory laws, Canada and other countries say businesses and governments should follow certain guidelines for AI systems. This week, for example, 18 countries agreed on guidelines businesses should follow for the secure design, development, deployment, and operation of artificial intelligence systems.
Tuesday’s hearing also briefly delved into approaches by other governments:
— in the U.S., rather than wait for Congress to pass legislation, President Joe Biden issued an Executive Order covering federal departments that buy AI solutions and firms that want to sell them to Washington. Federal departments and agencies have to develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy, based on guidelines issued by the U.S. National Institute of Standards and Technology (NIST). Developers of the most powerful AI systems are required to share their safety test results and other critical information with the U.S. government.
— in the U.K., Sookman noted with approval, a private member’s AI bill has two features: a designated cabinet minister can create regulations controlling AI, but the regulations have to be approved by Parliament; and the proposed law would include principles that regulations have to follow, such as ensuring high-impact AI systems deliver safety, security and robustness, as well as transparency into how they were tested before being released. “This is genius,” Sookman said. AIDA should adopt both features, he added.
Denham noted the U.K. government has decided not to regulate AI, but to give existing regulators the powers they need to oversee the technology. The government has also created an AI Institute to encourage all digital regulators to work together. This, she said, is a “wait-and-see approach.”
So far, it seems, the Canadian government is sticking with AIDA.