Proposed Canadian AI law ‘fundamentally flawed,’ Parliament told


The Trudeau government’s proposed law regulating high-impact artificial intelligence systems is “fundamentally flawed” and gives so much power to a cabinet minister that it’s “an affront to Parliament,” a Canadian privacy lawyer testified Tuesday.

The proposed Artificial Intelligence and Data Act (AIDA) fails to protect the public from significant risks and harms, and hinders innovation, Barry Sookman of the McCarthy Tétrault law firm told the House of Commons Industry Committee.

While it would set rules for how businesses should create and implement high-risk AI applications, Sookman complained that AIDA doesn’t define “high-risk,” includes no guiding principles on how AI systems will be regulated, and will hinder private-sector innovation.

Not only that, AIDA gives the Minister of Innovation too much power to create the regulations under which the proposed act would operate, Sookman said. And while it would create an AI Commissioner to enforce the legislation, that person would report to the Minister rather than being an independent officer of Parliament, as the federal privacy commissioner is.

For this last reason, AIDA “is in my view an affront to Parliamentary sovereignty,” Sookman said. “AIDA sets a dangerous precedent. What will be next? Fiat by regulation for quantum computing? Blockchain? The climate crisis or other threats? We have no idea.”

AIDA “paves the way for a bloated and unaccountable bureaucracy” within the Innovation department, Sookman maintained.

In short, he said, “in its current form AIDA is unintelligible.”

AIDA is part of Bill C-27, which also includes the proposed Consumer Privacy Protection Act (CPPA), an overhaul of the federal privacy legislation governing the private sector. Many critics have told the committee that the two pieces of legislation are important enough that they should be split into separate bills. Sookman’s criticism has so far been the most pointed, and perhaps more is to come, because so far the committee has asked witnesses to focus on the CPPA. Later, the committee will turn its attention to AIDA.

One problem is that, while the government has said it is open to amending the wording of AIDA, it hasn’t yet delivered precise wording. A committee-set deadline is approaching.


Opposition MPs on the committee — and Sookman — worry final changes the government is willing to make won’t be delivered until after witnesses have testified.

The fight over what approach is best for overseeing AI systems comes as businesses rush to adopt generative AI systems like ChatGPT.

Many privacy experts fear that, unless some form of regulation arrives quickly, AI systems built on biased algorithms will soon be allowed to deny people loans, insurance or jobs.

The European Union is close to finalizing the wording of an AI Act, but one news report says there are disagreements on how foundation models like ChatGPT would be regulated. One proposal from France, Germany and Italy is that foundation models would face lighter regulation, only having to publish some information on things like how models are tested to ensure they are safe. Initially there would be no sanctions for companies that don’t publish such information.

Former B.C. and U.K. information and privacy commissioner Elizabeth Denham, who also testified Tuesday, said approvingly that the proposed U.K. law “reads like a product safety statute.”

In the absence of AI regulatory laws, Canada and other countries say businesses and governments should follow certain guidelines for AI systems. This week, for example, 18 countries agreed on guidelines businesses should follow for the secure design, development, deployment, and operation of artificial intelligence systems.

Tuesday’s hearing also briefly delved into approaches by other governments:

— in the U.S., rather than wait for Congress to pass legislation, President Joe Biden issued an Executive Order covering federal departments buying AI solutions and firms wanting to sell them to Washington. Federal departments and agencies have to develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy, based on guidelines issued by the U.S. National Institute of Standards and Technology (NIST). Developers of the most powerful AI systems are required to share their safety test results and other critical information with the U.S. government.

— in the U.K., Sookman noted with approval, a private member’s AI bill has two features: a designated cabinet minister can create regulations controlling AI, but the regulations have to be approved by Parliament; and the proposed law would include principles that regulations have to follow, such as ensuring high-impact AI systems deliver safety, security and robustness, and transparency into how they were tested before being released. “This is genius,” Sookman said. AIDA should adopt both features, he added.

Denham noted the U.K. government has decided not to regulate AI, but to give existing regulators the powers they need to oversee the technology. The government has also created an AI Institute to encourage all digital regulators to work together. This, she said, is a “wait-and-see approach.”

So far, it seems, the Canadian government is sticking with AIDA.

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times.
