The use of artificial intelligence in Canada’s federal, provincial, territorial and municipal governments has to be regulated as much as its use in the private sector, a conference on AI in the public sector has been told.
However, Stephen Toope, CEO of the Canadian Institute for Advanced Research (CIFAR), also warned that regulation here can’t be done in isolation from what other countries are doing.
“I am not convinced national level regulation will be enough, or even provincial regulation. And yet I think it’s going to be almost impossible to get global regulation,” he told the conference organized by Ontario’s Information and Privacy Commissioner on Wednesday.
CEOs of major companies are flying around the world calling for a “global compact around AI,” Toope said, but that “is a cynical exercise, because it’s not likely to happen.”
[Photo: Ontario AI panel. From the left: Teresa Scassa, Colin McKay, Chris Parsons, Stephen Toope, Melissa Kittmer, moderator Mike Maddock, and Ontario Information and Privacy Commissioner Patricia Kosseim. Panel participant Jeni Tennison appeared by videoconference.]
Instead, he called for “regulatory coalitions” with other jurisdictions like the European Union to make our regulatory frameworks as compatible as possible with theirs “so we don’t have a regulatory race to the bottom.”
At the same time, our public and private sector AI frameworks should be flexible enough that innovation isn’t stifled and barriers aren’t raised to Canada’s AI successes.
“That’s easier said than done,” he admitted. “It will be very complicated. But we will lose public trust [in the public and private sector use of AI] if we don’t do enough, and lose the potential for creativity and opportunity for Canada and Ontario if we don’t do it the right way.”
The conference was part of the Ontario privacy commissioner’s education efforts during Data Privacy Week.
The conference opened with Ontario Information and Privacy Commissioner Patricia Kosseim repeating her call for the province to have an AI framework with binding rules governing the use of AI in the public sector.
Melissa Kittmer, assistant deputy minister in Ontario’s Ministry of Public and Business Service Delivery, said the government has been working on a Trustworthy AI Framework since 2021.
It has three priorities: “AI that people can trust” (making clear the risks of using AI and putting in place mitigation strategies to minimize harm to people); “AI that is responsible” (having mechanisms that allow residents to challenge decisions informed by AI); and “No AI in secret” (ensuring there is transparency and disclosure when AI has been used to inform government decisions).
The goal of the framework is to enable the responsible use of AI by civil servants, she said. It will include policies, products, guidance, and tools to ensure the provincial government is transparent, accountable and responsible in its use of AI.
She didn’t say when the framework will be released.
Meanwhile, she said, Ontario is already using AI for extracting large amounts of data, in chatbots and virtual assistants, and for predictive soil mapping.
There are several initiatives across the country to legislate and regulate AI. Parliament is in the middle of debating a proposed Artificial Intelligence and Data Act (AIDA). But it only covers federally regulated businesses, as well as firms in provinces and territories that don’t have their own AI legislation. As for the federal civil service, Ottawa issued a directive on the use of AI in 2019. A guide for the federal use of generative AI was issued last year.
Last month, the Council of the European Union and the European Parliament reached a provisional agreement on a proposed Artificial Intelligence Act covering both the private and public sectors of the 27 member nations. Supporters hope it will be passed before Parliament adjourns for this summer’s elections.
In her opening remarks, Kosseim said AI “ushers in tremendous opportunities, with real world impacts unfolding in real time” that could affect everything from jobs to people’s health.
She said governments could use AI to draft plain-language summaries of reports to help political decision-makers, cut delays for residents trying to access government benefits and services, enhance healthcare diagnoses through AI assistants, interpret medical images to find things the human eye might miss, predict the length of hospital stays, and help screen job applicants. AI is already being used to translate for 911 callers who don’t speak English, she said.
However, she added, around the world there are examples of AI algorithms failing to return accurate results, or perpetuating bias and discrimination against historically marginalized groups. In one example, an algorithm used by a hospital to predict which patients would require extensive medical care was “heavily skewed in favour of white patients over Black patients.” In another, an algorithm used to accelerate job recruitment turned out to be biased against women.
“These and other examples speak to the importance of rooting out bias in the data sources used to train algorithms in the first place, as well as the need for human supervision over the results they return,” Kosseim said.
Toope, who also oversees CIFAR’s Pan-Canadian AI Strategy, spoke of a Canadian Black computer scientist working on a facial recognition system for the art world who realized the system, which was already in use around the world, didn’t recognize her face, and by extension the faces of Black women. “That tells you the teams creating these systems were utterly unrepresentative,” he said. “To help generate widespread public confidence [in AI] we have to address that [system] creation.”
Big companies know about AI’s challenges, said Chris Parsons, manager of technology policy and strategic initiatives at the Ontario privacy commissioner’s office. Many have built in safety checks, but there can still be bias in the underlying data they use. Many less regulated systems, he added, are “the wild west,” doing things like generating child porn.
Organizations waiting for federal or provincial law on the use of generative AI should in the meantime turn to guidance issued by the country’s privacy commissioners, he said.
There are other reasons why provinces and territories need their own AI laws. Teresa Scassa, Canada Research Chair in Information Law and Policy at the University of Ottawa, reminded the conference that provinces, not the federal government, have authority over broader public sector institutions such as hospitals and local police departments.
AI also raises issues, Scassa added, that involve non-personal information but that may still affect people’s lives. For example, she said, a data marketing company called Environics Analytics has a demonstration website showing how the publicly available data it collects can categorize a postal zone for its customers. One north Toronto (North York) zone was described as “white collar,” with older families and empty nesters and an average income of $173,000. Data like this puts people into ‘ad hoc groups,’ she said, which could affect the delivery of services. How, she asked, is that addressed in privacy and human rights legislation?
“We need to have an eye on the broad impact [of the use of technology], not just individual privacy,” agreed Jeni Tennison, executive director of Connected by Data, which advocates for open data governance. What’s needed, she said, are “group rights” so people can “match the power of big AI companies or governments when they deploy AI.”