Fairly AI has raised $2.2 million CAD ($1.7 million USD) amid growing concern over the responsible use of AI and with regulations on the horizon.

“AI innovation is just outpacing risk and compliance … It’s a ticking time bomb.”
– Fion Lee-Madan, Fairly AI

Fion Lee-Madan, COO and co-founder of Fairly AI, told BetaKit in an interview: “We are not here to stop or slow down innovation. We’re here to help accelerate the safe adoption of AI.”

Lee-Madan’s assertion arrives on the heels of concerns from industry stakeholders and government privacy watchdogs over privacy, bias within the technology, and the pace of AI development. It also comes as multiple governments are considering implementing AI regulation.

Fairly’s software is designed to automate risk management in the development of AI models, helping organizations proactively comply with regulations and mitigate safety and privacy concerns. The company claims this will make it easier to apply policies and controls early in the development process, then continue to use them throughout the entire product life cycle.

Flying Fish Partners, a fund focused on AI and machine learning, led the round. Partners in Flying Fish include ex-Microsoft AI experts with a background in legal compliance, according to Lee-Madan. Other investors included Backstage Capital, X Factor Ventures, Loyal VC, NEXT Canada, and a few strategic angels. Fairly also closed $1.1 million CAD in a pre-seed round in June 2021, and a $600,000 extension of the round in November 2022.

Fairly is building its product amid fast-paced developments in the AI sector. Recently, the Canadian privacy commissioner launched an investigation into OpenAI, the company behind the AI chatbot ChatGPT. The company is alleged to have collected, used, and disclosed personal information without consent. This followed Yoshua Bengio, the co-founder of Montréal-based Mila, along with hundreds of tech leaders, signing an open letter urging all AI labs to agree to a six-month pause on training systems more powerful than GPT-4. The latter is the latest iteration of the large language model that powers ChatGPT, which, according to its creator, employs more data and more computation to produce increasingly sophisticated and capable results.

At the federal level, the Canadian government has tabled Bill C-27, which would require that high-impact AI systems be developed and deployed in a way that identifies, assesses, and mitigates the risks of harm and bias. It also introduces new criminal penalties for obtaining data for AI unlawfully, as well as for developing or deploying AI in a way that poses a serious risk of harm. The bill is currently stalled in Parliament.

The Liberal government claims the bill will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of AI, and continue to put in place Canada’s Digital Charter.

RELATED: Generative AI will test what’s worse: biased data or user bad faith

Both the European Union and the United States have introduced similar legislation aimed at regulating the use of AI.

Lee-Madan said she understands that Fairly, an AI regulation and compliance startup, might be seen as yet another impediment to AI’s progress. However, she claims, the company will actually accelerate the broad use of fair and responsible AI by helping organizations bring safer AI models to market.

Fairly draws on industry and academic research and techniques from behavioural and cognitive science, ethics, philosophy, software architecture, finance, and data science.

The startup began in 2015 as an interdisciplinary research project involving philosophy, cognitive science, and computer science. After extensive product concept and design iterations, it was formally incorporated in April 2020.

Lee-Madan claims that Fairly’s co-founder and CEO, David Van Bruwaene, is uniquely qualified to deal with AI compliance. Van Bruwaene studied ethics and logic at Cornell, Berkeley, and the University of Waterloo, and ran AI startup ViSR as its CEO and head of data science before its acquisition by SafeToNet for an undisclosed price in 2018.

AI, computing, and programming come naturally to Van Bruwaene because he grew up in a family of computer scientists, Lee-Madan explained. “When he joined a startup, he was the only one who understood both technology and the ethical side of things. It’s a very unique combination for what we do.”

Lee-Madan said the lead investor at Flying Fish saw the compliance problem of AI coming in the same way her co-founder did.

“AI innovation is just outpacing risk and compliance,” said Lee-Madan, who noted that over 3,000 AI guideline principles have been published since 2017. “It’s a ticking time bomb,” she added.

RELATED: Canadian privacy commissioner launches investigation into ChatGPT

According to Lee-Madan, the issues with AI are accelerating because the technology exaggerates and amplifies all the biases that exist in the historical data used to train AI models.

The funds from the raise will go toward sales and marketing, as Fairly hasn’t carried out any significant efforts in that area to date, Lee-Madan explained. She noted that spending its limited budget on market education wasn’t something Fairly wanted to do in its early days.

“Now that we’re seeing a product-market fit, our next stage is really to go out and start marketing,” she said.

At the same time, competition exists in the regulatory AI space, Lee-Madan pointed out: “We’re seeing more and more, and that’s why we know this market is gaining traction, and there is money to be made.”

That said, she noted that as a Canadian startup, Fairly definitely didn’t raise as much and as quickly as its US competitors. “I think it’s well-known that the Canadian startup ecosystem is more conservative compared to Silicon Valley,” Lee-Madan said.

The post Fairly AI raises $2.2 million CAD to help AI firms comply with forthcoming regulatory requirements first appeared on BetaKit.
