How far the United States should go to regulate artificial intelligence systems will be at the heart of a U.S. government hearing today featuring the testimony of OpenAI CEO Sam Altman, whose firm is behind ChatGPT.
Altman, IBM VP and chief privacy and trust officer Christina Montgomery, and AI author and academic Gary Marcus will be witnesses before the Senate Judiciary Subcommittee on Privacy, Technology and the Law, starting at 10 a.m. Eastern.
Their testimony comes 12 days after the Biden administration met with OpenAI and the CEOs of Alphabet, Anthropic and Microsoft to discuss AI issues. The White House also issued five principles for responsible AI use.
“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Senator Richard Blumenthal, chair of the Judiciary Subcommittee meeting, in announcing today’s session. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology. I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”
The hearing comes amid a divide among AI and IT researchers following the public launch last November of ChatGPT, which initially ran on the GPT-3.5 model and is now powered by GPT-4. Some were dazzled by its ability to respond to queries with paragraphs of text or flow charts, as opposed to the lists of links returned by search engines. Others, however, were aghast at ChatGPT’s mistakes and seemingly wild imagination, as well as its potential to help students cheat, cybercriminals write better malware, and nation-states create misinformation.
In March, AI experts from around the world signed an open letter calling for a six-month halt in the development of advanced AI systems. The signatories worry that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”
Meanwhile, a group of Canadian experts urged Parliament to quickly pass this country’s proposed AI legislation, arguing that regardless of possible imperfections, “the pace at which AI is developing now requires timely action.”
Some problems can be solved without legislation, such as forbidding employees from loading sensitive customer or corporate data into ChatGPT or similar generative AI systems. Samsung was forced to issue such a ban.
Aside from the problem of defining an AI system — the proposed Canadian Artificial Intelligence and Data Act says “artificial intelligence system means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions” — there are questions about what AI systems can be allowed to do or be forbidden from doing.
Although the text has yet to be put into a draft, the European Parliament has been directed to create a law with a number of bans, including one on the use of “real-time” remote biometric identification systems in publicly accessible spaces.
The proposed Canadian legislation says a person responsible for a high-impact AI system — which has yet to be defined — must, in accordance with as yet unpublished regulations, “establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.”
Johannes Ullrich, dean of research at the SANS Technology Institute, told IT World Canada that there are a lot of questions around the transparency of AI models, including where their training data comes from and how any bias in that data affects the accuracy of results.
“We had issues with training data bias in facial recognition and other machine learning models before,” he wrote in an email. “They have not been well addressed in the past. The opaque nature of a large AI/ML model makes it difficult to assess any bias introduced in these models.
“More traditional search engines will lead users to the source of the data, while currently, large ML models are a bit hit and miss trying to obtain original sources.
“The other big question right now is with respect to intellectual property rights,” he said. AI models typically do not create new original content; rather, they represent data derived from existing work. “Without offering citations, the authors of the original work are not credited, and it is possible that the models used training data that was proprietary or that they were not authorized to use without referencing the source (for example, a lot of Creative Commons licenses require these references).”
The post OpenAI CEO to testify today before U.S. Senate Judiciary Subcommittee first appeared on IT World Canada.