Representatives of 28 countries, including Canada, yesterday signed the Bletchley Declaration, a document focused on the safe development and use of artificial intelligence (AI) technology. But the agreement amounts only to "setting signals and demonstrating a willingness to cooperate," said Forrester principal analyst Martha Bennett.
“This declaration is not going to have any real impact on how AI is regulated,” she said. “More importantly, the countries and entities represented at the AI Summit would not have agreed to the text if it contained any meaningful detail on how AI should be regulated. And that is OK. We will have to wait and see whether good intentions are followed by meaningful action.”
In August, the U.K. government announced that “international governments, leading AI companies, and experts in research will unite for crucial talks” at Bletchley Park in Buckinghamshire. It is where AI pioneer and mathematician Alan Turing designed his Bombe computing machine, which sped up the process of cracking and decrypting the Enigma cypher used by Nazi Germany’s military command during World War II.
Iain Standen, CEO of the Bletchley Park Trust, said at the time, “we look forward to welcoming the world to our historic site. It is fitting that the very spot where leading minds harnessed emerging technologies to influence the successful outcome of World War II will, once again, be the crucible for international co-ordinated action.”
According to a release issued by the U.K. government on Wednesday, “leading AI nations reached a world-first agreement establishing a shared understanding of the opportunities and risks posed by frontier AI and the need for governments to work together to meet the most significant challenges.”
In a research document released in July, OpenAI wrote that advanced AI models hold the “promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term ‘frontier AI’ models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.
“Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and it is difficult to stop a model’s capabilities from proliferating.”
Bennett said that while the declaration “features all the right words about scientific research and collaboration – which are of course crucial to addressing today’s issues around AI safety – the very end of the document brings it back to ‘frontier AI.’
“If anybody was hoping that the Summit would include an announcement around the establishment of a new global AI research body, those hopes were dashed. For now, countries are focusing on their own efforts.
“For example, U.K. Prime Minister Rishi Sunak has announced the establishment of ‘the world’s first AI Safety Institute,’ while President Biden has announced the establishment of the ‘U.S. Artificial Intelligence Safety Institute (USAISI).’ Let’s hope that we’ll see the kind of collaboration between these different institutes that the Bletchley Declaration advocates.”
On Monday, the Biden administration went one step further, issuing an AI executive order (EO) that it said establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, and promotes innovation and competition.
Forrester senior analyst Alla Valente said the EO sets the “tone for the administration’s agenda to bolster the nation’s competitive advantage by investing in new opportunities that will create and foster AI entrepreneurship while mitigating risks.
“(It) has teeth beyond mandatory requirements for the executive branch with its commitment to developing new standards, taking a multi-agency approach to enforcement, and holding the government accountable to the same standards for ‘responsible and effective’ use of AI. Ultimately, the EO will have a big and lasting impact for companies and industries that transact with the nation’s biggest employer – the federal government.”
Meanwhile, Siân John, the chief technology officer (CTO) at cybersecurity consultancy NCC Group, said, “AI is a global challenge that needs a global response. While (it) offers endless opportunities, we must make sure that the adoption of AI practices is done so responsibly and with cybersecurity in mind. This is true domestically, but also true across borders. Major regulatory divergence between key nation states could be counterproductive and ultimately lead to poor security and safety outcomes for all.
“This week’s Bletchley Declaration – alongside the G7’s Hiroshima Process and domestic moves like the White House’s Executive Order – represents a critical step forward toward securing AI on a truly global scale.”
John added she is “heartened to see commitments from the Bletchley signatories to ensure that the AI Safety Summit is not just a one-off event, but that participants will convene again next year in South Korea and France, ensuring continued international leadership. In doing so, it will be important to set clear and measurable targets that leaders can measure progress against.”
The post Analyst says Bletchley Declaration delivers the right type of signals first appeared on IT World Canada.