LONDON — The British government will open a facility in the United States to test "frontier" artificial intelligence models, as it looks to cement its image as a world leader in tackling the technology's risks and to strengthen cooperation with the U.S. at a time when governments around the world are competing for AI leadership.
The government announced Monday that a U.S. counterpart of the AI Safety Institute, a state-backed body focused on testing advanced AI systems to ensure they are safe, will open in San Francisco this summer.
The U.S. arm of the AI Safety Institute aims to hire a team of technical staff led by a research director. The London-based institute currently has a team of 30 and is chaired by Ian Hogarth, a prominent British technology entrepreneur who founded the music concert discovery site Songkick.
British Technology Minister Michelle Donelan said in a statement that the institute's U.S. expansion "represents British leadership in AI in action."
"It is a pivotal moment in the U.K.'s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the U.S. and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety."
The expansion will allow the U.K. to "leverage the wealth of tech talent available in the Bay Area, engage with the world's largest AI labs headquartered in both London and San Francisco, and cement relationships with the United States to advance AI safety for the public," the government said.
San Francisco is the home of OpenAI, the Microsoft-backed company behind the viral AI chatbot ChatGPT.
The AI Safety Institute was established in November 2023 during the AI Safety Summit, a global event held at Bletchley Park in England, the home of World War II code breakers, that aimed to foster cross-border cooperation on AI safety.
The institute's expansion to the U.S. comes on the eve of the AI Seoul Summit in South Korea, an event that was first proposed at last year's U.K. summit at Bletchley Park. The Seoul summit will take place on Tuesday and Wednesday.
The government said that, since the AI Safety Institute was established in November, it has made progress in evaluating frontier AI models from some of the industry's leading players.
It said Monday that several models had completed cybersecurity challenges but struggled with more advanced ones, and that several models demonstrated Ph.D.-level knowledge of chemistry and biology.
Meanwhile, all the models tested by the institute remain highly vulnerable to "jailbreaks," in which users trick them into producing responses they are not permitted to give under their content guidelines, and some can produce harmful outputs even without any attempt to circumvent their safeguards.
The models tested were also unable to complete more complex, time-consuming tasks without humans overseeing them, according to the government.
The names of the AI models tested were not disclosed. The government had previously secured agreements from OpenAI, DeepMind and Anthropic to open up their sought-after AI models to the government to help inform research into the risks associated with their systems.
The development comes as the U.K. has faced criticism for not introducing formal regulations for AI, while other jurisdictions, such as the European Union, race ahead with laws tailored to the technology.
The EU's landmark AI Act, the first major legislation of its kind for AI, is expected to become a blueprint for global AI regulation once it is approved by all EU member states and enters into force.