Ahead of the AI Safety Summit kicking off in Seoul, South Korea later this week, co-host the UK is expanding its own efforts in the field: the AI Safety Institute – a UK body founded in November 2023 with the ambitious goal of assessing and addressing risks in AI platforms – has announced it will open a second location in San Francisco.
The idea is to open up shop closer to what is currently the epicenter of AI development: the Bay Area, home to OpenAI, Anthropic, Google, and Meta, among other companies building foundational AI technologies.
Foundation models are the building blocks of generative AI services and other applications, and it is notable that, despite signing an MOU with the US to collaborate on AI safety efforts, the UK is still choosing to invest in building a direct presence of its own in the United States to tackle the issue.
“By putting people on the ground in San Francisco, we will be able to access the headquarters of a lot of AI companies,” Michelle Donelan, the UK Secretary of State for Science, Innovation and Technology, said in an interview with TechCrunch. “A number of them have bases here in the United Kingdom, but we think it would be very useful to have a base there as well, and access to an additional pool of talent, and be able to work even more collaboratively and hand in glove with the United States.”
Part of the reason is that, for the UK, being closer to that epicenter is useful not just for understanding what is being built, but because it gives the UK more visibility with these firms. That matters, given that the UK sees AI and technology overall as a huge opportunity for economic growth and investment.
But given the recent drama surrounding OpenAI's Superalignment team, it feels like an especially timely moment to establish a presence there.
Launched in November 2023, the AI Safety Institute remains a relatively modest operation. The organization currently employs just 32 people, a veritable David to the Goliath of AI tech, considering the billions of dollars of investment riding on the companies building AI models, and thus their own economic motivations for getting their technologies out the door and into the hands of paying users.
One of the AI Safety Institute's most notable developments was the release earlier this month of Inspect, its first set of tools for testing the safety of foundational AI models.
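Inspect is published as an open-source Python framework (the inspect_ai package). As a rough illustration of how its evaluations are put together, here is a minimal sketch modeled on the framework's published quickstart; the dataset name, solver chain, and scorer shown are illustrative choices rather than a prescribed evaluation, and exact names may vary between releases.

    # Minimal sketch of an Inspect evaluation, modeled on the inspect_ai
    # quickstart. The dataset, solver chain, and scorer are illustrative.
    from inspect_ai import Task, task
    from inspect_ai.dataset import example_dataset
    from inspect_ai.scorer import model_graded_fact
    from inspect_ai.solver import chain_of_thought, generate, self_critique

    @task
    def theory_of_mind():
        # A Task bundles a dataset of samples, a plan of solver steps
        # (prompting, generation, self-critique) and a scorer that grades
        # each model output.
        return Task(
            dataset=example_dataset("theory_of_mind"),
            plan=[chain_of_thought(), generate(), self_critique()],
            scorer=model_graded_fact(),
        )

An evaluation defined this way is then run against a specific model from the command line, for example “inspect eval theory_of_mind.py --model openai/gpt-4” (the model identifier here is an assumption for illustration).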
Donelan today called the release a “phase one” effort. Benchmarking models has proven challenging to date, in part because engagement is currently very much an opt-in and inconsistent arrangement. As one senior source at a UK regulator pointed out, companies are under no legal obligation to have their models vetted at this point, and not every company is willing to have its models vetted before release. That means that in cases where a risk might be identified, the horse may have already bolted.
Donelan said the AI Safety Institute is still developing the best ways to engage with AI companies to evaluate them. “Our evaluation process is an emerging science in itself,” she said. “So with every evaluation, we will develop the process and finesse it even more.”
Donelan said one of the objectives in Seoul would be to present Inspect to the regulators convening at the summit, with the aim of getting them to adopt it too.
“Now we have an evaluation system. Phase two needs to also be about making AI safe across the whole of society,” she said.
In the longer term, Donelan believes the UK will develop further AI legislation, although, echoing what Prime Minister Rishi Sunak has said on the topic, it will resist doing so until the scope of AI risks is better understood.
“We don't believe in legislating before we properly have a grip and a full understanding,” she said, noting that the institute's recent international AI safety report, which focused primarily on trying to get a comprehensive picture of research to date, highlighted that big gaps remain and that more research needs to be incentivized and encouraged globally.
“Also, legislation takes about a year in the UK. And if we had just started legislating when we got going, instead of [organizing] the AI Safety Summit [held in November last year], we would still be legislating now, and we wouldn't actually have anything to show for it.”
“Since day one of the institute, we have been clear on the importance of taking an international approach to AI safety: sharing research, collaborating with other countries to test models, and anticipating the risks of frontier AI,” said Ian Hogarth, chair of the AI Safety Institute. “Today marks a pivotal moment that allows us to advance this agenda further. We are proud to be scaling our operation in an area bursting with tech talent, adding to the incredible expertise that our staff in London has brought since the very beginning.”