Singapore’s Vision for AI Safety Bridges the US-China Divide | EUROtoday

The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety, following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”

The countries thought most likely to build AGI are, of course, the US and China—and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and mentioned the US wanted to be “laser-focused on competing to win.”

The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.

“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.

The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.

The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is seen in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.

https://www.wired.com/story/singapore-ai-safety-global-consensus/