Singapore’s Vision for AI Safety Bridges the US-China Divide
The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety, following a meeting of AI researchers from the United States, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.
“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”
The countries considered most likely to build AGI are, of course, the United States and China, and yet those nations seem more intent on outmaneuvering each other than on working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wake-up call for our industries” and said the United States needed to be “laser-focused on competing to win.”
The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.
The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event hosted in Singapore this year.
Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all took part in the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the United States, Britain, France, Canada, China, Japan, and Korea also participated.
“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” said Xue Lan, dean of Tsinghua University.
The development of increasingly capable AI models, some with surprising abilities, has made researchers worried about a wide range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans in more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.
AI’s potential has also stoked talk of an arms race between the United States, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.