At its biggest AI summit, China pushes its AI agenda to the world

Three days after the Trump administration released its highly anticipated AI Action Plan, the Chinese government put out its own AI policy blueprint. Is the timing a coincidence? I doubt it.
China’s “Global AI Governance Action Plan” was released on July 26, the opening day of the World Artificial Intelligence Conference (WAIC), the largest annual AI event in China. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures attending the festivities in Shanghai. Our Wired colleague Will Knight was also on the scene.
The atmosphere at WAIC stood in sharp contrast to Trump’s anti-regulation vision for AI, Will tells me. In his opening speech, Chinese Premier Li Qiang made a sober case for the importance of global cooperation on AI. He was followed by a series of prominent Chinese AI researchers, who gave technical talks highlighting urgent questions that the Trump administration appears to be largely brushing aside.
Zhou Bowen, head of the Shanghai AI Laboratory, one of China’s top AI research institutions, touted his team’s work on AI safety at WAIC. He also suggested that governments could play a role in monitoring commercial AI models.
In an interview, Yi Zeng, a professor at the Chinese Academy of Sciences and one of the country’s leading voices on AI, said he hopes that AI safety organizations from around the world will find ways to collaborate. “It would be best if the UK, the US, China, Singapore, and other institutions came together,” he said.
The conference also included closed-door meetings on AI safety policy issues. Paul Triolo, a partner at the consulting firm DGA-Albright Stonebridge Group, told Wired after attending one such session that the discussions were productive despite the notable absence of American leadership. He added that it wasn’t just the US government that was missing: of all the major American AI labs, only Elon Musk’s xAI sent employees to the WAIC forum.
Many Western visitors were surprised to learn how much of the conversation about AI in China revolves around safety regulations. “You could have attended AI safety events nonstop over the past seven days. That wasn’t the case at some of the other global AI summits,” Brian Tse, founder of Concordia AI, a Beijing-based AI safety research institute, told me. Earlier this week, Concordia AI held a day-long safety forum in Shanghai with prominent AI researchers such as Stuart Russell and Yoshua Bengio.
Switching positions
Comparing China’s AI blueprint with Trump’s action plan, the two countries appear to have switched positions. When Chinese companies first began developing advanced AI models, many observers assumed that government-imposed censorship requirements would hold them back. Now, US leaders say they want to ensure that homegrown AI models “pursue objective truths,” an effort my colleague Steven Levy wrote about in his newsletter last week, calling it “a blatant exercise of top-down ideological bias.” China’s AI action plan, meanwhile, reads like a globalist declaration: it recommends that the United Nations help lead international AI efforts and suggests that governments should play an important role in regulating the technology.
Although their governments are very different, when it comes to AI safety, people in China and the US worry about many of the same things: model hallucinations, discrimination, existential risks, cybersecurity vulnerabilities, and more. Because frontier AI models in the US and China are “trained on the same architecture and with the same scaling-law methods, the types of societal impact and the risks they pose are very, very similar,” says Tse. That also means academic research on AI safety is converging in the two countries, including in areas such as scalable oversight (how humans can use other AI models to monitor AI models) and interoperable safety testing standards.