The United Nations should create a new international body to help govern the use of artificial intelligence as the technology increasingly reveals its potential risks and benefits, according to UN Secretary-General António Guterres.
The UN has an opportunity to set globally agreed-upon rules of the road for monitoring and regulating AI, Guterres said Tuesday at the first-ever meeting of the UN Security Council devoted to AI governance.
Just as the UN convened similar bodies to manage the use of nuclear energy, boost aviation safety and meet the challenges of climate change, Guterres said, the UN has a unique role to play in coordinating the international response to AI.
Already, the UN has been deploying artificial intelligence in its own operations to monitor ceasefires and identify patterns of violence, he added, and UN peacekeeping and humanitarian operations are also being targeted by hostile actors using AI for malicious purposes, “causing great human suffering.”
“The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma and deep psychological damage on an unimaginable scale,” Guterres warned. “Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead. Without action to address these risks, we are derelict in our responsibilities to present and future generations.”
By 2026, the UN should develop a legally binding agreement banning the use of AI in fully autonomous weapons of war, Guterres said. He also pledged to convene an advisory council that will develop proposals for regulating AI more broadly by the end of the year, and teased a forthcoming policy brief with recommendations for governments on how to approach the technology responsibly.
Leading Tuesday’s meeting was UK Foreign Secretary James Cleverly, who called for international governance of AI to be tied to principles upholding freedom and democracy; respect for human rights and the rule of law; security, including physical security as well as the protection of property rights and privacy; and trustworthiness.
“We are here today because AI will affect the work of this council,” Cleverly said. “It could enhance or disrupt global strategic stability. It challenges our fundamental assumptions about defense and deterrence. It poses moral questions about accountability for lethal decisions on the battlefield…. AI could aid the reckless quest for weapons of mass destruction by state and non-state actors alike. But it could also help us stop proliferation.”
The Chinese government, meanwhile, argued that UN rules should reflect the views of developing countries as it seeks to prevent the technology from becoming “a runaway wild horse.”
International laws and norms around AI should be flexible to give countries the freedom to establish their own national-level regulations, said Chinese Ambassador Zhang Jun, who also blasted unnamed “developed countries” for trying to achieve dominance in AI.
“Certain developed countries, in order to seek technological hegemony, make efforts to build their exclusive small clubs and maliciously obstruct the technological development of other countries and artificially create technological barriers,” Zhang said. “China firmly opposes these behaviors.”
Zhang’s remarks come on the heels of reports that the US government may seek to limit the flow of powerful artificial intelligence chips to China.
An official representing the United States at the meeting did not directly address the Chinese government’s accusations but said that “no member state should use AI to censor, constrain, repress or disempower people” — a possible veiled reference to China’s use of technology to surveil ethnic minorities.
The meeting also included some voices from the tech industry.
Addressing the Security Council via teleconference, Jack Clark, the co-founder of the AI company Anthropic, urged member states not to allow private companies to dominate the development of artificial intelligence.
“We cannot leave the development of artificial intelligence solely to private sector actors,” Clark said. “The governments of the world must come together to develop safe capacity and make further development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.”