Experts call for global mechanism to regulate artificial intelligence
WASHINGTON – Researchers and industry leaders have called for the establishment of a global mechanism to regulate artificial intelligence.
In a letter signed by experts, including the chief executives of leading AI companies, they propose the creation of a body modeled on the International Atomic Energy Agency to regulate artificial intelligence (AI).
They said this body should have the authority to inspect systems, require audits, verify compliance with safety standards, place restrictions on use, and assess security levels.
Signatories to the letter include executives from Google DeepMind, OpenAI, and Anthropic.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” they said, placing the dangers of AI in the same category as those threats.
“As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it,” they added.
In April, a bipartisan group of U.S. lawmakers introduced legislation to prohibit AI from making launch decisions as part of the nuclear command and control process.
The Block Nuclear Launch by Autonomous Artificial Intelligence Act would codify existing Pentagon policy that a human must initiate any nuclear launch, and it would bar federal funds from being used to carry out a launch by an automated system; under the legislation, nuclear launches would require “meaningful human control.”
Other AI systems, such as ChatGPT, have raised fears of damaging economic consequences. They can also be used to deceive people online and to spread propaganda and misinformation worldwide.