
AI Experts Call for Safety Plan Amid Control Concerns

By Clementine Crooks

September 17, 2024


AI researchers warn of potential "catastrophic outcomes." 
 
A group of leading artificial intelligence (AI) scientists has issued a warning, calling on nations to establish a global oversight system to prevent potentially catastrophic consequences if control over AI technologies is lost. 
 
In their statement, published on September 16, the researchers warned that the technology they have helped advance could pose severe threats if it slips beyond human control. "Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all humanity," the statement reads, adding that the scientific knowledge needed to control and safeguard such advanced intelligence does not yet exist.
 
The scientists agreed that nations urgently need to establish authorities capable of detecting and responding promptly to AI-related incidents and risks within their jurisdictions, and they suggested developing a global contingency plan as part of a longer-term strategy.
 
"In the longer term, states should develop an international governance regime to prevent development models that could pose global catastrophic risks," read part of the statement. 
 
The call follows the International Dialogue on AI Safety, held in Venice in early September, the third meeting of the series organized by the US-based nonprofit research group Safe AI Forum.
 
Gillian Hadfield, a professor at Johns Hopkins University who shared the statement online, asked what would happen if a catastrophe struck six months from now because models had begun to autonomously self-improve, and who would be called on to respond.
 
The experts described AI safety as a global public good, one that requires international cooperation and governance measures.
  
They proposed three key initiatives: establishing emergency preparedness agreements and institutions; creating a safety assurance framework; and conducting independent global research into AI safety and verification.
 
 
More than 30 signatories from countries including Canada, China, Singapore, Britain, and the United States endorsed the appeal. The group comprises experts from leading AI research institutions and universities, including several Turing Award winners, recipients of the computing equivalent of a Nobel Prize.
 
The scientists said such dialogue is increasingly necessary as scientific exchange between the superpowers dwindles and distrust grows, particularly between the US and China, making consensus on AI threats harder to reach.
 
In early September, the EU, UK, and US signed what is regarded as the world's first legally binding international treaty on AI, which puts human rights and accountability at the center of AI regulation.
 
Tech companies and executives, however, have warned that overregulation, particularly within the European Union, could stifle innovation.

