
The head of Google DeepMind says that AI risk needs to be taken as seriously as climate change


    An anonymous reader quotes a report from The Guardian: The world must treat the risks from artificial intelligence as seriously as the climate crisis and cannot afford to delay its response, one of the technology's leading figures has warned. Speaking as the UK government prepares to host a summit on AI safety, Demis Hassabis said oversight of the industry could start with a body similar to the Intergovernmental Panel on Climate Change (IPCC). Hassabis, the British chief executive of Google's AI unit, said the world must act immediately in tackling the technology's dangers, which included aiding the creation of bioweapons and the existential threat posed by super-intelligent systems.

    "We must take the risks of AI as seriously as other major global challenges, like climate change," he said. "It took the international community too long to coordinate an effective global response to this, and we're living with the consequences of that now. We can't afford the same delay with AI." Hassabis, whose unit created the revolutionary AlphaFold program that depicts protein structures, said AI could be "one of the most important and beneficial technologies ever invented." However, he told the Guardian a regime of oversight was needed and governments should take inspiration from international structures such as the IPCC.

    "I think we have to start with something like the IPCC, where it's a scientific and research agreement with reports, and then build up from there." He added: "Then what I'd like to see eventually is an equivalent of a Cern for AI safety that does research into that -- but internationally. And then maybe there's some kind of equivalent one day of the IAEA, which actually audits these things." The International Atomic Energy Agency (IAEA) is a UN body that promotes the secure and peaceful use of nuclear technology in an effort to prevent proliferation of nuclear weapons, including via inspections. However, Hassabis said none of the regulatory analogies used for AI were "directly applicable" to the technology, though "valuable lessons" could be drawn from existing institutions.

    Hassabis said the world was a long time away from "god-like" AI being developed but "we can see the path there, so we should be discussing it now."

    He said current AI systems "aren't of risk but the next few generations may be when they have extra capabilities like planning and memory and other things ... They will be phenomenal for good use cases but also they will have risks."
