The chief of Google DeepMind says that the risk of AI should be taken just as seriously as the climate crisis.

One of the technology industry's leading figures has warned that the risks posed by artificial intelligence must be treated with the same urgency as the climate crisis, and that the world cannot afford to delay its response.
As the UK government prepares to host a summit on AI safety, Demis Hassabis said that oversight of the industry could begin with a body modeled on the Intergovernmental Panel on Climate Change (IPCC).
Hassabis, the chief executive of Google's AI unit, said the world must act immediately on the technology's dangers, which include AI helping to create bioweapons and the existential threat posed by super-intelligent systems.
“We need to give the potential dangers of AI the same level of seriousness as other significant worldwide issues, such as climate change,” he stated. “The international community took too much time to come up with a successful global solution for this and we are currently experiencing the repercussions of that. We cannot afford to repeat the same mistake with AI.”
Hassabis, whose team created the groundbreaking AlphaFold program, which predicts the 3D structures of proteins, said AI could be one of the most important and beneficial technologies ever invented.
He told the Guardian that a regime of oversight was needed, and that governments should draw on international bodies such as the IPCC for guidance.
“I would suggest we begin with a structure similar to the IPCC – a scientific and research agreement that produces reports – and then build up from there,” he said.
“In the future, I hope to see an organization similar to Cern that researches AI safety at an international level. Perhaps one day there could also be an equivalent of the IAEA, responsible for auditing these technologies.”
The International Atomic Energy Agency (IAEA), a UN-affiliated organization, promotes the safe and peaceful use of nuclear technology and works to prevent the spread of nuclear weapons, in part through inspections.
Hassabis said none of these regulatory analogies was directly applicable to AI, but that valuable lessons could be drawn from established institutions.
Eric Schmidt, the former Google chief executive, and Mustafa Suleyman, a co-founder of DeepMind, recently proposed establishing an IPCC-style panel on AI. UK officials support the idea but have said it should be pursued through the UN.
Hassabis said AI could bring extraordinary opportunities in fields such as medicine and science, but he acknowledged the existential concerns surrounding the technology. These center on the possible development of artificial general intelligence (AGI) – systems with human-level or greater intelligence that could evade human control.
In May, Hassabis was among the signatories of an open letter warning that the threat from AI should be considered a risk on the same scale as pandemics and nuclear war.
“We need to begin contemplating and researching this immediately. In fact, we should have started yesterday,” he stated. “That is why I, along with many others, signed the letter. We wanted to lend credibility to that debate.”
Some in the tech industry fear that AGI, or “god-like” AI, could become a reality within a few years, though others argue that fears about the existential threat are overblown.
Hassabis said AGI systems were still some way off, but the trajectory toward them was already visible, so the discussion should begin now.
He said current AI technology did not pose a risk, but the next generations of systems, with added capabilities such as planning and memory, might. Those systems would be phenomenal for beneficial uses, he said, but they would also carry risks.
The summit, taking place on November 1 and 2 at Bletchley Park, the base of World War II codebreakers, will focus on the threat of advanced AI systems helping to create bioweapons, carrying out devastating cyber-attacks, and evading human control. Hassabis will attend, along with the chief executives of leading AI companies including OpenAI, the San Francisco-based developer of ChatGPT.
Hassabis’s team has produced notable breakthroughs in AI, including AlphaGo, a program that defeated the world’s best players of the ancient Chinese board game Go, and AlphaFold, a groundbreaking project that predicts the 3D structures of proteins – a crucial step for drug discovery and other fields of research.
Hassabis said he was optimistic about AI because of its potential to transform fields such as healthcare and science, but argued that a balanced approach to managing the technology was needed.
AI has risen up the political agenda since the public release of ChatGPT, a chatbot that impressed users with its ability to produce plausible text in response to human prompts, from academic essays and recipes to job applications and even appeals against parking fines.
Image-generating tools such as Midjourney have also startled observers with lifelike pictures, including a fake photograph of the pope wearing a puffer jacket, fueling concerns that malicious actors could use AI to spread disinformation on a large scale.
Those fears have fed concerns about the power of the next generation of AI models. Hassabis, whose unit is working on a new AI model called Gemini that will generate images and text, said he envisaged a Kitemark-style certification system – a reference to the British quality mark – emerging for AI models.
The UK government recently established the Frontier AI Taskforce, which aims to develop guidelines for testing cutting-edge AI models that could eventually be adopted internationally.
Hassabis said that, in time, a battery of perhaps 1,000 or even 10,000 tests could underpin such a safety Kitemark.
Source: theguardian.com