Warnings that artificial intelligence could bring about doomsday are distracting from the danger it already poses, according to an industry expert.
A senior industry figure attending this week's AI safety summit argues that fixating on catastrophic scenarios for artificial intelligence diverts attention from more pressing concerns, such as the mass spread of false information.
Aidan Gomez, a co-author of the research that helped make modern chatbots possible, said long-term risks such as AI posing an existential threat to humanity should be acknowledged and studied, but that politicians must also address more immediate potential harms.
“I believe that discussing existential risks and public policy is not a productive use of time,” he stated. “When considering where the public sector should focus their efforts in mitigating risks to the civilian population, I think this conversation can be a distraction from more tangible and pressing risks.”
Gomez, chief executive of Cohere, a North American company that builds AI tools, including chatbots, for businesses, will attend the two-day summit, which begins on Wednesday. In 2017, aged just 20, he was part of the Google research team that developed the Transformer, the architecture underpinning the large language models that power AI tools such as chatbots.
According to Gomez, AI – computer systems that can perform tasks typically associated with intelligent beings – is already being deployed at scale, and the summit should prioritise discussing those applications. Tools such as the chatbot ChatGPT and the image generator Midjourney have stunned the public with their ability to produce plausible text and images from simple prompts.
“This particular technology is currently being utilized by billions of users, including those at Google and other companies. This brings forth a multitude of potential risks to consider, although none of them are catastrophic or apocalyptic in nature,” stated Gomez. “Our attention should be directed towards the immediate effects that are currently affecting individuals, rather than delving into abstract and speculative discussions about its potential long-term consequences.”
Gomez said misinformation – the spread of misleading or incorrect information online – was his key concern. “Misinformation is one that is top of mind for me,” he said. “These [AI] models can create media that is extremely convincing, very compelling, virtually indistinguishable from human-created text or images or media. And so that is something that we quite urgently need to address. We need to figure out how we’re going to give the public the ability to distinguish between these different types of media.”
The first day of the summit will cover a range of AI concerns, including misinformation and its impact on elections and trust in society. The second day, led by Rishi Sunak, will bring together a smaller group of countries, experts and tech leaders to discuss practical steps to mitigate AI risks. The US vice-president, Kamala Harris, will attend.
Gomez, who referred to the summit as “extremely significant,” expressed concern over the possibility of an army of bots spreading false information generated by AI. He emphasized the potential danger this poses to democracy and public discourse.
Last week, the government published a series of documents outlining the risks posed by AI, including the spread of misinformation and its impact on jobs, and acknowledged the possibility that AI development could reach a point where systems posed a threat to humanity.
One publication on risk stated that, given the unpredictable pace of advances in AI, there was insufficient evidence to rule out the possibility that highly capable frontier AI systems, if misaligned or inadequately controlled, could pose an existential threat.
The document added that many experts consider this risk very low: it would require certain conditions to be met, such as an advanced system gaining control of weapons or financial markets. Concerns about an existential threat from AI centre on artificial general intelligence – a system that can carry out a wide range of tasks at or above human levels of intelligence – which could, in theory, replicate itself, evade human oversight and make decisions that run counter to human interests.
Those concerns prompted an open letter in March, signed by more than 30,000 technologists and experts including Elon Musk, calling for a six-month pause on giant artificial intelligence experiments.
Two of the three modern pioneers of AI, Geoffrey Hinton and Yoshua Bengio, signed a statement in May warning that the risk of extinction from AI should be treated as seriously as pandemics and nuclear war. However, Yann LeCun, who shares with them the ACM Turing award – regarded as the Nobel prize of computing – has dismissed concerns that AI could wipe out humanity as unfounded.
LeCun, the chief AI scientist at Meta, Facebook’s parent company, told the Financial Times this month that a number of “conceptual breakthroughs” would be needed before AI could reach human-level intelligence – a point where a system could evade human control. LeCun added: “Intelligence has nothing to do with a desire to dominate. It’s not even true for humans.”