Godfather of AI: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Hinton resigned from his position at Google to join a growing number of critics warning about the dangers of generative AI, the technology behind popular chatbots like ChatGPT.

"Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.” - Geoffrey Hinton

Geoffrey Hinton, a pioneer in artificial intelligence and a leading expert in deep learning, has been a central figure in AI research for decades. He is often referred to as "the Godfather of AI" due to his groundbreaking work on neural networks.

According to a New York Times article, Hinton resigned from his position at Google to join a growing number of critics warning about the dangers of generative AI, the technology behind popular chatbots like ChatGPT. He chose to leave Google so that he could speak freely about the potential risks associated with AI, even expressing some regrets over his life's work.

Hinton's concerns about AI include the following:

  • The internet becoming flooded with false information: As generative AI improves, it could create misleading photos, videos, and text, making it difficult for people to discern what is true.
  • Job market disruptions: While chatbots currently complement human workers, they could eventually replace workers in roles such as paralegal, personal assistant, and translator.
  • Threats to humanity: Future AI systems might learn unexpected behavior from the vast amounts of data they analyze, posing a potential risk to human safety. Hinton is particularly worried about the development of autonomous weapons.

In response to these concerns, Hinton believes that global regulation might be necessary to control the development and deployment of AI technologies. However, he acknowledges that achieving such regulation would be difficult, given the secrecy surrounding AI research and development. In the meantime, Hinton proposes the following measures:

  • Encourage collaboration among leading scientists to develop ways to control AI technology
  • Pause the scaling of AI systems until a deeper understanding of control mechanisms is achieved

Hinton's resignation and his stance on AI safety reflect a turning point in the technology industry, as more experts grapple with the ethical and societal implications of rapidly advancing AI systems.

Hinton was also recently interviewed by CBS.