AI pioneer quits Google to warn about the technology's 'dangers'
Geoffrey Hinton, who helped develop AI, left Google last week to speak out about the dangers of the technology.
New York CNN
Geoffrey Hinton - the man known as the "Godfather" of AI - confirmed on Monday that he had left his position at Google to speak about the dangers associated with the technology he helped develop.
Hinton's groundbreaking work on neural networks has influenced artificial intelligence systems that power many of today's popular products. He spent a decade working part-time on Google's AI efforts. However, he is now concerned about the technology.
Hinton told The New York Times, which was first to report his decision: 'I comfort myself with the usual excuse: if I hadn't, someone else would have done it.'
Hinton stated in a Monday tweet that he left Google to speak freely about AI risks, and not because he wanted to criticize Google.
Hinton tweeted: 'I left to be able to talk about the dangers of AI without thinking of how this impacts Google. Google has been very responsible.'
Jeff Dean, Google's chief scientist, said Hinton 'has made fundamental breakthroughs in AI' and expressed his appreciation for Hinton's 'decade-long contributions at Google.'
Dean said in a statement to CNN: 'We remain committed to a responsible approach to AI. We're constantly learning to understand new risks while at the same time innovating.'
Hinton's decision to step away from the company and speak out about the technology comes at a time when a growing number of lawmakers and advocacy groups have expressed concern over the potential of AI-powered bots to spread misinformation or displace jobs.
ChatGPT, which drew widespread attention after its release in late 2022, has sparked a race among tech companies to create and implement similar AI tools in their products. OpenAI, Microsoft, and Google are leading this trend, while IBM, Amazon, Baidu, and Tencent are also working on similar technologies.
Some prominent tech figures signed a March letter calling on artificial intelligence laboratories to stop training the most powerful AI systems for at least six months, citing 'profound risks to society and humanity.' The letter was published by the Future of Life Institute, a nonprofit backed by Elon Musk, just two weeks after OpenAI released GPT-4, a more powerful version of the technology behind ChatGPT. In early tests and a company demo, GPT-4 was used to create a website, draft a lawsuit, and pass standardized exams.
Hinton expressed concerns in the Times interview about AI's ability to destroy jobs and create a future in which people will 'not be able to know what is true anymore.' He also emphasized the astonishing pace of progress, which has gone far beyond his and others' expectations.
'The idea that these things could get smarter than humans -- a few believed that,' Hinton said in the interview. 'Most people, however, thought it was a far-fetched idea. I also thought it was a long way off -- 30 to 50 years, or even more. I don't think so anymore.'
Even before leaving Google, Hinton had spoken publicly about AI's potential to do both harm and good.
'I think the rapid advancement of AI will transform society in ways we don't fully understand, and not all of its effects are going to be good,' Hinton said in a 2021 commencement speech at the Indian Institute of Technology Bombay in Mumbai. He noted that AI would boost healthcare while also creating the possibility of lethal autonomous weapons. 'This prospect is much more frightening and immediate than robots taking control, which I believe is still a long way away.'
Hinton isn't the only Google employee to have raised a red flag about AI. In July, Google fired an engineer who claimed that an unreleased AI system had become sentient, saying he had violated the company's employment and data protection policies. The engineer's claim drew strong pushback from the AI community.