Geoffrey Hinton, AI pioneer, quits Google over ethical risks of AI
Geoffrey Hinton, an artificial intelligence (AI) pioneer and, until recently, a vice president and engineering fellow at Google, has resigned from his position, citing growing concerns about the ethical implications of the technology he helped create.
Hinton, whose pioneering work on deep learning and neural networks earned him the nickname "the Godfather of AI," said he decided to leave Google after more than a decade so that he could speak more openly about the potential risks and harms of AI. He made the move as companies around the world race to develop and deploy ever more powerful AI systems.
“I console myself with the classic excuse: if I hadn’t done it, someone else would have,” Hinton said in an exclusive interview with The New York Times. “It’s hard to see how you stop bad actors from using it to do bad things.”
The news of Hinton’s departure comes just over a month after more than 1,000 technology leaders and researchers signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the latest model released by OpenAI in March. The letter states that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

Hinton’s departure is the latest and most significant sign of a growing rift between some of the world’s leading AI researchers and the technology companies that employ them. Many of these researchers are sounding the alarm over the social, environmental and political impacts of AI, as well as the lack of transparency, accountability and diversity in the field.
Other prominent voices in the field of AI ethics are also speaking out against the practices and priorities of the industry. Kate Crawford, Senior Principal Researcher at Microsoft Research and Distinguished Professor at New York University, has written extensively on the risks of AI. She recently published a book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, which exposes the hidden human and environmental impacts of AI production and consumption.
Stuart Russell, professor of computer science at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach, the standard textbook on artificial intelligence, warns of the existential threat of superintelligent AI that could surpass human capabilities and goals.
Source and photo: venturebeat.com, google.com