Well-known AI researchers have been sounding the alarm about the rapid and unprecedented growth of AI in recent years, claiming that this development threatens humanity itself.
Award-winning computer scientist Geoffrey Hinton, also regarded as the “godfather of AI,” recently announced his resignation from Google to voice his worries over the unfettered development of new AI capabilities.
In an interview with the MIT Technology Review published this week, Hinton, 75, stated, “I have suddenly switched my views on whether these things are going to be more intelligent than us.” There is a good chance they are already there, he believes, and in the future they will be far more intelligent than we are. He admits he does not know how humanity will make it through that.
Hinton is not alone in feeling this way. The CEO of ChatGPT’s parent company, OpenAI, Sam Altman, warned in February that the world may not be “that far away from potentially scary” AI technologies and that regulation would be crucial but would take time to sort out.
AI May Now Be More Intelligent Than Humans
Our inborn ability to categorize data, recall past experiences, and reason our way out of sticky situations is what allows us to do amazing things like solve equations, operate vehicles, and remember our favorite Netflix shows. We are able to do this because our brains have 86 billion neurons and, more importantly, 100 trillion connections between them.
Hinton notes that the technology underlying ChatGPT has somewhere between 500 billion and a trillion connections. Yet GPT-4, OpenAI’s most recent AI model, knows “hundreds of times more” than any single human despite that far smaller number of connections, so it is hardly at a disadvantage to us. Perhaps, he posits, it is better at cognitive tasks because it has a “much better learning algorithm” than we do.
Since training artificial neural networks requires a lot of time, effort, and data, scientists have known for some time that they lag far behind humans when it comes to learning new information and putting it to use.
Hinton contends that this is no longer the case: sophisticated systems like GPT-4 can rapidly acquire new skills once they have received adequate training from human experts. By analogy, a trained professional physicist can make sense of novel experimental data far more quickly than a high school science student.
Hinton draws the conclusion that AI systems may have already surpassed humanity in intelligence because they are able to acquire and disseminate information at an unprecedented rate.
No One Knows How To Stop It
Unfortunately, Hinton offers little comfort: “I wish I had a nice simple solution I could push, but I don’t,” he said, adding that there may not be a way out at all.
Nonetheless, governments are watching the development of AI closely. The White House has invited the CEOs of Google, Microsoft, and ChatGPT developer OpenAI to meet with Vice President Kamala Harris on Thursday for what officials are calling an “open and candid discussion” about ways to reduce the short-term and long-term threats posed by their technology.
Legislators in Europe are working quickly to finalize agreements on sweeping new AI regulations, while the UK’s competition regulator has announced plans to investigate the effects of AI on consumers, businesses, and the economy to determine if additional regulations are required for tools like ChatGPT.
The question of who or what could stop a superpower like Russia from exploiting AI technology to subjugate its neighbors and its own people remains unanswered.
Hinton suggests that a worldwide convention along the lines of the Chemical Weapons Convention of 1997 could be an excellent starting step toward establishing international laws against weaponized artificial intelligence.
However, it should be noted that the chemical weapons compact did not prevent what investigators believe were chlorine gas and sarin nerve agent attacks on civilians in Syria in 2017 and 2018.
Some worry that AI systems will lead to unfair incarceration practices, a surge of spam and disinformation, weakened cybersecurity, and the eventual takeover of critical infrastructure by a “smart and planning” AI. Bias in neural networks, at least, is already beyond doubt.