
After 10 years of service at Google, artificial intelligence pioneer Geoffrey Hinton resigned from the American giant last Monday, explaining in an interview with The New York Times that he left so he could speak freely about deep learning and about five fundamental risks of artificial intelligence.

Beware of machines surpassing their creators

According to Hinton, the competition between technology giants has led to progress beyond anyone’s imagination.

The pace of progress has outstripped scientists' expectations: "only a few people believed that this technology could actually become more intelligent than humans… I personally thought it wouldn't happen for 30 to 50 years or maybe more, and of course, I no longer think that."

Until last year, Hinton did not consider this progress dangerous, but his view changed when Google and OpenAI developed neural systems capable of processing enormous amounts of data, systems that could become more capable than the human brain and, therefore, very dangerous.

Reducing job opportunities in all fields

From an economic point of view, the godfather of artificial intelligence fears that this technology could significantly disrupt the job market, saying that "artificial intelligence eliminates hard work" and adding that it "can eliminate much more than that," affecting translators and personal assistants in particular. Job losses will not spare even the "smartest" workers, even if some of them believe they are immune.

The threat of “killer robots”

Hinton believes that technology is advancing far faster than the means available to us to regulate the use of artificial intelligence. He comments, "I don't think we should rush into it until we understand whether we can control it." He fears that future versions could become "threats to humanity."

According to Hinton, future artificial intelligence systems will be able to develop unexpected behaviors after analyzing large amounts of data. This has become possible because artificial intelligence systems generate and run their own code, which could turn them into "autonomous weapons" and "killer robots," although many experts downplay this threat.

Artificial intelligence in the hands of malicious entities

According to Hinton, the threat also comes from the misuse of artificial intelligence by malicious actors. He worries that "it is difficult to know how to prevent bad actors from using it for evil purposes." He opposes the use of artificial intelligence in the military field in particular, and he is primarily concerned about humans developing "robot soldiers."

This is not a new stance: Geoffrey Hinton left Carnegie Mellon University in the 1980s because his research there was funded by the Pentagon.

“Generator of Nonsense”

Finally, Geoffrey Hinton warns about the misinformation spread with the help of artificial intelligence, stressing that its widespread and intensive use will soon make it almost impossible to distinguish "what is true from what is false," to the point that people now speak of a "generator of nonsense," a term referring to artificial intelligence's ability to produce convincing statements that sound plausible without being true.

So what is the solution? The neural network expert supports international cooperation involving all specialists in the field, but he doubts it can be achieved, saying: "It may be impossible… There is no way to know if companies or countries are secretly working on such programs, and the only hope is that the world's top scientists work together to find solutions for controlling artificial intelligence."