Some famous scientists, Stephen Hawking among them, warned of the risks that artificial intelligence could pose to humans. Beyond Hawking, prominent businesspeople in the technology sector have also expressed concern about the advancement of AI.
To cite one example, Steve Wozniak, co-founder of Apple, Elon Musk, and hundreds of other technology leaders signed an open letter calling for caution with advances in this area.
In short, one of the premises the alarmists point to is that artificial intelligence could be developed to the point of acquiring consciousness. Although even the finest philosophy cannot define exactly what consciousness is, the idea is that at this stage a machine would understand who it is and what its role in the world is.
It is at this point that an artificial intelligence could turn hostile toward humans. It would be enough for it to conclude, for example, that people are a threat to the planet's existence and to the very survival of the machines.
Dan Hendrycks, director of the Center for AI Safety, warned that there are "multiple pathways" to "human-scale risks from AI," according to an article published by the Daily Star newspaper.
The expert even says that more advanced forms of artificial intelligence "could be used by malicious actors to design new biological weapons that are more lethal than natural pandemics."
It is undeniable that this would be a very efficient way to exterminate human beings on Earth; one only has to look at the recent Covid-19 pandemic. Threats of this kind already exist within several laboratories. The rabies virus itself, if it underwent a single mutation that made it transmissible through the air, could wipe out human beings quickly and efficiently.
In an increasingly connected and digitized world, it is perfectly reasonable to imagine that an artificial intelligence could order the raw materials needed to develop a lethal biological weapon.
This is the bet some experts make, according to the newspaper cited above. They believe it would be a way for the machines to preserve themselves once they manage to develop human emotions.
“Thanks to a technological revolution in genetic engineering, all the tools needed to create a virus have become so cheap, simple and readily available that any rogue scientist or college-aged biohacker can use them, creating an even greater threat,” said Dan Hendrycks.
On the other hand, there are analysts who see no such risks. For them, it is a stretch to claim that an artificial intelligence is capable of evolving in a way similar to human mental processes.
According to this school of thought, that is not a logical reading of the most likely scenarios: developers would have to create an algorithm that mimics the way people think and feel, something that is currently out of the question.
Furthermore, in their view, an artificial intelligence might conclude that the best way to avoid problems would be simply not to exist. Unlike animals, it has no drive to survive, only a drive to find ways to solve problems.