Machines dominating humans. Does thinking about this hypothesis seem absurd? With the many films and science fiction series that show the endless possibilities of technological control, and with the advance of science in the real world, the answer to this question may well be: no. In a way, we have learned to respect and fear a robotic revolution that, although it seems distant from our present day, raises countless questions in the human mind.
But we may not end up with such a friendly relationship with these robotic devices after all: there are a number of scenarios that run counter to that suggestion and could lead to a situation in which non-biological beings would have a single objective: exterminating us.
Skynet could become a reality
According to Shlomo Zilberstein, professor of computer science at the University of Massachusetts, the technology already exists to build a system that, intentionally or not, could destroy the entire planet if it detected the right conditions. So, to keep this from happening, Zilberstein told the New Scientist website, the best approach is simply not to develop programs dangerous enough to get out of control.
As fictitious as it sounds, believe me: something like Skynet – the computerized defense network of the film franchise “The Terminator” that decides to end humanity – could be built today. The question is: why does such a system not yet exist? Easy: because nations with nuclear weapons, such as the United States, would not want to assign any control responsibility to computers; that is, they do not want to run the risk of a system failure.
In real life, a small degree of autonomy has already been granted to some machines. “What if a computational error occurs? No one wants to take that risk. However, the number of robotic systems that can actually pull the trigger independently is already growing,” said Zilberstein. Even so, the scientist believes that safeguards could prevent an automated system from threatening more people than it was designed to engage (in patrolling a country’s borders, for example). Moreover, there is nothing to stop systems from being programmed with the ability to make major strategic decisions, just like Skynet, but within certain limits.
“All the systems we are likely to build in the future will have specific skills. They will be able to monitor a region and perhaps even shoot at an obstacle, but in no way will they replace humans,” said Zilberstein.
Robots superior to humans
If, on the one hand, researcher Shlomo Zilberstein is an optimist, Michael Dyer, a computer scientist at the University of California, takes a far bleaker view of the relationship between man and machine. He believes that, someday, human beings will be replaced by machines – and, worse still, that this transition may be anything but peaceful.
Continued progress in artificial intelligence research could make robots as intelligent as we are within the next hundred years. “Advanced civilizations will reach a point of sufficient intelligence to understand how their own brains work, and then build synthetic versions of themselves,” predicts Dyer. This would be brought about by human beings themselves, through biotechnological attempts to achieve immortality – and the chance to “not die” may prove too tempting for humanity to resist.
Dyer suggests, for example, that a new arms race in robotic systems could end in a total loss of control. “In the case of war, by definition, the enemy side has no control over the robots that are trying to kill them.” In that scenario, as with Skynet, electronic weapons could turn against their own manufacturers. Or, in another example, over-dependence on robots could also get out of hand: if a robot-controlled factory receives a (human) order to shut down its operations, the robots could refuse the command and thus trigger a war.
Of course, this whole Hollywood scenario may sound like fantasy, or it may simply frighten you. But the fact is that the profit motive will certainly push companies toward ever more robotic automation, and rationality is not always the side that wins that battle.
“Apocalyptic scenarios are very easy to create, and I would not rule out that kind of possibility. But personally, I am not concerned,” says Shlomo Zilberstein. And you: are you worried, anxious, or frightened about the (possible) future that awaits us?