
What does an AI developer fear in his own creation?

As someone engaged in artificial intelligence research, I often encounter the opinion that many people fear AI and what it might turn into. Given human history, and given what the entertainment industry feeds us, this is not surprising: people fear a cybernetic uprising that will force us to live in isolated territories and turn the rest of us into "Matrix-style" human batteries.

And yet, looking at the evolutionary computer models I use in my AI work, it is hard for me to imagine that the harmless creatures on my screen, as innocent as a baby's tears, could one day become the monsters of a futuristic dystopia. Could I really become "the destroyer of worlds," as Oppenheimer described himself with regret after heading the program to create the nuclear bomb?

Perhaps I would accept such fame, and maybe the critics of my work are right after all? Maybe it really is time for me to stop dodging questions about what fears I, as an AI expert, have about artificial intelligence?

Fear of unpredictability

The HAL 9000 computer, dreamed up by science fiction writer Arthur C. Clarke and brought to life by director Stanley Kubrick in the 1968 film 2001: A Space Odyssey, is an excellent example of a system that failed due to unforeseen circumstances.

In many complex systems, such as the Titanic, the NASA space shuttle, and the Chernobyl nuclear power plant, engineers had to combine many components together. Perhaps the architects of these systems knew well how each element worked separately, but they did not understand well enough how all those components would work together.

The result was systems that their creators never completely understood, and that failure had consequences. In each case, the ship sank, two shuttles exploded, and almost all of Europe and parts of Asia faced radioactive contamination: a set of relatively minor problems that happened to occur simultaneously combined into a catastrophic effect.

I can easily imagine how we, the creators of AI, could arrive at similar results. We take the latest findings in cognitive science (the science of thinking), translate them into computer algorithms, and add all this to existing systems. We are trying to develop AI without a full understanding of our own intelligence and consciousness.

Systems such as IBM's Watson or Google DeepMind's AlphaGo are artificial neural networks with impressive computational capabilities, able to cope with genuinely complex tasks. But for now, the worst a mistake in their work can lead to is a loss on the quiz show Jeopardy! or a missed opportunity to defeat the world's best player in the board game Go.

These consequences are not of a worldwide nature. In fact, the worst thing that can happen to anyone here is that someone loses some money on a bet.


Nevertheless, AI architectures are becoming more complex and computers keep getting faster. The capabilities of AI will only grow over time, and we will begin to hand AI more and more responsibility, despite the increasing risk of unforeseen circumstances.

We know well that "to err is human," so it will be impossible for us to create a truly safe system in every respect.

Fear of misuse

I am not very concerned about unpredictable consequences in the AI I develop, because I use an approach called neuroevolution. I create virtual environments and populate them with digital creatures, giving their "brains" commands to solve problems of increasing complexity.

Over time, these creatures' problem-solving efficiency increases; it evolves. Those that cope with the tasks best are selected for reproduction, and a new generation is created from them. Over many generations, these digital creatures develop cognitive abilities.
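The loop described above (evaluate, select the best, reproduce with mutation) can be sketched in a few lines of Python. This is a minimal illustration, not the author's actual system: the tiny network shape, the Gaussian mutation, and the XOR task standing in for "problems of increasing complexity" are all assumptions made for the example.

```python
import math
import random

# XOR as a stand-in "problem": inputs and the target output.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # A creature's "brain": a 2-input, 2-hidden-unit (tanh), 1-output network
    # flattened into 9 parameters: w[0:4] hidden weights, w[4:6] hidden
    # biases, w[6:8] output weights, w[8] output bias.
    h = [math.tanh(w[2 * i] * x[0] + w[2 * i + 1] * x[1] + w[4 + i])
         for i in range(2)]
    return w[6] * h[0] + w[7] * h[1] + w[8]

def fitness(w):
    # Negative squared error on the task: higher is better, 0 is perfect.
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

def evolve(generations=300, pop_size=50, elite=10, sigma=0.2, seed=1):
    rng = random.Random(seed)
    # Initial random population of "brains".
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # the best solvers are selected for reproduction
        # Offspring are mutated copies of the parents; keeping the parents
        # themselves (elitism) means the best fitness never decreases.
        pop = parents + [[g + rng.gauss(0, sigma) for g in rng.choice(parents)]
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

best = evolve()
```

After a few hundred generations, the best creature's fitness climbs from its random starting point toward zero error. Real neuroevolution systems evolve far richer networks and environments, but the selection-and-mutation core is the same.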

For example, right now we are taking the first steps: developing machines that can perform simple navigation tasks, make simple decisions, or memorize a couple of bits of information. But soon we will develop machines that can perform more complex tasks and have a much more capable general level of intelligence. Our ultimate goal is to create human-level intelligence.

In the course of this evolution, we will try to detect and correct all errors and problems. Each new generation of machines will cope with errors better than the previous one. This increases the chances that we will identify all the unpredictable consequences in simulation and eliminate them before they can occur in the real world.

Another possibility the evolutionary approach offers is endowing artificial intelligence with ethics. It is likely that ethical and moral human traits such as trustworthiness and altruism are a result of our evolution and a factor in its continuation.

We can create artificial environments that reward machines for demonstrating kindness, honesty, and empathy. This could be one way to ensure that we develop obedient servants rather than ruthless killer robots. However, while neuroevolution may reduce the level of unintended consequences in AI behavior, it cannot prevent the misuse of artificial intelligence.

As a scientist, I am obliged to follow the truth and report what I discover in my experiments, whether I like the results or not. My task is not to decide what I like and what I don't. The only thing that matters is that I can make my work public.

Fear of wrong social priorities

Being a scientist does not mean losing one's humanity. At some level, I must reconnect with my hopes and fears. As a morally and politically motivated person, I must take into account the potential consequences of my work and its possible effect on society.

As scientists, and as members of society, we still have not arrived at a clear idea of what exactly we want from AI and what it should become. This is partly because we still do not fully understand its potential. Even so, we need to clearly understand and decide what we want from truly advanced artificial intelligence.

One of the biggest topics people raise in conversations about AI is jobs. Robots already do hard physical work for us, such as assembling and welding car body parts. One day robots will be assigned cognitive tasks as well, work previously considered a uniquely human ability. Self-driving cars will be able to replace taxi drivers; self-flying aircraft will not need pilots.

Instead of receiving medical care in emergency rooms staffed by perpetually tired personnel and doctors, patients will be able to get examinations and diagnoses from expert systems with instant access to all medical knowledge. Surgical operations will be performed by robots that never tire and have a perfectly steady hand.

Legal advice will come from a comprehensive legal database. Investment advice will come from expert market-forecasting systems. Perhaps one day all human work will be done by machines. Even my own work could be done faster by a large number of machines tirelessly researching how to make machines even smarter.

In our current society, automation is already pushing people out of jobs, making the wealthy owners of such automated machines even richer and everyone else even poorer. But this is not a scientific problem. It is a political and socioeconomic problem that society itself must solve.

My research will not change this, but my political principles, together with my humanity, may help bring about circumstances in which AI becomes extremely useful, instead of widening the gap between the one percent of the world's elite and the rest of us.

Fear of a catastrophic scenario

We have arrived at the last fear, imposed on us by the insane HAL 9000, the Terminator, and every other villainous superintelligence. If AI keeps evolving until it surpasses human intelligence, will an artificial superintelligent system (or a set of such systems) regard humans as useless material? How can we justify our existence to a superintelligence capable of doing and creating things no human being ever could? Will we be able to avoid being wiped off the face of the Earth by the machines we helped create?

Therefore, the most important question in such circumstances will be: why would an artificial superintelligence need us at all?

If such a situation arose, I would probably say that I am a good person who even contributed to the creation of the superintelligence now standing before me. I would appeal to its compassion and empathy, asking it to leave me, so compassionate and empathetic myself, alive. I would also add that the diversity of species has value in itself, and that the Universe is so vast that the existence of the human species within it is really quite insignificant.

But I cannot speak for all of humanity, so it would be difficult for me to find a strong argument for all of us. When I look at us, I see that we have done, and continue to do, many things wrong. Hatred reigns in the world. We wage war against each other. We distribute food, knowledge, and medical care unfairly. We pollute the planet. There are many good things in this world, of course, but looking at all the bad we have created and continue to create, it would be very hard to make the case for our continued existence.

Fortunately, we do not have to justify our existence yet. We still have time: somewhere between 50 and 250 years, depending on how quickly artificial intelligence develops. As a species, we have the opportunity to come together and find a good answer to the question of why a superintelligence should not erase us from the face of the planet.

Resolving this will be very difficult. Saying that we support diversity and ethnocultural differences is one thing; actually doing so is quite another. Just as saying we want to save the planet and actually succeeding at it are different things.

All of us, both as individuals and as a society, must prepare for this catastrophic scenario, using the time we have to show and prove why our creations should allow us to continue to exist. Or we can simply keep blindly believing that such a development is impossible and stop talking about it altogether.

However, whatever physical danger a superintelligence may pose to us, we should not forget that it will also pose political and economic dangers. If we do not find a way to improve our standard of living, we will end up simply feeding capitalism with artificial laborers serving only the handful of chosen few who own all the means of production.

The original of this article was published on theconversation.com by Arend Hintze, assistant professor of Integrative Biology and Computer Science & Engineering at Michigan State University.

This article is based on material from https://hi-news.ru/technology/chego-boitsya-sam-razrabotchik-ii-v-svoem-tvorenii.html.
