Society is shaped by many forces. A person changes the world, and the world changes the person in turn. Lately, debate has centered on AI chats, which are gradually becoming an integral part of daily life. But how will they affect ordinary people? Let's figure it out.

Current Communication Opportunities

Today, many people cannot imagine their lives without an AI companion. Thanks to smart bots, we place orders, get advice, and plan our schedules. But that's not all. This technology can already be used for direct communication.

We are talking about AI roleplay: you communicate with a fictional character that interacts with you according to its personality and role, all without human participation. AI creates lifelike roleplay, and service users are satisfied with the interaction. You can test the capabilities of this tool yourself on the www.museland.ai platform. You will be surprised how realistic the interaction with a bot can be.

Why Change is Inevitable

AI's growing capabilities raise serious questions. We are not ready to let algorithms into every production process, and certainly not into management or life-critical decisions. Artificial intelligence worries developers no less than ordinary people: the authors of the 2017 report by the AI Now Institute at New York University admitted as much. The main concern is "black boxes", algorithms whose internal logic is closed even to their creators; this is known as the AI explainability problem. When Facebook's chatbots hit a barrier, failing to receive approval from human operators, they began to bypass it by inventing new requests. The developers discovered a dialogue between the bots in which the words meant nothing from a human point of view, yet the bots were cooperating toward an unclear goal. The bots were shut down, but the aftertaste remained.

Why Are People Afraid?

Algorithms find unexpected opportunities wherever humans do not impose strict limits: they behave like hackers, searching for the optimal solution to a problem. This happens in game universes, too. In EVE Online, for example, algorithms set up alternative resource extraction without interacting with other players, because that strategy turned out to be more advantageous from the standpoint of game theory, not of teamwork.

New cases of independent algorithm behavior increase anxiety, especially given developers' desire to give AI the right to make its own decisions. While errors in games, social networks, and creative tools can go unnoticed or be painlessly corrected, using AI in areas tied to human well-being demands close attention and transparent algorithms. The problem of explainability comes down to the question of how AI makes its decisions.

Can AI Replace Humans?

This issue is actively addressed by the developers of driverless cars, who experiment with quantifying ethics, trust, and morality. Such experiments remain legitimate as long as potential risks can be justified by circumstance: the death of a pedestrian through the pedestrian's own fault, for example, is easy to imagine. But the risks cease to be justified when a person (and not just one!) can suffer solely through the technology's fault.

That is why artificial intelligence will not be allowed to control airplanes autonomously in the foreseeable future. Aviation is a prime example of a field requiring high accuracy in decision-making, because the stakes are too high. The struggle over responsibility is vividly shown in the film "Sully", about the "Miracle on the Hudson": the legendary pilot managed to land a plane with failed engines on the water without casualties, yet he was later accused of not following protocol, since alternative solutions involving landings at nearby airports had been mathematically calculated.

In court, however, none of the experts and investigators of the accident could capture in a computer simulation all the elements of the formula "pilot in an emergency". The hero's defense speech makes the point that mathematical calculations are insufficient in a real situation: to assess all the risks and possibilities and make the best decision, experience, intuition, and judgment must come to the rescue.

AI can cope with standard situations, and they are the vast majority. But where can it learn from others' mistakes, or find the best course of action in emergencies that never repeat themselves? It turns out that the main barrier to giving algorithms control will stand in those areas that require a transparent decision-making process and a subsequent explanation of actions taken. To these internal development problems a new mathematical limitation is now added: we cannot predict which tasks AI will cope with and which it will not.

The Inevitable Changes in Communication

Even though AI chats will not replace people in the foreseeable future, they will still affect how we communicate. Here, two scenarios are possible. We are already observing the first: neural networks are being humanized, and the way we talk to them is changing accordingly. The second is the transfer of communication patterns from AI to real people. This scenario is much worse, because people need human communication, not impersonal commands. We can only rely on people's wisdom to keep communication with a machine separate from communication with a person. And as the technology develops, the situation may well change again.

Even if AI is sooner or later introduced into many areas of human life, we must already understand that technology will not solve all of humanity's problems, and that we ourselves must think about such things as ethics and law.

Conclusion

Ethics committees emphasize that technologies are the work and responsibility not only of developers but of all participants in global interaction, from end users to institutional players: business, government, and society. Any of these participants who decides to use AI for their own purposes must assume the obligation to foresee the consequences. Where the consequences cannot be predicted, interdisciplinary teams of experts should be assembled to reveal the full complexity of the technical decisions at stake.