
The inhuman factor: Russians will be warned about conversations with robots

Russians will be told when they are communicating with a robot rather than with live employees of company and bank contact centers. This is provided for by a draft law on the regulation of artificial intelligence (AI), which Izvestia has reviewed. Businesses abuse AI: citizens who do not realize they are talking to a robot cannot get their problems solved quickly and fall victim to fraudsters, experts say. Interacting with artificial intelligence without understanding that a robot technically cannot answer every question can lead to irreparable consequences, they add.
How Russians will find out they are talking to robots
Russian companies and government agencies will have to disclose when they communicate with citizens by phone or online using AI, if such technologies are used in their contact centers. This follows from the draft law "On the Regulation of Artificial Intelligence Systems," prepared by a working group that includes State Duma deputies, business representatives, and business associations. Izvestia has reviewed the document.
If a customer or service user communicates with a provider by phone, they must be given the option of speaking to a human operator; if through an online chatbot, the automated nature of the responses and messages must be indicated, the document says. In either case, the organization using AI must specify a procedure for submitting claims and complaints.
"Today, AI is actively used in call centers, chatbots, and support services, but users do not always realize that they are not communicating with a human. As the technology becomes more lifelike, the issue of fair notification is coming to the fore, from both an ethical and a legal point of view," Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications, told Izvestia.
Such notification will help minimize the risk of manipulation, especially in sensitive areas such as financial advice, services for vulnerable categories of citizens, or the provision of medical information, he noted. Users must understand when they are receiving an automated response in order to properly assess its significance, accuracy, and possible limitations. In addition, the ability to switch to a live specialist on request gives clients more control and reduces the risk of misunderstanding, the deputy explained.
This is especially relevant to the fight against fraud. Hackers can pose as AI services in order to mislead a person, extract personal data, or carry out a phishing attack, Anton Nemkin believes.
The requirements for "telephone" AI are clearly written with the future in mind: for now, a person usually recognizes quickly that they are talking to a robot, said Karen Ghazaryan, director of the Internet Research Institute. The problem is acute for online correspondence, however: a person can message an AI-based chatbot for days without realizing that it is a robot, one that cannot respond to unusual situations or solve specific problems, he added.
"The bill in question was prepared by colleagues from the State Duma. We plan to join the discussion. It is too early to talk about the details of the document. We will only note that any technology should be used strictly in compliance with the rights and interests of citizens," the Ministry of Finance told Izvestia.
What is the danger of communicating with artificial intelligence?
People must be informed when calls and online correspondence involve AI; in some cases this can be a matter of life and death, says Sergey Polovnikov, head of the Content-Review project. A neural-network-based chatbot or answering machine cannot always solve a person's problem in an unusual situation, if at all, he points out.
Moreover, people often contact support under stress and simply cannot tell whether they are talking to a robot or a human; this is especially true of elderly Russians, the expert continues. A wrong decision prompted by a "humanoid" assistant can be fatal, for example if a pensioner transfers money to fraudsters. A patient may be unable to phrase a request to their insurance company's AI precisely enough to get an answer, and here the consequences may be irreparable, he says.
Large commercial organizations, primarily banks, marketplaces, and telecom operators, abuse AI when interacting with customers, says Olga Sulim, chairman of the Sulim and Partners Bar Association.
"In some cases, they abandon call centers entirely and hand all customer communication over to voice assistants (bots), which does nothing to help customers resolve problems promptly. This matters, for example, with credit institutions, when a person may have fallen victim to fraud and urgently needs to block payment instruments, restrict access to their personal account, or prohibit certain actions," she explained.
Communication with voice bots of limited functionality can lead to violations of consumer rights, the expert added. According to unofficial statistics, the most common command given to voice assistants has for several years been the phrase "connect me with an operator," and positive feedback on interacting with them is rare, which broadly reflects people's dissatisfaction with having to resolve their issues through automated systems, she said.
To save on operators, companies have increasingly been plugging neural networks into their user support systems and routing the entire platform through AI, added Anton Averyanov, CEO of the ST IT group of companies and a TechNet NTI market expert. As a result, users sometimes cannot reach a consultant and receive support only from artificial intelligence, he says.
"There have been cases when AI gave incorrect recommendations that a person then followed. It is also necessary to specify which AI system is being used, because if it is a support system, a person may hand over their personal data to it, and in that case the data must not go abroad," the expert added.
A user may believe they are interacting with a real person and behave accordingly, even provocatively: for example, manipulating the interlocutor by appealing to pity, or simply acting irrationally, says Alexander Bukhanovsky, an expert at the ITMO-based NTI competence center for Machine Learning and Cognitive Technologies. A complex generative AI system can react in its own way, much as humans do because of their emotional intelligence, he argues. A user may thus "provoke" the AI into a reaction they consider inappropriate: at best this causes disappointment, at worst it may be perceived as an insult, the expert said.
"Attempts to regulate content created by artificial intelligence systems, from texts and images to voice robots and deepfake videos, are being made in many countries around the world. Behind these efforts lies a basic human fear of the robot or algorithm: that it will not understand the request, will make a mistake, will cause harm, and so on. This is largely due to the novelty of AI technologies. Meanwhile, few clients of a translation agency doubt the translator's ability to translate correctly, just as no one visiting a notary's office doubts the notary's ability to draw up a power of attorney properly," said Leonid Konik, partner at ComNews Research.
Since people have not yet lived alongside robots and algorithms for long, it is reasonable to warn them when they encounter AI, in particular during incoming calls and other cases of remote interaction, he believes. This will ease anxiety and help people come to terms with the idea that in many cases AI systems are more convenient and faster than another human being, the expert says.
According to Leonid Konik, similar initiatives are being developed in other countries. The European Union, for example, has an artificial intelligence act that is designed, among other things, to curb the spread of disinformation and requires the creators of generative AI models to add appropriate notices.