The number of cases in Russian courts involving offenses committed with the help of artificial intelligence, or caused by its errors, has grown almost one and a half times over the past four years, analysts told Izvestia. Attackers use AI to fake other people's content, deceive victims with deepfakes, synthesize the voices of victims' relatives and take out bank loans in other people's names. Citizens, in turn, are trying to challenge decisions made by the authorities and credit institutions with the help of AI, complaining about chatbot errors and fines wrongly issued by automated systems. Izvestia looks at which civil, criminal and administrative cases most often involve artificial intelligence.

How scammers use AI

Over the past four years, the number of court cases involving offenses committed with the use of artificial intelligence has increased by 39%, the analytical research center «Business. Law.» told Izvestia. In 2021 there were only 112 such cases, and by 2024 their number had grown to 292. The increase last year over 2023 was 20.1%.

Moscow and the Moscow region lead in the number of cases considered, followed by St. Petersburg and the Sverdlovsk region in second and third place, and then by Novosibirsk and Krasnodar.

A scammer
Photo: IZVESTIA/Dmitry Korotaev

Overall, the number of such cases in the offense statistics is still small, but the real figure may be far higher, Venera Shaidullina, the author of the study and director of the center, told Izvestia.

"Legislative regulation of AI is just being formed, so many disputes may simply not name artificial intelligence as a way to commit crimes," she said. — The current statistics probably show only the tip of the iceberg, as many cases of using AI for illegal purposes may remain undetected or qualify without mentioning its role.

The expert explained that the study's methodology counted cases in which artificial intelligence is mentioned in judicial acts, whereas many cybercrimes and offenses are still classified as traditional fraud or, for example, theft.

What cases involve artificial intelligence

Mentions of AI are now especially common in intellectual property cases, that is, disputes over the rights to content created with neural networks, said Venera Shaidullina.

"And criminal cases include cases of fraud using deepfakes, crimes using AI to forge documents, drug sales when AI bot shops are used, and illegal loans," the author added.

For example, in 2024 singer Larisa Dolina found herself at the center of a scandal after scammers targeted her country mansion. The criminals used face-swapping software to generate a video clip in which the singer supposedly explains to the bank in person why she needs a loan of 50 million rubles.

Singer Larisa Dolina

Photo: Global Look Press/Belkin Alexey

In the field of consumer protection, AI appears in disputes with banks over decisions made by automated scoring systems, conflicts over errors by chatbots and automated service systems, and complaints about the quality of services provided using AI.

In administrative practice, there are appeals against fines issued by cameras with recognition systems, disputes over the legality of using such systems, and conflicts over the automatic recording of violations by AI technology.

So far, civil cases dominate the overall caseload, with a share of almost 59%. And in 73% of cases mentioning AI, its errors, or fraud involving neural networks, the claims are satisfied in full or in part.

The total amount of damages claimed in such cases over the past four years comes to almost 2 billion rubles. The average claim reaches 2.1 million rubles in civil cases and 856 thousand rubles in administrative ones.

What schemes are used by scammers?

In recent years, artificial intelligence technologies have made significant progress, becoming an integral part of various spheres of life, from marketing and medicine to manufacturing, recalled Olga Bolina, an AI marketer and founder of the Concretto marketing agency.

"However, along with progress, new threats are emerging," she told Izvestia. — One of the key problems is the growing number of scams based on the use of generative AI models (used to create a story in the style of a particular writer, create a realistic image of a person, etc. — Ed.). These schemes are becoming more sophisticated.

Although the range of offenses involving neural networks is wide, they do not so much create new opportunities as make existing criminal schemes more efficient, added Rustem Khayretdinov, Deputy General Director of the Garda Group of Companies.

"Fraudulent actions using audio and video evidence imitating a trusted interlocutor are now in plain sight, but they have not significantly changed the picture of fraud according to the scheme "mom, I had an accident, throw off the money," the expert explained.

Audio track
Photo: Getty Images/bin kontan

For example, in the summer of 2024, scammers created a deepfake of Moscow Mayor Sergei Sobyanin and used it to try to deceive the heads of Moscow theaters.

Alexander Bleznekov, head of the information security strategy development department at the IT integrator Telecom Exchange, agreed.

"Scammers use neural networks not so much for new deception schemes as to upgrade old ones. For example, they use LLMs (large language models, a type of AI program capable of generating content. — Ed.) to write large numbers of varied phishing emails," he said. "AI makes it possible to quickly generate a convincing message so that some employee of an organization clicks a link or opens a file."

Less visible, but far more effective, is the use of neural networks for password guessing and vulnerability discovery. Artificial intelligence technology has reached the point where AI can already replace a mid-level cybersecurity specialist, said Mikhail Khlebunov, Director of Products at Servicepipe.

"If it is possible to create a script, or a set of scripts, that tests for vulnerabilities, for example, by finding errors in code, then it is just as possible to use it to look for weak spots to attack," he said. "A few years ago, surges of simple attacks used to coincide with school holidays; now these young talents have a tool that lets them find vulnerabilities in systems and test their resilience much faster."

Laptop
Photo: IZVESTIA/Eduard Kornienko

Olga Bolina noted that one of the most dangerous types of fraud is the use of deepfakes and fake news.

"Such fake news looks very convincing due to the use of photographs of famous people and supposedly reliable information that actually does not correspond to reality," she added.

The expert noted that AI can create an exact copy of any person's voice from just a few samples of their speech. The method is especially dangerous in telephone fraud, when attackers imitate the voices of the victim's loved ones to persuade them to hand over confidential information or perform certain actions.

If the scammers have taken the job seriously, they can be recognized only by behavior that is uncharacteristic of the speaker, Alexander Bleznekov added.

"A phone number can also be spoofed with a special program so that it shows up on the victim's phone as a familiar one, and a look-alike messenger account can be created," he said.

AI is also increasingly used to break into security systems and to automate attacks on bank accounts and personal data, recalled Ekaterina Kosareva, managing partner of the analytical agency VMT Consult.

"Investigating such crimes is complicated by several factors. First, there is the technical complexity: investigators need a deep understanding of machine learning algorithms," the expert noted. "Second, there is the problem of identifying the perpetrators: algorithms can operate autonomously, and criminals often hide behind front servers and anonymous networks. Third, there is the question of evidence: the court has to establish exactly how the AI operated and whether the operator acted with intent."

The Supreme Court of the Russian Federation
Photo: IZVESTIA/Eduard Kornienko

Another serious problem is cyber attacks aimed at collecting and processing large amounts of users' personal data, Olga Bolina added. Companies are making significant efforts to protect their customers' personal data, but hackers continue to find ways to circumvent these measures, often using AI algorithms.

"An example is targeted attacks on organizations and educational institutions with a low level of personal data security, as a result of which information about clients or students is stolen," she explained. — This information can later be used for fraudulent operations such as identity theft or extortion.

By 2030, analysts believe, the share of intellectual property cases will rise to 45-50%, while administrative cases may account for 25-30% as automated control systems expand.

The share of criminal cases, which today stands at 12% of the total, will grow to 15-18% as cybercrime increases, the authors of the study predict.
