
Head spinning: neurobots have begun teaming up in groups to steal money

Bots have appeared on Telegram that mimic live users and join groups for fraudulent purposes. Modern AI makes it possible to simulate a lively discussion of a post's topic in the comments, with fake accounts as the main participants in the conversation. Each such user's profile links to its own, usually empty, Telegram channel containing phishing links. For more on these new deception techniques, see the Izvestia article.
How neuro-commentator bots work
Bots have begun deceiving people by joining Telegram groups. They are no longer limited to primitive spam: they are trained to steal data, analyze user behavior, and run multi-stage phishing campaigns. This reflects a general trend toward increasingly sophisticated technology-enabled crime powered by generative neural networks, Igor Bederov, head of the information and analytical research department at T.Hunter, told Izvestia.
— The group work of bots dramatically increases the credibility of their messages, because the victim sees a "dialogue" that looks natural. The situation with neurobots in Telegram is an alarming signal, demonstrating how quickly attackers adapt to modern technologies. In general, using AI to create fake profiles and simulate live dialogues is a qualitative leap in social engineering methods," he noted.
According to the expert, the openness of Telegram's bot-building toolkit was an advantage for the messenger when it entered the market, but it has now become a tool for scammers. In the coming years, such attacks may become widespread thanks to the combination of generative AI, audio and video fakes, and automation tools for creating "virtual personalities," he added.
As stated by Dmitry Kiryushkin, head of BI.ZONE Brand Protection, some Telegram accounts use AI to generate meaningful comments on the topic of a particular publication. They do not look like spam and do not provoke a negative reaction. The attackers add a phishing link to the account description; users intrigued by the comment click on it and risk losing personal data and money, the expert clarified. In February 2025 alone, the company's specialists recorded 990 resources aimed at hijacking user profiles.
The number of phishing attacks using neuro-commentator bots is growing, added Lyudmila Bogatyreva, head of the Polylog agency's IT department. For example, in the Mammoth scheme, they place ads with low prices and redirect victims to fake payment pages. In addition, scammers are actively creating AI bots masquerading as marketplaces. They promise prizes and discounts, but in fact they steal users' personal data: usernames, passwords, bank card details, etc.
— Some bots work as a group: one bot posts a seemingly innocent message like "recommend a contractor for traffic," and a couple of minutes later another bot replies with a supposed recommendation. Previously they were easy to distinguish from real people, but lately they have been adapting better to people's communication style, analyzing what is written in a post, and creating more believable avatars," explained Yaroslav Meshalkin, managing partner of the Heads'made digital communications agency.
Bots have also learned to exploit information about the voice of the person they impersonate. As a result, attackers can now communicate at scale and in automated mode on behalf of supposedly real people not only in text but also in voice messages, said Alexey Gorelkin, CEO of Phishman and an information security expert.
What users risk when interacting with bots
To disguise neuro-commentator bots, their creators try to use photos of real people to increase trust in the messages they post, something that is harder to achieve with neural-network-generated images. At the same time, there are no solutions that would prevent photos of real people from being used to create bot accounts that spread advertising, spam comments, and false information, said Evgeny Egorov, a leading analyst at F6's Digital Risk Protection department.
— Over the past few years, bots have evolved from simple scripts to automation running in full-fledged browsers and mobile platform emulators with simulated user activity: clicks, cursor movement, and interaction with various forms and pages, the expert added.
Bots analyze messages and can successfully mimic a real user. This problem could be partially solved by AI-generated text detectors, but at the moment such solutions do not have very high accuracy, says Irina Zinovkina, head of analytical research at Positive Technologies.
— As a result of these new technologies, the prevalence and effectiveness of complex fraud schemes is growing at an unprecedented pace. According to the Central Bank of the Russian Federation, in 2024 the volume of transactions made without the voluntary consent of clients of financial organizations grew by 74% compared to 2023 and exceeded 27 billion rubles," recalled Pyotr Klyucharev, Doctor of Technical Sciences and Professor at Bauman Moscow State Technical University.
The introduction of deep learning systems makes it possible to reduce the number of successful phishing attacks by 93% and optimize the use of employees' working hours, said Anton Graborov, Head of Digital Business at Alfa Capital Management Company.
Bots are popular because they are cheap for criminals: with their help, attackers administer networks of thousands of bots that successfully stand in for the same number of real people. To protect yourself, follow the basic rules of cyber hygiene: do not rush, double-check information, and never share confidential data with a bot, including access codes and passwords for any portals, social networks, or banking applications, said Alexander Vurasko, Development Director of the Solar AURA external digital threat monitoring center at Solar Group.
Translated by Yandex Translate