Data from AI chatbots can fall into the hands of cybercriminals, foreign intelligence services or advertisers and be used to the detriment of users, warns Mateusz Chrobok, a cybersecurity expert, in an interview with PAP. He recommends practicing digital hygiene and treating generated content with skepticism.
Mateusz Chrobok, a cybersecurity and AI expert who runs an educational platform and a YouTube channel followed by 157,000 people, noted that technology companies developing artificial intelligence chatbots often obtain data about users from the internet, including social media, as well as from the information users enter into the chatbots themselves.
“On this basis, we are usually profiled by a given company so that the chatbot's responses are more tailored to us,” he explained.
The expert pointed out that from prompts, i.e. the queries a given user types into a chatbot, companies can infer information about the language they use, their appearance (in the case of attached photos), their views, family situation, problems, and so on. Some companies also ask for access to information from the user's device, e.g. contacts or location. “The Chinese DeepSeek also collected the way we type on the keyboard, from which a lot of data can be extracted, e.g. our age, or whether we are tired or sleepy that day,” Chrobok noted.
“Sometimes we are tempted to give up our data in exchange for free access to a chatbot, a better model, etc. In effect, we are trading our privacy for certain benefits,” he noted. He pointed out that using chatbots carries the risk of leaking sensitive data. Chrobok recalled a situation from two years ago, when OpenAI mixed up ChatGPT user indexes. As a result, after logging into their account, a user could see another person's conversation history. “The problem affected 1.2 percent of users, but in practice that is millions of people,” he emphasized.
Another security threat associated with chatbots is the ATO attack, or Account Takeover (PAP), the expert pointed out. After taking over an account, a cybercriminal can gain access to data from the user's conversation history, e.g. their name, surname, phone number or credit card number, if the user accidentally entered it or shared it in a document, Chrobok warned. “Research shows that models store this type of information, and if they are trained on it, it cannot be easily erased. There are also ways to extract this data from a chatbot,” the expert noted.
He added that some companies allow users to disable the option of saving history or training a model on their data. “For very sensitive information, it is safest to install a local model on your device or server. Then there is the greatest chance that our data will not leak,” he emphasized.
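One way to keep sensitive prompts off third-party servers, as the expert suggests, is to query a model hosted entirely on your own machine. The sketch below is a minimal illustration, assuming a locally running Ollama server with a model such as llama3 already pulled; the endpoint, model name and prompt are example assumptions, not a recommendation of a specific tool.

```python
import json
import urllib.request

# Minimal sketch: send a prompt to a locally hosted model (here: an Ollama
# server assumed to be running on localhost), so the text never leaves the machine.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"  # assumed local Ollama API

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    # Sensitive content stays on the local machine instead of a vendor's cloud.
    print(ask_local_model("Summarize this confidential note: ..."))
```

Any cloud-hosted chatbot, by contrast, necessarily receives the full text of the prompt, which is why the expert reserves this local setup for the most sensitive information.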
Chrobok pointed out that the collected data can also be used to create profiled ads. “Let's imagine that we're having a bad day or struggling with depression, obesity or another problem and advertisers use this information to influence our purchasing decisions, which will not necessarily be beneficial to us. Here we enter the gray world of manipulation. What we think would be good for us is not necessarily good for the creators of AI models and companies that optimize profits,” he assessed.
According to the expert, the level of protection of user data may also depend on the chatbot's country of origin. “DeepSeek, a Chinese company, creates great models such as R1, but it is blocked in many places because, under Chinese law, its creators must hand over user data to the authorities, and the authorities can, for example, pass it on to the intelligence services,” he pointed out. He gave the example of a hypothetical situation in which an American official writes to a Chinese chatbot and tells it about his family or business problems. “The information collected could be used by China to recruit this person as a spy or to exert influence in some other way. I think that by revealing our weaknesses, we become more susceptible to manipulation in such a situation,” he emphasized.
Chrobok noted that research shows that models reflect the views of their creators. “For example, the Chinese DeepSeek has a negative sentiment (tone of generated content – PAP) when talking about American soldiers, while OpenAI's models are the complete opposite,” he pointed out, adding that “every model is a certain information bubble.” “They are not neutral, even if some creators try to make them so. It is worth remembering that,” he emphasized.
When asked about the safety of using chatbots as emotional support or as a therapist – which some users do – the expert recalled a situation that took place in 2023 in Belgium. A man who was deeply troubled by the problems of global warming had been discussing the topic with a chatbot. At one point, the AI suggested to him that if he wanted to reduce the amount of CO2 he was generating, it would be best if he were gone, and the man subsequently took his own life. This was the first recorded suicide following a conversation with artificial intelligence, the expert noted.
“This is an extreme case, but it shows what can threaten us when we ask AI for advice and share our mental state and views with it. The answers it provides are based on statistics, which means they will not always be accurate. As the technology progresses, they are accurate more and more often, which means we usually trust them, and this can lull our vigilance,” he noted, adding that the more specialized the issue, the more often – at least for now – artificial intelligence models get it wrong.
“This can be significantly improved with technical methods such as Deep Research (advanced information analysis method – PAP), but not everyone knows how to use them. That is why I encourage skepticism towards content generated by chatbots,” the expert emphasized.
When asked about the dangers of chatbots in the workplace, Chrobok gave an example from 2023, when a Samsung employee uploaded a presentation containing confidential data to ChatGPT; the data was subsequently leaked and could have been obtained by competitors. “Many companies and managers are afraid of this, and some go as far as completely banning employees from using artificial intelligence. In my opinion, this is not the right path,” the educator said. According to him, companies that forgo this technology will be less competitive. “You just have to know how to use it safely,” he noted.
According to Chrobok, the process of safely implementing AI in companies should begin with developing “AI hygiene”, i.e. educating managers first and then employees. “Secondly, technical measures are important,” he pointed out, explaining that there is a whole field of solutions called Data Leak Prevention (protection against data leakage – PAP) in which employers can invest. This includes, for example, models that assess which data may leave the company and which may not; a simple illustration of the idea follows below. The expert added that every organization should also have rules for using AI that specify, for example, what data can be uploaded to a chatbot, which data are sensitive, and which content should be labeled as generated with artificial intelligence, because, for example, the law requires it.
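As a rough illustration of the Data Leak Prevention idea mentioned above – not any specific product or the expert's own method – the sketch below checks outgoing prompt text for patterns that look like sensitive data (an email address, a payment card number, a “confidential” marker) before it is allowed to be sent to an external chatbot. The patterns and the blocking policy are illustrative assumptions; real DLP systems use far richer detectors and classifiers.

```python
import re

# Illustrative patterns for data that should not leave the company.
# Real DLP tools rely on much more sophisticated detection than a few regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "confidential marker": re.compile(r"\b(confidential|internal only|secret)\b", re.IGNORECASE),
}

def check_outgoing_prompt(prompt: str) -> list[str]:
    """Return the reasons why the prompt should be blocked (empty list = allowed)."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Please summarize this CONFIDENTIAL deck; contact jan.kowalski@example.com."
    findings = check_outgoing_prompt(prompt)
    if findings:
        print("Blocked before sending to the external chatbot:", ", ".join(findings))
    else:
        print("Prompt allowed.")
```

In practice such a check would sit between employees' tools and any external AI service, which is the role the expert assigns to DLP-style technical measures alongside internal usage rules.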
The expert, when asked whether, in his opinion, despite the risks, artificial intelligence should be implemented in every company, stated that forcing all entrepreneurs to use AI would be “inhumane”. “But it is better to know this technology, because it can help you find your way on the job market, or improve the work itself, so that there is less of it and it is more effective,” he assessed.
“We are at a moment that some call evolution, others revolution, or use other big words. This is not yet the moment where AI surpasses us, although in some respects it certainly does,” he pointed out. He emphasized that human intelligence is limited. “AI models also have their limitations and threats, but they are increasingly accurate, and in the future they will become better than humans. Our skills and abilities will simply be weaker by comparison,” he assessed.
He noted that he is concerned about the approach some companies take towards employees, in connection with a leaked email from the head of a certain company, who wrote: “before you hire a human, test a few AI models”. “Perhaps this is the future that awaits us, if no other solutions appear along the way. Despite everything, I am a fan of implementing artificial intelligence to improve our work. Talking about a ‘temporary fad’ or ‘AI hype’ is, in my opinion, sleeping through an important moment,” emphasized Chrobok.
Monika Blandyna Lewkowicz (PAP)
mbl/ mick/ mhr/