4 out of 10 Poles have encountered deepfakes. “People can't keep up”

Deepfakes, that is, photos, sounds, or videos generated or manipulated by artificial intelligence, are one of the most controversial technologies of our time. Because advanced AI algorithms are used to create them, it is increasingly difficult for users to distinguish real content from fake. The report “Disinformation through the eyes of Poles 2024” shows that 4 out of 10 people surveyed have had contact with such content. This matters because deepfakes can be used for manipulation, blackmail, reputation destruction, and financial fraud.

4 out of 10 Poles have encountered deepfakes.

photo: SB Arts Media // Shutterstock

The risk associated with deepfakes is growing as access to artificial intelligence tools expands. Today, almost anyone with an internet connection can create films or images depicting situations that never actually happened. The Digital Poland Foundation report, prepared as part of the “Together Against Disinformation” initiative, indicates that over three-quarters of respondents (77%) predict the phenomenon will grow in scale over the next 10 years.

– The risk of deepfakes being generated, and of people falling for them, is growing every day. Interestingly, the deepfakes themselves are also getting better every day, because now that the technology exists, artificial intelligence is improving it on its own. This means the human eye and ear can no longer keep up – Tomasz Turba, a security specialist from the Sekurak.pl website, told the Newseria agency.

– What's worse, the whole world, including the criminal world, has shifted to micropayments. Today you don't need a supercomputer or a service costing a thousand dollars a day to generate such material; a subscription of 5 dollars a month is enough, and criminals use it to generate deepfakes.

Social media is flooded with AI-generated videos and recordings of politicians, public figures, and celebrities expressing controversial views or advertising fake products and fraudulent financial services. Deepfakes are also used to create compromising content that can cause reputational or financial damage. By the time platforms react and label them appropriately, however, deepfakes have gone viral, reaching millions of viewers.

– For now, the fight against deepfakes is very uneven, because in cybersecurity it has always been the case that criminals outpace defense mechanisms. Here criminals no longer even have to be first; they just have to be more precise in what they create – says Tomasz Turba. – Our own vigilance should play the key role, because when we watch a video or read a piece of news, we often can no longer tell whether it was produced by a human, such as a photo taken by a photographer, or by artificial intelligence.

Interestingly, artificial intelligence-based tools can also support us in this.

– On the one hand, the criminal branch of AI is developing alongside deepfakes, but on the other, we have more and more tools to detect them. Where human perception no longer copes, an AI detector can spot that an image is too perfect or that something is missing, and then it will warn us: hey, don't go there, that's a deepfake – says the security specialist. – We still have to keep training ourselves on online threats. We can use these tools, but their computing power is finite, while the criminal can be ever more precise in creating the material, improving its lighting, and so on.

As he emphasizes, such awareness will be more effective than bans or content control, which are very difficult to enforce in the world of social media.

– We would have to introduce full control over what people watch on the internet; then we could police it. But that is the North Korean model of the internet, and if you remember the fight over ACTA and the fear of internet censorship, that is definitely not the right way either – says Tomasz Turba.

In his opinion, however, there is a way to combat deepfakes effectively. The report “Disinformation through the eyes of Poles 2024” shows that 86 percent of respondents agree that all AI-generated information should be clearly labeled.

– It would be enough to program every social networking site so that every piece of material a user sees while scrolling is immediately run through an engine that detects whether it is a deepfake. If the engine determines within half a second that the material is fake, it should be labeled accordingly. So the solution TikTok proposes for its own materials could actually work. The only question is whether all social networks would go in this direction, because the costs are huge – adds the Sekurak expert.
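The pipeline the expert describes can be sketched in a few lines of Python. This is only an illustration, not any platform's actual system: the detector is a placeholder stub, and all names, thresholds, and the latency budget are invented for the example (the half-second budget follows the quote above). A real deployment would replace the stub with a trained deepfake classifier.

```python
import time
from dataclasses import dataclass, field

DETECTION_THRESHOLD = 0.9   # hypothetical confidence above which content gets labeled
TIME_BUDGET_SECONDS = 0.5   # the "half a second" budget from the quote

@dataclass
class MediaItem:
    media_id: str
    labels: list = field(default_factory=list)

def deepfake_score(item: MediaItem) -> float:
    """Placeholder detector: a real system would run a classifier here.

    Hypothetical rule for this sketch only: ids ending in "-ai" are synthetic.
    """
    return 0.99 if item.media_id.endswith("-ai") else 0.05

def label_if_deepfake(item: MediaItem) -> MediaItem:
    """Score one item before display and attach a label if it looks fake."""
    start = time.monotonic()
    score = deepfake_score(item)
    elapsed = time.monotonic() - start
    # Only label when the engine answered within the latency budget,
    # so slow detection never delays the user's feed.
    if elapsed <= TIME_BUDGET_SECONDS and score >= DETECTION_THRESHOLD:
        item.labels.append("ai-generated")
    return item
```

Running every scrolled item through `label_if_deepfake` before rendering would implement the labeling scheme described above; the open question raised in the article, the compute cost of scoring every item on every feed, is exactly what such a per-item call would incur at scale.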
