The unmistakable image of the CEO of a large retail chain announcing the promotion left no room for doubt: by order of Procon, his stores were required to sell state-of-the-art cell phones for just R$1.79. Too good to be true? Even so, thousands of people were drawn to the unmissable offer, causing problems for the company.
The example above is not fictitious. It happened at the end of 2023, just before Christmas, when a video of Luciano Hang, the owner of Havan, began circulating on social media. The video, however, was fake.
Events like this show the risks that artificial intelligence (AI)-based scams can pose even when they are not very sophisticated – in this case, the video was created from a montage of old images and voice recordings of Hang. Increasingly refined machine learning technologies that can simulate audio, images and video with impressive realism – the so-called deepfakes – threaten to make this type of fraud ever more elaborate and capable of causing greater damage to companies' reputations.
The consequences can be even more serious when deepfakes deceive not only the general public but also key stakeholders such as customers and suppliers. An AI-forged ad campaign offering non-existent discounts can lead frustrated consumers and misled business partners to view the company with distrust, even though the company itself is the real victim.
In a highly competitive environment, competitors or other malicious actors may mount disinformation campaigns that use deepfakes to spread damaging rumors about a company, raising questions about its integrity. Visual content typically attracts far more sharing and engagement than text. Combined with the way social media algorithms work – programmed to highlight content with high interaction – this can make manipulated videos go viral quickly, especially when they are sensationalist or controversial, amplifying the damage to the company's image.
The increasing accessibility of generative AI tools, which are now cheap and widely available, makes it easier for this technology to be used even by those who do not have advanced technical knowledge, increasing these risks.
As a result, investing in cybersecurity has become essential. However, defensive measures may not be enough to protect a company's credibility, as they do not guarantee its ability to neutralize the emotional impact and rapid dissemination of fake content. Given this scenario, it is necessary to develop proactive crisis management and transparent communication strategies, as well as promote awareness campaigns that educate consumers about the dangers of deepfakes, so that people know how to identify them and avoid falling into their traps.
In an environment where misinformation tends to spread quickly, companies must adopt preventive measures, including creating teams dedicated to monitoring their digital image and actively positioning themselves to build relationships of trust with their stakeholders.
Eduardo Felipe Matias is the author of the books “Humanity and its borders” and “Humanity against the ropes” and the coordinator of the book “Legal Framework for Startups”. He holds a PhD in International Law from USP, was a visiting scholar at Columbia University in New York and at Berkeley and Stanford University in California, and is a visiting professor at the Dom Cabral Foundation and a partner in the business area of Elias, Matias Lawyers.
Signed articles reflect the opinion of the authors.