The Growing Threat from Deepfakes

In 2017, Artificial Intelligence (AI) reached an inflection point in its astounding journey as the world saw the emergence of ‘deepfakes’ for the first time. A type of synthetic – or fake – media created using a deep learning method called generative adversarial networks (GANs), deepfakes have often been labelled the most concerning development in AI. The technology combines full-body puppeteering, lip-syncing, and voice cloning to create media realistic enough to deceive, and it is this capacity for flawless deception that makes it so infamous and dangerous.

Fabricated text, images, and videos can easily be turned into weapons of disinformation and misinformation. By replicating voice patterns, exact facial expressions, and even the most intricate behavioural nuances of individuals, such manipulated content makes words or actions appear to have come from people who never said or did them. Viewers’ inability to tell the real from the fake is what makes the threat so potent for individuals and organisations alike.

Deepfakes can take the concept of realistic-looking phishing emails to an entirely new level of sophistication, embedding images and videos that convince employees they are interacting with senior company officials or even the CEO. Deepfake video clips or images can also be used to blackmail employees for passwords, money, or other sensitive information. Cyber criminals with mala fide intentions can severely damage a company’s brand image by spreading false marketing content, while also posing a tangible threat of fraudulent impersonation attacks on company executives or even tampering with video evidence in legal cases. Recently, a Hong Kong-based company was duped of $25.6m in an orchestrated scam: a multi-person video conference in which every participant, apart from the victim, was a deepfake.

Organisations run the risk of losing public trust and credibility on the back of fake news and disinformation spread in their name. Given the technology’s ability to imitate, biometric security – such as facial or voice recognition systems – is also increasingly vulnerable to compromise.

These reputational and legal impacts are not confined to corporate entities – individuals are just as prone to being affected by deepfakes. AI-generated explicit images of global music icon Taylor Swift sparked a furore among netizens over the potential threats the technology poses. A recent deepfake video of Indian cricketing legend Sachin Tendulkar promoting a games app is another prominent case in point, in which a company used a falsified clip to promote its product.

But is everything about deepfakes essentially bad? The short answer is ‘no’. There are numerous examples of brands putting deepfakes to good and positive use. Recently, Mondelez International ran a campaign that allowed local shop owners to create a free personalised ad for their businesses simply by sharing some basic information: a deepfake video of actor Shahrukh Khan was then generated in which he talked about the store and the business. Such deepfake ads have significant self-distribution potential, given the audience’s direct engagement with the campaigns.

However, the global regulatory framework around deepfakes is still nascent and unclear. Advancements in the technology have far outstripped the ability of current legislation to keep up. For instance, the new Online Safety Bill in England & Wales proposes to make sharing deepfake pornography illegal, but leaves other non-consensual AI-generated content out of its ambit. The use of copyrighted material to create deepfake content raises a complex quagmire around intellectual property rights. A strong legal framework is essential to countering the dangers of this technology.

A targeted approach is required to mitigate the risks from deepfakes. Among several possibilities, blockchain technology can help certify genuine content, thereby minimising the risk posed by falsified and spurious material. In addition, organisations must embark on comprehensive awareness and training programmes for their employees to minimise challenges related to trust, fraud, privacy, and data integrity.

If put to good and judicious use, deepfake technology can revolutionise various aspects of business, from marketing to customer service and training. However, it comes with a decidedly darker side that has the ability to disrupt global order in many ways. As deepfakes continue to evolve, it is imperative for organisations to remain vigilant and to invest in employee training and cybersecurity. Governments, concurrently, must focus on developing robust legal frameworks to mitigate the inherent risks and protect both organisations and individuals. A balanced approach is the surest way to navigate the deep – and possibly turbulent – waters of deepfakes.
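The certification idea above can be illustrated without a full blockchain deployment: the core primitive is a cryptographic hash of a media file, recorded in a tamper-evident ledger at publication time, which anyone can later recompute and compare. Below is a minimal sketch in Python, where a plain dictionary stands in for the ledger and all names (`register`, `verify`, the asset IDs) are hypothetical:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of the raw media bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry standing in for an append-only ledger/blockchain.
registry: dict[str, str] = {}

def register(asset_id: str, data: bytes) -> None:
    """Publisher records the authentic file's fingerprint at release time."""
    registry[asset_id] = fingerprint(data)

def verify(asset_id: str, data: bytes) -> bool:
    """Anyone can recompute the hash and compare it to the registered one."""
    return registry.get(asset_id) == fingerprint(data)

original = b"official press video bytes"
tampered = b"deepfaked video bytes"

register("press-video-001", original)
print(verify("press-video-001", original))   # True: content matches the record
print(verify("press-video-001", tampered))   # False: any alteration is detected
```

In a real system the registry would be an immutable ledger rather than an in-memory dictionary, so that the recorded fingerprints themselves cannot be silently rewritten; the comparison step, however, is exactly this simple.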

Author: Prithwijeet Mukherjee

Sr Consultant, Strategy Consulting

Image courtesy: Markus Winkler on Pexels
