

Images depicting the pontiff strolling in a white Balenciaga puffer jacket (which gave one the impression of the Michelin Man attending the Met Gala) went viral on social media, alongside a handful of similar images in which the pope was shown wearing stylish sunglasses, gloves, slacks and sneakers – garb not typically associated with the Catholic Church’s highest office.

Many people were convinced that the images were real. But eventually, word began to spread that they had been created using Midjourney, an AI model that generates digital images from text-based prompts. In an interview with BuzzFeed News, the man behind the images – a 31-year-old named Pablo Xavier, who declined to share his last name – admitted that he was tripping on psilocybin when he decided to give the pope an AI-generated fashion makeover.

The Balenciaga pope images began making headlines less than a week after AI-generated images depicting former US president Donald Trump getting arrested, crying in a courtroom and lifting weights in prison also went viral. While social media users are giggling at these and similar images, their proliferation raises a range of concerns and risks – for the marketing industry and for society at large – that are becoming increasingly urgent.

In 2014, computer scientist Ian Goodfellow (who, incidentally, currently works for DeepMind, an AI company owned by Google) developed the first generative adversarial network, or GAN – an AI model that essentially pits two algorithms against each other to create a lifelike digital image – ushering in a new era of synthetic media. (Goodfellow is now lovingly known as the ‘GANfather’ in the AI world.)
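
To make that adversarial idea concrete, here is a minimal, illustrative sketch of a GAN training step in PyTorch – a toy stand-in rather than Goodfellow’s original setup; the network sizes and the stand-in data are assumptions chosen so the snippet runs on its own:

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other.
# All sizes and the stand-in "real" data below are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # noise size; flattened 28x28 "image"

# The generator turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, image_dim), nn.Tanh(),
)

# The discriminator scores how "real" an image looks (raw logit).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Teach the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Teach the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Random stand-in for real training data, just so the sketch runs end to end.
training_step(torch.rand(16, image_dim) * 2 - 1)
```

Each round, the discriminator gets better at spotting fakes and the generator gets better at producing them – the arms race that eventually yields lifelike synthetic images.
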
The term ‘deepfakes,’ however, did not truly enter mainstream consciousness until Vice editor Samantha Cole published an article in 2018 about a Reddit user posting under the name ‘Deepfakes’ who was running a page devoted to AI-generated celebrity porn videos. The following year, a report titled ‘The State of Deepfakes,’ published by DeepTrace Technologies – an organization devoted to mitigating the risks of deepfakes – found that “non-consensual deepfake pornography” made up some 96% of all deepfake videos found online. The rise of deepfakes, it seemed, was being driven largely by demand for porn. (Earlier this month, a deepfake app ran ads across Meta-owned platforms Instagram, Facebook and Facebook Messenger showing what appeared to be Emma Watson and Scarlett Johansson engaged in sexually provocative acts.)

Much has changed since that original DeepTrace report was released. For one thing, generative AI has been catapulted into the zeitgeist, thanks largely to the phenomenal rise of ChatGPT, which, following its November 2022 launch by AI research lab OpenAI, rapidly became the most popular consumer AI product in history. Other models, such as OpenAI’s DALL-E 2 and Stable Diffusion, have also become immensely popular. And many of these models are free to use.
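
That accessibility is easy to demonstrate. As a rough sketch – assuming the freely released Stable Diffusion weights and Hugging Face’s diffusers library, with an illustrative model ID and prompt – a handful of lines is all it takes to turn a text prompt into a photorealistic image on a consumer GPU:

```python
# Illustrative text-to-image sketch using the open Stable Diffusion weights
# via the diffusers library; the model ID and prompt are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # an ordinary consumer GPU is enough

image = pipe("a pope in a white designer puffer jacket, photo, detailed").images[0]
image.save("generated.png")
```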

“We’re now in a moment where accessibility is meeting functionality and realism,” says Henry Ajder, a generative AI and synthetic media expert who contributed to the 2019 DeepTrace report.

Humans have always been able to manipulate and deceive by telling lies, but we’re now entering an era in which more or less anyone with an internet connection can create AI-generated images that look – at least to the casual observer – completely real. That’s unsettling for several obvious reasons. Not so long ago, photographs, video recordings and audio clips were generally regarded as irrefutable evidence that an event took place. “What deepfakes and generative AI and synthetic content have done is they’ve introduced plausible deniability into a space that previously felt pretty sacrosanct in terms of reliability,” Ajder says. “And that’s leading to this real tectonic shift in the landscape.”
