Concerns Grow Over Manipulated Media in 2024 Election


30 June 2024

As the 2024 election season approaches, concern is growing about the authenticity of media, and in particular about the menace of deepfakes created with artificial intelligence. The term deepfake denotes a sophisticated fabrication, most often a counterfeit video of a real person made to deceive, and these manipulations are frequently paired with matching synthetic audio to increase believability. The stakes are particularly high this year, as roughly half of the global population will be casting votes in consequential elections, a list that includes India, the United States, several European Union countries, and potentially the United Kingdom. Unease is percolating among the electorate about how this technology might disrupt the democratic process.

The roots of deepfake technology date back to 2015, when Google's team published details of what it termed "inceptionism," later popularized as "DeepDream." Engineers demonstrated that neural networks originally designed to categorize images could also be used to generate them. Although these early creations were more abstract hallucination than accurate replication, they set a precedent for the technology's further development.
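
For readers curious about the mechanics, the core idea behind DeepDream-style generation is gradient ascent on a trained classifier: rather than adjusting the network's weights, the pixels of an input image are adjusted so that a chosen internal layer responds more strongly. Below is a minimal, illustrative sketch in PyTorch; the model (torchvision's GoogLeNet), the layer (inception4c), the step size, and the iteration count are assumptions made for demonstration, not details of Google's original implementation.

import torch
import torchvision.models as models

# Load a pretrained Inception-style classifier; DeepDream originally used a GoogLeNet variant.
model = models.googlenet(weights="DEFAULT").eval()

# Capture the activations of one intermediate layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(feat=output)
)

# Start from random noise and nudge the pixels to excite the chosen layer.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    model(image)
    loss = -activations["feat"].norm()  # minimizing the negative norm maximizes the activations
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range

In practice, DeepDream starts from an existing photograph and adds multi-scale processing, which is why its outputs look like familiar scenes overrun with swirls and animal faces rather than pure noise.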

Fast forward to the recent past, and tweaking pre-existing photos and videos with AI has become notably less challenging. Back in 2017, creating face-swapped celebrity images required considerable computational resources and expertise. The technology has since become alarmingly accessible, with reports from think tanks such as Labour Together stating that certain deepfake tools can churn out convincing deepfake pornography in under half an hour, at no cost.

Creative applications followed suit. By mid-2018, face-swapping tools had evolved to the point where they were being employed for more than dubious purposes. A notable example featured actor and director Jordan Peele voicing an artificial rendition of Barack Obama to highlight the technology's potential impact.

The AI landscape took another leap forward in 2021 with the release of OpenAI's DALL-E image generator, which could conjure up images from textual descriptions and broadened the horizon of generative AI. Although early results fell short of photorealism, the technology's capacity for visual creation was clear, producing a wealth of unique, entirely original illustrations.
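
Today this kind of text-to-image generation is exposed through simple hosted APIs. As a rough illustration, the snippet below requests an image from OpenAI's image endpoint via its official Python SDK; the model name, prompt, and size are placeholder choices, and a valid OPENAI_API_KEY environment variable is assumed.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request a single image from a text prompt; model and size are illustrative choices.
response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # temporary URL of the generated image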

These advances in AI-generated imagery are not without their dark side, especially amid geopolitical strife, as seen in the deepfake of Ukrainian President Volodymyr Zelensky imploring his soldiers to lay down their arms, an evident progression in how believably content can be manipulated over a short period, as reported by NewsGuard. Such instances showcase the rapid maturation of the technology and underline the necessity for vigilance.

AI image generators have been instrumental not only in creating counterfeit imagery but also in perpetrating political subterfuge. The surreally realistic image of Pope Francis in designer attire vividly demonstrated the generative prowess of modern AI and its viral potential. On another front, politically charged fabrications, such as a spurious photograph of former President Donald Trump with Black supporters, showed how the technology can be misused to distort public perception and influence voter demographics.

With these developments, the very integrity of the visual record is being threatened by the infusion of fake imagery. Experts such as Professor Hany Farid of the University of California, Berkeley, a specialist in deepfake detection, have expressed trepidation over the prospect of political exploitation of deepfakes in the current electoral cycle. The manipulation has also seeped into audio: fraudulent calls have surfaced that employ convincing impersonations to spread disinformation, one example being a mimicked exhortation attributed to President Biden aimed at undermining voter turnout in a primary election.

As the 2024 elections draw nearer, the potential for AI-generated images and videos to be weaponized in politics is unnervingly evident. The veracity of visual and audio media is now under examination as these advanced tools are poised to sway public opinion and potentially alter election outcomes. It falls to all of us, as an audience following AI news, to cultivate both awareness and skepticism, to stay informed, and to endorse strategies for the validation and authentication of media, such as the basic checksum comparison sketched below. Consumers must stay vigilant in distinguishing factual reports from deceptive fabrications, making informed discernment a cornerstone of democratic integrity in our electoral processes.
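
One of the simplest authentication strategies, when a publisher releases a cryptographic checksum alongside a video or audio file, is to confirm that the copy you received has not been altered. The sketch below uses Python's standard hashlib module; the file name and the published digest are hypothetical placeholders.

import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

published_digest = "..."  # hypothetical checksum released by the original publisher
if sha256_of_file("campaign_clip.mp4") == published_digest:  # hypothetical file name
    print("File matches the published checksum.")
else:
    print("File differs from the published checksum; treat it with caution.")

A matching digest only shows that a copy is byte-for-byte identical to what the publisher released; it says nothing about whether the original content was genuine in the first place.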