
Opinion | AI-generated ‘deepfakes’ are the newest and most dangerous form of celebrity scandal

Recent AI-created explicit images (also known as 'deepfakes') of Taylor Swift prompted X, the platform formerly known as Twitter, to block searches for her name on the site. But will AI really become the dangerous monster it threatens to be, or will it be kept at bay? Eliza Densham discusses this new threat and calls on social media platforms to take action.

By Eliza Densham, Third Year, English and German


The rise of Artificial Intelligence is not a new topic. As students, we are aware of the dangers and risks that come with relying heavily on such technology, and yet we increasingly find ourselves reaching for AI tools such as ChatGPT to help summarise long and tricky readings. But one concern about this emerging technology, one that isn't necessarily reaching us all on a personal level, is the rise of 'deepfakes': images and videos that digitally manipulate the voice and likeness of a person (often a celebrity) to show them doing something they did not do.

However, it mustn’t be overlooked that, when used in a certain, 'correct' manner, AI is an incredibly helpful tool. With consent, using 'deepfakes' can save both money and time for the celebrity and company involved. Neither party needs to physically meet and record advertisements or campaigns, essentially cutting out the middle man and making the process of advertising much more efficient.

'The changing faces of deepfakes' / Eliza Densham

The negative connotations of 'deepfakes' stem from their misuse. The technology is dangerous and exploitable because of the public's constant appetite for celebrity scandal. This is a big step up from the exaggerated headlines of 2000s magazines, and deepfakes can feel somehow reliable because of the video format: if we can see it happen in a video, surely it must be real?

It is only when AI starts to recreate real life beyond human recognition (as it is beginning to show signs of doing) that it will have succeeded and become a real danger to celebrity culture and to what we know to be 'real'. A shocking 2019 figure recorded that 96 per cent of 'deepfakes' on the internet were pornographic. A recent addition to this statistic is the invasive AI Taylor Swift content that was shared on X.

The content was quickly recognised as 'deepfake' material; celebrities are very conscious of privacy and scandal, so anything that seems over the top or out of character can often be deduced to be fake. While the viral videos and photos on X may have been shared widely, and there is no denying that this type of content is unequivocally outrageous, it ultimately will not have damaged Swift's reputation. Ironically, one of the main scandals of her career involved an edited phone call with Kim Kardashian, so perhaps audio sources are more convincing, or harder to detect, than the current level of 'deepfakes' (although that was 2016, and the credibility standards for celebrity scandal have completely changed since). Ultimately, the more videos and photos of these celebrities there are online, the more convincing the 'deepfakes' will be, and the more important it will be to stop them before they can even be posted.

While Swift, TIME magazine's 2023 Person of the Year and a highly successful businesswoman, is seemingly an obvious target, there is potential for ordinary people to be impersonated too, often to commit financial fraud or other crimes with this same technology. Of course, the stakes are much higher when it comes to exposing a celebrity, but the principle remains the same: if someone has a reputation or money to lose and this technology is in the wrong hands, it can be taken too far.

'X Platform' / Rubaitul Azad / Unsplash

So what can we do? If we are aware of the dangers, we are ultimately better equipped to face them. These 'deepfakes' aren't going anywhere, and neither is AI, so the responsibility to regulate their use lies with those who facilitate, promote and allow them on their platforms. Google, Meta and other tech giants are trying to combat the issue by banning deceptive 'deepfakes' and detecting the technology more quickly. But as we know, AI technology is often a step ahead, and perhaps we're a little too late. In the case of the Taylor Swift videos, for example, her name became a 'trending' topic at the time the 'deepfakes' were being shared, meaning X's algorithm actively boosted the spread of the videos. Social media platforms need to take more responsibility, and stronger content regulation will be needed to prevent this from happening more frequently in the future.

Featured Image: Rosa Rafael / Unsplash


How do you think social media platforms should attempt to control the harmful use of 'deepfakes'? Tell us @epigrampaper
