Deepfakes: a threat to democracy?
We recently wrote a blog explaining what “deepfake” technology is, how deepfakes can pose a threat to individuals’ privacy and autonomy, and what legal regulation is forthcoming.
The deepfake debate is set against a backdrop of celebrities and politicians who have already fallen victim to the technology. Many voters will go to the polls in May to elect local councillors, and a general election is expected at some point this year, although no date has yet been confirmed. James Cleverly, the Home Secretary, is reported to have commented that deepfakes provide the “perfect storm” for those looking to disrupt the upcoming general election.
There has been widespread concern about how deepfakes may interfere with the election. Deepfake images of the Prime Minister, Rishi Sunak, and the Leader of the Labour Party, Sir Keir Starmer, have already circulated.
Speaking in the House of Commons in December of last year, the Shadow Foreign Secretary, David Lammy MP, expressed concern about what he described as “malign activity” such as the use of AI and deepfakes “to seed false narratives” and to “spread lies”.
The global insight and strategy consultancy Think has conducted online research into how deepfakes might affect an election. Think ran two 75-minute focus groups with six participants in each, and two online surveys with a minimum of 2,000 participants in each. It reported that four out of ten participants ‘reacted’ to an audio deepfake of Sir Keir Starmer.
As previously explored in our blogs, legal regulation is at present piecemeal at best, made up of the Government’s flagship legislation, the Online Safety Act, and current data protection legislation. Concern over how deepfakes may (or may not) influence an election underlines the importance of journalistic integrity, since a professional journalist will check sources and independently verify content before publication, and it sharpens the debate over which news sources the public can trust.
Videos available online, often via social media platforms, are generally unverified. You may have noticed that X (formerly Twitter) has added its “Community Notes” feature, which lets users add context to posts in an attempt to combat potentially misleading content published on the platform.
Could the solution lie with the tech companies themselves? Some of the biggest tech companies in the world, including Meta, IBM, Hitachi and Sony, have joined forces to create the AI Alliance, which says it will build safe and secure AI.
Meta has announced that it is forming a team to tackle misleading and deceptive AI content ahead of the upcoming EU elections, although it is not known whether this will also extend to the upcoming UK and US elections. TikTok announced in February that it will launch “Election Centres” within the app for each of the 27 EU member states, giving users a place to access authoritative information.