How fake AI images could stoke tensions in the Indo-Pacific
Seeing is no longer believing.
Surprisingly realistic – yet fake – images created by Artificial Intelligence (AI) are here.
To date, most have seemed more like curiosities than genuine deception attempts.
Last month, it was revealed that New Zealand’s National Party had used the AI image generation app Midjourney to produce promotional images. The results included imaginary healthcare workers and fearful-looking citizens worried about crime.
In this case, the use of AI was relatively benign – the AI creations effectively replaced the stock photos that would have been used in the past.
Until a media outlet raised suspicions, few people – if any – had even noticed that the realistic-looking images were actually fake.
While the New Zealand example showed how AI images can be used in election campaigns, we can also expect them to have an outsized impact on international relations.
In March, Bellingcat founder Eliot Higgins tweeted startling ‘deepfake’ images of Donald Trump being ‘arrested’ by New York police.
Making pictures of Trump getting arrested while waiting for Trump's arrest. pic.twitter.com/4D2QQfUpLZ
— Eliot Higgins (@EliotHiggins) March 20, 2023
Higgins, who also used Midjourney, clearly labelled the images as AI creations.
But that did not stop the images from going viral and serving as a showcase of the sophistication of the technology.
Higgins also tweeted AI-generated images of an imaginary ‘peace summit’ between Vladimir Putin and Joe Biden – brokered by France’s Emmanuel Macron.
Playing around with #midjourneyv5 I decided to see if it could create realistic news images, so here's the entirely fictional Ukrainian peace talks between France, the US, Russia and Ukraine: pic.twitter.com/saCKtgBL91
— Eliot Higgins (@EliotHiggins) March 16, 2023
Given the current state of the war in Ukraine, these would be unlikely to fool anyone.
However, the realistic-looking deepfakes Higgins created of an imaginary nuclear explosion in Ukraine showed the potential AI could have amidst the fog of war.
Just after Russia invaded Ukraine last year, a fake video of Volodymyr Zelensky calling on Ukrainians to lay down their arms was published by hackers on the Ukraine 24 news website.
In the Indo-Pacific, it is surely only a matter of time before credible-looking AI-generated images are used to stoke geopolitical tensions even further.
New Zealand Prime Minister Chris Hipkins has echoed his predecessor Jacinda Ardern in decrying what he calls the ‘militarisation of the Pacific’.
But in just the first half of 2023, the US has struck new defence arrangements with the Philippines and Papua New Guinea – a response to China signing its own security pact with Solomon Islands last year.
Tensions are building – and AI could take them to the next level.
Arguably, the sheer vastness of the Pacific creates some advantages for disinformation attempts.
Fake images of, say, military vessels near a remote island atoll could be hard to immediately disprove.
Poor communications infrastructure in the Pacific’s more remote corners does not help either.
Moreover, the rise of AI-created content may mean the public are left wondering what to believe.
A Brookings Institution report published in January identified ‘sowing confusion’ as a major aim of those behind disinformation operations.
The report gives an example of a robotic-like presidential address from Gabon’s president: while some believed it was evidence of a deepfake, others thought the president was simply ill. Regardless of the actual truth, chaos and a military coup attempt ensued.
AI has the power to create a hall of mirrors, where even genuine content is viewed with scepticism and distrust.
A real-life example of AI’s potential to wreak havoc came last month, when a fake image of an explosion at the Pentagon in Washington circulated on Twitter and caused financial markets to tumble briefly.
There are many potential responses to the new AI challenge for international relations.
One option is to regulate.
China recently banned deepfake images of real people unless consent has been given – and the country’s regulator now requires AI-generated, ‘synthetic’ content to be clearly labelled as such.
In New Zealand, the Department of Internal Affairs last week proposed tough new media regulations that would also cover content on social media. The suggestions include ‘warning labels and content advisories’ – and fines for non-compliance.
The downside to regulation is the potential for overreach.
The terms ‘disinformation’ and ‘misinformation’ are now frequently weaponised and used simply to denigrate the arguments of political opponents.
More optimistically, the public are less naïve than they are often given credit for.
After all, most people are by now well-accustomed to the idea that photos can be digitally altered. The program that became Adobe Photoshop was created back in 1987, and Adobe released the first commercial version in 1990.
Still, the need to boost media literacy stood out as a theme in discussions on combating disinformation at the inaugural Global Media Congress, held in Abu Dhabi last year. The topic is also likely to feature at the 2023 edition of the event this November.
A new ‘white paper’ from the first Congress, which the author attended as a guest of the organisers, summarised the view that there were no quick fixes.
Rather, everyone – from governments, to social media platforms, media outlets, educational institutions and consumers themselves – had a role to play.
Boosting awareness of the potential for AI to be used in disinformation is undoubtedly part of the solution.
We won’t have any choice.
Writing in the Global Media Congress white paper, Copenhagen-based futurist Sofie Hvitved estimates that up to 99 per cent of the content we consume in the future could be created by AI.
Beyond this, we probably need to tackle the root causes.
In international relations, the de-escalation of conflicts and tensions would make it much harder for disinformation to take hold.
It is no accident that the current deepfake frontlines are on the battlefields of Ukraine.
In wartime, peacetime rules are thrown out of the window – making almost anything seem possible, or at least potentially plausible.
The fictional nuclear bomb created by Bellingcat’s Eliot Higgins is a prime example.
In the Indo-Pacific, steadily rising geopolitical temperatures form the ideal breeding ground for future AI-generated disinformation and propaganda efforts.
Ultimately, the best recipe to blunt the impact of AI fakes in international relations is as simple as it is difficult.
We need more diplomacy, engagement and compromise between states that don’t see eye-to-eye.
Only by reducing competition and conflict will the terrifying creations of AI seem more like fiction than reality.
Of course, this is easier said than done.
It will take human intervention to solve human problems.
AI can’t do it for us.
Geoffrey Miller is the Democracy Project’s geopolitical analyst and writes on current New Zealand foreign policy and related geopolitical issues. He has lived in Germany and the Middle East and is a learner of Arabic and Russian. Disclosure: Geoffrey attended the Global Media Congress in 2022 as a guest of the organisers, the Emirates News Agency.
This article was originally published on the Democracy Project.