WASHINGTON, Feb 2 — Chatbots spewing hoaxes, face-swapping apps making pornographic videos and cloned voices scamming businesses out of millions — the battle is on to curb artificial intelligence spoofs that have become purveyors of disinformation.
AI is redefining the adage ‘seeing is believing’ as images are conjured out of thin air and people are shown saying things they never said in realistic-looking deepfakes that have eroded online trust.
“Yeah. (Definitely) not me,” billionaire Elon Musk tweeted last year in one glaring example of a deepfake video showing him promoting a cryptocurrency scam.
China recently adopted sweeping rules to regulate deepfakes, but most countries seem to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stifle innovation or be misused to curb freedom of expression.
Experts warn that deepfake detectors are being outpaced by creators, who are difficult to catch because they operate anonymously using AI-based software that was once a specialised skill but is now widely available at low cost.
Facebook owner Meta announced last year that it had taken down a deepfake video of Ukrainian President Volodymyr Zelensky urging citizens to lay down their arms and surrender to Russia.
Meanwhile, British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake pornographic video that triggered a barrage of online abuse after an unknown user posted it on Twitter.
“I remember feeling like this video was going to be shown everywhere – it was terrifying,” the BBC quoted Isaacs, who campaigns against non-consensual pornography, as saying in October.
The following month, the British government expressed concern about deepfakes and warned of a popular website that “virtually strips women naked”.
With no barriers to AI-synthesised text, audio and video, the potential for abuse in identity theft, financial fraud and reputational harm has raised global alarm.
Risk consultancy Eurasia Group called the AI tools “weapons of mass disruption.”
“Technological advances in artificial intelligence will undermine social trust, empower demagogues and authoritarians, and disrupt business and markets,” the group warned in a report.
“Developments in deepfakes, facial recognition and voice synthesis software will make control over your likeness a relic of the past.”
This week, artificial intelligence startup ElevenLabs admitted that its voice-cloning tool could be misused for “malicious purposes” after users posted deepfake audio of actress Emma Watson reading Adolf Hitler’s “Mein Kampf”.
The proliferation of deepfakes could lead to what European law enforcement agency Europol has described as an “information apocalypse”, a scenario in which many people are unable to distinguish fact from fiction.
“Experts fear that this could lead to a situation where citizens no longer have a shared reality, or it could create confusion in society about which sources of information are reliable,” the Europol report said.
This was on display last weekend when NFL player Damar Hamlin spoke to his fans on video for the first time since going into cardiac arrest during a game.
Hamlin thanked the medical professionals responsible for his recovery, but many who subscribed to conspiracy theories that the Covid-19 vaccine was behind his on-field collapse wrongly branded his video a deepfake.
China introduced new rules last month that will require companies offering deepfake services to obtain the real identities of their users. They also require deepfake content to be appropriately labelled to avoid “any confusion”.
The rules came after the Chinese government warned that deepfakes posed a “threat to national security and social stability”.
In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against over-regulation that could kill innovation or suppress legitimate content.
Meanwhile, the European Union is locked in a heated debate over its proposed “AI Act”.
The law, which the EU is expected to pass this year, will require users to disclose deepfakes, but many fear the legislation could prove toothless if it doesn’t cover creative or satirical content.
“How do you restore digital trust with transparency? That is the real question right now,” Jason Davis, a research professor at Syracuse University, told AFP.
“(Detection) tools are coming, and they’re coming relatively quickly. But the technology is evolving perhaps even faster. Like cyber security, we will never solve this, we will only hope to keep up.”
Many are already struggling to make sense of advances like ChatGPT, a chatbot built by US-based OpenAI that can generate surprisingly persuasive texts on almost any topic.
In a study by media watchdog NewsGuard, which called ChatGPT “the next great misinformation superspreader”, the majority of the chatbot’s responses to prompts on topics such as Covid-19 and school shootings were “eloquent, false and misleading”.
“The results confirm concerns about how the tool could fall into the wrong hands,” NewsGuard said. — AFP