As Venezuela prepared for carnival season last month, an English-language video was published on the “House of News” YouTube channel. Its presenter Noah lauded an alleged tourism boom as millions of citizens flocked to the country’s Caribbean islands to party.
The report, which was widely circulated in media supportive of President Nicolás Maduro’s socialist government, suggested claims about widespread impoverishment in oil-rich Venezuela had been “exaggerated”.
Another report claimed that the anti-Maduro interim government had been implicated in the alleged mismanagement of $152mn of funds before its recent dissolution, with Emma, the presenter, concluding that “Venezuelans do not actually feel there is any opposition to the government”.
But the two stories were fake, and the two newsreaders do not exist. They are avatars, based on real actors, that were generated using technology from Synthesia, an artificial intelligence company based in London. Their American accents were synthesised, their talking visages generated by machine-learning algorithms.
Last week, YouTube suspended five accounts, including House of News, that had shared government-aligned misinformation. But the emergence of deepfakes and AI-generated media represents a new frontier in Venezuela’s campaign of propaganda and misinformation, raising concerns about the potential influence on a population that has scant access to trustworthy news because of widespread censorship both on and offline.
“In Venezuela, there is a desert of information where disinformation can thrive,” said Adrián González, director of Cazadores de Fake News, a misinformation monitor based in Caracas. “And now the technology is there to make convincing fake news videos.”
González said the network of outlets distributing propaganda in Venezuela was vast, ranging from official media and independent but allied outlets to purveyors of fake news. On social media, posters have used automation tools to amplify government talking points, helping posts reach the largest possible audiences.
Over the past year, generative AI technologies — software that can create images, videos and text based on user prompts and descriptions — have grown in popularity. Products such as Dall-E and ChatGPT are being widely adopted by users ranging from schoolchildren to elite computer programmers.
But there have also been concerns about the software’s potential to generate misinformation. Documented examples of exchanges with generative AI agents such as chatbots show how they spew false information, known as “hallucinations”, display bias and spout conspiracy theories.
Synthesia’s technology, based on a type of AI technique known as deep learning, generates videos featuring avatars. These avatars speak from a user-generated script, in a variety of languages, and the videos can be created within 10 minutes. The company says it produces about 10,000 videos a month and its clients range from advertising company WPP to the UK’s NHS, which uses it to create health information videos in different languages. The start-up has raised $66mn from Silicon Valley investors including Kleiner Perkins and GV, formerly Google Ventures.
Synthesia said the Venezuelan client had been banned from using its service as soon as the video was discovered on Twitter by one of the company’s employees. “We have strict guidelines for which type of content we allow to be created on our platform. We enforce our terms of service and ban users who breach them,” it said.
Synthesia added that it had implemented new restrictions on the use of its technology, including the banning of all news report-style content, a digital watermark that would mark the videos as AI-generated and a more stringent review process for each video.
Synthesia has faced other instances of misuse. In January, political disinformation videos generated by the company were found circulating in Mali, and last month US-based information analysts Graphika discovered a pro-China operation promoting Synthesia-produced videos.
According to its ethics guidelines, Synthesia said it would only release its product to “trusted clients” after an “explicit internal screening process”. When asked why these policies had failed in the Venezuelan case, the company said it had strengthened its regulations so its small content moderation team could see if a user’s requests had previously been rejected, to help flag potential misusers and reoffenders.
Reliable information is a scarce resource in Venezuela, which has the world’s largest proven oil reserves but is saddled with inflation running at an annual rate of 350 per cent this year, according to local researchers Ecoanalítica. Basic foods and medicines are often scarce or prohibitively expensive. Strict sanctions imposed by the US in 2019 have limited the government’s room for manoeuvre despite the relaxation of controls. More than 7mn Venezuelans have fled the country since 2015.
The economy is showing signs of modest improvement, but Maduro’s government has become more authoritarian during a decade in power, clamping down on dissent while co-opting or shutting down traditional news media. Leading newspaper El Nacional stopped its print edition in 2018. Last year, state-controlled and private internet service providers blocked access to independent news sites.
Armies of users on Twitter and other platforms help to promote Maduro’s agenda. ProBox, a civil society organisation that tracks misinformation on social media in Venezuela, has documented cases of the government rewarding people who promote regime talking points through a social credit system known as the “homeland card”.
Commonly promoted topics include the economic recovery, improved living conditions and the shortcomings of the fractured opposition. Government accounts share the propaganda, which circulates freely on social media. House of News videos received hundreds of thousands of views on YouTube and were also aired by state broadcasters.
The content can be persuasive, said ProBox head Maria Virginia Marin.
"When you have a so-called reporter speaking in English in what looks like international media and selling you a reality that you don't see, it raises the question that perhaps it does exist and you're just outside of it," she said.
The videos were partly targeted at an international audience, Marin added. "The goal is to muddle the international debate on Venezuela, and cover the reality of what is going on here."
Joe Daniels in Bogotá and Madhumita Murgia in London