© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Thu, 12 Dec, 12:02 AM UTC
2 Sources
[1]
ElevenLabs' AI voice generation 'very likely' used in a Russian influence operation
One recent campaign was "very likely" helped by commercial AI voice generation products, including tech publicly released by the hot startup ElevenLabs, according to a recent report from Massachusetts-based threat intelligence company Recorded Future. The report describes a Russia-linked campaign designed to undermine Europe's support for Ukraine, dubbed "Operation Undercut," that prominently used AI-generated voiceovers on fake or misleading "news" videos.

The videos, which targeted European audiences, attacked Ukrainian politicians as corrupt or questioned the usefulness of military aid to Ukraine, among other themes. For example, one video touted that "even jammers can't save American Abrams tanks," referring to devices used by US tanks to deflect incoming missiles - reinforcing the point that sending high-tech armor to Ukraine is pointless.

The report states that the video creators "very likely" used voice-generated AI, including ElevenLabs tech, to make their content appear more legitimate. To verify this, Recorded Future's researchers submitted the clips to ElevenLabs' own AI Speech Classifier, which lets anyone "detect whether an audio clip was created using ElevenLabs," and got a match. ElevenLabs did not respond to requests for comment. Although Recorded Future noted the likely use of several commercial AI voice generation tools, it did not name any others besides ElevenLabs.

The usefulness of AI voice generation was inadvertently showcased by the influence campaign's own orchestrators, who - rather sloppily - released some videos with real human voiceovers that had "a discernible Russian accent." In contrast, the AI-generated voiceovers spoke in multiple European languages like English, French, German, and Polish, with no foreign-sounding accents.
According to Recorded Future, AI also allowed the misleading clips to be quickly released in multiple languages spoken in Europe, including English, German, French, Polish, and Turkish (incidentally, all languages supported by ElevenLabs).

Recorded Future attributed the activity to the Social Design Agency, a Russia-based organization that the U.S. government sanctioned this March for running "a network of over 60 websites that impersonated genuine news organizations in Europe, then used bogus social media accounts to amplify the misleading content of the spoofed websites." All this was done "on behalf of the Government of the Russian Federation," the U.S. State Department said at the time. The overall impact of the campaign on public opinion in Europe was minimal, Recorded Future concluded.

This isn't the first time ElevenLabs' products have been singled out for alleged misuse. The company's tech was behind a robocall impersonating President Joe Biden that urged voters not to go out and vote during a primary election in January 2024, a voice fraud detection company concluded, according to Bloomberg. In response, ElevenLabs said it released new safety features, such as automatically blocking the voices of politicians. ElevenLabs bans "unauthorized, harmful, or deceptive impersonation" and says it uses various tools, including both automated and human moderation, to enforce this.

ElevenLabs has experienced explosive growth since its founding in 2022. It recently grew ARR to $80 million from $25 million less than a year earlier, and may soon be valued at $3 billion, TechCrunch previously reported. Its investors include Andreessen Horowitz and former GitHub CEO Nat Friedman.
[2]
The Latest AI Voice Developments Show the Perils and Power of New Tech
According to a report from a Massachusetts-based threat intelligence company called Recorded Future, which helps alert organizations to cyber threats and "see them first so they can prioritize, pinpoint, and act to prevent attacks," a recent digital misinformation campaign "very likely" used voices generated by ElevenLabs' systems. News site TechCrunch said the campaign was dubbed "Operation Undercut" and aimed at harming European support for Ukraine.

The Russian propaganda initiative used AI-made speeches played on top of fake news videos that tried to portray Ukrainian politicians as corrupt, and claimed that even jammers could not protect the American Abrams tanks used on the defensive front line. Recorded Future's researchers actually used a tool made by ElevenLabs to check whether a recording of a spoken voice contained AI-generated material, and verified it was likely made by the company's AI. Though other AI voice systems may have been used, the report only mentions ElevenLabs, TechCrunch notes.

The bad actors used ElevenLabs tech to generate very convincing content that sounded like native English, French, German, and Polish speakers, TechCrunch notes -- inadvertently highlighting the power of the generative AI systems in question. The issue here is simple: ElevenLabs is a buzzy company because its tech is so impressive, and so useful. Though its terms and conditions specifically forbid "unauthorized, harmful, or deceptive impersonation" and it has security protocols in place to try to prevent this sort of use, anyone who pays for a voice on the site could devise a way to use the resulting content maliciously. It's not the company's fault that it's being used this way, of course.

As controversial and complex as digital voice tech can be, another company has just highlighted that for some content creators and business users, this particular use of AI can be an amazing boon.
YouTube turned on automatic foreign-language dubbing features for creators who make and share videos "focused on knowledge and information."
A report by Recorded Future reveals that ElevenLabs' AI voice generation technology was likely used in a Russian influence operation targeting European support for Ukraine. The incident highlights both the power and potential misuse of advanced AI voice tools.
A recent report by Recorded Future, a Massachusetts-based threat intelligence company, has uncovered a Russian-linked disinformation campaign that "very likely" utilized AI-generated voices, including technology from the startup ElevenLabs. The campaign, dubbed "Operation Undercut," aimed to undermine European support for Ukraine through the dissemination of fake or misleading "news" videos [1].
The influence operation targeted European audiences with videos that portrayed Ukrainian politicians as corrupt and questioned the effectiveness of military aid to Ukraine. One example highlighted the alleged vulnerability of American Abrams tanks to jamming devices, suggesting the futility of sending advanced armor to Ukraine [1].
Recorded Future's researchers employed ElevenLabs' own AI Speech Classifier to confirm the use of AI-generated voices in the campaign. The AI-generated voiceovers were created in multiple European languages, including English, French, German, and Polish, without discernible foreign accents [1][2].
While the report noted the likely use of several commercial AI voice generation tools, ElevenLabs was the only company specifically named. The startup, known for its impressive voice synthesis technology, has not responded to requests for comment on the matter [1].
This is not the first time ElevenLabs has faced scrutiny over potential misuse of its technology. In January 2024, the company's tech was reportedly used in a robocall impersonating President Joe Biden, urging voters to abstain from a primary election [1].
The incident highlights both the power and potential dangers of advanced AI voice technology. ElevenLabs has implemented safety features, including automatically blocking the voices of politicians, and maintains policies against "unauthorized, harmful, or deceptive impersonation" [1][2].
Despite these measures, the ease with which bad actors can potentially misuse such technology raises concerns about the broader implications for information integrity and national security.
Recorded Future attributed the campaign to the Social Design Agency, a Russia-based organization sanctioned by the U.S. government in March for operating a network of websites impersonating legitimate European news outlets [1].
While the overall impact of the campaign on European public opinion was deemed minimal by Recorded Future, the incident serves as a stark reminder of the evolving landscape of digital disinformation and the role of AI in shaping it [1].
The controversy surrounding ElevenLabs comes amid the company's rapid growth, with annual recurring revenue reportedly jumping from $25 million to $80 million in less than a year. The startup, founded in 2022, is backed by prominent investors and may soon be valued at $3 billion [1].
As the debate over AI voice technology continues, other companies are exploring its potential benefits. YouTube, for instance, has recently introduced automatic foreign-language dubbing features for educational content creators, showcasing the technology's capacity to enhance global communication and knowledge sharing [2].