
What went wrong during Romania's Presidential Elections?

If you haven’t been following the brouhaha surrounding Romania’s presidential elections, let me bring you up to speed. Then, I’ll explain why the current responses by regulators, traditional media, and other stakeholders miss two key issues, and how this oversight isn’t setting anyone up for success—not the platforms, not the regulators, not the fact-checkers, and definitely not Romania or any other country and its internet users. Because here’s the truth: the obsession with identifying who threw the stone when it comes to claims of foreign interference in elections (insert Russia, China, Iran, or any other usual suspect) won’t matter if no one trusts anything anymore.


Context: What Happened?


Călin Georgescu, a pro-Russia candidate, unexpectedly led the first round of voting in Romania’s presidential elections. According to Romanian intelligence agencies, Georgescu’s rise was not organic but orchestrated by Russia. Allegedly, Russia coordinated a campaign through TikTok, using thousands of fake accounts and influencers to amplify Georgescu’s content. In response, Romania’s Constitutional Court annulled the first-round results on December 6, 2024, citing compromised electoral integrity. The European Commission has since launched an investigation into TikTok’s role, assessing whether the platform violated the Digital Services Act (DSA) by failing to prevent disinformation and foreign interference.


Issue 1: Nobody Cares About the Long PDFs


In Romania’s case, pro-Georgescu content succeeded in gaining an audience because it was delivered in a highly engaging format. The narratives were snappy, relatable, and packaged as shareable content by influencers. These weren’t dense, jargon-heavy policy documents but rather digestible snippets designed to grab attention.


Compare that to the official response. Romanian intelligence declassified five documents related to Georgescu’s activities. These were long PDFs, collectively exceeding 40 pages. Do we really expect the average internet user to sift through such dense material while juggling their daily life? As one participant in a recent study on information sensibility noted about fact checks, “It was too long to read, so I didn’t even read all of it.” Another said, “Sure, you can look on your own, but no one has time for that.”


Younger audiences, in particular, are not engaging with traditional long-form content. Bad actors have adapted to this cultural shift by delivering concise, engaging narratives, while trusted sources and regulators remain stuck in outdated formats. This disconnect is a critical vulnerability in the fight against disinformation. The average user cares a lot less about <insert Russia, China> than we assume, and by the time attribution can be established, the damage is very much done.


I realize the irony of this long-form content being delivered to your inbox, and you are likely skimming as you read this. But that’s the point—we are asking people to pay attention in ways they no longer do. (And I should think about practising what I preach in 2025.)


Solution: If trusted sources want to compete with disinformation, they must modernise their approach. Verge and seen[dot]tv offer two compelling alternative approaches that we can all learn from.


  • Engage on Social Media: Platforms must be tools for direct engagement, not just content distribution channels for traditional media sources and fact-checks, i.e. create short-form video and concise text content.

  • Deliver Snappy Narratives: Relatable, first-person content that mirrors the style of influencers will resonate better. Empower young journalists in news organisations to build their brands, humanising legacy organisations.

  • Embrace Accessible Formats: Replace dense, jargon-heavy reports with engaging, shareable insights that are easy to understand.


If we fail to adapt, we’ll fail at effective mitigation. Meanwhile, the audiences we aim to protect will remain too busy scrolling to care.


Screen grabs from Verge press release. seen[dot]tv debunks climate change misinformation through engaging first-person narratives.

Issue 2: Naming Problems Without Defining Responsibilities = Execution Failure


The EU Code of Practice on Disinformation, for example, lists “co-opting of influencers” (simply put, using influencers as a vehicle for conveying disinformation) as a tactic that Very Large Online Platforms (VLOPs) such as Meta, TikTok, and Google must disclose actions to prevent. Romanian officials revealed that Fame Up, a platform used to recruit micro-influencers, was leveraged to amplify narratives favorable to Călin Georgescu.


Countless platforms like Fame Up exist globally, operating in different regions and contexts. But here’s the catch: platforms like TikTok cannot track off-platform payments and contracts. Their only options are third-party intel providers or in-house teams—both imperfect solutions that raise privacy and data protection concerns.


This raises critical questions:

  • Who is responsible for ensuring transparency in global influencer contracts for civic or political content?

    • Should anyone even be responsible for this? Is transparency the right lever to pull? (Probably not.)

  • When coordination happens on platforms like Telegram and Discord, how can there be effective, actionable, formalised intel sharing between platforms? Does creating a Discord channel to amplify civic content inherently violate Discord’s terms of service? Unlikely, because that's activism 101. So, at what point is there a call to collective action based on a shared definition of harm and priorities?


Conversations about disinformation cannot ignore granular operational questions: Who does what, and when? When we default to a kitchen-sink definition of disinformation that conflates misinformation, inauthentic engagement, and/or coordinated manipulation, we cannot build solutions that target each specific aspect. Attribution that lacks publicly shared, reproducible evidence should, in an ideal world, not drive headlines. See interference2020[dot]org.


Why Platforms Struggle to Respond Effectively


Critics often ask: Why didn’t [insert platform name] act decisively? How could they not know what was happening? The answer lies in three operational challenges:


  1. Blind Spots: The first issue is that often, nobody is actively monitoring the problem in the specific country or language where it is occurring. Global platforms prioritize larger markets and languages where they have more resources or business impact, leaving smaller or less lucrative regions underserved. Problems can thrive in these blind spots.


  2. Resource Trade-Offs: Even when a problem is identified, it may not be prioritised as “something worth solving now.” This can be due to competing demands for resources or an incomplete understanding of the problem’s broader implications. It may simply not rank high enough against other issues deemed more urgent or impactful. Trust and safety teams are often fighting multiple fires at once, all around the world, and despite best efforts, mistakes sometimes happen.


  3. Fragmented Execution: Lastly, when platforms do decide to act, their responses are often half-baked. This happens because of inadequate resourcing, poorly designed internal tools, or fragmented organizational structures that prevent teams from understanding and addressing the problem comprehensively. As a result, interventions fail to dismantle the problem effectively and fail to deploy targeted, forward-looking interventions, essentially slapping a band-aid on a hip fracture.


A Practical Focus: Fix What Platforms Can Fix


Platforms should not be tasked with “boiling the internet ocean.” They can and must focus on what’s within their ecosystems. This means:


  • Acting swiftly on actionable insights provided by regulators, civil society, and media organizations.

  • Delivering effective operational execution to address issues decisively and holistically.

  • Accepting clear accountability for failures across all stakeholders—with significant consequences to create incentives for meaningful investment and collaboration.


The regulatory focus should be on holding platforms accountable for operational effectiveness. This is where platforms can be impactful, this is what regulators can measure, and there must be consequences to create the right incentives.


Closing Note


Having spent time at platforms tackling technology risks, working alongside regulators, and collaborating with fact-checkers, I saw the same patterns repeat themselves over and over. Reports were written, policies were debated, and fingers were pointed. It felt like no matter how much everyone worked, the tide was always against us.


At some point, I realized that looking at the problem solely through the lenses of regulation, technology accountability, or journalism wasn’t helpful, especially when everyone had a different definition of the problem. We need to look at the actual consumers of information, and what helps them make informed decisions.


This is why I think my focus in 2025 will be on answering one key question, starting with local organisations where I live and work:


  • Practically speaking, what can I do in my local community to improve information resilience and trust in each other?


Wishing you all a happy holiday season, and a prosperous 2025! If anything in this article resonated with you in a good or bad way, shoot me a note!



© 2025 by Devika Shanker-Grandpierre
