10 Things No One Tells You About Fighting Disinformation: Lessons from Real Hybrid-Warfare…
1. You Stop Being an Observer
When you publish verified reporting on state-sponsored attacks or war crimes, you stop being an observer. Smear campaigns turn you into the target so the evidence itself can be discredited.
Reporting on war crimes or hybrid operations isn’t just about collecting data; sometimes the story fights back. In Bulgaria, the anti-corruption collective BG Elves, cyber-researchers who track Russian disinformation and criminal networks, have seen exactly what that means. Their work has helped expose propaganda channels, financial networks, even animal-abuse rings run for profit. It has also made them targets. In 2019, after months of threats tied to his anti-corruption research, the group’s public spokesperson, Petko Petkov, survived a car bombing. Most recently, Petkov and his fiancée have been subjected to a brutal smear campaign on one of Bulgaria’s oligarch-affiliated cable “news” stations. Desperate mafia-affiliated oligarchs, who in some cases sadly own much of the parliament, are using Russian-style disinformation playbooks in vain attempts to discredit Bulgarian researchers; it is part of why many of those researchers now find it safer to live in Ukraine.
That’s the real cost of open-source intelligence in regions that sit between Russian and Western influence. You don’t just analyse propaganda; you start living inside it. When you speak publicly about state-sponsored hacking or disinformation, as I do, the same playbook appears in smaller ways: online defamation, sudden “experts” discrediting your work, or planted hecklers trying to disrupt talks.
The line between researcher and target is thinner than most Western audiences realize. Every verified finding becomes a potential flashpoint in a much larger information war.
2. “Expert” Doesn’t Always Mean Expert.
If you want to see the Kremlin playbook in miniature, watch how false experts are manufactured. The method is simple: attach political messaging to someone with a credential, then amplify them through friendly outlets until they look legitimate. Suddenly propaganda has a PhD.
This isn’t theory; it’s something I’ve watched in real time. The Bulgarian research collective BG Elves and I documented how a pro-Kremlin YouTuber, who ran a “think tank” with opaque Russian funding sources and whose publications included passages matching Wikipedia-level plagiarism, was elevated to a professorship at Bulgaria’s flagship university. The case, “Professor Wikipedia,” showed exactly how academic titles can be turned into weapons of influence: reuse content, reward loyalty, and let the media call you “Professor” on air.
The same tactic surfaced the day my DIE ZEIT interview on hybrid warfare went live, when a German commentator who describes himself as a “cyber-security expert,” M. Atug, posted defamatory remarks questioning my international media reporting of a potential Russian war crime. His posts echoed familiar Kremlin talking points about Ukrainian war crimes being “unproven,” a pattern already documented in other influence campaigns.
In analysing public Wikipedia-edit data, I also found that several “expert” biographies in this circle appear to be maintained primarily by single, closely linked contributors, an easy way to create the appearance of independent authority. It’s a reminder of how self-authored credentials can circulate unchallenged once they’re online.
Ironically, some self-described experts even seem to curate their own public profiles. In one publicly visible case my team examined (M. Atug), Wikipedia edit data showed that a single user account created the page of a German cybersecurity commentator and made nearly all changes to it. That doesn’t prove intent, but it illustrates how easily a biography can look independently verified when, in fact, it reflects one perspective. Even open-source reputations deserve scrutiny.
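For readers who want to run the same kind of check, Wikipedia’s revision history is public through the MediaWiki API. Below is a minimal Python sketch, assuming the requests library; the article title is a placeholder, not any specific page:

```python
# Minimal sketch: count which accounts have edited a Wikipedia article,
# using the public MediaWiki API. The title is a placeholder.
from collections import Counter

import requests

API = "https://en.wikipedia.org/w/api.php"

def revision_authors(title: str) -> Counter:
    """Return a Counter of usernames across the article's full revision history."""
    authors: Counter = Counter()
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "user",
        "rvlimit": "max",  # up to 500 revisions per request
        "format": "json",
    }
    while True:
        data = requests.get(API, params=params, timeout=30).json()
        for page in data["query"]["pages"].values():
            for rev in page.get("revisions", []):
                authors[rev.get("user", "<hidden>")] += 1
        if "continue" not in data:  # no more pages of history
            return authors
        params.update(data["continue"])  # follow the continuation token

if __name__ == "__main__":
    counts = revision_authors("Example")  # placeholder article title
    total = sum(counts.values())
    for user, n in counts.most_common(5):
        print(f"{user}: {n}/{total} edits ({n / total:.0%})")
```

If one account dominates a biography’s entire edit history, that alone proves nothing about intent, but it is a strong cue to verify the page against independent sources before treating it as third-party validation.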
That’s the danger of unvetted expertise. In the information war, titles are tools. The question is no longer who said it, but who benefits when they do.
3. Smear Campaigns Are Cheaper Than Cyberattacks.
A single defamatory thread or a fake “exposé” costs almost nothing and can wreck a career faster than malware spreads. Build a little narrative, pay a few accounts to amplify it, add a recycled hashtag, and suddenly reporters and managers see “coverage,” not a campaign. The ROI is obscene: low budget, high impact.
Worse, the theatre scales. When a state lacks plausible domestic advocates (or wants deniability), it hires people: mercenary influencers, paid hecklers, ghost-writers, PR farms, and the odd academic who’ll lend a title for a fee. These actors perform the same basic function as a botnet: amplify, coordinate, drown out truth. But they’re human and harder to block. You can take down a server; you can’t arrest a narrative.
Tactics to watch for:
- Cheap amplification: multiple small accounts repost the same claim within an hour (a minimal detection sketch follows this list).
- Credential laundering: a shaky “think tank” or a self-penned bio gives the lie an academic sheen.
- The poison drip: tiny insinuations across platforms, then a “big” thread that looks organic.
- Paid hecklers at events: plant a question, force an answer, extract a soundbite to clip and circulate.

Why editors fall for it: speed and scarcity. Newsrooms want instant expert reaction. A confident-sounding thread gets quoted before anyone does due diligence. Add vanity metrics (shares, comments) and the lie looks like truth.
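As promised above, here is a minimal sketch of how that first tactic can be spotted. It assumes you already have posts collected as (account, timestamp, text) records from whatever platform you monitor; the Post structure and the thresholds are illustrative assumptions, not any platform’s API:

```python
# Minimal sketch: flag clusters of near-identical posts pushed by many
# distinct accounts within one hour. Post records and thresholds are
# illustrative assumptions, not any platform's API.
import re
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    account: str
    timestamp: datetime
    text: str

def normalize(text: str) -> str:
    """Collapse case and whitespace so trivial edits can't hide a copy."""
    return re.sub(r"\s+", " ", text).strip().lower()

def amplification_clusters(posts: list[Post],
                           min_accounts: int = 5,
                           window: timedelta = timedelta(hours=1)):
    """Return (text, burst) pairs where >= min_accounts distinct accounts
    posted the same normalized text inside a single time window."""
    by_text: dict[str, list[Post]] = defaultdict(list)
    for p in posts:
        by_text[normalize(p.text)].append(p)

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p.timestamp)
        for i, first in enumerate(group):
            burst = [p for p in group[i:] if p.timestamp - first.timestamp <= window]
            if len({p.account for p in burst}) >= min_accounts:
                flagged.append((text, burst))
                break  # one flag per distinct text is enough
    return flagged
```

Real campaigns usually vary the wording slightly, so serious tooling leans on fuzzy text matching rather than exact comparison, but even this crude version surfaces the laziest copy-paste networks.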
If you want to stop this cheaply weaponised noise, you don’t need more cops; you need more verification. Two simple moves help: demand verifiable work (published research, a CV, traceable outputs) before you quote someone, and slow down the reflex to amplify, especially when the story touches geopolitics or human suffering.
4. Hybrid War Has Local Actors.
Harassment often starts long before a smear campaign has a flag attached to it. My first taste came years ago, during my public disclosure of severe aviation cybersecurity issues. At a US technology conference, an organizer tried to derail my presentation with comments about my personal life and monopolised the Q&A. It was petty and gendered, rooted in US corporate harassment and retaliation, but it showed how easy it is to silence uncomfortable research by turning the spotlight onto the researcher instead of the findings.
Those same dynamics are now industrialised. State-aligned networks use social media to amplify defamation and drown out verified work. What was once a single person trying to humiliate a speaker has evolved into organised information warfare: coordinated posts, fake “experts,” and propaganda narratives seeded to discredit ethical hackers and journalists who expose wrongdoing.
5. The West Underestimates the Frontline
For many researchers in Eastern Europe, harassment isn’t theoretical; it’s a workday hazard. The “frontline” isn’t a trench; it’s a conference hall, a lab, or a parliament Wi-Fi network. I’ve spoken at events where the building lost power mid-panel because Russian cyber units were targeting the parliament that hosted us. Yes, an EU and NATO member’s parliament, hit with deliberate cyber-attacks in real time to silence a Ukrainian researcher and the conference itself.
At another government-level event in the Balkans, a keynote speaker was deepfaked within two hours of finishing her talk, her face and voice repurposed online to twist her own message. That’s what hybrid war looks like: code, electricity, and credibility attacked at the same time.
Western audiences often imagine this as “over there.” It’s not. These attacks test responses, refine tactics, and creep westward through supply chains, media ecosystems, and conferences. For those of us on stage or behind a keyboard, the front line is wherever the next packet of data lands.
6. Fact-Checking Isn’t Just for Journalists Anymore.
In an information war, verification becomes everyone’s job description. Researchers have to fact-check the conference badge next to them, the “think tank” asking for a quote, and the social media account applauding their slides. Background checks aren’t paranoia; they’re basic hygiene in the hybrid war with Russia.
The BG Elves, Bulgaria’s volunteer OSINT collective, proved how powerful that vigilance can be. During the Sofia Information Integrity Forum, they quietly verified attendees, traced suspicious amplification patterns online, and identified three Russian-linked assets posing as observers. Their quick work kept the focus on the research instead of the disruption. One paid Russian actor was chased out of the conference after her father, a GRU officer whose extreme violence against women (including threatened acid attacks) is posted publicly on YouTube, was exposed. The paid provocateurs came back on day two and started a physical fight with other attendees.
That’s the new reality: disinformation doesn’t just distort stories; it infiltrates events, panels, and peer networks. Every conference badge, follower, and “expert citation” is a potential vector. The same curiosity that drives good research now must extend to people and platforms. Integrity isn’t maintained by trust; it’s maintained by checking twice and thanking those who do.
7. Media Gatekeeping Still Favours the Hoodie
In cybersecurity coverage, authority still has a dress code. Western media love the archetype: a man in a hoodie, confident tone, maybe a dimly lit photo. It reads as “hacker.” A woman with twenty years of operational experience? She’s asked to prove it repeatedly. Editors demand full bios, documentation, and proof of fieldwork while handing instant credibility to whoever shouts loudest on social media.
That bias isn’t just unfair; thanks to sexism in the Western tech industry, it’s exploitable. Influence networks know the look and play it perfectly. Propaganda launders itself through optics: techie posture, jargon, maybe a self-written Wikipedia entry, and suddenly the talking points sound credible. Meanwhile, qualified experts, often women, spend their time re-establishing legitimacy instead of briefing policymakers.
The solution isn’t tokenism; it’s due diligence. Vet expertise by output, not outfit or gender. Ask what a person has built, published, or defended, not how well they “look” the part. In hybrid warfare, every shortcut to credibility becomes an entry point for manipulation. A good newsroom, or an alert reader, learns to look past the man wearing a hoodie.
8. Digital Attacks Don’t End at the Screen
The screen is just the first layer. Behind every smear campaign or phishing lure is a human willing to push it offline. Once you’re visible, digital hostility becomes a kind of weather system that follows you everywhere: emails probed, conference credentials cloned, social accounts impersonated, hotel rooms suddenly “upgraded.” The InterContinental in Manama, Bahrain, room 825, is a great example: no matter what room I booked, it was the only room I was ever given.
I’ve had coordinated online attacks bleed into the real world: strangers showing up at events, an unregistered Russian foreign agent appearing at my home, phones tampered with, neighbours quietly briefed by local police after foreign-backed harassment and death threats. None of it is random. It’s designed to drain time and energy, to make the target defensive instead of effective.
For anyone working on sensitive research, journalists, ethical hackers, and investigators, the rule is simple: treat digital threats as physical-intent warnings. If the campaign goes public, loop in your embassy. If a message feels off, preserve evidence before blocking. Visibility is protection.
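“Preserve evidence before blocking” can be as simple as a timestamped capture plus a cryptographic hash, so you can later show the copy hasn’t been altered. A minimal Python sketch, again assuming the requests library; the directory layout and file naming are my own illustrative choices:

```python
# Minimal sketch: save a timestamped local copy of a page together with its
# SHA-256 hash, so the capture can later be shown to be unaltered.
# Directory layout and file naming are illustrative choices.
import hashlib
from datetime import datetime, timezone
from pathlib import Path

import requests

def preserve(url: str, outdir: str = "evidence") -> Path:
    """Fetch url, store the raw bytes, and write url/time/hash metadata."""
    raw = requests.get(url, timeout=30).content
    digest = hashlib.sha256(raw).hexdigest()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

    folder = Path(outdir)
    folder.mkdir(parents=True, exist_ok=True)

    capture = folder / f"{stamp}_{digest[:12]}.html"
    capture.write_bytes(raw)  # raw bytes, not a re-rendered copy

    meta = folder / f"{stamp}_{digest[:12]}.meta.txt"
    meta.write_text(f"url: {url}\ncaptured: {stamp}\nsha256: {digest}\n")
    return capture

if __name__ == "__main__":
    print(preserve("https://example.com"))  # placeholder URL
```

Pairing a local capture like this with an independent archive service strengthens the record, because a hash you computed yourself is only as persuasive as your own credibility.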
Cyberwarfare doesn’t stop when you log off; it adapts. Every keystroke, every conference badge, every trip becomes potential reconnaissance. Staying safe now means blending the habits of a reporter, a security analyst, and a field operative.
9. Integrity Needs Allies
In a hybrid war, truth doesn’t survive in isolation. Integrity needs a network as much as any system does. When one person is targeted, others must mirror, archive, and verify so the story can’t be erased.
The BG Elves are a perfect example of that principle. They didn’t wait for permission; they cross-checked data, traced networks, and confirmed facts that would otherwise have been buried. Across Europe, similar informal alliances are forming: security researchers, investigative reporters, and digital-rights lawyers sharing intel and protecting each other’s work when it’s under attack.
It’s easy to underestimate how much that quiet collaboration matters. A quick mirror of deleted evidence, a screenshot saved, an encrypted hand-off: all those small acts keep propaganda from rewriting reality.
Integrity isn’t a solo sport. It’s collective defence, built on trust, encryption, and a shared refusal to let intimidation rewrite the record. Every verified byte, every ally who double-checks a source, every colleague who refuses to look away strengthens the firewall around truth itself.
10. Courage Is Contagious
Courage spreads faster than fear if you give it a signal boost. Every time one person refuses to be silenced, it sends a ripple through the noise, showing others that truth is worth defending, even when it costs sleep, safety, or peace of mind.
The first step is always the hardest: to speak. The second is to keep speaking when the pushback begins. Every OSINT volunteer, every ethical hacker, every journalist who holds the line against manipulation teaches someone else that resistance is possible; and that data, when handled with integrity, can be armour.
I’ve seen it happen from Kyiv to Bucharest, from Sofia to Amsterdam. Courage builds in clusters: a message of solidarity here, a mirrored file there, a small act of verification that stops a lie from metastasizing. It’s never glamorous. It’s slow, messy, human, and it works.
The good news? The playbook of intimidation only works when we stop showing up. The cure is stubborn transparency and collective grit. Courage doesn’t just survive attacks; it multiplies through them. And every time it does, the disinformation machine loses another gear.
Thank you, BG Elves!

📌 More on me: Chris Kubecka — Wikipedia
#CyberSecurity #Russia #BGElves #NationStateThreats #Hacking #OSINT #Cyberwar #TheHacktress #Ukraine
Chris Kubecka is the founder and CEO of Hypasec NL, an esteemed cyberwarfare expert, an advisor to numerous governments and UN groups, and a freelance journalist. She is the former head of Aramco’s Information Protection Group and Joint Intelligence Group, a former Distinguished Chair of the Middle East Institute, and a veteran USAF aviator who also served with U.S. Space Command. She specializes in critical infrastructure security and unconventional digital threats and risks. When not getting recruited by dodgy nation-states or embroiled in cyber espionage, she hacks dictatorships & drones (affiliate link to my books) and drinks espresso.
@SecEvangelism on Instagram, X, BlueSky, LinkedIn, and Substack

