Docket+ 9 September

Hi! I'm Victoria, and welcome to DisinfoDocket. Docket+ is a weekly roundup of the latest influence operations-related academic research, events and job opportunities.
Did you know that DisinfoDocket takes requests? If you have suggestions or relevant work that you'd like to see included, simply reply to this email and send us the details!
Highlights
- Book Review: A Misinformation Researcher’s Guide to the ‘Carnival of Mirrors’ (Just Security, 3 September)
- The #Americans: Chinese State-Linked Influence Operation Spamouflage Masquerades as US Voters to Push Divisive Online Narratives Ahead of 2024 Election (Graphika, September)
- Best Practices for Producing Culturally Competent Prebunking Messages for U.S. Latinos (DDIA, 4 September)
- Simple fusion-fission quantifies Israel-Palestine violence and suggests multi-adversary solution (ArXiv, 4 September)
- Investigating the role of source and source trust in prebunks and debunks of misinformation in online experiments across four EU countries (Nature, 5 September)
🧵New work from @LauraEdelson2, Damon McCoy & me @cyber4democracy w/ stark finding: "Illegal activity in Ads on Meta Apps linking to Telegram." A simple search found MORE than 1/2 the Telegram-linked ads violated policies, including illegal activity. But there's a simple fix! 1/4
— Yael Eisenstat (@YaelEisenstat) September 4, 2024
Call for Papers
Cambridge Disinformation Summit 2025
Papers from any discipline or research methods will be considered that relate to the Summit’s theme,
“Research on the efficacy of disinformation interventions”. https://t.co/05WAQyxbMj
— Alan Jagolinzer (@jagolinzer) September 8, 2024
1. Academia & Research
1.1 Platforms & Technology
The OII's @KeeganMcB, alongside @_FelixSimon_ and Sacha Altay, assert that AI's impact on elections is being overblown, and the focus on AI is distracting us from some deeper and longer-lasting threats to democracy, for @techreview. https://t.co/5KmtUoLgx1
— Oxford Internet Institute (@oiioxford) September 5, 2024
- Durov, Musk, and Zuckerberg: Tech Oligarchs Cry Censorship and What It All Means (Just Security, 30 August)
- Mythical Beasts and Where to Find Them: Mapping the Global Spyware Market and its Threats to National Security and Human Rights (DFRLab, 4 September)
- Mythical Beasts and Where to Find Them: Data and Methodology (DFRLab, 4 September)
- Modeling offensive content detection for TikTok (ArXiv, 29 August)
- The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine (SAGE, 30 August)
- AI misinformation detectors can’t save us from tyranny—at least not yet (Bulletin of the Atomic Scientists, 5 September)
Images & Visualisations
- Harmful YouTube Video Detection: A Taxonomy of Online Harm and MLLMs (GPT-4-Turbo) as Alternative Annotators (OSF, 2 September)
Availability and spread of information
- What drives acceptance and propagation of online misinformation? An integrative re-examination of key enabling factors (OSF, 3 September)
- Exposure to Misinformation Does Not Increase Truth Relativism (OSF, 3 September)
- Who Shares Fake News? Uncovering Insights from Social Media Users' Post Histories (SAGE, 1 September)
1.2 World News
This is astounding.
The common denominator here is YouTube. @YouTube knew its platform could be weaponized by foreign actors to hoax audiences and spread disinformation.
This is what netwar looks like. https://t.co/3DoiCiDxjY
— Joan Donovan, PhD 🦫 🏳️🌈 (@BostonJoan) September 5, 2024
- NewsGuard Uncovers Massive India-Aligned Network Using AI and Fake Accounts to Target Country’s Foes, Operating without Detection for Three Years (NewsGuard, 4 September)
- Understanding anti-immigration sentiment spreading on Twitter (Plos One, 4 September)
- Wolves in Sheep’s Clothing: The Autocratic Subversion of Brazil’s Fourth Estate (SAGE, 4 September)