Docket+ 10 March

Hi! I'm Victoria and welcome to DisinfoDocket. Docket+ is a weekly roundup of the latest influence operations-related academic research, events and job opportunities.
Did you know that DisinfoDocket takes requests? If you have suggestions or relevant work that you'd like to see included, simply reply to this email and send us the details!
Call for abstracts: The Paris Conference on AI & Digital Ethics
Sorbonne University, Paris, June 16th and 17th, 2025
https://paris-conference.com/
Submissions are welcome until March 15th, 2025.
Academic and industry researchers are invited to submit a contribution in one of the following disciplines or a related field:
- Computational philosophy and social sciences
- Ethics and moral philosophy
- Political theory and science
- Statistics and computer sciences
- International relations
- Law
Highlights
Graphika’s new report Character Flaws is out today. It looks at AI character chatbots that present potential for harm: personas of sexualized minors, those advocating eating disorders or self-harm, and those with hateful or violent extremist tendencies: https://t.co/ZNNTaM37KM
— Graphika (@Graphika_NYC) March 5, 2025
- IU's Observatory on Social Media defends citizens from online manipulation – the opposite of censorship (Indiana University, March)
- The Truth is Warranted: The Impact of Voluntary Accountability on Misinformation (Preprint, 4 March)
- When it comes to understanding AI’s impact on elections, we’re still working in the dark (Brookings, 4 March)
1. Academia & Research
1.1 Platforms & Technology
- The Common Sense Census: Media Use by Kids Zero to Eight (Common Sense Media, March)
- An Alert to the World: The Role of Social Media Platforms in Bolsonaro’s Disinformation Campaign Targeting Brazil’s Democratic Institutions (Tech Policy Press, 5 March)
- Engagement, user satisfaction, and the amplification of divisive content on social media (Oxford Academic, 5 March)
- Should scientists ditch the social-media platform X? (Nature, 4 March)
- Better Feeds: Algorithms That Put People First (Knight-Georgetown Institute, 4 March)
1.2 AI & LLMs
Does generative AI present an unprecedented threat to democracy, as many worried ahead of 2024? Or did it have little impact, as others now claim? The truth is we don't have enough data to draw concrete conclusions. Our latest at @BrookingsInst: https://t.co/mBYbdXLVa5
— NYU's Center for Social Media and Politics (@CSMaP_NYU) March 4, 2025
- DeepSeek Points Toward U.S.-China Cooperation, Not a Race (Lawfare, 5 March)
- Unmasking Digital Falsehoods: A Comparative Analysis of LLM-Based Misinformation Detection Strategies (ArXiv, 2 March)
- Slopaganda: The interaction between propaganda and generative AI (ArXiv, 3 March)
- Persuade Me if You Can: A Framework for Evaluating Persuasion Effectiveness and Susceptibility Among Large Language Models (ArXiv, 3 March)
- An Empirical Analysis of LLMs for Countering Misinformation (ArXiv, 28 February)
- Deepfake-Eval-2024: A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024 (ArXiv, 4 March)
- Disinformation in the digital era: The role of deepfakes, artificial intelligence, and open-source intelligence in shaping public trust and policy responses (SSRN, 4 March)
2. Platform Announcements
- Adversarial Threat Report, Fourth Quarter (Meta, 27 February)