Concerns over social media’s potential to disrupt democratic societies have existed at least since the emergence of the so-called Islamic State in the first half of the 2010s.
The outcome of the UK’s Brexit referendum in 2016 and evidence of Russian influence operations during the 2016 U.S. presidential election only made such concerns more salient.
Between the rampant spread of misinformation around the coronavirus pandemic and the failed U.S. Capitol insurrection by supporters of then U.S. president Donald Trump on January 6, 2021, it might seem like we aren’t that much farther along in addressing these concerns.
When Trump was taken off several online services—including Twitter and Facebook—in the wake of the insurrection he was accused of inciting, some European officials took the opportunity to advocate for the regulation of tech platforms.
German Chancellor Angela Merkel stressed that social media platforms should not be making decisions to suspend political accounts by themselves “but according to the law and within the framework defined by legislators.”
European Commissioner for the Internal Market Thierry Breton made the case for the Digital Services Act (DSA), introduced in December 2020, which the EU hopes will “better protect consumers” online, encourage the transparency and accountability of tech companies, and foster competition.
But while there is a need for increased regulation in the short term, governments should also encourage the development of mechanisms that enable research to better understand what works against influence operations and related tactics like disinformation over the longer term.
As drafted, the DSA, if adopted, will compel large social media platforms to share data with "vetted researchers" on risks such as "the dissemination of illegal content" and "inauthentic manipulation" of online services. But the DSA stops short of specifying how that will work in practice; it should lay out a concrete roadmap for better collaboration between platforms and researchers.
Moreover, it isn’t enough to compel the bigger companies to share information with researchers upon request. There is still a significant knowledge gap in terms of understanding the effects of influence operations and the efficacy of different countermeasures. New regulations like the DSA are great if they solve problems—but rules ought to be grounded in evidence that they work, not just in evidence that the problem exists.
What is needed is a permanent mechanism that facilitates collaborative research between industry and academia. To better understand influence operations and related countermeasures, researchers need more than one-time access to data; they need to regularly collect and update quantitative data to facilitate hypothesis testing and design of intervention strategies.
Here, the EU has an opportunity not just to lead in implementing regulation of the information environment, but also in fostering longer-term collaborative research. The DSA could articulate a viable model for how research collaboration would work in practice and encourage industry to support the development of an independent collaborative research center. The EU could match funds with industry to support these initiatives.
Nearly everyone working on this challenge calls for increased data and information sharing. But they seldom say in detail how to make that happen. So far, the DSA seems to be falling into the same trap.
In forthcoming research with Princeton University’s Empirical Study of Conflict Project, we found that, outside of fact-checking, there is scant research on the effectiveness of influence operations countermeasures. If social media platforms are measuring the efficacy of the interventions they make to counter influence operations, findings are seldom disclosed in subsequent public announcements about them.
For academics to answer the difficult questions of how to counter influence operations, they need access to what can be sensitive personal data. Currently, that access can be achieved only by working inside social media companies, where publishing in peer-reviewed journals becomes more difficult and external attacks on one's credibility for working with industry increase.
It is becoming increasingly clear that if democracies are to get a handle on the problem of influence operations, a bridging mechanism to facilitate research is paramount.
One model is a multi-stakeholder research and development center (MRDC). An MRDC would be a new kind of research entity—funded and supported, at least in part, by the social media platforms but allowed to operate independently.
It takes inspiration from U.S. government–sponsored entities like the RAND Corporation, which for decades has produced trusted, high-quality research on public policy issues and handled highly classified information responsibly. With an MRDC, online platforms would take the place of the U.S. government, providing money, data, and some input on research priorities—but not direct control.
An MRDC could provide a venue where industry and academic researchers come together for a sustained period to collaborate within a shared structure. The key is that such an institution must be independent. With sustained funding from industry, governments, and philanthropies, a multi-stakeholder research and development center could address five key issues:
- Facilitate funding for long-term projects.
- Provide infrastructure for developing shared research agendas and a mechanism for executing studies.
- Create conditions that help build trusted, long-term relationships between sectors.
- Offer career opportunities for talented researchers wishing to do basic research with practical application.
- Guard against inappropriate disclosures while enabling high-credibility studies with sensitive information that cannot be made public.
Not to be confused with a more operational threat-assessment center, an MRDC would focus on longer-term research, such as understanding the effects of influence operations on democratic decisionmaking.
But if the event-driven timing of interventions by social media platforms is any indication, an MRDC is unlikely to emerge in the absence of leadership and pressure from governments and society.
To go one step beyond calling for an MRDC in the Digital Services Act, the EU could offer to co-host such a center or to lead the charge in laying out the details for establishing one. And either of these initiatives could be done in collaboration with the social media platforms.
Comments (5)
This article perplexes in the way it derides the Trump election, which was legitimate and which he won under the American system. Brexit, likewise, was a vote won by people wishing to leave the EU, which they finally achieved on January 31, 2020. I might not like the results myself, but this was democracy working. Are we now saying a populist or right-wing government is not allowed to be elected because the left wing or centre does not like it? Right-wing governments change laws too; they may not give as much credence to privacy as left-wing, liberal, or green parties, but if people vote them in, concerns about privacy will just have to be worn, as that is now the law.

In this digital world, protecting private data is always going to be hard, to say the least; this battle will go on and on. The other problem is that many countries around the world, whether China, the UK, Russia, the US, France, Vietnam, or Brazil, are always monitoring the chatter, even on Carnegie. They hold data on who knows how many, maybe every soul on the web and in any database they can get into. They will not tell privacy advocates, and journalists may fall afoul of the secrecy laws of any country should they expose something. As for the EU, it is not a country, and its commission and leaders are watched very carefully, I am sure, by agencies within, from France to Malta and all in between. This is now the great game, never mind Grand Theft Auto or Super Mario; every country in this world is controlling and playing it. Just a thought: everyone I know has had a crank call or phishing call from the Philippines, India, Nigeria, or somewhere, or maybe an email, virus, or advert. Keeping data private is a huge task.
What perplexes me is that in Brussels they speak and write about freedom and democracy while rejecting my criticisms of what they do and what they think, with "orchestrated malice" toward Donald Trump, and the vast majority do not publish them. A different opinion is not respectable to them. They do not publish everything. Is that the European Union? There is some evil in all this.
Yes, you are right, they are all in this game. Private data is almost impossible to protect; hackers will find a way in.
Kadesh was one of the greatest mobile-warfare battles in history. Thousands of chariots collided, the Kursk of antiquity. Muwatalli II lied to the Hittite people, Ramesses II lied to the Egyptians; it was a draw. Fifteen years later, Hattusili III and Ramesses II concluded one of the first peace treaties in recorded history. It is displayed at the UN, and like all peace treaties, it was broken. Disinformation is as old as recorded history, and before that it was based on speech; influence operations work hand in hand with it.

The author doesn't mention Cambridge Analytica. CA would collect data on people from public and private sources: demographics, consumer behavior, internet activity, and more. With this input data, CA's Nix said in October 2016: "Today in the United States we have somewhere close to four or five thousand data points on every individual ... So we model the personality of every adult across the United States, some 230 million people." This information can be sold and used for microtargeting with basically everything digital and more. It was reported that the company had acquired and used personal data about Facebook users from an external researcher who had told Facebook he was collecting it for academic purposes, exactly what the author mistakenly suggests. That was done with tools at 2018 levels, which are far behind what is available today; it was probably more than enough anyway.

It is difficult to believe that all of a sudden the advertising platforms would become social, in the civic sense. Their business model is aligned with Friedman's view of human societies, not Smith's. First and foremost, any kind of even minor regulation would be seen as an infringement of various freedoms. If any regulation would impact revenues, there is enough lobbying power to counter that. For years Zuckerberg has been promising that advances in AI would allow the advertising platforms to clean up disinformation and influence campaigns.
That is a long and separate discussion, and it will end up back at freedom infringement, as it does today. Another approach would be rebuilding the educational system to instill critical thinking based on knowledge. In theory, such an education would have stopped the pandemic in its tracks based solely on the 191X pandemic; within days of learning of a respiratory infection, masks and social distancing would have kicked in, limiting the spread and avoiding mutations. When you have an entire country relying on Tegnell, that is impossible; it is like Lomborg on global warming.
It is extremely difficult in most cases to distinguish information from disinformation, and I do not believe any institution financed by states can achieve it. Which sources are reliable? None in political matters, as prejudices and power interact with truth. An example is the recent video of Putin's palace provided by Navalny. Is it information, illustrating the corruption of Putin, or disinformation, with fake 3D images from a computer, of a property that belongs to the Russian oligarch who built the bridge to Crimea? Neither of the sources should be trusted without further checks. A famous physicist with whom I worked, Luis Alvarez (a Nobel Prize winner), told me NEVER to believe a photograph. He owned an optical company.