The hum of the buzzers can be deafening when an election draws near.
In Indonesia, which will hold a general election on 14 February, a swarm of buzzers — people paid to post large volumes of material on social media — is in full swing. Their aim: to sway the electorate. Amid the digital noise, Ika Idris, a social scientist at Monash University’s Jakarta campus, and her colleagues are trying to track changes in hate speech, as well as the influence on voters of misinformation such as an artificial-intelligence (AI)-generated ‘deepfake’ video that shows a presidential candidate speaking Chinese — which would suggest a close alliance with China.
Previously, Idris had free access to data from X (formerly Twitter), but last year, the social-media platform ended its policy of free data access for academic researchers, and she cannot afford the fees. Now, Idris must ask collaborators in wealthier countries to share their data with her during the run-up to the election, giving her less room to experiment with search parameters.
Some 13,000 kilometres away in Seattle, Washington, computer scientist Kate Starbird and her colleagues at the University of Washington are studying how rumours spread on social media, as the United States moves towards its own presidential election in November. In the last election cycle in 2020, her e-mail inbox teemed with requests for collaborations and advice. This year it is much quieter, she says.
2024 is the year of the election: nearly half of the world’s population live in countries with elections this year. Meanwhile, social media’s reach continues to grow, and generative AI tools capable of creating deepfakes are becoming more accessible and more powerful than ever. Yet researchers say that they are in their worst position in years to monitor the impact of these tools on elections.
“When we close the book on 2024, there is a very good chance that we are going to know much less about what happened in 2024 than what happened in 2020,” says Joshua Tucker, a computational social scientist at New York University. But, he adds, as Idris and Starbird are doing, others are finding ways to work around the limitations. “Researchers are creative.”
Social-media studies
In Europe, where nine countries as well as the European Union are expected to hold parliamentary elections this year, there is more optimism. The EU’s Digital Services Act (DSA) — sweeping legislation that aims, in part, to limit the spread of disinformation — is due to come into effect for social-media platforms on 17 February. Included in that act are provisions for giving vetted researchers access to data from social-media platforms to study systemic risks posed by social media in Europe.
“I’m putting a lot of hope in the DSA,” says Philipp Lorenz-Spreen, a computational social scientist at the Max Planck Institute for Human Development in Berlin. For now, researchers do not know how these provisions will be implemented, including what kind of data will be provided, what kind of research will be deemed eligible for access and whether the data will be useful for those hoping to monitor the 2024 European elections. In countries outside the EU, researchers are anxious to see whether they will be eligible to use the DSA’s provisions to access data at all. Some platforms, including Facebook and X, have released early versions of interfaces for extracting large amounts of data in compliance with the DSA. When Lorenz-Spreen applied for access, X asked him to explain how his research related to systemic risks in the EU, including threats to public health, the spread of illegal content and the endangerment of fundamental rights. He is still awaiting a decision on his application.
Even so, researchers elsewhere are hopeful that the DSA will provide them with an option for obtaining data — or, at the very least, that it will inspire other countries to introduce similar legislation. “This is a door that’s opening,” says Maria Elizabeth Grabe, who studies misinformation and disinformation at Boston University in Massachusetts. “There is quite a bit of excitement.”
But Grabe can also feel the effects on the field of political pressure in the United States, and she worries that funders are shying away from research that mentions the word ‘disinformation’, to avoid drawing criticism — or even legal action — from technology companies and other groups. This is a worrying possibility, says Daniel Kreiss, who studies communication at the University of North Carolina at Chapel Hill. “We’re a pretty robust crew,” he says. “But what I most worry about is the future of the field and the people coming up without the protections of tenure.”
Creative workarounds
Despite ongoing challenges, the community of researchers trying to assess the impacts of social media on society has continued to grow, says Rebekah Tromble, a political-communication researcher at George Washington University in Washington DC.
And behind the scenes, researchers are exploring different ways of working, says Starbird, such as developing methods to analyse videos shared online and to work around difficulties in accessing data. “We have to learn how to get insights from more limited sets of data,” she says. “And that offers the opportunity for creativity.”
Some researchers are using qualitative methods, such as conducting targeted interviews, to study the effects of social media on political behaviour, says Kreiss. Others are asking social-media users to voluntarily donate their data, sometimes using browser extensions. Tucker has conducted experiments in which he pays volunteers a small fee to stop using a particular social-media platform for a period, then uses surveys to determine how that affected their exposure to misinformation and their ability to tell truth from fiction.
Tucker has conducted such experiments in Bosnia, Cyprus and Brazil, and plans to extend them to South Africa, India and Mexico, all of which will hold elections this year. Most research on social media’s political influence has been done in the United States, and research in one country doesn’t necessarily apply to another, says Philip Howard, a social scientist and head of the International Panel on the Information Environment, a non-profit organization based in Zurich, Switzerland, with researchers from 55 countries. “We know much more about the effects of social media on US voters than elsewhere,” he says.
That bias can distort the view of what’s happening in different regions, says Ross Tapsell, who studies digital technologies with a focus on Southeast Asia at Australian National University in Canberra. For example, researchers and funders in the West often focus on foreign influence on social media. But Tapsell says that researchers in Southeast Asia are more concerned about local sources of misinformation, such as those that are amplified by buzzers. The buzzers of Indonesia have counterparts in the Philippines, where they are called trolls, and Malaysia, where they are called cybertroopers.
In the absence of relevant and comprehensive data about the influence and sources of misinformation during elections, conflicting narratives built on anecdotes can take centre stage, says Paul Resnick, a computational social scientist at the University of Michigan in Ann Arbor. “Anecdotes can be misleading,” he says. “It’s just going to be a fog.”