Kevin Schoenmakers: When we talk about influence campaigns on social media, what are we talking about? What forms does it take and what are usually their aims?
Hannah Bailey: Influence operations can take a wide variety of forms, including diplomats and state-owned media posting on social media, or Chinese citizens, whether within China or abroad, appearing as themselves and trying to convince international audiences to support a particular narrative put forward by the Chinese state.
It can also be these kinds of more nefarious, inauthentic social media accounts, trying to convince audiences incognito. Traditionally, when people think of influence operations, they think of bots operating en masse to try and convince people, but that’s not always the case. Academics are divided on what the term bot even means and how common these are.
Kevin Schoenmakers: Is the boundary quite murky between influence operations that most people would agree are fair game, and those that people would say are not?
Hannah Bailey: Different people would have different opinions on what is and isn’t fair game, and some could argue that the social media space is a neutral playing ground where anyone can try and convince other people to support their own arguments. But the more nefarious activities are those that involve social media users misrepresenting themselves — users that are pretending to be someone else, or networks of accounts that are just run by one person — with the aim of appealing to a particular genuine audience. We call these kinds of activities inauthentic, and that’s the kind we tend to be more concerned about.
Kevin Schoenmakers: What kind of inauthentic social media influence operations are used by Chinese actors?
Hannah Bailey: I can list a couple of instances of inauthentic activity that we’ve found. That’s not to say it’s an exhaustive list, because there’s probably a lot of activity out there that we can’t detect or don’t know about yet.
My team and I recently uncovered a network of inauthentic accounts that were amplifying tweets posted by Chinese diplomats. A diplomat would post a tweet, and all these accounts would rush to retweet, like, and comment on it, to try and game the algorithm so the post would reach more audiences, and to make it seem like the post had genuine support.
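The pattern described here — accounts rushing to engage within moments of a diplomat’s post — suggests one simple detection heuristic: flag accounts that repeatedly retweet a target within a short window of posting. A minimal sketch, with invented records and thresholds, not the report’s actual method:

```python
# Hypothetical heuristic: flag accounts that retweet within `window_s`
# seconds of posting, at least `min_hits` times. Data below is invented.
from collections import defaultdict

def flag_rapid_amplifiers(retweets, window_s=60, min_hits=3):
    """retweets: iterable of (account, tweet_id, seconds_after_post)."""
    hits = defaultdict(int)
    for account, _tweet_id, delay in retweets:
        if delay <= window_s:
            hits[account] += 1
    return {account for account, n in hits.items() if n >= min_hits}

records = [
    ("acct_a", 1, 12), ("acct_a", 2, 8), ("acct_a", 3, 30),   # fast, repeated
    ("acct_b", 1, 5000),                                      # slow, organic-looking
    ("acct_c", 1, 15), ("acct_c", 2, 20),                     # fast, but only twice
]
print(flag_rapid_amplifiers(records))  # → {'acct_a'}
```

Real detection work combines many more signals (account creation dates, posting schedules, shared phrasing), but timing coordination of this kind is one commonly cited tell.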
We also zoomed in on the case study of the UK. Oftentimes, these accounts will pretend to be people in the UK. In their bio, they say stuff like, “I’m a Londoner,” or that they support UK football teams. This is the kind of inauthentic activity that we have uncovered.
There are other organizations that have also uncovered similar campaigns related to Xinjiang and other issue areas of concern to the Chinese state. These are suspected to originate from the Chinese state — because we can never really confirm the actor behind the screen. In our reports, we were very clear to say that we’re not ascribing this behavior to any individual or organization, because we don’t have the data to be able to say that a particular person or organization is behind these influence operations. We can only say who or what they are promoting.
We believe, based on the takedowns that Twitter and Facebook have reported, that they have access to more data that allows them to figure out who is behind these operations.
Kevin Schoenmakers: Do Chinese state media use Western social media in inauthentic ways?
Hannah Bailey: We haven’t researched this ourselves but I know from other people’s research that similar inauthentic amplification activities are happening or have happened in the past, and there have been releases by Twitter to that effect. We’ve also seen Western YouTubers and other influencers apparently being paid by Chinese state media to amplify their content, or to discuss Chinese state media content on their platforms.
Kevin Schoenmakers: Is there a particular timeline for these kinds of efforts? Did they start or ramp up at some point?
Hannah Bailey: We often say that, compared to Russia, China is very new to the internet game. And it’s kind of been playing catch-up in that sense. But we really saw a rapid escalation from virtually nothing to an awful lot of activity in the wake of the Hong Kong protests. I think this was really the spark that made the Chinese state realize, “Oh, wait, international audiences are seeing this happening within our country, they are becoming increasingly critical of us, and they have a negative perception of the way China governs. We should try and rectify that by pumping out favorable content on these international social media platforms.” And ever since then, we’ve just seen an escalation and also a migration away from Hong Kong to other issues that the Chinese state particularly cares about.
Kevin Schoenmakers: Is that maybe also, in a way, an admission of sorts that what they were previously doing to try and convince foreign audiences of Chinese viewpoints wasn’t really having enough of an impact?
Hannah Bailey: It’s hard to read into that. It could mean a number of things. It could be an admission, it could be just a sudden recognition that this was an activity that lots of other states partook in and that they, as a large nation state with an image problem, should also partake in. Around that time, there were also a number of polls coming out that showed international audiences had very negative perceptions of China.
And, of course, it’s only in recent years that public awareness of these influence operations has emerged. So you can say that they were just kind of a little bit behind, but generally following the trend of countries and also businesses and media organizations jumping on this bandwagon of conducting influence operations.
Kevin Schoenmakers: It sounds like most of what Chinese actors do, or what you suspect they do, is to try and convince foreign audiences of Chinese viewpoints. Do they ever aim to destabilize, like Russia did during the US elections and also the Brexit referendum?
Hannah Bailey: This is a very good question, because it’s hard to say for any fixed period of time that we know exactly what China is doing and why they’re doing it. But initially, in the first couple of years that China was conducting these kinds of operations, we really saw them focus on issues that had domestic relevance, that were threatening their domestic sovereignty, essentially — issues like Hong Kong, Xinjiang and territorial issues like the South China Sea. We assume that the Chinese state wanted to reassure international audiences that they had a handle on these issues, but also potentially to convince domestic audiences via this international messaging that they had the confidence of international audiences, and that domestically, this would all be taken care of. This is broadly in line with a lot of other strategies where we’ve seen China prioritize its domestic sovereignty.
Russia cares an awful lot more about destabilizing Western democracies rather than projecting a positive image of itself. So initially, there was this real dichotomy between the Russia approach and the China approach. But in the last year or so, especially with the emergence of COVID-19, we’ve really seen China shift tack a little bit towards not necessarily trying to destabilize but criticizing Western democracies and particularly the US. This could be for a number of reasons. But predominantly, I think it’s to appeal to a domestic nationalist sentiment. By criticizing the West and the US, they can say, “the West is doing really bad at dealing with COVID-19,” or, “the West is doing this wrong and that wrong and we’re a better governance system.” That’s still a slightly different approach to that of Russia.
Kevin Schoenmakers: Does China do any destabilizing of Taiwanese democracy?
Hannah Bailey: Yes. Particularly surrounding Taiwanese elections, there have been a number of studies that have found very significant evidence of influence operations. I’m not sure how much evidence there is that those actually worked. And I don’t think they were very successful from the studies that I’ve read. But yes, those exist.
Kevin Schoenmakers: Is there any indication whether any of these influence campaigns have any effect?
Hannah Bailey: So there’s a short answer and a longer answer. The short answer is no, we don’t have any evidence that they do. But that’s not to say that they don’t. The caveat is that it’s hard to study the effects of messaging on social media audiences, particularly as an academic, because you’re dealing with a real-world setting and real-world people, and those effects are hard to measure. It’s not like a laboratory experiment where you can expose someone to a particular message and ask for their response. And even if you did such an experiment, your findings might not be representative of people’s real-life online experiences.
But broadly speaking, we see that the messaging put out by Chinese state-backed media, Chinese diplomats, and these influence operations that we’ve uncovered so far really hasn’t had the engagement pick-up that you would expect of a successful influence campaign.
The influence operation that we uncovered recently didn’t seem to be receiving an awful lot of engagement outside of its own little network. That’s not to say that it won’t in the future. An important element of social media influence operations is that there’s a very quick feedback loop. Unlike with traditional print news, where you might not necessarily get feedback on whether the propaganda you published was successful, you get instant feedback on social media. So it’s easy for state actors to better tailor their messaging toward international audiences. I think we’ll see a lot more of that in the future.
Kevin Schoenmakers: And outside of China, is there evidence that other countries or other groups have been effective at these kinds of campaigns?
Hannah Bailey: I’m not a Russia expert, but according to a large literature on Russian influence operations, typically surrounding elections and promoting populist narratives, they have been slightly more successful at that.
Kevin Schoenmakers: You just mentioned your recent research project during which you found a network of Twitter users boosting the tweets of UK-based Chinese diplomats. Could that have any effects even if there wasn’t a lot of engagement by users outside the network of boosters?
Hannah Bailey: It’s hard for us to say that few or no users saw it. We have a certain number of things we can measure, like retweets. And then we use these to quantify engagement. And so I can say that there was not a lot of engagement with these networks. But at the same time, these networks also served to artificially inflate the profile of the particular diplomat. And then this could have gamed the Twitter algorithm to think, “Oh, this person’s very popular so we should show this tweet to more people.” And so it’s hard for us to know exactly the ramifications of this influence operation.
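One way to put a number on “engagement outside the network” is the share of engaging accounts that belong to the suspected booster set. A toy sketch with invented account names, not the report’s methodology:

```python
# Hypothetical measure: what fraction of a post's engagers come from
# inside a suspected amplification network? All names are invented.
def in_network_share(engagers, network):
    """engagers: accounts that retweeted/liked a post (may repeat);
    network: set of suspected inauthentic accounts."""
    if not engagers:
        return 0.0
    inside = sum(1 for account in engagers if account in network)
    return inside / len(engagers)

suspected_network = {"acct_a", "acct_c"}
engagers = ["acct_a", "acct_c", "acct_a", "real_user_1"]
print(in_network_share(engagers, suspected_network))  # → 0.75
```

A share close to 1.0 would suggest the apparent popularity is almost entirely self-generated — though, as noted above, this still says nothing about how many users merely saw the boosted post via the algorithm.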
Kevin Schoenmakers: When I read your report and saw the somewhat uninspired way they went about faking engagement I couldn’t help but think, could it be that they have some kind of social media outreach goal from the Foreign Ministry back in Beijing that they need to meet, and this is how they go about it?
Hannah Bailey: That’s another hypothesis, that this is kind of a self-promotion tool to signal to their bosses and their higher ups that, “I’m doing really well at my diplomatic job. The domestic audiences love me, and they’re engaging a lot with my content. And so I’m a good diplomat.”
Kevin Schoenmakers: You looked at Twitter. Have you done research, or do you know of research, into other social media, such as Facebook or YouTube?
Hannah Bailey: We did use Facebook data in the first report, but there’s a limit to how much Facebook lets you dive into their data. Twitter is far more transparent, allowing researchers more access to more granular data. So yeah, we would have loved to do more Facebook analysis, and see if there were similar influence operations networks on Facebook. But unfortunately, we couldn’t get the data. And we’d love to do future research on TikTok and all these other social media platforms. But data access is always an ongoing problem.
Kevin Schoenmakers: I suppose TikTok is a very unique case in that it’s the only worldwide social media platform that is Chinese.
Hannah Bailey: How much it still is Chinese or not Chinese is up for debate. But obviously, it operates very differently from other forms of social media. So if there were to be influence operations on that platform, I’d imagine they’d be behaving in a very different way to what we’ve seen on Twitter and, to a lesser degree, Facebook.
Kevin Schoenmakers: Twitter from time to time announces they’ve taken down groups of accounts that purport to be people who they are not. Are such cleanups effective at countering influence campaigns?
Hannah Bailey: I don’t know exactly what methods or tools they use to identify these accounts. When we alerted Twitter to the accounts that we discovered, they took almost all of them down. But it would be nice if the response were less reactive and more proactive.
We’ve found in research before and after our report that these networks tend to pop up pretty instantaneously. You take one down and another pops up. And they seem to be pretty widespread. Despite these kind of takedowns by Twitter, they’re still very prevalent on the platform from what we’ve seen. So maybe there’s scope for Twitter to expand their detection operations.
Kevin Schoenmakers: What are some of the other countermeasures that social media platforms, and Twitter in particular, take to combat inauthentic behavior?
Hannah Bailey: In the last year or so, Twitter and Facebook have started putting labels underneath particular accounts to make audiences more aware when they’re consuming content from a state actor. As part of our analysis, we looked at how consistently these labels were being applied to accounts that we knew belonged to Chinese diplomats or Chinese state-backed media outlets. And we found that they weren’t as widely applied as we might have hoped. Some of the accounts that we identified as unlabeled were then labeled after our report came out. But if this is one of the measures these platforms are taking to inform users when they’re consuming content from state and state-owned media outlets, it’s quite concerning that the labels are not being applied well.
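The audit described here amounts to computing label coverage over a known list of state-linked accounts. A toy sketch, with invented account data rather than the report’s findings:

```python
# Hypothetical label-coverage check: of the accounts we already know are
# state-linked, what fraction carry the platform's label? Data is invented.
def label_coverage(accounts):
    """accounts: list of dicts with 'handle' and 'labeled' (bool)."""
    if not accounts:
        return 0.0
    return sum(1 for a in accounts if a["labeled"]) / len(accounts)

known_state_accounts = [
    {"handle": "diplomat_1",    "labeled": True},
    {"handle": "diplomat_2",    "labeled": False},
    {"handle": "state_media_1", "labeled": True},
    {"handle": "state_media_2", "labeled": False},
]
print(label_coverage(known_state_accounts))  # → 0.5
```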
Broadly speaking, any way of informing audiences when they’re consuming content from a particular actor can help them digest that information properly, so they can ask, “Is this information trying to influence me, and should I trust it?” So we would hope that these labels will be more widely applied in the future.