Algorithmic nationalism

Tracking discourses of danger on Baidu and Google

The bilateral Sino-US relationship has long been described as the world’s most important and complex. It is therefore well worth asking: how can the Chinese and American publics become better informed about the nuanced complexities of the other? Such a simple question now feels particularly pertinent in this period of tense relations, alternatively labelled Sino-US ‘strategic competition’ or, more provocatively, ‘a new cold war.’ Perhaps at first glance, we can optimistically note that the potential to seek knowledge about distant nations and cultures is now instantaneously available to curious digital publics in both countries. Like anything in our content-soaked postdigital era, such knowledge is ‘searchable.’ Just Google it! (or use Baidu!百度一下).

However, my recent research shows that when seeking information on ‘China’ from Google or ‘the US美国’ from Baidu, these search engines are far from neutral research tools. Instead, both platforms frequently present search results about the other nation from within the boundaries of considerably nationalistic ‘discourses of danger.’ As will be argued below, by streamlining political content steeped in base appeals to collective identity, emotional outrage, and out-group antagonism to domestic users, these algorithmic search engines obstruct the acquisition of meaningful knowledge by truth-seeking publics in both China and the US. While the digital information age once signalled hope for increased cross-cultural understanding among web users living in rivalrous nation-states, my research charts a trend where algorithms programmed to maximize engagement and profit seductively orient users in more nationalistic directions.

The power of search engines

As Jutta Haider and Olof Sundin note, ‘search engines are now one of society’s key infrastructures for knowing and becoming informed.’ Although the internet can still function as something akin to an infinitely rich global library for some disciplined users, the vast majority of individuals who rely on search engines like Google and Baidu are fed curated snippets of humanity’s diversity. In his book The Black Box Society, Frank Pasquale writes, ‘the power to include, exclude, and rank is the power to ensure which public impressions become permanent and which remain fleeting.’ By applying predictive calculations based on personalized data profiles and guiding users towards ‘this content’ over ‘that content,’ socio-technical algorithms filter the full diversity of information available online and lead users towards far narrower knowledge-building possibilities.

Taking a cue from Kate Crawford’s work on Maurizio Lazzarato’s concept of neoliberal governance, we can see how these algorithmically driven search engines have begun to take on the authority of ‘governance,’ which Lazzarato idiosyncratically defines as ‘the ensemble of techniques and procedures put into place to direct the conduct of men and to take account of the probabilities of their action and their relations.’ From this vantage, such algorithms start to resemble what Lazzarato conceptualizes as ‘discursive technologies’ that ‘utilize and manage the discursive practices that enable the formation of know-hows and statements.’ These tools, therefore, shape perceptions and construct shared realities, a key aspect of what Taina Bucher refers to as ‘algorithmic power.’ But crucially, this ‘algorithmic power’ is not neutral. Much indispensable research has scrutinized how platform algorithms frequently reproduce content tinged with racism, sexism, partisanship, extremism, and, key to this discussion, nationalism.

These critical research trends have excelled in explaining how fallible human engineers, with their own prejudices and blind spots, have programmed these algorithms under great secrecy. Meanwhile, with the primary goal of harvesting user data by gaining clicks and lengthening viewing time (thus maximizing ad revenue), the ‘surveillance capitalist’ profit model undergirding these algorithmic tools has created conditions where web ecologies feed upon emotion, reproduce seductive political biases, and hyperbolize conflict already lurking within and between societies. In the pursuit of attention over all else, popular search engines are now oriented to surface forms of content that elicit strong psychological responses from their respective users. When users employ these socio-technical tools to search for information on contested international political issues, in particular, we find that these algorithmic processes tend to highly rank both banal and bellicose forms of nationalistic discourse specifically curated for users with country-designated data profiles. This tendency is what I label algorithmic nationalism.

Filtering the digital world on national terms: The rise of algorithmic nationalism

Algorithmic nationalism begins with how web architectures are structured. Contrary to once-common dreams of an open and cosmopolitan internet, search engines are now designed to deliver results within national frameworks. For example, there is not one Google but many: depending on user location, search results will differ based on regional IP (e.g., for users in Japan or in Russia). The possibilities for publics to encounter diverse and unexpected cross-cultural content online thus give way to the more parochial outcome of localized content fed within nationally oriented bubbles, or what Sabina Mihelj and César Jiménez-Martínez refer to as ‘national digital ecosystems.’ These regional targeting features support the reproduction of information biased towards the presuppositions and symbolic resources of specific ‘imagined communities.’ Building from Benedict Anderson’s work on socially constructed nationalism, Florian Schneider posits that much as newspapers and novels facilitated the ‘collective imagining’ of national identity in the industrial age, internet-based communication technologies now streamline and normalize nationalized ‘symbolic resources’ through which domestic internet users individually ‘imagine’ themselves as members of larger national communities, what Schneider aptly labels ‘digitally imagined communities.’

When states pursue ‘digital sovereignty’ by establishing alternative local platforms and domestic network architectures, the potential for internet communication technologies to facilitate digitally imagined communities is supercharged. China’s digital governance is perhaps the most relevant example of such efforts. As Schneider writes in his landmark study China’s Digital Nationalism (2018), ‘mainland China’s search engines systematically reproduce the biases of the PRC’s media ecology.’ He argues that ‘through market mechanisms, ideological guidelines, coercive legal tools, technological innovations, and a good deal of persuasion,’ China’s domestic platforms largely ensure that internet political discourse on ‘high-profile political issues’ stays within ‘simplistic, nationalist frameworks of understanding.’ The authoritarian technique behind the management of such nationally oriented ‘frameworks of understanding’ is highly developed and still advancing in China. This can be seen in the October 2021 move by the Cyberspace Administration of China to ensure ‘algorithmic recommendation service providers shall uphold mainstream value orientations … vigorously disseminate positive energy, and advance the use of algorithms upwards and in the direction of good.’ Admittedly, this language is vague. But, following Schneider, if past trends hold, we can expect China’s platforms and regulators to continue to coordinate in ‘upholding mainstream values’ by filtering available information on international political issues to ‘emphasize values such as sovereignty, authority, national community, and righteous indignation rather than transnational understanding, nuanced debate, and multi-faceted meaning-making.’ Such an argument is strongly supported by my research, as will be seen below.

Of course, Silicon Valley’s platforms do not adhere to the same kinds of authoritarian guidelines regulating digital China. However, my data demonstrates that when US-based users employ Google to seek information on geopolitically relevant search terms, the top-ranked results typically feature content framed within nationalistic understandings. This shows continuity with Michael Billig’s 1990s research, which expounds on how nationalism has long been normalized, backgrounded, and assumed in the mass media and elite discourse common to ‘Western’ political life. For example, in mainstream newspapers, the nation is ‘flagged’ repeatedly to the extent that it becomes commonsense to see the primacy of ‘our nation in a world of nations,’ an ‘ideology which is so familiar that it hardly seems noticeable.’ Recent studies inspired by Billig’s thesis confirm that such banal nationalism indeed holds steady in the digital age. As such, the mediated content that search engines in the US draw from is already imbued with ‘commonsense’ nationalistic presuppositions.

I argue that this banal nationalism can become more bellicose when it fuses with the ‘economics of emotion’ flourishing under surveillance capitalism, a concept Vian Bakir and Andrew McStay define as the way ‘emotions are leveraged online to generate attention and viewing time, which converts to advertising revenue.’ In this context, platform algorithms reward content creators who gain attention by triggering emotional outrage, in-group solidarity, and out-group antagonism. This feeds into problematic web ecologies where domestic and international propaganda dually flourish. For Baidu and Google’s digitalized presentation of Sino-US relations, in particular, the algorithmic tendency to reproduce out-group antagonism in both countries’ digital environs promotes provocative discourses of danger that tend to overwhelm more nuanced (or boring) voices. As shown below, an ideology like nationalism, which thrives on exploiting human psychological needs for connection, community, and identity, is privileged by black-box algorithms programmed to increase the targeted engagement of digitally imagined communities.

Reading Sino-US Relations on Google and Baidu: Nationalized and normalized discourses of danger

In my study tracking the top-ranked results for the queried terms ‘China’ via Google and ‘the US美国’ via Baidu from late 2020 to early 2021, clickbait, sensationalism, and fear towards the other are commonplace in both platforms’ top-ranked results. Note that my research method used anonymous privacy browsers when searching on both platforms. These results were therefore not generated in relation to my own personal search history or data profile, but rather emerged from a ‘blank slate’ profile with a Chicago-based IP for Google and a Beijing-based IP for Baidu. Through interpretive content analysis, my findings show that for the top results tracked within my study’s five-month timeframe, about 67% of top Google results for the search term ‘China’ and 53% of Baidu results for ‘the US美国’ are characterized by significantly negative representations. In both datasets, we find the quotidian reproduction of existing tropes, narratives, and presuppositions common within both nations’ elite-led mediated discourses towards the other. While these kinds of provocative representations of a hostile other may not be entirely new per se, such algorithmic amplification is.
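For readers curious about the mechanics, the negativity shares reported above reduce to a simple aggregation over hand-coded results. The sketch below is purely illustrative: the records and labels are hypothetical stand-ins, not my actual dataset, and the field structure is an assumption for demonstration purposes.

```python
# Hypothetical hand-coded records from interpretive content analysis:
# (platform, sentiment label of one top-ranked result). Illustrative only.
coded_results = [
    ("google", "negative"), ("google", "negative"), ("google", "neutral"),
    ("baidu", "negative"), ("baidu", "neutral"),
]

def negative_share(records, platform):
    """Return the share of a platform's coded results labelled negative."""
    labels = [label for p, label in records if p == platform]
    return sum(label == "negative" for label in labels) / len(labels)

for platform in ("google", "baidu"):
    print(f"{platform}: {negative_share(coded_results, platform):.0%} negative")
```

The substantive work, of course, lies in the interpretive coding itself; the arithmetic is trivial once each result has been labelled.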

When Googling ‘China’ daily from an American IP, emotionally charged word associations frequently highlight the looming ‘danger’ that China represents, with repeated employment of words like ‘weapon,’ ‘war,’ ‘threat,’ and ‘genocide.’ We find Fox News highlighting that ‘China is sinisterly collecting the world’s DNA’ and that ‘China has placed its Communist flag on the moon.’ CNN alerts readers to a ‘secret’ Chinese navy ‘swarming the South China Sea.’ NBC reports China is ‘creating biologically enhanced soldiers.’ Yahoo cautions that China is ‘targeting people on US soil.’ Apparently, China is directly targeting Americans with insidious personal data collection, anal swabs, and unjustified detention. Meanwhile, other writers emphasize how China is ‘weaponising trade,’ ‘weaponising conspiracy theories,’ ‘weaponising currency,’ and ‘weaponising rare earth technology.’

In Baidu, we find similarly inflammatory, yet vaguer, word-based connotations towards the US, like ‘black hands 黑手,’ ‘depravity 堕落,’ ‘scourge 之祸,’ ‘soul stain 灵魂污点,’ ‘evil 恶行,’ and historically loaded terms like ‘leading party 带路党’ and ‘humiliation 羞辱.’ These latter associations drum up Chinese traumas of historical humiliation and victimhood at the hands of imperialist powers like the US. Meanwhile, other nationalistic outlets sensationally highlight American ‘decline.’ State-led sources like The Global Times describe ‘the disease of the US system 美国制度之疾.’ Beijing News charts ‘the three nightmares of the US 眼下美国的三大噩梦.’ Popular new media outlets like Guancha note that the US has purportedly entered ‘a state of chaos 一片混乱的状态.’ Elsewhere, commonplace fear-based language highlights American aggression, with emphasis that the US is ‘belligerent 好战,’ that it has an ‘interference addiction 干涉成瘾,’ that it is a ‘centre of global massacre 全球屠杀中心,’ and that ‘these American politicians want us to die 美国的一些政治家希望我们死.’ Despite being bounded by considerably different regulatory frameworks and unique domestic anxieties, these Google and Baidu results are functionally similar, as they simultaneously target the curiosity and threat perceptions of both imagined communities.
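Word associations of this kind can be surfaced mechanically with a simple lexicon count over collected headlines. The sketch below is again illustrative only: the lexicons are small hand-picked samples drawn from the examples above, and the sample headlines are invented stand-ins rather than items from my dataset.

```python
from collections import Counter

# Hand-picked danger lexicons drawn from the examples above (illustrative).
LEXICONS = {
    "google": ["weapon", "war", "threat", "genocide", "swarming"],
    "baidu": ["黑手", "堕落", "恶行", "羞辱", "好战", "核战争"],
}

def danger_term_counts(headlines, platform):
    """Count occurrences of each lexicon term across a set of headlines."""
    counts = Counter()
    for headline in headlines:
        text = headline.lower()
        for term in LEXICONS[platform]:
            counts[term] += text.count(term.lower())
    return counts

# Invented stand-in headlines, echoing the tropes described above.
sample = ["China is weaponising trade", "Is there a war coming?"]
print(danger_term_counts(sample, "google"))
```

A raw keyword tally like this can flag candidate material, but it cannot replace interpretive reading: irony, negation, and context all escape it.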

As is common for media outlets capitalizing on the economics of emotion, my dataset frequently captured ‘clickbait,’ where users are invited to select provocative hyperlinked queries like ‘China v Russia v America: is 2021 the year Orwell’s 1984 comes true?’ and ‘Is there a war coming between China and the US?’ Such question-based headlines are more prevalent in the Baidu dataset, with headlines like ‘The disease of the American system: in the tissue or in the bone marrow? 美国制度之疾:在腠理还是骨髓?’ and ‘Is the US still clamoring for nuclear war in this situation? 美国这状态还叫嚣核战争?’ Before clicking, users’ limbic systems may spark as they are tempted to inquire: Is war really coming? Are we actually facing an imminent Orwellian dystopia? What exactly is the ‘disease’ of the American system? Does the US indeed want nuclear war? In their appeals to uncertainty, intrigue, and fear, these ‘clickbaity’ examples may entice readers to unconsciously seek a tempting post-click dopamine payoff.

These examples are not outliers but a small sample from my dataset. In their shared drive to maximize attention within specific networked communities, algorithmically driven media in both China and the US trigger human psychology by evoking pathos and collective fear. Rather than serving as benign research tools, these top-ranked search engine results frequently filter representations of the other nation through domestic biases and emotionally infused language to emphasize the myriad ways that their nation threatens our nation, with precious few alternative takes arising to contest the cacophony of mediated content keen on highlighting the sensational aspects of a hostile other. Consequently, algorithmic nationalism provides oxygen to the wider processes of Sino-US enmity construction that currently threaten to derail the international collaboration our planet quite desperately needs.


As former Google engineer Tristan Harris has posited: ‘in an attention economy, there’s only so much attention, and the advertising business model always wants more. So, it becomes a race to the bottom of the brain stem.’ Baidu and Google can target the ‘brain stem’ in digitally imagined communities by streamlining provocative symbolic content about Sino-US relations that plays upon emotionally charged in-group subjectivities. Furthermore, as mentioned, I collected the above examples through anonymous searches without a preexisting data profile to draw from. We can likely assume that users with data profiles embedded within more politically radical and nationalistic ‘filter bubbles’ inside the Chinese and American bifurcated internets are commonly exposed to personalized content even heavier in tribalistic animus than the examples cited above.

The role of content creators themselves in these processes also deserves attention. As platforms erode the truth-telling power of the Fourth Estate, it is no small feat for a writer or editor interested in career promotion and doing their job well in current digital environs (i.e., by getting platform-based attention in viciously competitive 24-hour news cycles) to resist the incentive to publish clickbait and snark. Indeed, I have also not said enough about the other structures and hegemonic ‘Propaganda Models’ guiding and bounding these two mediated discourses of danger. More attention than I can give here is owed to how these algorithms reproduce ideological biases and antagonistic sentiments already present in both nations’ media ecologies and culturally constructed symbolic backdrops. While technology is not neutral, a critique of the pipeline cannot flourish without a critique of the source.

In the US, many of the symbolic resources these algorithms churn up originate from long-standing ideological tropes of ‘Yellow Peril’ that have guided American media and political elites to view China through the blurred lenses of ‘threat,’ ‘fear,’ and ‘fantasy.’ At a time of partisan polarization on almost all else, ‘China threat’ discourse now flourishes with unique vigour as Washington’s policymakers frame unity in opposition to China as the essential impetus to enact basic feats of domestic governance like increasing research funding, investing in green energy, and rebuilding long-neglected infrastructure. Forceful opposition to China’s rise has been called ‘the last bipartisan issue left in Washington,’ with few exceptions to this consensus emerging from American lawmakers (though exceptions are far more prevalent amongst experts).

Meanwhile, China’s elite-led media discourses on America are problematically embedded in anti-imperial frames of ‘national humiliation’ that exaggerate America’s role in China’s 20th-century traumas. As China faces an economic slowdown and challenging demographic trends, the Chinese Communist Party sees such nationalistic framing as instrumental in maintaining its own ideational legitimacy. Indeed, the CCP now seems to be directing domestic regulation of algorithms to more firmly reinforce the automation of ideological and historical ‘correctness’ online. While I cannot fully unpack the contexts, contingencies, and boundaries of these dually problematic discourses in China and the US here, it is clear that the search results on Sino-US relations disseminated by Baidu and Google draw from narratives, tropes, and assumptions deeply rooted in both nations’ domestic political vocabularies.

The danger of such hostile representations ultimately lies in their potential impact on the perceptions of Chinese and American citizens alike. Perceptions and misperceptions guide international politics. If these discourses of danger become more firmly rooted in domestic vocabularies, the poles of Chinese and American subjectivities may move to such extremes that the mutual will to cooperate on issues of global importance evaporates further. This is exceptionally troubling when existential transnational issues like climate change threaten all humans, regardless of the digitally imagined communities we primarily read, search, click, like, and share within. At a time when global cooperation seems to be the only way to avoid environmental collapse, it is worth calling out the destructive power of jingoistic discourses and the algorithmic processes that reinforce them. As Sino-US digital contestation evolves into the 21st century and threatens to further bifurcate our digital ecologies into ‘duopolistic digital worlds,’ we ought to consider the potential of both nations’ algorithmically amplified discourses of danger going global at a time when transnational solidarity is needed more than ever.

Disclosure Statement: Daniel E. Crain does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article and has disclosed no relevant affiliations beyond their academic appointment.


Full Citation Information:
Crain, D. E. (2021). Algorithmic nationalism: Tracking discourses of danger on Baidu and Google. PESA Agora.

Daniel E. Crain

Daniel E. Crain is a recent graduate of Peking University’s School of International Studies, a Zhixing China-US Fellow, and a co-director for the Beijing chapter of Young China Watchers (YCW). Daniel’s recent research has been published in Educational Philosophy and Theory and the Encyclopedia of Educational Innovation. His work focuses on the politics of digital media, critical IR theory and Sino-US relations.