Manufacturing Consent Revisited
Herman and Chomsky’s Manufacturing Consent (1988) remains a foundational critique of how media systems shape public opinion. Their Propaganda Model argues that news content is filtered through structural constraints—ownership, advertising, sourcing, flak, and ideology—resulting in a narrowed range of permissible debate. While the model was developed in the context of 20th-century mass media, its core logic remains relevant, particularly when reframed within contemporary platform capitalism (Srnicek, 2017) and surveillance capitalism (Zuboff, 2019).
From Newsrooms to Algorithms: The Propaganda Model Today
Digital platforms reproduce many of the structural pressures Herman and Chomsky identified, but through algorithmic and infrastructural mechanisms.

Example 1: Facebook’s 2018 Algorithm Change
When Facebook reweighted its News Feed to elevate “meaningful social interactions,” independent journalism organisations experienced dramatic declines in reach (Vaidhyanathan, 2018). This was framed as a pro-community shift, yet operationally reflected advertising and ownership filters, prioritising engagement and platform retention over public interest journalism.

Example 2: TikTok’s Moderation Practices
Leaked documents revealed TikTok’s suppression of content from users deemed “undesirable” due to political sensitivity, appearance, or socioeconomic indicators (Biddle et al., 2020). Although justified as “platform safety,” these practices operate as ideology and flak filters, now rendered automated and opaque.

Example 3: YouTube’s Boosting of “Authoritative Sources”
During the COVID-19 pandemic, YouTube privileged institutional sources while downranking independent commentary (Google, 2020). While useful for countering misinformation, this reveals persistent sourcing dependency, privileging dominant institutional narratives (Anderson & Revers, 2018).


Across these platforms, visibility is governed less by editorial decisions and more by algorithmic curation, data collection, and optimisation logics.
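The optimisation logic described above can be sketched as a toy ranking function. To be clear, the signals and weights below are invented for illustration and do not represent any platform’s actual algorithm; the point is only that visibility follows an engagement score rather than editorial judgement.

```python
# Toy sketch of engagement-driven feed ranking. The signals and weights
# are hypothetical, chosen only to illustrate the structural logic.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weighting: interactions that keep users on the
    # platform (comments, shares) count more than passive likes.
    return post.likes * 1.0 + post.comments * 3.0 + post.shares * 5.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # Visibility is decided by optimisation logic, not editorial choice:
    # the highest-scoring posts surface first, whatever their civic value.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Investigative report on local policy", likes=120, comments=4, shares=2),
    Post("Viral reaction meme", likes=90, comments=60, shares=40),
])
print([p.title for p in feed])
# → ['Viral reaction meme', 'Investigative report on local policy']
```

Even in this crude sketch, the meme outranks the investigative report, because the ranking rewards whatever generates interaction, not public-interest value.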
Critical Reflection: Does the Theory Still Hold?
Critics argue that user-generated content challenges the Propaganda Model (Klaehn, 2009). Yet digital media’s seeming plurality masks deeper concentration of infrastructural power (Couldry & Mejias, 2019). Consent is now:
• Individualised (personalised feeds),
• Continuous (24/7 algorithmic optimisation),
• Behaviourally reinforced (predictive analytics shaping exposure).
In this sense, manufacturing consent has become more granular and ambient, operating through platform architecture rather than newsroom routine.
Reflective Insight: How My Own Feed Manufactures Consent
During the 2024 Hong Kong District Council elections, my feeds highlighted moderate, algorithm-friendly narratives while downplaying oppositional analysis. This was not intentional bias by journalists but a result of engagement-driven ranking systems favouring “safe,” widely shared content. What appears as personal preference is partly a product of algorithmic reinforcement (Tufekci, 2015).
This illustrates a central point of Herman and Chomsky’s theory: consent is shaped not by censorship but by patterned visibility.
Conclusion
Manufacturing consent persists, but now operates through the infrastructures of platform capitalism—algorithmic curation, data extraction, commercial metrics, and moderation systems. Herman and Chomsky’s core insight—that structural forces shape public consciousness—remains vital, but requires integration with contemporary research on algorithmic governance and digital political economy.
References
Anderson, C.W. & Revers, M. (2018) ‘From counter-power to counter-peasantry: The changing nature of newsroom power in the digital age’, Journalism, 19(1), pp. 22–39.
Biddle, S., Ribeiro, P. & Dias, L. (2020) Invisible censorship on TikTok. The Intercept, 16 March.
Herman, E.S. & Chomsky, N. (1988) Manufacturing Consent: The Political Economy of the Mass Media. New York: Pantheon.
Couldry, N. & Mejias, U. (2019) The Costs of Connection: How Data is Colonizing Human Life. Stanford: Stanford University Press.
Google (2020) Managing harmful misinformation on YouTube. YouTube Official Blog, 11 March.
Klaehn, J. (2009) ‘The propaganda model: Theoretical and methodological considerations’, Westminster Papers in Communication and Culture, 6(2), pp. 43–58.
Srnicek, N. (2017) Platform Capitalism. Cambridge: Polity Press.
Tufekci, Z. (2015) ‘Algorithmic harms beyond Facebook and Google’, Colorado Technology Law Journal, 13(1), pp. 203–218.
Vaidhyanathan, S. (2018) Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. Oxford: Oxford University Press.
Zuboff, S. (2019) The Age of Surveillance Capitalism. London: Profile Books.

I appreciate how you included a relevant real-world example in your blog. What’s interesting to me is that the algorithms employed by Facebook treat being informed via news sites on the platform as not a ‘meaningful’ social interaction, which raises the question: what is? Because I would say that most content published on Facebook is far from meaningful.
It is clear that you present three examples of manufacturing consent, and you do a good job connecting Herman and Chomsky’s original structural filters to contemporary algorithmic infrastructures. One area you might consider is the tension between algorithmic optimisation and user agency.
Hi! This was an excellent read! I especially loved the way you chose different social media platforms and traced the changes in their algorithms and the dates when they occurred, because it shows how much social media has evolved since its beginnings. Interestingly, algorithms deem information from news sites not “meaningful,” yet platforms like Facebook often host unchecked information that can be far from the truth.