Algorithmic bias refers to the tendency of an automated system that is supposed to be “objective and fair” to systematically favor certain individuals or groups when it processes information. Although we often think of algorithms as cold and emotionless, they are deeply involved in the information we encounter every day, such as social media recommendations, search results and news push notifications, and over time they can shape how we view the world and understand things. Barocas and Selbst (2016) point out that algorithmic bias can arise at almost every stage of the machine learning process, from data collection and selection to feature engineering; in other words, algorithmic bias is not random error but a structural problem.
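To make that structural point concrete, the following minimal sketch in Python (using entirely hypothetical hiring records) shows how bias can enter at the data collection stage: a learning rule that is itself neutral still reproduces historical inequality when the data it is trained on records that inequality.

# Minimal sketch (hypothetical data): how skewed historical data alone
# can produce a biased model, even with a "neutral" learning rule.
from collections import defaultdict

# Past hiring records as (group, hired). Group B was rarely hired
# historically, and the data faithfully records that pattern.
history = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 10 + [("B", 0)] * 40

counts = defaultdict(lambda: [0, 0])   # group -> [number hired, total seen]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

# A trivially "objective" learner: predict each group's historical hire rate.
model = {g: hired / total for g, (hired, total) in counts.items()}
print(model)   # {'A': 0.7, 'B': 0.2} -- past inequality becomes a future rule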
In daily life, this bias is already affecting us. For instance, social media recommendation systems keep pushing content similar to what you have clicked and browsed, so you see increasingly homogeneous viewpoints and rarely encounter different voices. This kind of “information cocoon” can lead people to mistakenly believe that the world is exactly as they see it, and may deepen social division. Search engines are similar: their ranking methods often reinforce the power relations and inequalities that already exist in society, amplifying some voices while ignoring others. In other words, the content an algorithm outputs is not as neutral as we think. Noble (2018) demonstrated through her research on search engine ranking mechanisms that algorithms often reflect and reinforce existing social power structures, which implies that algorithmic output is not neutral.
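As a rough illustration of the feedback loop behind such “information cocoons”, here is a minimal sketch (hypothetical topics and an invented click probability) of a recommender that ranks items purely by past engagement; starting from a neutral user, one topic quickly crowds out the rest.

# Minimal sketch (hypothetical behaviour): a click-driven recommender
# that feeds back on its own output, narrowing what a user sees.
import random

random.seed(0)
topics = ["politics_left", "politics_right", "sports", "science"]
interest = {t: 1.0 for t in topics}          # the user starts with no preference

def recommend(interest):
    # Rank purely by engagement weight -- the usual "relevance" heuristic.
    return max(topics, key=lambda t: interest[t] + random.random() * 0.1)

for step in range(200):
    shown = recommend(interest)
    if random.random() < 0.6:                # the user clicks what they are shown
        interest[shown] += 1.0               # ...which boosts it even further

print({t: round(w, 1) for t, w in interest.items()})
# One topic ends up dominating the feed even though the user started neutral.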
Reducing algorithmic bias is not something that can be solved with a few technical adjustments. Improving data transparency, auditing algorithms regularly, and treating “fairness” as an explicit indicator during development are all important, but these practices only address part of the problem, because many biases are not generated by the algorithms themselves; they stem from long-standing inequalities in society, which the algorithm merely magnifies or perpetuates. To make algorithms genuinely fair, we therefore need not only technology but also long-term oversight, clear ethical norms, and a shared commitment to fairness from organizations and society.
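As one example of what “regularly checking algorithms” and treating fairness as an indicator might look like in practice, the sketch below computes a demographic parity ratio over hypothetical decision data and flags it against the commonly cited 80% screening threshold; the function names and sample numbers are illustrative, not from the text above.

# Minimal sketch (hypothetical data and threshold): auditing a model's
# decisions with a demographic parity ratio, one common fairness indicator.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(decisions_by_group):
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

# 1 = positive decision (e.g. loan approved), 0 = negative, per group.
audit_sample = {
    "group_A": [1, 1, 1, 0, 1, 1, 0, 1],
    "group_B": [1, 0, 0, 0, 1, 0, 0, 1],
}

ratio, rates = demographic_parity_ratio(audit_sample)
print(rates, round(ratio, 2))
if ratio < 0.8:   # the "80% rule" often used as a rough screening threshold
    print("Potential disparate impact: investigate before deployment.")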
Reference list:
1. Barocas, S. and Selbst, A.D., 2016. Big data’s disparate impact. California Law Review, 104, p.671.
2. Noble, S.U., 2018. Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.

Hi!
In particular, the way you link Barocas & Selbst with Noble to show that bias is structural rather than incidental makes your explanation of algorithmic bias very clear. Your point about “information cocoons” struck me as especially interesting because it makes the effects of bias in daily life easy to grasp. You might elaborate a little by adding a specific contemporary example (e.g., biased facial recognition or TikTok’s For You Page shaping political content), which would make the argument feel even more solid. Overall, your explanation is thoughtful, well organised, and highlights both the technical and social dimensions of algorithmic bias. Great work!
I think your article defines algorithmic bias very clearly. I agree with your argument that it should be understood as a structural problem rather than a random error, and your use of Barocas & Selbst (2016) and Noble (2018) provides a solid scholarly foundation. The everyday examples you give, from social media recommendation systems and information cocoons to search engine ranking mechanisms, concretely show that algorithmic outputs are not neutral.