How Algorithms Shape Our Daily Decisions: Bias, Consequences and Accountability

Algorithms quietly shape many of the choices we make every day. We often think of data, AI tools, and digital systems as neutral machines—but in reality, they are built on human decisions, human data, and human assumptions. And because the data we feed them is never complete or perfectly representative, algorithms tend to repeat and even amplify the inequalities already present in society (Noble, 2018). As these systems become more prevalent, particularly in areas such as hiring, finance, and policing, the issue of algorithmic bias is becoming increasingly difficult to ignore.

Algorithmic Bias in Hiring and AI Interviews

AI-powered hiring tools started gaining popularity between 2016 and 2018. These systems analyse everything from your voice to your facial expressions, trying to predict whether you’re the “right fit” for a role (Hoffmann, 2019). But algorithmic judgment actually begins long before the interview even starts. The moment you upload your CV, automated systems start comparing your information with the profiles of previously successful employees.

A famous example of this going wrong is Amazon’s experimental hiring algorithm. The company trained the system on ten years of past hiring data—data that came mostly from men working in technical roles. As a result, the AI learned to favour male candidates. It began downgrading CVs that included the word “women’s”, as in “women’s football team”, and even penalised graduates from women’s colleges. Once Amazon realised the model consistently scored men higher than equally qualified women, the project was abandoned (Dastin, 2018).

This case shows the main issue: if the history is biased, the algorithm will be biased too. For example, if men have historically been hired more often for certain roles, an AI might automatically score male applicants higher (Raghavan et al., 2020). Women with the same level of skill—or even stronger qualifications—may still receive a lower score simply because of what the system has learned from past data.
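This mechanism can be made concrete with a deliberately simplified sketch. This is not Amazon’s actual system—the CVs and the scoring rule below are invented purely for illustration—but it shows how a scorer that “learns” word frequencies from past hiring outcomes will penalise any word, such as “women’s”, that happens to appear only among past rejections:

```python
from collections import Counter

# Hypothetical historical data: a toy stand-in for years of hiring records
# in which most past hires were men.
past_hires = [
    "software engineer chess club",
    "software engineer football team",
    "developer chess club",
    "engineer football team",
]
past_rejects = [
    "software engineer women's chess club",
    "developer women's football team",
]

# "Training": count how often each word appears among hires vs. rejects.
hire_counts = Counter(w for cv in past_hires for w in cv.split())
reject_counts = Counter(w for cv in past_rejects for w in cv.split())

def score(cv: str) -> float:
    """Naive CV score: words seen among past hires add points,
    words seen among past rejects subtract them."""
    return float(sum(hire_counts[w] - reject_counts[w] for w in cv.split()))

# Two equally qualified candidates; only one CV mentions "women's".
a = score("software engineer chess club")
b = score("software engineer women's chess club")
print(a, b)  # → 5.0 3.0
assert b < a  # the second CV scores lower purely because of "women's"
```

The second candidate is identical to the first in every relevant skill, yet scores lower, because the only CVs containing “women’s” in the historical data were rejections. Nothing in the code mentions gender explicitly; the bias is inherited entirely from the data.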

Balancing the Benefits and the Risks

There’s no doubt that algorithms make some tasks faster and more efficient. They can process huge amounts of information and help companies save time. But the risks are just as significant. AI cannot understand context, empathy, or intention. It can only work with the data it is given—data that is often incomplete or biased.

People are more than numbers on a spreadsheet or patterns in a database. That’s why algorithmic results should guide decisions, not define them.

Conclusion

Algorithms are becoming powerful decision-makers in our society, especially in recruitment. But these systems learn from the past, and if the past is unequal, the future they create will be too. Gender bias in hiring shows how easily algorithms can reinforce old patterns of discrimination. While AI offers speed and efficiency, it must be used responsibly. Transparency, regular evaluation, and strong oversight are essential. Ultimately, algorithms should support human judgement—not replace it.

References

Dastin, J. (2018) ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters, 10 October. Available at: https://www.reuters.com (Accessed: [insert access date]).

Hoffmann, A.L. (2019) ‘Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse’, Information, Communication & Society, 22(7), pp. 900–915.

Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Raghavan, M., Barocas, S., Kleinberg, J. and Levy, K. (2020) ‘Mitigating bias in algorithmic hiring: evaluating claims and practices’, in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. New York: ACM, pp. 469–481.

3 thoughts on “How Algorithms Shape Our Daily Decisions: Bias, Consequences and Accountability”

  1. Hi Linda, I think you did a great job. This article is important in revealing how algorithms can replicate and amplify social biases behind a guise of neutrality, and how they can unfairly impact key areas such as recruitment, finance, and public security. At the same time, I hope you will, as before, include some real-world examples (preferably from recent years) rather than limiting yourself to older cases (such as the recruitment bias mentioned in the article). This would help readers understand more clearly the real-world impact of algorithmic bias in today’s society. Overall, though, well done. I would also like to know your view: if a company uses algorithms for recruitment screening but guarantees ‘faster speed, lower costs, and broader candidate coverage’, which should take priority, ‘efficiency’ or ‘fairness’? I am interested in your opinion.

  2. Hi,
    I really enjoyed reading your post about how algorithms shape our daily decisions. The way you explained that AI isn’t actually “neutral” but built on human choices and past data made it very easy to understand. The Amazon hiring example really stayed with me; it’s scary how a tool that looks objective can still quietly push women down just because it has learned from a biased history. I also really liked the sentence “people are more than numbers on a spreadsheet”; it feels so true. It reminded me that even if algorithms are fast and efficient, they shouldn’t be the ones fully deciding our future. Your conclusion about transparency and human responsibility was very powerful, and it made me reflect on how many decisions in our lives are already influenced by systems we don’t really see or question. 🌐✨

  3. Hi, I read this article, which points out that algorithms are not neutral tools but may amplify historical biases, especially in key areas such as recruitment, where they can have an unfair impact on vulnerable groups. The example of Amazon’s recruitment system shows that inequality in the data itself directly leads to algorithmic discrimination. It makes me feel that while we enjoy efficiency, we must attach importance to transparency, supervision, and human judgement, and avoid allowing technology to entrench and promote social injustice. I like your blog.
