The formal rationality of artificial intelligence-based algorithms and the problem of bias

This paper investigates why AI-based algorithms make biased decisions in complex contexts, even when data is largely bias-free. We argue that this is due to the fundamental difference between AI and human rationality in making sense of data. AI algorithms use formal rationality, a mechanical process of characterizing and judging data categories. Human rationality, by contrast, is substantive: it takes contextual nuances into account. We demonstrate this problem through rigorous text analysis and through an AI tool that simulates decision-making in complex contexts. We conclude by delineating the boundary conditions and limitations of leveraging formal rationality to automate algorithmic decision-making.

Artificial intelligence (AI) and machine learning (ML) are increasingly being used to make decisions that affect our lives. From predicting recidivism rates to determining who gets a loan, AI-based algorithms are embedded in a wide range of applications. However, there is growing concern about the potential for AI to produce biased and discriminatory outcomes. For example, studies have shown that AI-based facial recognition systems are more likely to misidentify people of colour. AI-based hiring algorithms have also been shown to be biased against women and minorities.

In this paper, we investigate the problem of bias in AI-driven decision-making from a sociotechnical perspective. We draw on Max Weber’s notions of formal and substantive rationality to understand how the exclusive use of formal rationality in AI can exacerbate bias.

We argue that AI-based algorithms are limited in their ability to meaningfully interpret values, norms, and moral judgments inherent in real-world datasets. This is because they rely on formal rationality, which is based on mathematical optimization procedures, rather than substantive rationality, which is a values-based apprehension of context.

We make three contributions to the literature:

First, we empirically illustrate the incongruence of formal rationality with substantive rationality in real-world datasets. We show how AI-based algorithms ignore or rectify discrepancies to fit the logic of formal rationality, resulting in a loss of information and meaning.

Second, we provide new insights into the performance of AI-based algorithms. We show how their exclusive reliance on formal rationality can lead to biases and poor decisions, even when the training data is unbiased. This is because formal rationality is unable to account for the complex social and cultural contexts in which AI-based algorithms are deployed.

Third, we develop a 2 × 2 matrix that delineates the adequacy of AI-based formal rationality and human-based substantive rationality in data analysis and decision-making. This matrix can be used to identify areas where AI-based systems are likely to produce biased and discriminatory outcomes.

Our findings have important implications for the design and use of AI-based systems. We argue that practitioners need to be aware of the limitations of formal rationality and take steps to mitigate the potential for bias. This may involve incorporating human-based substantive rationality into the design and development of AI-based systems or using AI-based systems in conjunction with human oversight. In sum, our work contributes to a more informed and nuanced discussion about the potential for AI to produce biased and discriminatory outcomes.

According to Dirk Schneckenberg: “Our study investigates the role of rationality inherent in algorithms and the limitations of AI-based algorithms in handling complex, real-life datasets. Our results emphasize the need for a more nuanced approach to AI development and the importance of considering the distinction between formal and substantive rationality. Our study findings specifically call for more precise specifications of boundaries for AI’s judgmental agency and decision authority for the further development of this powerful and disruptive technology.”


We use text analysis and ML to study how AI algorithms are trained on data. We analyze 22 datasets of user comments, including those for chatbots in financial services. These real-world datasets are unstructured and contain complex language. We then use ML to understand how AI algorithms interpret data and how this interpretation affects their decisions. We also analyze an AI tool for moral judgment. Finally, we discuss the limitations of using formal rationality in AI and how it can lead to biases and poor decisions.
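The core argument can be illustrated with a toy example. The sketch below is purely hypothetical (it is not the paper’s actual pipeline): a “formally rational” classifier scores a user comment by mechanically counting lexicon hits, with no access to the contextual cues, such as sarcasm, that substantive rationality supplies.

```python
import re
from collections import Counter

# Hypothetical sentiment lexicons, for illustration only.
POSITIVE = {"great", "helpful", "fast", "love"}
NEGATIVE = {"slow", "useless", "broken", "hate"}

def formal_score(comment: str) -> int:
    """Mechanical category-counting: +1 per positive token, -1 per negative."""
    tokens = Counter(re.findall(r"[a-z]+", comment.lower()))
    return sum(tokens[w] for w in POSITIVE) - sum(tokens[w] for w in NEGATIVE)

# A human reader (substantive rationality) hears the sarcasm and reads this
# comment as negative; the formal score counts "great" twice and calls it
# strongly positive.
print(formal_score("Great, the chatbot crashed again. Just great."))  # → 2
```

However sophisticated the real model, the same structural limitation applies: the judgment is a function of formally defined categories, not of the situated meaning a human would assign.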

Applications and beneficiaries:

Our results suggest that AI-based algorithms are most effective when the data used for training contains a low level of substantive rationality. In these situations, algorithmic judgments may not require substantive rationality when deployed in the real world. These are ideal contexts for automation and can include tasks such as generating standard financial reports and management dashboards. However, when the dataset comprises more instances shaped by substantive rationality but the deployed algorithms do not require it, an additional layer of random human checks can be added for quality assurance.

For example, an unprecedented crisis that essentially makes models based on past data irrelevant would require human supervision to envision various scenarios and adapt prior models to the emerging situation. Similarly, tasks that require common sense, such as medical diagnosis, should be overseen by humans.
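Read together, the two paragraphs above map out the quadrants of the 2 × 2 matrix. The sketch below renders that reading as a simple lookup table; the axis names and cell labels are our paraphrase of the discussion above, not the published matrix’s exact terminology.

```python
# Hypothetical rendering of the 2 x 2 matrix as a lookup table. Keys are
# (substantive rationality in the training data, substantive rationality
# required at deployment); values are the governance mode suggested above.
GOVERNANCE = {
    ("low", "low"): "full automation (e.g. standard financial reports, dashboards)",
    ("high", "low"): "automation plus random human quality-assurance checks",
    ("low", "high"): "human supervision of algorithmic output",
    ("high", "high"): "human-led decision-making (e.g. crisis response, medical diagnosis)",
}

def recommended_mode(data_sr: str, deployment_sr: str) -> str:
    """Look up the governance mode for a given quadrant."""
    return GOVERNANCE[(data_sr, deployment_sr)]

print(recommended_mode("high", "high"))
```

The point of the table is not the labels themselves but the design choice they encode: the degree of human involvement is decided up front, per quadrant, rather than bolted on after a biased outcome surfaces.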

As AI advances, human intervention will become increasingly necessary to curb AI’s destructive and abusive potential and to amplify its creative potential. Recent advances such as ChatGPT are built on large language models trained on data shaped by high substantive rationality, yet these models still struggle with complex questions that require substantive rationality. The management and governance of AI therefore shifts from integrating AI with human intelligence to clearly defining the respective roles and responsibilities of AI and human intervention. This is crucial given the declining effectiveness of formal rationality as the substantive rationality of the posed questions and context increases.

In summary, our study highlights the importance of considering rationality as a critical factor in understanding the limitations and potentials of AI-based algorithms.

Reference to the research

Nishant, R., Schneckenberg, D., & Ravishankar, M. (2023). The formal rationality of artificial intelligence-based algorithms and the problem of bias. Journal of Information Technology. https://doi.org/10.1177/02683962231176842


  • Dr Dirk Schneckenberg, Full Professor, Rennes School of Business
  • Rohit Nishant, Associate Professor, FSA Université Laval
  • MN Ravishankar, Loughborough University