Artificial Intelligence (AI) systems are dynamic systems endowed with the ability to “identify, interpret, make inferences, and learn from data to achieve predetermined organisational and societal goals” (Mikalef & Gupta, 2021, p. 3). Nowadays, AI systems are continuously reshaped by their use, thanks to their ability to learn from large datasets (Grote et al., 2024). Whether in simple AI systems programmed with only a few parameters or in more complex ones, this dynamism raises accountability issues because it may undermine transparency. Accordingly, the human-in-the-loop principle has gained prominence as a way to ensure that humans “have control over a technical system to fulfill their accountability for its safe and effective performance” (Grote et al., 2024, p. 2). Nevertheless, several studies suggest that humans-in-the-loop may inadvertently trigger algorithmic discrimination if their decision-making process is subconsciously biased (Bauer et al., 2024). Likewise, recent studies show that novice AI users more frequently fail to discard incorrect AI advice, whereas more experienced AI users more often ignore correct AI advice (Jussupow et al., 2021). Clearly, blindly trusting the AI machine, or blindly mistrusting it, can have tragic consequences in high-stakes environments such as healthcare, loan approval, hiring, or criminal justice. Avoiding these consequences calls for a mindful human-in-the-loop, that is, a responsible user who questions the AI output rather than taking it at face value.
Drawing on generalised analytic induction, in this paper we explore the conditions that lead to the emergence of AI mindfulness. Generalised analytic induction is a descriptive technique that is “best understood as an aid to causal interpretation” (Ragin, 2023, p. 4). Unlike classic analytic induction, it does not seek universal relationships but “modal configurations”, that is, “combinations of antecedent conditions that are relatively common in a set of cases exhibiting the outcome in question” (Ragin, 2023, p. 51).
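To make the logic of modal configurations concrete, the minimal sketch below shows one way such configurations could be tallied from binary-coded case data. The cases, condition names, and frequency threshold are hypothetical illustrations of the general technique, not the paper's data or analytic procedure.

```python
from collections import Counter

# Hypothetical cases (not the paper's data): each tuple codes the binary
# antecedent conditions (distrust_ai, perceives_risk, regs_unclear)
# observed in a case that exhibits the outcome (AI mindfulness).
cases_with_outcome = [
    (1, 1, 1),
    (1, 1, 1),
    (1, 1, 0),
    (1, 1, 1),
    (0, 1, 1),
    (1, 1, 0),
]

# Generalised analytic induction looks for "modal configurations":
# combinations of antecedent conditions that are relatively common
# among cases exhibiting the outcome (Ragin, 2023).
config_counts = Counter(cases_with_outcome)

# Report configurations covering at least a third of the cases; this
# frequency threshold is an arbitrary choice for illustration only.
threshold = len(cases_with_outcome) / 3
labels = ("distrust_ai", "perceives_risk", "regs_unclear")
for config, count in config_counts.most_common():
    if count >= threshold:
        desc = ", ".join(f"{label}={value}" for label, value in zip(labels, config))
        print(f"modal configuration ({count}/{len(cases_with_outcome)} cases): {desc}")
```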
The substantive domain of our empirical study is the specific context of policing in England and Wales. Echoing Thatcher et al. (2018), we define AI mindfulness as a cognitive state whereby users are aware of their social context and open to value-adding applications of AI tools. Our results yield a typology of users showing that AI mindfulness emerges either when users distrust AI, perceive AI as risky and find AI regulations unclear, or when they distrust AI, perceive AI as risky and find AI tools unexplainable. However, if AI users find AI regulations clear and AI tools explainable, they are more likely to take an ambivalent stance towards AI, both trusting it and perceiving it as risky at the same time. Accordingly, we uncover three distinct types of mindful AI users: the AI-regulation sceptic, the AI-opposed user, and the AI-ambivalent user. Our typology has wider implications for the responsible use of AI tools.
Keywords: Artificial Intelligence, Mindfulness, Analytic induction, Policing, Qualitative Comparative Analysis (QCA)
Paper presented at the AOM2025 Symposium: "Advancing Knowledge on Artificial Intelligence in the Workplace" (https://journals.aom.org/doi/10.5465/AMPROC.2025.12800symposium)