
AI-induced indifference: unfair AI reduces prosociality

Zhang, Raina Zexuan; Kyung, Ellie J.; Longoni, Chiara; Cian, Luca; Mrkva, Kellen
2024

Abstract

The growing prevalence of artificial intelligence (AI) in our lives has brought the impact of AI-based decisions on human judgments to the forefront of academic scholarship and public debate. Despite growth in research on people’s receptivity towards AI, little is known about how interacting with AI shapes subsequent interactions among people. We explore this question in the context of unfair decisions determined by AI versus humans and focus on the spillover effects of experiencing such decisions on the propensity to act prosocially. Four experiments (combined N = 2425) show that receiving an unfair allocation by an AI (versus a human) actor leads to lower rates of prosocial behavior towards other humans in a subsequent decision—an effect we term AI-induced indifference. In Experiment 1, after receiving an unfair monetary allocation by an AI (versus a human) actor, people were less likely to act prosocially, defined as punishing an unfair human actor at a personal cost in a subsequent, unrelated decision. Experiments 2a and 2b provide evidence for the underlying mechanism: People blame AI actors less than their human counterparts for unfair behavior, decreasing people’s desire to subsequently sanction injustice by punishing the unfair actor. In an incentive-compatible design, Experiment 3 shows that AI-induced indifference manifests even when the initial unfair decision and subsequent interaction occur in different contexts. These findings illustrate the spillover effect of human-AI interaction on human-to-human interactions and suggest that interacting with unfair AI may desensitize people to the bad behavior of others, reducing their likelihood of acting prosocially. Implications for future research are discussed.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11565/4067576