Product details

Management article
Reference no. SMR65427
Published by: MIT Sloan School of Management
Published in: "MIT Sloan Management Review", 2024
Length: 5 pages

Abstract

Keeping humans in the loop is meant to mitigate concerns about the unintended consequences of AI systems and to enable intervention when those systems make questionable recommendations. However, ongoing research is finding that when people use an automated system, they often fail to engage their sense of responsibility and instead defer to the AI.
