Published by: MIT Sloan School of Management
Length: 5 pages
Topics: Data, AI, & Machine Learning
Share a link: https://casecent.re/p/199717
Abstract
Keeping humans in the loop with AI systems is meant to mitigate concerns about unintended consequences and to enable intervention when those systems make questionable recommendations. However, ongoing research finds that when using an automated system, humans often defer to the AI rather than engaging their own sense of responsibility.