Product details

Management article
Reference no.: SMR65233
Published by: MIT Sloan School of Management
Published in: MIT Sloan Management Review, 2023
Length: 7 pages

Abstract

Large language models (LLMs) can generate convincingly human-sounding responses to queries. This ability can lead users to mistakenly attribute certain human capabilities to these artificial intelligence algorithms, namely reasoning, knowledge, understanding, and execution. Understanding how LLMs work and what their limitations are can help users identify where generative AI technology is best applied and where its outputs might be unreliable.
