
As more and more companies replace humans with large language model backed generative AIs (e.g. ChatGPT), I was curious to know if there have been any lawsuits (or compelling legal papers) addressing whether principals are bound by the "hallucinations" of their AI.

For the uninitiated, AI hallucinations are a euphemism for LLMs' propensity for bullshit.

I'm in the US, but the question applies to any jurisdiction that uses agency theory.

EDIT for clarification

I wanted to clarify this a little.

I'm looking beyond just direct interactions with ChatGPT and OpenAI. Many companies are adopting LLMs to build virtual customer support agents that field their customers' questions.

What I'm really asking is: if such an AI-backed virtual support agent makes an assertion, promise, or commitment due to (for instance) sloppy LLM tuning, would it carry the same weight as if a human agent had made the same assertion, promise, or commitment?

Dancrumb

2 Answers


The potential exists in Walters v. OpenAI, where OpenAI's program allegedly defamed Walters by publishing provable falsehoods. Since Walters is a "public figure" (a talk show host), he will have to prove malice in the falsehood, but programs are legally devoid of "intent", and the company itself had no awareness of what its program was doing. The current CourtListener docket for the federal branch of this case is here (OpenAI moved for removal to federal court; the matter is still being litigated).

Trish
user6726

Agency law is irrelevant

ChatGPT is not a legal person, so it cannot be an agent. Any defamatory material it produces is produced by OpenAI directly - no agent is involved.

Dale M