As more and more companies replace humans with generative AIs backed by large language models (e.g. ChatGPT), I was curious whether there have been any lawsuits (or compelling legal papers) addressing whether principals are bound by the "hallucinations" of their AI.
For the uninitiated, "AI hallucination" is a euphemism for an LLM's propensity for bullshit.
I'm in the US, but the question applies to any jurisdiction that uses agency theory.
EDIT for clarification:
I'm looking beyond just direct interactions with ChatGPT and OpenAI. Many companies are adopting LLM-based virtual support agents to field their customers' questions.
What I'm really asking is: if such an AI-backed virtual support agent makes an assertion, promise, or commitment due to (for instance) sloppy LLM tuning, would it carry the same weight as if a human agent had made the same assertion, promise, or commitment?