10

This question is inspired by recent news about some of the strange, out-of-control behavior from Microsoft's new Bing chat AI, but I am asking hypothetically here.

If an AI chatbot such as Bing Chat or ChatGPT said factually untrue things that did measurable harm to a real person's reputation, would that person have a case against the company that owns the chatbot for defamation? If not defamation, maybe something else? My understanding is that a key part of defamation is malicious intent, which does not really apply to a non-sentient piece of software. However, if the AI says something that does real harm to a person's reputation, couldn't the company be held responsible for this? What if the company was aware of the harm being done but chose not to take action? This seems similar to the situation where a company is held responsible for the words or actions of an employee.

plasticinsect

2 Answers

9

If an AI chatbot such as Bing Chat or ChatGPT said factually untrue things that did measurable harm to a real person's reputation, would that person have a case against the company that owns the chatbot for defamation?

There can be liability for defamation, although the circumstances would determine who the liable party is.

For instance, a warning from the owner to the user about the risk of inaccuracies may shift the issue of the requisite degree of fault to the user. See In re Lipsky, 460 S.W.3d 579, 593 (Tex. 2015). The user ought to be judicious as to whether to publish the chatbot's output. Ordinarily, negligence suffices for liability in a scenario that involves special damages, i.e., concrete, ascertainable harm.

My understanding is that a key part of defamation is malicious intent, which does not really apply to a non-sentient piece of software.

Under defamation law, malice is not about feelings or emotional state. The term refers to reckless disregard for the truth or falsity of the statement, or to publication despite the publisher's awareness of the statement's falsity. Id. at 593.

Regardless, malice needs to be proved only if the plaintiff is a public figure or in claims of defamation per se, where damage to a person's reputation is presumed (and hence the damage does not need to be proved).

What if the company was aware of the harm being done but chose not to take action?

The terms of use might protect the company against liability. Absent any such protections, the company might be liable because its awareness and inaction are tantamount to the aforementioned reckless disregard for the truth of its product's publications.

Iñaki Viggers
2

AI-generated text can easily be defamatory: that is simply a matter of content. Let's say the text is "John Q Smith murdered my parents", and that the statement is untrue. The scenario, as I understand it, is that Jones is chatting with a bot maintained by Omnicorp, and the bot utters the defamatory statement. A possible defense is that the literally false statement cannot be taken to be believable – to be defamation, the statement also has to be at least somewhat believable and not just a random hyperbolic insult. Since these bots are supposed to be fact-based (not ungrounded random text generators like Alex), I think this defense would fail.

It may be necessary in that state to prove some degree of fault, viz. that there was negligence. For example, a person who writes a defamatory statement in their personal locked-away diary is not automatically liable if a thief breaks in and distributes the diary to others. It is very likely that the court would find the bot-provider to be negligent in unleashing this defamation-machine on the public. It is utterly foreseeable that these programs will do all sorts of bad things, seemingly at random. Perhaps the foreseeability argument would have been very slightly weaker a couple of months ago; at this point it is an obvious problem.

There is some chance that the bot-provider is not liable, in light of "Section 230" which says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". If the bot is an information content provider, the platform operator is not liable. The bot is one if that entity "is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service". Needless to say, claims of "responsibility" are legally ill-defined in this context. It is not decided law what "responsibility" programs have for their actions. If the court finds that the program is not responsible, then the platform is relieved of liability as publisher.

The software creators are not liable for creating a machine with the capacity to produce defamatory text, but in a particular case the software creators could be the same entity as the platform-operators.

Malicious intent is relevant for a subclass of defamation cases: defamation of public figures. You can defame famous people all you want, as long as you don't do so with malice. This is a special rule about public figures.

user6726