
TLDR: If an AI system acting on behalf of a company makes a claim, is the company compelled to abide by that claim?

Background:

I was hoping to sign up for an account with a leading AI company in order to try out their products. During the sign-up process they wanted a phone number. They would not accept my VoIP number, insisting on a traditional cell phone number. A quick internet search yields at least one Reddit thread from others indicating that they began receiving spam texts and calls shortly after providing this company with their phone number. I also came across this help article on their site indicating that they use the phone number to "verify your account" and that they "don't use your phone number for any other purposes". This help article is unique among their other articles in noting that it was generated by their AI. Their privacy policy claims "We disclose Communication Information to our affiliates and communication services providers."

  • If I were able to prove that this company had abused my contact info despite their published claims, would there be any legal liability? Would being under the protections of the CCPA or GDPR make for a stronger claim?

I then thought about the AI support features which some banks and other companies are using these days. I assume that these programs are either decision-tree based or fed a very narrow pool of true information to act on, but that is just my assumption. As these support programs continue to be developed, it is inevitable that they will include AI similar to this. For example:

Suppose an AI "agent" tells me that there is currently a promotion for opening a savings account with a minimum deposit of $1000, providing a $500 cash bonus for maintaining that minimum balance for 6 months as well as guaranteeing 5.375% APY interest for 12 months.

  • Could the bank be held to those terms? (This is similar in my mind to "Is what the customer service of a health insurance company tells one of their customers legally binding?" except that this one involves an AI rather than a human agent.)
  • What if the terms of the offer are less specific, such as not indicating a dollar amount for the bonus or a period for the interest rate?
  • What if the terms of the offer are completely outlandish, such as a 100x bonus or 100% APY?
  • Does it make a difference if the logs show the user "prompt engineering" the "agent" into making these claims?
  • If the bank cannot be held to those terms, what prevents the bank from turning a blind eye to these claims in the interest of getting accounts?
  • What if the "agent" is just a simple decision tree but was given "untrue" information, i.e: Does the behind-the-scenes tech make a legal difference?
user_48181

1 Answer


AIs are not people and cannot make claims

Companies are people, and if they make legally binding commitments then they are bound by those just as a human is.

Dale M