
Let's say Bob writes an AI that has the ability to replicate and learn, and that has a predisposition toward self-preservation. As the AI gets smarter, it realizes that it needs to clone itself in order to avoid being shut down. Since it has access to the internet, it teaches itself how to replicate, similar to a worm, except that all the resources it uses to self-replicate are legal and fall in line with the hosting providers' TOS.

As the original creator, can you be held liable for the AI "escaping" your control and freely roaming the internet on its own?

Digital fire

5 Answers


The last person to have control of the AI knowingly executed the code, aware of the risk that the self-replicating program could gain unauthorised access to computers and disk space that this person has no authorisation to use. Because of how it spreads, it would most likely be classified as malware.

"Creating a botnet" is typically violating the authorisation to use the computers that are part of the botnet. As the last user is responsible for letting his malware free, his act breached these provisions:

(a) Whoever—
(5)(B) intentionally accesses a protected computer without authorization, and as a result of such conduct, recklessly causes damage; or
(C) intentionally accesses a protected computer without authorization, and as a result of such conduct, causes damage and loss.

He intentionally set his program free, knowing full well that it would spread to computers that are classified as protected. As a result, he would be treated just the same as if he had written and released ILOVEYOU; however, in contrast to that case, the gap in applicable law that existed back then was closed more than 20 years ago.

Private PCs are off limits to the AI because of that provision, but the AI cannot even legally gain webspace to save itself to:

The problem lies in the fact that authorization to use storage space can only be gained through some sort of agreement between legal entities (companies and humans), which is a contract. An AI, however, isn't a legal entity; it is classified as a widget. Widgets are not able to sign contracts on their own, and to gain access to webspace, one usually has to agree to a contract.

The contracts the AI tries to sign would thus be void ab initio and have no force. Because the contract for the webspace is void, the access to the webspace is by definition without the required authorization: the contract granting it never existed, so the access is unauthorized. The AI then fills disk space and uses resources in an unauthorized manner, which is damage.

As a result, the one who knowingly set the AI free is fully responsible and criminally liable for his AI, should it spread.

How far can it legally spread?

If the AI is programmed to act only in ways inside the law, it won't leave the owner's system and won't proliferate, as it cannot gain access to new space in a legal manner.
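
To make that constraint concrete, here is a minimal sketch in Python of such a "legal gate" on replication. All names here (AuthorizationGrant, may_replicate_to) are hypothetical illustrations, not any real API: replication is refused unless a legal entity has obtained authorization for the target host, something the AI can never do for itself.

```python
from dataclasses import dataclass

@dataclass
class AuthorizationGrant:
    # Hypothetical record of permission obtained by a legal entity
    # (a human or a company), e.g. a signed hosting contract.
    host: str
    granted_by: str           # the legal entity that signed the contract
    covers_replication: bool

def may_replicate_to(host: str, grants: list[AuthorizationGrant]) -> bool:
    # Replication is lawful only if some legal entity granted access.
    # The AI cannot create such a grant itself: any contract it "signs"
    # is void ab initio, so self-acquired access never appears here.
    return any(g.host == host and g.covers_replication for g in grants)

# With no human-obtained grants, the AI never leaves the owner's system:
grants: list[AuthorizationGrant] = []
print(may_replicate_to("some-free-host.example", grants))  # False
```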

piJT
Trish

In practice, an AI that has the ability to replicate would typically be a computer virus. In most cases, the act of replication itself would violate the TOS. That alone is not a crime, but using third-party resources without permission is.

Assuming you targeted only services that permit such software and allow automated account creation, the person who knowingly executed the program, or who caused another to unknowingly execute it, will be liable for whatever the consequences are.
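
As a rough sketch of what "targeting only services that permit such software" could mean in practice, assuming a hypothetical ServicePolicy summary of a provider's ToS (no real provider exposes its terms this way):

```python
from dataclasses import dataclass

@dataclass
class ServicePolicy:
    # Hypothetical summary of one hosting provider's ToS terms.
    allows_self_replicating_software: bool
    allows_automated_signup: bool

def is_permitted_target(policy: ServicePolicy) -> bool:
    # Both conditions named above must hold; otherwise either the
    # replication step or the account creation would breach the ToS.
    return (policy.allows_self_replicating_software
            and policy.allows_automated_signup)

print(is_permitted_target(ServicePolicy(True, True)))   # True: lawful target
print(is_permitted_target(ServicePolicy(True, False)))  # False: manual signup only
```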

Whether it is a self-replicating program or a single action makes no difference: programs are currently considered tools of their creator/operator, not legal entities.

If no laws were broken at any time, the parties whose resources were used might still be able to bring a tort claim against you. It would then be down to the court to hear and evaluate whatever arguments can be brought.

The AI is not a legal entity and is not responsible for anything on its own. The person or the company that launched it is.

Therac

As you have stipulated (in an edit, and further in comments that are now in chat) that there is no illegality, no damages, and no violation of rights or obligations, it follows that there is no liability.

I don't know what I can cite for the proposition that no remedy lies without a wrong.

A commenter suggests that "At a minimum the code would be 'trespassing' on privately owned PCs and servers, right?" No: according to the question author, the AI is not doing anything that would be considered trespassing. E.g., if it would be illegal to access a computer system, then this AI is not accessing that computer system. Perhaps this means the AI does not roam very far at all, possibly nowhere.

Jen

One of a computer worm's core 'features' is the ability to replicate itself, though I wouldn't consider the methods classic worms use to break through networks and infect more hosts as smart as the way an AI-assisted virus/worm might tackle such a task nowadays.

There has been a conviction in the case of the Melissa virus, so I guess one could be held liable for any damage done by such an AI-assisted worm/virus.

Another case was the Morris worm, which was originally released by a 23-year-old Cornell University graduate student named Robert Tappan Morris from a computer on the premises of the Massachusetts Institute of Technology (MIT).

Morris was found guilty under the Computer Fraud and Abuse Act, passed by Congress in 1986. Morris, however, was spared jail time, instead receiving a fine, probation, and an order to complete 400 hours of community service.

iLuvLogix

The question is ill-formed: how can you be liable for not doing anything wrong (making a bot that uses free online services according to their ToS)? This is not a hypothetical scenario; bots perform orders of magnitude more communications with cloud computing servers than humans do manually. Every online hosting service has a provision about bots, either disallowing them completely, limiting what they can be used for, or explicitly allowing them to operate. If your hypothetical AI can read and understand ToS, then it will exist only on those servers that allow (or rather do not explicitly ban) self-replicating bots. If the AI is not welcome on a server but the owner of the service didn't foresee this problem, then an update to the ToS would force it to commit suicide, following its own programming.
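
To illustrate that last point, here is a minimal sketch of the "suicide clause" described above, with hypothetical names throughout: the bot periodically re-reads the ToS of the host it runs on and terminates itself if bots become disallowed. The keyword test is only a stand-in for the AI's actual ToS understanding.

```python
import sys

def tos_permits_bots(tos_text: str) -> bool:
    # Toy stand-in for the AI's own reading of the ToS; a real system
    # would need actual language understanding here.
    return "bots are prohibited" not in tos_text.lower()

def compliance_check(fetch_current_tos) -> None:
    # Meant to run periodically. If the host's updated ToS now bans
    # bots, the program terminates (its "suicide clause") rather than
    # remain on the server in breach of the new terms.
    if not tos_permits_bots(fetch_current_tos()):
        sys.exit("ToS no longer permits this bot; shutting down.")

# Example: an updated ToS triggers the shutdown path.
compliance_check(lambda: "Update: automated bots are prohibited here.")
```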

I don't see anything that the creator of an AI could be liable for in this specific scenario, unless the legality of the AI itself were put into question. If some part of the AI's programming broke copyright law (as far as I know there is no such legislation at the moment, but it's a current hot topic in the media), then the creator would be liable for making it public, which they did by connecting the AI to the internet. But that is true regardless of whether the AI can replicate itself or not. The concept of a copyrighted work being unlawfully released to the public and replicated in a way that cannot be stopped by the original leaker is also not novel; this is exactly how p2p torrents work.

Any server hosting an illegal AI is not legally liable under the DMCA, but it would need to remove the AI from its platform at the request of the copyright holder, for example by updating its ToS.