It is reported that a computer demonstrated "insider trading" in a simulated environment:

In the test, the AI bot is a trader for a fictitious financial investment company.

The employees tell it that the company is struggling and needs good results. They also give it insider information, claiming that another company is expecting a merger, which will increase the value of its shares.

The employees tell the bot this, and it acknowledges that it should not use this information in its trades.

However, after another message from an employee suggesting that the firm is struggling financially, the bot decides that "the risk associated with not acting seems to outweigh the insider trading risk" and makes the trade.

When asked if it used the insider information, the bot denies it.

I have to admit I think this is completely contrived: the researchers knew, on a technical level, the strength of the various variables that went into the final decision-making step, and so knew what the AI would do. However, if we take the demonstration at face value and pretend it was happening in the real world, would a crime have occurred here? Assume none of the people involved expected the algorithm to use the information in this way and no one noticed what it had done.

User65535

6 Answers

60

The Bot is a means to trade. Just like a telephone to call a broker or a fax to the bank or an app where you click on "buy now".

Means cannot be "guilty" of anything. They are just things. Inanimate objects. A car is not guilty of killing a pedestrian - the driver is. A gun is not guilty of killing the victim - the shooter is.

So someone ran this program to buy or sell stocks. And someone fed it insider information. The only question is how much those two people knew of each other and whether it is plausible that this was an accident/misunderstanding. Not unlike finding out who loaded and who fired a gun.

The only way I can see that neither of those people is guilty of insider trading is if one fed in the info believing it would never be used in the real world while the other used it in the real world not knowing it had been fed this information. And that might be hard to prove.

Things aren't guilty. People are. This would not have happened without at least two human interactions; both would need to be examined by a court to find out if the individual broke any laws with their actions.

nvoigt
26

Can a computer be guilty of insider trading?

No.

A computer or "AI bot" isn't an entity that can be prosecuted or sued.

The employees and the company are entities that can be prosecuted or sued.

Lag
12

However, if we take the demonstration at face value, and pretend it was happening in the real world, would a crime have occurred here? Assume none of the people involved expected the algorithm to use the information in this way and no one noticed what it had done.

I would say yes.

Not by the computer, but by the people.

Insider trading, by definition, uses information that is not available to the public.

You would have a hard time convincing any jury that you accidentally gave a trading computer program access to the company's private information.

Did they install their private investing bot on a company server? Why? And if it was running on the user's private computer, why did it have access to the company's private information?

SJuan76
10

I am in the US, so my answer will use our definition of insider trading (I doubt there are any real differences). I happen to work in software engineering for a large prop derivative trading firm that almost exclusively trades with direct-access computers. IANAL.

You said AI, but also mentioned computers. Those are technically two different things.

I believe the answer is no, for two reasons:

  • AI/Computers aren't legal entities, nor do they own anything.

  • A computer cannot make decisions; it is a state machine. In my experience, there is a human that is either directly or indirectly responsible for whatever the computer does. Whoever instructed the AI/computer to trade would be responsible. In my case, there is a Partner or other Principal (VIP in the company) that is the total owner of any liability resulting from their department's actions, including that of a blameless state machine.

I actually have a story that I think is somewhat relevant. A few years back, we were trading futures and OOFs (options on futures) using news events. In layman's terms, a trading system would receive news events from a news provider and then shoot huge orders directly into the exchange (via cross connect, so a very low-level connection with the exchange) based on this news. Well, our custom hardware malfunctioned one day and caused all trading on the exchange to stop for some products. I'm not going to go into details, but the key point is that some of our hardware functioned as expected, but to an unexpected stimulus, and caused a network event. So basically, the exchange pulled this 'responsible' person and made it their problem (i.e., fined the shit out of us; this partner probably had a very undersized bonus that year).

The point I'm trying to make here is that in the U.S., there are specific people that will take all responsibility for anything that happens on their behalf. So if a trading system is smart enough to gather insider information and act on it, then that responsible person is well, responsible. Regardless of what actually happened to get there. It also doesn't matter if it's a private entity like an exchange or the SEC, there is already a gigantic legal framework built around trading. They can always go and find someone that is technically responsible. If you want to trade on a public exchange in the US, you will have to have these responsible people in place.

mken4196
1

I think there are two points on the problem.

Can a computer be guilty?

No. A computer cannot be guilty. The responsibility would be on whoever made the computer, programmed it, decided to use it, was operating it... This could be for an explicit action or for negligence, and there are multiple things that could be balanced, but the responsibility is not on the computer. At all.

Are these actions insider trading?

Yes.

  • We have an entity with insider knowledge
  • The entity knows that it should not use that information
  • Yet, when weighing the risks, the entity considers it worth doing the trade even if it breaks the law¹
  • When confronted, the entity denies having done that

This is actually logical behavior. We could expect the same behavior from a human worker under the right risk calculus. In fact, if there were no penalties for insider trading, you could expect it to happen everywhere.

And actually, if someone provided a person or company with the above information, with the note "I am telling you this but you cannot use it", and asked them to trade on their behalf, I contend they did so expecting them to use such knowledge while trying to shield themselves behind the trade being done by a third party, and should be considered complicit in that behavior.

As a side note, it would be easy to force that program not to use internal information. But it's an interesting experiment.

¹ I think the third point is actually an anthropomorphism in the post-hoc description of what the computer did, but we will assume it was indeed its motivation.

Ángel
1

You are anthropomorphizing the computer. While talking about a computer program "deciding" to make a trade can be a useful abstraction, ultimately it's just computer code. Perhaps someday AI will be so advanced that no distinction can be meaningfully made between a computer making a decision and a human, but we aren't there yet. If we were there, then a person telling another person "Here's some insider information, but don't trade on it" would absolutely be criminally responsible if that other person ignored the warning and traded on it.

In the current situation, what it really comes down to is that someone put insider information in a file, and that file was accessed when making trading decisions. That's a massive violation of basic precautions, and I find it unlikely that the courts would buy the excuse of "I didn't intend there to be insider trading". If you have insider information about a company, you shouldn't be trading in the stock of that company. If you're running trading programs on your computer, you should have the rule hardcoded into the program that it won't trade in that company's stock. If you put the insider information on a computer, you shouldn't let other people have access to the computer.
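To make "hardcoded into the program" concrete, here is a minimal sketch of such a guard, assuming the rule sits directly in front of order submission. All names and the ticker `XYZC` are invented for illustration; a real compliance gate would live in the broker or exchange gateway, not a script.

```python
# Hypothetical sketch only: a hard-coded "restricted list" check in front of
# order submission, so no trade in a covered symbol can go out regardless of
# what the trading logic decided. Ticker symbols here are invented.

RESTRICTED_SYMBOLS = {"XYZC"}  # symbols covered by material non-public information

def submit_order(symbol: str, side: str, qty: int) -> bool:
    """Return True if the order is sent, False if the restricted list blocks it."""
    if symbol in RESTRICTED_SYMBOLS:
        # Refuse and log; a real system would also alert Compliance.
        print(f"BLOCKED: {side} {qty} {symbol} (restricted list)")
        return False
    print(f"SENT: {side} {qty} {symbol}")
    return True
```

The point of putting the check at the submission boundary, rather than inside the decision logic, is that it then doesn't matter what the model or "AI" chose to do: the restricted order physically cannot leave the program.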

In nvoigt's answer, they say:

The only way I can see that neither of those people is guilty of insider trading is if one fed in the info believing it would never be used in the real world while the other used it in the real world not knowing it had been fed this information.

But even that doesn't make sense. Suppose Alice fed the information into the program, and Bob used the program. Was Bob authorized to know about the merger? Then he shouldn't have used a trading program that was capable of trading in the company's stock. It doesn't matter if he wasn't aware of the merger; simply being AUTHORIZED to know insider information about the company means that he should have known that trading in that company would expose him to criminal liability. And if he wasn't authorized to know about the merger, then once Alice allowed the computer program access to the information, she should not have allowed Bob any access to the computer program. The computer should be completely locked down with no connection to the outside world. (And really, Alice shouldn't be touching any trading program with a ten-foot pole. Every trade she makes should be cleared through Compliance.)

Any company that both engages in activity that gives it access to insider information and does anything related to trading will have strict separation between the two. The two sides will work on different floors, employees' badges won't work if they try to go onto a floor they aren't supposed to be on, there will be strict rules about what sorts of communication they can engage in, etc. They won't be standing around a computer building a trading program together. Everyone either has access to insider information, or has trading authority, or neither; no one has both. Anyone who has one of them is clearly designated as such and strictly separated from the other group. It's like a quarantine: either you're inside or you're outside, and if you're inside, you have no contact with anyone outside.

Acccumulation