
It is in the news that a critic of Scientology has been receiving online abuse:

Over the past six months, Barnes-Ross has faced a barrage of abuse – with 6,000 posts targeted at him on X alone. At first, the posts mostly taunted him with insults, saying he looked like a “weirdo paedophile” and branding him a “rabid anti-religious bigot”. Others questioned his mental health, calling him “disturbed” and “unhinged”. “Face it … you are a schizophrenic,” said one.

This seems like illegal abuse to me, perhaps under the Communications Act 2003, the Protection from Harassment Act 1997, the Malicious Communications Act 1988 or the new Online Safety Act. The article then quotes both police and experts who treated it as relevant whether these posts were made directly by individuals or by "bots", as though the abuse were somehow "lesser" if done by bots rather than by humans:

While police initially dismissed the social media attacks as the work of “bots”

Three experts in digital influence said many of the abusive posts appeared to be part of an orchestrated human effort, rather than a bot campaign.

"Bots" are bits of code that interact with web sites. Bots that post thousands of criminally abusive messages to an individual are specifically targeted to do so by a human who has more access to skill and resources than any one posting individual in an "orchestrated human effort". I would have thought that the involvement of such automated methods would make the charges against the individual greater, and also the effect of legal action against that individual is likely to be greater as they can produce more content.

How would the use of bots alter the legality or prosecutability of online harassment? Why would the police dismiss it because of the use of bots rather than prioritise it?

User65535

4 Answers


It is unlikely that the use of bots would affect the legality of the harassment. But it would likely affect the odds of being able to bring charges, which in turn could affect the prioritization of the investigation.

If one human is harassing another, there is a pretty reasonable chance that both are in the same country, subject to the same laws, and that investigators would be able to identify the harasser and bring charges. On the other hand, if a human is being harassed by a bot, it is very likely that the bot is running on servers in a foreign country, and investigators would be very unlikely to identify someone within their jurisdiction who could ultimately be held accountable. Given those expectations, it would be prudent for investigators to prioritize investigations that are likely to yield arrests over investigations that are likely to end in frustration.

Justin Cave

The use of a bot to harass does not limit criminal liability. The bot is simply the means. It doesn't change the wrong.

See e.g. R. v. B.L.A., 2015 BCPC 203, para. 29:

On October 14, 2014 you used a software program called a “bot” to send Victim #9 218 text messages simultaneously ...

Jen

The Malicious Communications Act 1988 (as modified by the Criminal Justice and Police Act 2001) applies to England and Wales; it says:

In this section references to sending include references to delivering or transmitting and to causing to be sent, delivered or transmitted and “sender” shall be construed accordingly.

Using an automatic system to determine who should be targeted and in what manner is clearly "causing to be sent", so it is considered equivalent to sending the same messages by hand.

As Justin Cave points out, use of an offshore bot network may make investigation more difficult, but the offence is not any lesser because of that. If anything, it should lead to a higher sentence if the harassment is greater than that which could be achieved by an individual composing the messages by hand.

Toby Speight

I think what the police might have meant is that the attack wasn't targeted specifically at Barnes-Ross. Rather, the bots were harassing lots of people, and he was just one of them. This is similar to how spammers work -- they send mass emails to many thousands of people, none of whom are specifically targeted. They might not even know who will receive these mass insults.

Whether this actually justifies "dismissing" the activity, I don't know. But it may go to determining intent if the claim is that he was intentionally harassed.

This is different from using bots to make a targeted attack more intense, which is what you're describing. While that wouldn't necessarily affect liability, it would factor into the severity of the crime, which might weigh into the punishment if the offenders are convicted. An analogy would be the difference between giving someone a black eye and beating them to a pulp: both are assaults, but the latter is more severe and should produce a longer sentence.

With modern AI, things like this may become trickier to define. I can imagine programming bots to seek out users whose online activity looks like they're members of a specific group, and target them for harassment barrages.

Barmar