
Lots of people are afraid of what strong AI could mean for the human race. Some people wish for a sort of "Asimov law" included in the AI's code, but maybe we could go a bit further with the UDHR.

So, why is the Universal Declaration of Human Rights not included as a statement in the AI?

In response to the comments, by way of a reply or edit:

The Universal Declaration of Human Rights is clear.

Homo sapiens sapiens (aka "mankind") needs some way to make sure AI evolution does not result in our extinction or enslavement to potentially superior algorithmic intelligences.

aurelien

1 Answer


If I understand what you are asking, I think the simple answer would be that AI is nowhere near having demonstrated sentience, and thus does not qualify for any type of rights.

We won't have to "cross this bridge" until an AI demonstrates self-awareness and human-level-or-beyond intelligence, but it sure is interesting to think about!

(Also, the UDHR dates to the 1940s and seems to have had its last additions in 1966. Computers weren't very "smart" back then, so likely no one was even considering the question ;)

You may also want to look at the grey goo scenario, which posits inadvertent destruction of Homo sapiens sapiens as a result not of too much, but of too little, intelligence.

The problem with an Asimov approach is highlighted by his book I, Robot: the potential pitfalls of pure logic. The philosophy of Neo-Luddism is preoccupied with these problems in relation to technology, specifically the idea that the threats posed by technology cannot be predicted.

The problem with the UDHR today is that there is no algorithm smart enough to understand it; we're not even close. (There is something called the symbol grounding problem, which demonstrates that meaning and understanding in relation to algorithms are still unsolved.)

DukeZhou