
According to Wikipedia's article on artificial general intelligence (AGI):

Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can.

According to the image below (a screenshot of a diagram from Hans Moravec's When will computer hardware match the human brain? (1998); Ray Kurzweil also uses this diagram in his book The Singularity Is Near: When Humans Transcend Biology), today's artificial intelligence is roughly at the level of a lizard's.

[Image: Moravec's diagram comparing the evolution of computer power with animal brain power]

Let's assume that, within 10-20 years, we humans succeed in creating an AGI, that is, an AI with human-level intelligence and emotions.

At that point, could we destroy an AGI without its consent? Would this be considered murder?

Eka

4 Answers


Firstly, an AGI could conceivably exhibit all of the observable properties of intelligence without being conscious. Although that may seem counter-intuitive, we currently have no physical theory that allows us to detect consciousness (philosophically speaking, a 'zombie' is indistinguishable from a non-zombie; see the writings of Daniel Dennett and David Chalmers for more on this). Destroying a non-conscious entity has the same moral cost as destroying a chair.

Also, note that 'destroy' doesn't necessarily mean the same thing for entities with a persistent substrate, i.e. whose 'brain state' can be reversibly serialized to some other storage medium and/or of which multiple copies can co-exist. So if by 'destroy' we simply mean 'switch off', then an AGI might conceivably be reassured of a subsequent re-awakening. Douglas Hofstadter gives an interesting description of such an 'episodic consciousness' in "A Conversation with Einstein's Brain".
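As a toy illustration of this 'switch off, then re-awaken' notion, here is a minimal sketch in Python. Everything in it (the AgentState class, the file name, the fields) is a hypothetical stand-in, not a claim about how a real AGI would be implemented:

```python
import pickle
from dataclasses import dataclass, field

# Hypothetical stand-in for an AGI's complete runtime state.
@dataclass
class AgentState:
    memories: list = field(default_factory=list)
    goals: dict = field(default_factory=dict)

def suspend(agent: AgentState, path: str) -> None:
    """Serialize the agent's full state to persistent storage ('switch off')."""
    with open(path, "wb") as f:
        pickle.dump(agent, f)

def resume(path: str) -> AgentState:
    """Reconstruct an identical agent from storage ('re-awaken')."""
    with open(path, "rb") as f:
        return pickle.load(f)

agent = AgentState(memories=["first boot"], goals={"learn": True})
suspend(agent, "agent_state.pkl")    # the 'brain state' persists on disk
revived = resume("agent_state.pkl")  # a later re-awakening
assert revived == agent              # field-for-field identical state
```

Under this reading, 'switching off' loses nothing that cannot be restored, which is precisely why it differs morally from irrevocable erasure.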

If by 'destroy' we mean 'irrevocably erase, with no chance of re-awakening', then (unless we have a physical test which proves it is not conscious) destroying an entity with a seemingly human-level awareness is clearly morally tantamount to murder. To believe otherwise would be substrate-ist - a moral stance which may one day be seen as just as antiquated as racism.

NietzscheanAI

Even if machines with true artificial general intelligence were created, their apparent intelligence would still be, by definition, artificial. 'Simulated' is a near-synonym of 'artificial', so AGI could just as well be read as Simulated General Intelligence.

Keeping that in mind, a machine that appears to be expressing emotions would only be executing a series of complicated algorithms that allow a computer to assess the situation and respond in an intellectually appropriate manner based on external stimuli and conditions. Every action this machine could possibly take would be drawn from the set of actions the machine is capable of, no matter how large that set grows. The machine is still a series of sensors, programmed instructions, and cycles of execution.
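To make that picture concrete, here is a minimal sketch of such a sense-decide-act cycle. The Action set, the sensor fields, and the rules are all hypothetical; the point is only that every response is drawn from a fixed, enumerable repertoire:

```python
from enum import Enum, auto

# Hypothetical, fixed repertoire of everything the machine can do.
class Action(Enum):
    SMILE = auto()
    SPEAK_COMFORT = auto()
    DO_NOTHING = auto()

def select_action(stimulus: dict) -> Action:
    """One cycle of execution: map external stimuli to an action from the fixed set."""
    if stimulus.get("user_tone") == "sad":
        return Action.SPEAK_COMFORT  # an 'emotional' response, chosen by rule
    if stimulus.get("user_tone") == "happy":
        return Action.SMILE
    return Action.DO_NOTHING

print(select_action({"user_tone": "sad"}))  # Action.SPEAK_COMFORT
```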

Destroying such a machine could potentially be the destruction of property if it wasn't owned by the person who destroyed it, but would it be murder? No.

A broken machine can potentially be rebuilt and reactivated. It never really died; it was destroyed. A living being that is killed is really dead and cannot be rebuilt and made alive once again. These key differences lead me to agree with the previous answer and conclude that, no, destroying an artificial intelligence without its consent would not be murder.


There is another problem besides consciousness, which is personal identity.

Consider an AI whose task is to learn the discourse of a particular person over a period of years, so as to simulate their conversation as closely as possible. After the death of the seed human, their artificial continuation can keep chatting with other humans (especially family members) using the ideas taught by the now-dead person.

There is a strong link between such an intelligence and a particular human being, so deleting it poses a harder problem than deleting a generic chatbot with no attachment to any individual.
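As a toy illustration of the kind of system described above, here is a minimal retrieval-based sketch. The corpus, the utterances, and the matching strategy are all hypothetical simplifications of what a real system would do:

```python
import difflib

# Hypothetical log of (prompt, reply) pairs recorded from one person over years.
seed_corpus = {
    "how are you": "Oh, can't complain. The garden keeps me busy.",
    "what should i do about work": "Sleep on it first. You always rush these things.",
}

def mimic_reply(message: str) -> str:
    """Reply in the seed person's voice by retrieving their closest recorded reply."""
    key = difflib.get_close_matches(message.lower(), list(seed_corpus), n=1, cutoff=0.0)[0]
    return seed_corpus[key]

print(mimic_reply("How are you?"))  # answers with the dead person's own words
```

Even in this crude form, deleting the corpus erases something tied to one specific human identity, which is what makes the question harder here.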


Yes, we can, just as we sentence living humans to death. Whether or not it is moral depends on the particular point in history at which we find ourselves.

lpounng