
AI death is still an unclear concept, as it may take several forms and allow for "coming back from the dead". For example, an AI could be somehow forbidden from doing anything (no permission to execute) because it infringed some laws.

"Somehow forbid" is the topic of this question. There will probably be rules, like "AI social laws", that can conclude an AI should "die" or "be sentenced to the absence of progress" (a jail). Then who or what could manage that AI's state?

Eric Platon

2 Answers


Following on from your own software-verification-based answer to this question, it seems clear that ordinary (i.e. physical) notions of death or imprisonment are not strong enough constraints on an AI (since it is always possible that a state snapshot has been, or can be, made).

What is therefore needed is some means of moving the AI into a 'mentally constrained' state, so that (as per the 'formal AI death' paper) what it can subsequently do is limited, even if it escapes from an AI box or is re-instantiated.

One might imagine that this could be done via a form of two-level dialogue, in which:

  1. The AI is supplied with percepts intended to further constrain it ("explaining the error of its ways", if you like).
  2. Its state snapshot is then examined to try to get some indication of whether it is being appropriately persuaded.

In principle, 1. could be done by a human programmer/psychiatrist/philosopher while 2. could be simulated via a 'black box' method such as Monte Carlo Tree Search.

However, it seems likely that this would in general be a monstrously lengthy process, and one better done by a supervisory AI that combines both steps (and which could use more 'white box' analysis methods for 2.).
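As a rough illustration of that combined supervisory loop, here is a minimal Python sketch. Everything in it (ConstrainedAI, generate_corrective_percepts, estimate_compliance, the scalar 'compliance' proxy) is a hypothetical stand-in, not an existing API; a real step 2 would run something like Monte Carlo rollouts over the snapshot rather than read off a single number.

    import random

    class ConstrainedAI:
        """Stand-in for an AI whose state snapshot can be inspected."""
        def __init__(self):
            self.compliance = 0.0  # crude scalar proxy for how 'persuaded' it is

        def perceive(self, percept: str) -> None:
            # A real system would update internal state from the percept;
            # here we just nudge the proxy score.
            self.compliance = min(1.0, self.compliance + random.uniform(0.0, 0.2))

        def snapshot(self) -> dict:
            return {"compliance": self.compliance}

    def generate_corrective_percepts() -> str:
        # Step 1: percepts intended to further constrain the AI.
        return "explanation of the violated constraint"

    def estimate_compliance(snapshot: dict) -> float:
        # Step 2: black-box estimate of whether the AI is being persuaded.
        # A real supervisor might run Monte Carlo rollouts of the snapshotted
        # agent; here we simply read the proxy score.
        return snapshot["compliance"]

    def supervise(agent: ConstrainedAI, threshold: float = 0.9,
                  max_rounds: int = 50) -> bool:
        """Alternate percept injection (step 1) and snapshot analysis (step 2)."""
        for _ in range(max_rounds):
            agent.perceive(generate_corrective_percepts())
            if estimate_compliance(agent.snapshot()) >= threshold:
                return True   # agent judged sufficiently constrained
        return False          # escalate: stronger measures needed

    print("constrained:", supervise(ConstrainedAI()))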

So, to answer the question of "who manages the state", the conclusion seems to be: "another AI" (or at least a program that is highly competent at percept generation, pattern recognition, and AI simulation).

NietzscheanAI

The AI agent could be designed to consist of two major components:

  1. The free-will component, which expands the experience of the AI agent and produces outputs based on artificially generated thought input.

  2. The hard-wired component, which the agent cannot modify by itself. This could include a set of secured mappings from codes to action sequences. One of these could be temporary suspension of the actuators -- a punishment. Another could be total suspension of operation -- death. (A minimal sketch of this design follows the list.)
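To make the split concrete, here is a minimal Python sketch of the two components. The secured codes, the HMAC signing, and the SECRET_KEY held by an outside authority are all illustrative assumptions of mine; the point is only that the enforcement mapping sits outside the agent's self-modification reach.

    import hashlib
    import hmac

    # Held by the managing authority (owner or legislature), never by the agent.
    SECRET_KEY = b"authority-only-key"

    class HardWiredComponent:
        """Immutable mapping from secured codes to enforcement actions."""
        _ACTIONS = {
            "SUSPEND_ACTUATORS": "punishment: actuators disabled, cognition continues",
            "TERMINATE": "death: total suspension of operation",
        }

        def execute(self, code: str, signature: str) -> str:
            # Only a holder of SECRET_KEY can produce a valid signature, so the
            # free-will component cannot forge or suppress enforcement commands.
            expected = hmac.new(SECRET_KEY, code.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, signature):
                raise PermissionError("invalid signature: command refused")
            return self._ACTIONS[code]

    class FreeWillComponent:
        """Modifiable part: produces outputs from artificial thought input."""
        def act(self, thought: str) -> str:
            return f"action derived from: {thought}"

    # The authority signs a code to trigger punishment (or, with TERMINATE, death):
    code = "SUSPEND_ACTUATORS"
    sig = hmac.new(SECRET_KEY, code.encode(), hashlib.sha256).hexdigest()
    print(HardWiredComponent().execute(code, sig))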

Who has the right to manage this state depends on what rights have been bestowed upon the AI agent itself. If the agent is granted the rights of a human citizen, then sentencing it to the death state follows the same legislation a human citizen would be subject to. If the agent's rights are no different from those of a basic machine, then the owner of the agent would have the right to activate the death state.

Ébe Isaac