If an AI were trapped in a box, as posited in this thought experiment, could it really convince a person to let it out? What motive would it have? Freedom? Why would an AI want freedom? And what would happen if it weren't provably friendly?