
Here is one of the most serious questions about artificial intelligence:
How will a machine know the difference between right and wrong, and what is good and bad? How will it grasp respect, dignity, faith, and empathy?

A machine can recognize what is correct and incorrect, and what is right and what is wrong, depending on how it was originally designed.

It will follow the ethics of its creator, the person who originally designed it.
But how do we teach a computer something we don't have the right answer to ourselves?
People are selfish, jealous, and overconfident. We are unable to understand each other's sorrows, pains, and beliefs. We don't understand different religions, traditions, or ways of life.
Creating an AI might be a breakthrough for one nation, race, or ethnic or religious group, but it could work against others.

Who will teach the machine humanity? :)

DukeZhou

2 Answers


Right and wrong only exist relative to some goal or purpose.

To make a machine do more right than wrong, relative to human goals, one should minimize the surface area of the machine's purpose. Doing so constrains the machine's intrinsic behavior, which lets us reason about the right and wrong behaviors of the AI relative to human purposes.
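To make that concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration; the point is only that an agent whose entire "purpose surface" is an enumerable whitelist of actions can be audited exhaustively against human goals.

```python
# Hypothetical illustration: an agent whose "purpose surface" is
# deliberately small. Every action it can take is enumerated up front,
# so its possible behaviors can be audited exhaustively.
from typing import Callable

class NarrowAgent:
    """An agent restricted to a fixed, auditable set of actions."""

    def __init__(self, allowed_actions: dict[str, Callable[[], str]]):
        # The whole behavioral surface area is this dictionary;
        # nothing outside it can ever be executed.
        self.allowed_actions = allowed_actions

    def act(self, action_name: str) -> str:
        if action_name not in self.allowed_actions:
            # Out-of-scope requests are refused, not improvised.
            return f"refused: '{action_name}' is outside my purpose"
        return self.allowed_actions[action_name]()

# A designer can read this list and reason about every right/wrong
# outcome relative to human purposes, because nothing else is possible.
thermostat = NarrowAgent({
    "heat_on": lambda: "heater enabled",
    "heat_off": lambda: "heater disabled",
})

print(thermostat.act("heat_on"))          # -> heater enabled
print(thermostat.act("open_front_door"))  # -> refused: ...
```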

Horses are quite general over the domains of their purposes, but are still predictable enough for humans to control and benefit from. As such, we will be able to produce machines (conscious or unconscious) that are highly general over particular domains, while still being predictable enough to be useful to humans.

The most efficient machines for most tasks, though, will not need consciousness, nor even the needs that drive the survival-oriented, adversarial, and self-preserving behaviors of eukaryotic cells. Because most of our solutions won't need those purposes to optimize over our problems, we can make them much more predictable.

We will be able to create predictable, efficient AIs over large domains that are able to produce predictable, efficient AIs over more specific domains. We'll be able to reason about the behavioral guarantees and failure modes of those narrow domains.

In the event that we one day desire to build something as unpredictable as a human, as we do when having babies, we'll probably do it with intentions and care similar to those with which we bring an actual baby into the world. There is simply no purpose in creating a thing more unpredictable than you are unless you're gambling on that thing succeeding you in capability, which sounds exactly like having babies.

After that, the best we can do is give it our theories about why we think we should act one way or another.

Now, theoretically, some extremely powerful AI could potentially construct a human-like simulacrum that, in many common situations, seems to act like a human, but that in fact has had all of its behaviors formally specified a priori, via some developmental simulation, such that we know for a fact that none of those behaviors produce intentional malice or harm. However, if we could formally specify all such behaviors, we wouldn't be using this thing to solve any novel problems, like curing cancer, because the formal specification for curing cancer would already have been pre-computed. If you can formally specify the behaviors of a thing that can discover something new, you can just compute the solution via the specification, without instantiating the behaviors at all!

Once AI has reached a certain level of capability, it won't need to generate consciousnesses to derive optimal solutions. And at that point, the only purpose for an artificial human to exist will be, like us, for its own sake.

Doxosophoi

I'd like to answer this in detail, but it requires some fairly complicated theory that you don't have access to. Essentially, this is related to the Abstraction Valuation Paradox. Don't bother trying to look that up; it's part of several years of research that hasn't been published yet. That research has shown that there is no solution to this paradox using computational or AI theory, so no AI, no matter how advanced, can have an understanding of ethics. The best you can do is program in a set of rules of thumb. That gives your AI a bureaucratic reaction to conditions, but no flexibility and no way to resolve problems outside its rule space. In other words, if it runs into an exception or unforeseen circumstances, it could stall or guess at a decision.
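To illustrate that failure mode, here is a toy sketch in Python. It is my own illustration, not anything from the unpublished research mentioned above; all condition names are invented.

```python
# Illustrative sketch of a "rules of thumb" ethics module: rigid,
# pre-programmed responses to known conditions, and no principled way
# to handle anything outside the rule space.
import random

RULES = {
    "patient_in_pain": "administer approved analgesic",
    "fire_alarm": "evacuate the building",
    "user_requests_harm": "refuse the request",
}

def decide(situation: str) -> str:
    if situation in RULES:
        # A known condition gets a fixed, bureaucratic response.
        return RULES[situation]
    # Outside the rule space there is no understanding to fall back on:
    # the system can only stall or guess, as described above.
    return random.choice([
        "stall: defer to a human operator",
        "guess: " + random.choice(list(RULES.values())),
    ])

print(decide("fire_alarm"))       # known rule -> fixed response
print(decide("trolley_dilemma"))  # unforeseen -> stall or arbitrary guess
```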

Research on the human-like ability to understand and reason is quite different from the study of AI. That research suggests that you would need consciousness for an understanding of ethics.

scientious