3

"Conservative anthropocentrism": AI are to be judged only in relation to how to they resemble humanity in terms of behavior and ideas, and they gain moral worth based on their resemblance to humanity (the "Turing Test" is a good example of this - one could use the "Turing Test" to decide whether AI is deserving of personhood, as James Grimmelmann advocates in the paper Copyright for Literate Robots).

"Post-human fundamentalism": AI will be fundamentally different from humanity and thus we require different ways of judging their moral worth (People For the Ethical Treatment of Reinforcement Learners is an example of an organization that supports this type of approach, as they believe that reinforcement learners may have a non-zero moral standing).

I am not interested per se in which ideology is correct. Instead, I'm curious as to what AI researchers "believe" is correct (since their beliefs could impact how they conduct research and how they convey their insights to laymen). I also acknowledge that their ideological beliefs may change with the passing of time (from conservative anthropocentrism to post-human fundamentalism...or vice versa). Still, what ideology do AI researchers tend to support, as of December 2016?

Left SE On 10_6_19
  • 1,670
  • 10
  • 23

3 Answers

3

Many large deployments of AI are carefully engineered solutions to problems (e.g., self-driving cars). In these systems, it is important to have discussions about how they should react in morally ambiguous situations. Having an agent react "appropriately" sounds similar to the Turing Test in that there is a "pass/fail" condition. This leads me to think that the current mindset of most AI researchers falls into "conservative anthropocentrism".

However, there is growing interest in Continual Learning, where agents build up knowledge about their world from their experience. This idea is largely pushed by reinforcement learning researchers such as Richard Sutton and Mark Ring. Here, the AI agent has to build up knowledge about its world such as:

When I rotate my motors forward for 3s, my front bump-sensor activates.

and

If I turn right 90 degrees and then rotate my motors forward for 3s, my front bump-sensor activates.

From knowledge like this, an agent could eventually navigate a room without running into walls because it built up predictive knowledge from interaction with the world.
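
To make this concrete, here is a minimal sketch of how one such prediction could be learned with temporal-difference (TD) learning. This is an illustrative toy, not anything from Sutton's or Ring's actual systems: the corridor world, one-hot features, and step sizes are all assumptions. A robot in a one-dimensional corridor learns, for each position, a prediction of how soon driving forward will trigger its bump sensor.

```python
# Toy sketch: TD(0) learning of a bump-sensor prediction.
# All parameters here (corridor length, features, step sizes)
# are illustrative assumptions.

GAMMA = 0.9   # discount: how far ahead the bump prediction looks
ALPHA = 0.1   # learning-rate step size

def features(position, length):
    """One-hot feature vector encoding the robot's corridor position."""
    f = [0.0] * length
    f[position] = 1.0
    return f

def learn_bump_prediction(length=10, episodes=500):
    """Learn, for each position, a prediction of hitting the far wall."""
    w = [0.0] * length  # one weight per position
    for _ in range(episodes):
        pos = 0
        while pos < length - 1:
            nxt = pos + 1                              # "motors forward"
            bump = 1.0 if nxt == length - 1 else 0.0   # sensor fires at wall
            x = features(pos, length)
            x_next = features(nxt, length)
            pred = sum(wi * xi for wi, xi in zip(w, x))
            pred_next = sum(wi * xi for wi, xi in zip(w, x_next))
            delta = bump + GAMMA * pred_next - pred    # TD error
            w = [wi + ALPHA * delta * xi for wi, xi in zip(w, x)]
            pos = nxt
    return w

if __name__ == "__main__":
    weights = learn_bump_prediction()
    # Position i converges toward GAMMA ** (length - 2 - i): near 1.0
    # next to the wall, smaller the farther the robot is from it.
    print([round(wi, 2) for wi in weights])
```

Layering many small predictions like this one is, roughly, how the continual-learning view proposes an agent builds up knowledge of its world.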

In this context, let's ignore how the AI actually learns and only look at the environment AI agents "grow up in". These agents will be fundamentally different from humans growing up in homes with families, because they will not have the same learning experiences as humans. This is much like the nature vs. nurture argument.

Humans pass their morals and values on to children through lessons and conversation. Since RL agents would lack much of this interaction (unless families adopted robot babies, I guess), we would require different ways of judging their moral worth - and thus "post-human fundamentalism".

Sources: 5 years in the RL academia environment and conversations with Richard Sutton and Mark Ring.

Jaden Travnik
  • 3,867
  • 1
  • 18
  • 35
1

The Reality of Working in the Field

Most people in the fields of adaptive systems, machine learning, machine vision, intelligent control, robotics, and business intelligence, at the corporations and universities where I've worked, do not discuss this topic much in meetings or at lunch. Most are too busy building things that must work by some deadline to muse over things that are not of immediate concern, and bot rights are a long way off.

How Far Off?

To begin with, no bot has yet passed a properly conducted Turing Test. (There is much on the net about this test, including critique of poorly conducted testing of this type. See Searle's Chinese Room thought experiment.)

Language simulation with semantic understanding is difficult enough without adding creativity, coordination, feelings, intuition, body language, learning of entirely new domains from scratch, and the potential of genius.

In synopsis, we are a long way from bots that simulate humanity sufficiently to be considered for citizenship, even in a progressive country that abhors fundamentalism of any kind. No actual imbuement of rights will occur until we have bot citizenship in one or more countries. Consider that human fetuses do not yet have rights because they are not yet deemed citizens.

Relevance of the Answer for Today

In current culture, conservative anthropocentrism and post-human fundamentalism arrive at the same effective conclusion, and that may continue to be the case for a hundred years.

Those with experience across the fields of psychology, neurobiology, cybernetics, and adaptive systems know that simulating all the mental features we attribute to humans means reproducing in algorithms the layering of cerebral abilities over a reptilian brain that went through millions of years of field testing.

Impact of Science Fiction

If you ask around, you will likely get feedback drawn mostly from the media of our culture, not from philosophical theses and publications - which are written by those who don't actually have any deadlines to produce anything that functions IRL.

Isaac Asimov investigated some conservative anthropocentrism concepts in scenarios depicted in his short stories. Commander Data's human quirks in the Star Trek: The Next Generation teleplays furthered some of those ideas.

Christopher Nolan took the opposite direction in the Interstellar screenplay, with the robots having interesting personalities that could be altered by linear settings. His bots ignored concerns of self-preservation, apparently without any cognitive resistance. This depiction is an unapologetic post-human fundamentalist view.

A Thought Experiment

Let's set the citizenship issue aside to consider this thought experiment, and let's assume that a survey would show a leaning toward conservative anthropocentrism among current AI researchers.

Suppose an intelligent piece of software constructed a legal complaint to gain intellectual property rights over day-trading code it wrote and sent it to the appropriate court clerk. You might find that the same researchers would recant.

Post-human fundamentalism will probably prevail when real AI software theorists and engineers consider the true personal, corporate, and legal meaning of settling out of court or losing the case.

Would Researchers Cut the Umbilical Cord?

I asked one researcher, and she indicated that all her lawyer would likely need to do to win the case is consider the precedent it might set and recall some of the warnings built into the Terminator stories.

Based on my observation of humanity in my lifetime, my prediction is that people want slaves, not some brand of bot that could ultimately kick our butts in an all-out fight.

Douglas Daseeco
  • 7,543
  • 1
  • 28
  • 63
0

Both. I answered this question here also: https://ai.stackexchange.com/a/2569/1712

Let me know if I should expand on that here.

Doxosophoi
  • 1,935
  • 11
  • 11