9

I am a PhD student in computer science, currently creating a state-of-the-art overview of applications in Machine Ethics (a multidisciplinary field combining philosophy and AI that looks at creating explicit ethical programs or agents). It seems that the field mostly contains theoretical arguments and relatively few implementations, even though there are many people with a technical background in the field.

I understand that, because ethics is involved, there is no ground truth, and since it's part of philosophy one can get lost in arguing over which type of ethics should be implemented and how this can best be done. However, in computer science it is common to at least attempt a simple implementation to demonstrate the possibilities or limitations of an approach.

What are the possible reasons there is so little done in explicitly implementing ethics in AI and experimenting with it?

nbro
Suzanne

5 Answers

2

This is necessarily a high-level answer, and highly speculative, but I've been thinking about this question, and here are my thoughts:

  • Implementing ethical algorithms requires a mathematical basis for philosophy because computers are difference engines

After Russell and Whitehead's famous failure to reduce mathematics to logic, and Gödel's incompleteness theorems, this would seem to be problematic.

  • AI is a highly applied field, especially today given the continuing success of deep learning, and no company wants to go near the issue of ethics unless they are forced to

Thus, you see it in self-driving cars because the engineers have no choice but to grapple with the problem. By contrast, I don't think you'll see many algorithmic stock trading firms, where the business is Pareto efficiency, worrying about the ethics or social impacts of financial speculation. (The solution to "flash crashes" seems to have been rules for temporary suspension of trading, instead of addressing the social value of high-frequency algorithmic trading.) A more obvious example is social media companies ignoring the extreme amounts of information abuse (disinformation and misinformation) being posted on their sites and pleading ignorance, which is highly suspect given that the activity generated by information abuse positively affects their bottom lines.

  • Applied fields tend to be predominantly driven by profit

The primary directive of corporations is to return a profit to investors. It's not uncommon for corporations to break the law when the fines and penalties are expected to be less than the profit made by illegal activity. (There is the concept of ethics in business, but the culture in general seems to judge people and companies based on how much money they make, regardless of the means.)

  • Implementation of machine ethics is being explored in areas where they are necessary to sell the product, but elsewhere, it's still largely hypothetical

If superintelligences evolve and wipe out humanity (as some very smart people with superior mathematics skills are warning us about), my feeling is that it will be a function of nature, where the unrestricted evolution of these algorithms is driven by economic incentives that favor hyper-partisan automata in industries like financial speculation and autonomous warfare. Essentially, chasing profits at all costs, regardless of the impacts.

DukeZhou
1

I feel that part of the reason there is so little in the way of ethical implementations of AI/ML technologies is simply that there is, as yet, no need for or proper application of the theoretical frameworks.

By this I mean that there is no substantial way to apply this understanding to algorithms and models that cannot yet interact with the world in a meaningful way. We have such a large theoretical framework on AI safety/ethics because it is extremely important: we need to come up with safe guidelines for implementing strong AI before it is created.

Some very focused papers have started to narrow down the issues in creating ethical/safe AI systems; see, for example, Concrete Problems in AI Safety.

hisairnessag3
1

With the imitation method (learning from demonstrations), the most appropriate behavior can be integrated into an artificial intelligence, and the AI can be reshaped when the ethical position changes. However, such a system can also be used for ideological purposes or to gather information, and it is not always clear what the robot actually is.
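The imitation method mentioned above can be sketched as behavioral cloning: fit a policy to (state, action) pairs demonstrated by an ethical exemplar, and refit it when the ethical position changes. The states, actions, and nearest-neighbour policy below are purely hypothetical illustrations, not an established implementation.

```python
# Minimal sketch of imitation learning via nearest-neighbour lookup over
# demonstrated (state, action) pairs. All data here is hypothetical.

def fit_imitation_policy(demonstrations):
    """demonstrations: list of (state, action) pairs from an exemplar."""
    def policy(state):
        # Pick the action whose demonstrated state is closest to `state`
        # (squared Euclidean distance).
        nearest = min(
            demonstrations,
            key=lambda sa: sum((a - b) ** 2 for a, b in zip(sa[0], state)),
        )
        return nearest[1]
    return policy

# Toy demonstrations: state = (distance_to_person, speed), action = label.
demos = [((0.5, 2.0), "slow_down"), ((5.0, 2.0), "proceed")]
policy = fit_imitation_policy(demos)
print(policy((0.6, 2.1)))  # closest demo is (0.5, 2.0) -> "slow_down"
```

Reshaping the behavior when the ethical position changes then amounts to collecting new demonstrations and refitting the policy.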

1

We can take the error model into account. Recognising bias and variance in the performance of neural networks can be a first step, and we can then discuss whether such performance is acceptable. As far as we know, practicing ethics requires empirical and field study; we cannot simply rely on rationales and paper essays to determine whether the actions of learnt machines are wrong or not. The failures can be further divided into accidents, errors, or even bugs created by the developers.
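The bias/variance idea above can be made concrete: given repeated predictions from a model retrained on different samples, bias is the systematic offset from the truth and variance is the spread around the model's own mean. The function name and numbers below are hypothetical, a minimal sketch of that first step.

```python
# Minimal sketch of quantifying bias and variance from repeated predictions
# of a model retrained on different samples. Data is hypothetical.

def bias_variance(predictions, truth):
    """predictions: one prediction per retrained model, for the same target."""
    mean_pred = sum(predictions) / len(predictions)
    bias = mean_pred - truth                                       # systematic error
    variance = sum((p - mean_pred) ** 2 for p in predictions) / len(predictions)
    return bias, variance

# Five retrained models predicting a quantity whose true value is 10.0.
preds = [9.0, 9.5, 10.5, 9.0, 10.0]
bias, var = bias_variance(preds, 10.0)
print(bias, var)  # bias ≈ -0.4 (systematic underestimate), variance ≈ 0.34
```

Whether a bias of that size is *acceptable* is exactly the kind of question that, as the answer notes, requires empirical and field study rather than analysis alone.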

Larry Lo
1

Intuitively speaking, it seems to be the case that there is little research into the implementation of AI ethics because:

  1. Society as a whole seems to comfortably agree that the current state of machine intelligence is not strong enough for it to be considered conscious or sentient. Thus we don't need to give it ethical rights (yet).

  2. Implementation of ethical behaviour into a program requires a method for computers to be capable of interpreting "meaning", which we do not know how to do yet.

k.c. sayz 'k.c sayz'