
A lot of people are claiming that we are at an inflection point and that machine learning/artificial intelligence will take off. This is in spite of the fact that, for a long time, machine learning has stagnated.

What are the signals that indicate that machine learning is going to take off?

In general, how do you know that we are at an inflection point for a certain technology?

nbro
alpha_989

3 Answers


This has happened in the past, when people were really excited and saying things like "we will have AI in a decade or so." It is happening again, and it is not clear why people don't learn from the history of AI. In both cases the pattern is the same: you develop a technique to solve a particular problem; you apply that technique and it seems general enough; people start to apply the same technique to all sorts of problems; people get excited that this is the silver bullet they were looking for; the technique starts to show its limitations and doesn't work for many problems; the hype is shattered; start over.

Ankur

What are the signals that indicate that machine learning is going to take off?

We simply don't know until after the fact, when the consequences of the inflection point produce a remarkable difference between the before and the after. In general terms, every considerable effect must be attributed to a particular cause.

One of the biggest limits that bounds artificial intelligence, and that apparently makes it stagnate, is the field's hunger for computational power; and since hardware technology improves much more slowly than software does, AI remains confined to labs and data centers.


Part of the reason people are so excited about recent Machine Learning milestones is that AlphaGo demonstrated a reproducible method of managing mathematical and computational intractability.

Go is interesting because it is effectively impossible to solve: it cannot be brute-forced no matter how fast processors get. Go is so complex that, for decades, humans had failed to produce an AI that could win against a skilled human player. The fact that a computer could teach itself to do something humans couldn't teach it, and something with a complexity analogous to nature to boot, is pretty extraordinary.
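To get a rough sense of that scale, here is a back-of-the-envelope sketch (my own illustration; 3^361 is the naive upper bound on 19x19 board configurations, since each point is empty, black, or white, and the number of legal positions is somewhat smaller):

```python
# Naive upper bound on 19x19 Go board configurations:
# each of the 361 points is empty, black, or white.
positions_upper_bound = 3 ** 361

# Commonly cited estimate of atoms in the observable universe (~10^80).
atoms_in_universe = 10 ** 80

# Even the square of that estimate is dwarfed by the Go bound,
# which is why enumeration is hopeless regardless of hardware speed.
print(positions_upper_bound > atoms_in_universe ** 2)  # True
```

This is why faster processors alone can never brute-force the game; any viable approach has to prune or generalize rather than enumerate.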

Combinatorial games in particular are useful because, unlike in nature, where it may be impossible to track or even be aware of every variable, intractability can be generated out of a simple set of elements and rules, and outcomes can be definitively evaluated.
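As a toy illustration of that last point (my own sketch, unrelated to AlphaGo's actual method): even a rule set as simple as single-pile Nim generates a game tree in which every position can be exhaustively searched and assigned a definitive win/loss value.

```python
from functools import lru_cache

# Single-pile Nim: a move removes 1, 2, or 3 stones, and the player
# who takes the last stone wins. Simple elements and rules, yet every
# position has a definitive evaluation via game-tree search.

@lru_cache(maxsize=None)
def player_to_move_wins(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player took the last one
    # A position is winning if at least one move leads to a position
    # that is losing for the opponent.
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Multiples of 4 turn out to be the losing positions for the player to move.
print([n for n in range(1, 13) if not player_to_move_wins(n)])  # [4, 8, 12]
```

Scaling this exhaustive evaluation up is exactly what fails for Go, which is why AlphaGo's combination of learned evaluation and search was notable.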

As proofs of concept for methods go, AlphaGo seems like a pretty strong one. It allows us to say definitively that "machine learning works," it focuses a lot of attention on the field, and it raises confidence in extending the method to real-world problems.

Beyond that, it suggests a feedback loop in which programs can improve at improving, unrestricted by human limitations. Increases in processing power are bounded by physical limitations, but algorithms are not.

DukeZhou