
A little background: I’ve been learning about data science on and off for around a year, though I started thinking about artificial intelligence a few years ago. I have a cursory understanding of some common concepts but still not much depth. When I first learned about deep learning, my automatic response was “that’s not how our minds do it.” Deep learning is obviously an important topic, but I’m trying to think outside the black box.

I think of deep learning as being “outside-in” in that a model has to rely on examples to understand (for lack of a better term) that some dataset is significant. However, our minds seem to know when something is significant in the absence of any prior knowledge of the thing (i.e., “inside-out”).

Here’s a thing:

[Image: a piece of IKEA hardware]

I googled “IKEA hardware” to find that. The point is that you probably don’t know what this is or have any existing mental relationship between the image and anything else, but you can see that it’s something (or two somethings). I realize there is unsupervised learning, image segmentation, etc., which deal with finding order in unlabeled data, but I think this example illustrates the difference between the way we tend to think about machine learning/AI and how our minds actually work.

More examples:

1) [Image: a jagged line, like a stock chart]

2) [Image: the same line simplified into a horizontal segment and a rising segment]

3) [Image: a curve resembling log(x)]

Let’s say that #1 is a stock chart. If I were viewing the chart and trying to detect a pattern, I might mentally simplify the chart down to #2. That is, the chart can be simplified into a horizontal segment and a rising segment.

For #3, let’s say this represents log(x). Even though it’s not a straight line, someone with no real math background could describe it as an upward slope that flattens out as the line gets higher. That is, the line can still be reduced to a small number of simple ideas.
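The kind of reduction described above (a jagged chart collapsing into a flat segment plus a rising segment) can be sketched mechanically with the Ramer-Douglas-Peucker line-simplification algorithm, which keeps only the points that deviate meaningfully from a straight chord. This is just an illustrative toy (the data and tolerance are made up), not a claim about how minds do it:

```python
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: keep only points that deviate from the
    chord between the endpoints by more than epsilon."""
    points = np.asarray(points, dtype=float)
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.hypot(*chord)
    if norm == 0:
        dists = np.hypot(*(points - start).T)
    else:
        # Perpendicular distance of every point from the start-end chord.
        dists = np.abs(np.cross(chord, points - start)) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        # The worst offender is significant: split there and recurse.
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# A flat stretch followed by a rising stretch, like chart #1 above.
x = np.arange(20, dtype=float)
y = np.concatenate([np.zeros(10), np.arange(10, dtype=float)])
simplified = rdp(np.column_stack([x, y]), epsilon=0.5)
print(simplified)  # three points survive: the start, the kink, the end
```

Twenty points collapse to three, which is roughly the “horizontal segment plus rising segment” description in #2.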

I think this simplification is the key to the gap between how our minds work and what currently exists in AI. I’m aware of Fourier transforms, polynomial regression, etc., but I think there’s a more general process of finding order in sensory data. Once we identify something orderly (i.e., something that can’t reasonably be random noise), we label it as a thing and then our mental network establishes relationships between it and other things, higher order concepts, etc.

I’ve been trying to think about how to use decision trees to find pockets of order in data (to no avail yet; I haven’t figured out how to apply them to all of the scenarios above), but I’m wondering if there are any other techniques or schools of thought that align with this general theory.
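For what it’s worth, a depth-limited regression tree is already a device for carving data into a few “pockets,” each summarized by its mean. A minimal hand-rolled sketch (the function names and the log(x) example are illustrative, not a standard library API):

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on x minimizing total squared error of a
    two-piece constant fit (the greedy step of a regression tree)."""
    best_err, best_t = np.inf, None
    for t in x[1:]:
        left, right = y[x < t], y[x >= t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best_err:
            best_err, best_t = err, t
    return best_t

def tree_segments(x, y, depth):
    """Recursively split, returning ((x_lo, x_hi), mean) leaves:
    a piecewise-constant simplification of the curve."""
    if depth == 0 or len(x) < 2:
        return [((x.min(), x.max()), y.mean())]
    t = best_split(x, y)
    mask = x < t
    return (tree_segments(x[mask], y[mask], depth - 1)
            + tree_segments(x[~mask], y[~mask], depth - 1))

x = np.linspace(1, 100, 200)
segments = tree_segments(x, np.log(x), depth=2)
for (lo, hi), mean in segments:
    print(f"x in [{lo:5.1f}, {hi:5.1f}] -> log(x) ~ {mean:.2f}")
```

The four leaves form a crude “small number of simple ideas” summary of the curve, though unlike the human description they are constants rather than slopes.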

SuperCodeBrah

1 Answer


It sounds like you are interested in the ideas of intrinsic motivation and attention in the context of machine learning. These are big topics, and the subject of much active research.

Intrinsic motivation says that the key to identifying interesting patterns and skills worth learning is to give the agent some internal reason to learn to do new things. This is not dissimilar from what humans have: learning new things and exercising our capabilities to the fullest is what Aristotle identified as the good life. There are thus good reasons to think that intrinsic motivation for AI might solve the problem you identify. Current research in this area explores different mathematical ways to represent intrinsic motivation.
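One common mathematical formalization is a curiosity bonus: reward the agent in proportion to the prediction error of a learned forward model, so states it cannot yet predict feel “interesting.” Here is a minimal sketch under toy assumptions (a four-state cyclic environment and a linear model; none of this is a standard implementation):

```python
import numpy as np

# Toy curiosity bonus: intrinsic reward = error of a learned forward model.
# The environment cycles through four one-hot states; as the model learns
# the transition, the bonus decays and the states stop being "novel".
W = np.zeros((4, 4))   # linear forward model (illustrative)
lr = 0.5

def intrinsic_reward(obs, next_obs):
    global W
    error = next_obs - W @ obs
    W += lr * np.outer(error, obs)   # one SGD step on the forward model
    return float(error @ error)      # squared prediction error as bonus

states = np.eye(4)
bonuses = []
for step in range(40):
    s, s_next = states[step % 4], states[(step + 1) % 4]
    bonuses.append(intrinsic_reward(s, s_next))

# The first visit is maximally surprising; by the end the same transition
# barely registers, so a curious agent would move on to something new.
print(bonuses[0], bonuses[-1])
```

The point of the sketch is the shape of the signal: novelty produces large bonuses that shrink as the world becomes predictable.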

Attention was the subject of a large burst of research in deep neural networks over the last few years. Here's a recent talk from AWS at ICML that provides a good overview. The idea here is that an agent can learn both a reasonable mapping from inputs to outputs for some problem, and a separate mapping that describes how different future inputs should "activate" certain parts of the input/output mapping that the agent has learned. Essentially, attention-driven models include a second component that learns which features of the input to pay attention to when engaging in certain kinds of tasks.
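To make that second mapping concrete, here is the standard scaled dot-product attention computation in miniature (the keys, values, and query below are made-up toy data): a query scores every input, and the output is a weighted mix of the values, with the weights playing the role of "what to pay attention to."

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Scaled dot-product attention: score each input against the query,
    then return a weighted mix of the values."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)  # how relevant is each input?
    weights = softmax(scores)           # normalized attention weights
    return weights @ values, weights

# Three inputs; the query closely matches the second key, so attention
# concentrates there and the output is dominated by the second value.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
values = np.array([[10.0], [20.0], [30.0]])
query = np.array([0.0, 4.0])
out, weights = attention(query, keys, values)
print(weights.round(3), out.round(1))
```

In a trained network the keys and values are themselves learned, so the model learns both what each input means and which inputs matter for the task at hand.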

John Doucette