8

Whenever I read a book about neural networks or machine learning, the introductory chapter says that we haven't been able to replicate the brain's power due to its massive parallelism.

Nowadays, transistors have shrunk to the nanometer scale, far smaller than a nerve cell, and we can easily build very large supercomputers.

  • Computers have much larger memories than brains.
  • Computers can communicate faster than brains (clock periods are measured in nanoseconds).
  • Computers can be of arbitrarily large size.

So my question is: why can't we replicate the brain's parallelism, if not its information-processing ability (since the brain is still not well understood), even with such advanced technology? What exactly is the obstacle we are facing?


3 Answers

5

One probable hardware limiting factor is internal bandwidth. A human brain has $10^{15}$ synapses. Even if each one exchanges only a few bits of information per second, that is on the order of $10^{15}$ bytes/sec of internal bandwidth. A fast GPU (like those used to train neural networks) might approach $10^{11}$ bytes/sec of internal bandwidth. You could gang 10,000 of them together to get something close to the total internal bandwidth of the human brain, but the interconnects between the nodes would be relatively slow and would bottleneck the flow of information between different parts of the "brain."

Another limitation might be raw processing power. A modern GPU has maybe 5,000 math units. Each unit has a cycle time of ~1 ns and might need ~1,000 cycles to do the equivalent of the processing work one neuron does in ~1/10 of a second (this value is pulled out of thin air; we don't really know the most efficient way to match brain processing in silicon). So a single GPU might be able to match $5 \times 10^8$ neurons in real time, and you would need roughly 200 of them to match the processing power of the whole brain.

This back-of-the-envelope calculation shows that internal bandwidth is probably a more severe constraint.
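
For concreteness, here is the same back-of-the-envelope arithmetic as a small Python sketch. Every constant is the same rough, order-of-magnitude guess used above (the $10^{11}$ neuron count is the one implied by the 200-GPU figure), not a measured value.

```python
# Rough back-of-the-envelope comparison from the answer above.
# All constants are order-of-magnitude guesses, not measured values.

SYNAPSES          = 1e15    # synapses in a human brain
BYTES_PER_SYN_SEC = 1       # ~a few bits/s per synapse, rounded to ~1 byte/s
GPU_BANDWIDTH     = 1e11    # bytes/s of internal bandwidth on a fast GPU

NEURONS           = 1e11    # ~10^11 neurons in a human brain (implied by the 200-GPU figure)
GPU_MATH_UNITS    = 5_000   # math units on a modern GPU
CYCLES_PER_SEC    = 1e9     # ~1 ns cycle time
CYCLES_PER_NEURON = 1_000   # guessed cycles to match one neuron's work per period
NEURON_PERIOD     = 0.1     # seconds of "work" per neuron update

# Bandwidth side: how many GPUs to match the brain's internal traffic?
brain_bandwidth    = SYNAPSES * BYTES_PER_SYN_SEC
gpus_for_bandwidth = brain_bandwidth / GPU_BANDWIDTH

# Compute side: one math unit has CYCLES_PER_SEC * NEURON_PERIOD cycles per
# neuron period, so it can emulate that many / CYCLES_PER_NEURON neurons live.
neurons_per_gpu  = GPU_MATH_UNITS * CYCLES_PER_SEC * NEURON_PERIOD / CYCLES_PER_NEURON
gpus_for_compute = NEURONS / neurons_per_gpu

print(f"GPUs needed for bandwidth: {gpus_for_bandwidth:,.0f}")  # ~10,000
print(f"GPUs needed for compute:   {gpus_for_compute:,.0f}")    # ~200
```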

antlersoft
3

This has been my field of research. The other answers suggest that we don't have sufficient computational power, but that is not entirely true.

Computational estimates for the human brain range from 10 petaFLOPS ($1 \times 10^{16}$ FLOPS) to 1 exaFLOPS ($1 \times 10^{18}$ FLOPS). Let's use the most conservative figure, 1 exaFLOPS. The Sunway TaihuLight can do about 90 petaFLOPS, which is $9 \times 10^{16}$ FLOPS.

On that estimate, the human brain is perhaps 11x more powerful than TaihuLight. So, if the computational theory of mind were true, TaihuLight should be able to match the reasoning ability of an animal about 1/11th as intelligent as a human.

If we look at a list of animals by number of cortical neurons, the squirrel monkey has about 1/12th as many neurons in its cerebral cortex as a human. With AI, we cannot match the reasoning ability of a squirrel monkey.

A dog has about 1/30th the number of neurons. With AI, we cannot match the reasoning ability of a dog.

A brown rat has about 1/500th the number of neurons. With AI, we cannot match the reasoning ability of a rat.

This gets us down to 2 petaFLOPS or 2,000 teraFLOPS. There are 67 supercomputers worldwide that should be capable of matching this.

A mouse has half the number of neurons as a brown rat. There are 190 supercomputers that should be able to match its reasoning ability.

A frog or a non-schooling fish needs about 1/5th of that, around 200 teraFLOPS. Every one of the top 500 supercomputers is at least 2.5x as powerful as that. Yet none of them can match these animals. The scaling argument is sketched below.
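
To make that scaling argument explicit, here is a minimal sketch that applies the neuron-count ratios quoted in this answer to the 1 exaFLOPS estimate. The ratios are the rough figures used above, not precise counts.

```python
# If a human brain needs ~1 exaFLOPS, scale that requirement by each
# animal's rough share of cortical neurons (figures quoted in the answer).

HUMAN_FLOPS = 1e18  # conservative (high-end) estimate for the human brain

cortical_neuron_ratio = {
    "squirrel monkey": 1 / 12,
    "dog":             1 / 30,
    "brown rat":       1 / 500,
    "mouse":           1 / 1000,
    "frog / fish":     1 / 5000,   # ~1/5th of a mouse
}

for animal, ratio in cortical_neuron_ratio.items():
    required = HUMAN_FLOPS * ratio
    print(f"{animal:>15}: ~{required / 1e15:7.3f} petaFLOPS")
# brown rat -> ~2 petaFLOPS, mouse -> ~1 petaFLOPS, frog/fish -> ~0.2 petaFLOPS
```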

What exactly is the obstacle we are facing?

The problem is that a cognitive system cannot be defined in terms of Church-Turing computation alone. AI should be capable of matching non-cognitive animals such as arthropods, roundworms, and flatworms, but not larger fish or most reptiles.

I guess I need to give more concrete examples. The NEST system has demonstrated 1 second of activity of 520 million neurons and 5.8 trillion synapses in 5.2 minutes of wall-clock time on a 5-petaFLOPS BlueGene/Q. The current thinking is that, if they could scale the system by 200x to an exaFLOPS machine, they could simulate the human cerebral cortex at the same 1/300th of real-time speed. This might sound reasonable, but it doesn't actually add up.

A mouse cortex has about 1/1000th as many neurons as a human cortex, so the same 5-petaFLOPS system should already be capable of simulating a mouse-scale brain at roughly 1/60th of real-time speed (300 × 200 / 1000 = 60). So why isn't anyone doing it?
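
A short sketch of why the 1/60th figure follows from the numbers quoted above (all figures are the rough ones from this answer, not benchmark results):

```python
# Slowdown scales up on a slower machine and down for a smaller network.

SLOWDOWN_AT_EXAFLOPS = 300     # human cortex projected at 1/300th real time on 1 exaFLOPS
MACHINE_FRACTION     = 1 / 200   # a 5-petaFLOPS BlueGene/Q is 1/200th of an exaFLOPS
MOUSE_FRACTION       = 1 / 1000  # mouse cortex vs. human cortex, by neuron count

mouse_slowdown = SLOWDOWN_AT_EXAFLOPS / MACHINE_FRACTION * MOUSE_FRACTION
print(f"Expected mouse-cortex slowdown: 1/{mouse_slowdown:.0f} of real time")  # 1/60
```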

scientious
1

Short answer: nobody knows. Long answer: the entire body of work on strong AI. However, to say something useful to the OP: the question contains several implicit assumptions, and analyzing them may help clarify the issue:

a) Why assume that one transistor has the same functionality as one neuron? Some obvious differences: a transistor has 3 terminals, while a neuron has around 7,000 synapses; a transistor is 3 layers of material, while a neuron is a complete micro-machine with thousands of components; and each synapse is itself a switch, connected to one or more other cells, that can produce different kinds of signals (excitatory/inhibitory, with varying frequency, amplitude, ...).

b) Comparing memory sizes: the amount of memory in a person that is equivalent to a computer's is 0 bytes; we are not able to remember anything forever and without distortion. Human memory is symbolic, temporal, associative, influenced by the body and by feelings, ... Something totally different from a computer's memory.

c) All of the above concerns "hardware": if we analyze software and training, the differences are even bigger. Even the assumption that intelligence resides only in the brain, ignoring the role of the hormonal system, the senses, ..., is a simplification that has not yet been justified.

In conclusion: the human mind is totally different from a computer; we are far from understanding it, and even farther from replicating it.

Since the start of the computer age, the idea that intelligence will appear once memory, processing power, ... reach some threshold has repeatedly proven false.

pasaba por aqui