Questions tagged [neural-turing-machine]

For questions related to the neural Turing machine model, proposed in "Neural Turing Machines" (2014) by Alex Graves et al.

6 questions
13
votes
1 answer

How would DeepMind's new differentiable neural computer scale?

DeepMind just published a paper about the differentiable neural computer, which essentially combines a neural network with an external memory. The idea is to teach the neural network to create and recall useful explicit memories for a certain task. This…
12
votes
2 answers

How much of DeepMind's work is actually reproducible?

DeepMind has published many works on deep learning in recent years, most of them state-of-the-art on their respective tasks. But how much of this work has actually been reproduced by the AI community? For instance, the Neural Turing…
4
votes
0 answers

How does the memory mechanism (reading and writing) work in a neural Turing machine?

In the neural Turing machine (NTM), reading from memory is represented as \begin{align} r_t \leftarrow \sum\limits_i^R w_t(i) \mathcal{M}_t(i) \tag{2} \end{align} and writing to memory is represented as Step 1: Erase \begin{align} \mathcal{M}_t^{erased}(i)…
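
Since the excerpt is cut off, a minimal NumPy sketch of the full read and erase/add write steps as defined in the Graves et al. paper may help; the function names and shapes here are illustrative, not from the paper:

```python
import numpy as np

def ntm_read(memory, w):
    """Read r_t = sum_i w_t(i) M_t(i): a weighted blend of memory rows."""
    # memory: (N, M) matrix of N rows; w: (N,) non-negative weights summing to 1
    return w @ memory

def ntm_write(memory, w, erase, add):
    """Erase-then-add write from the NTM paper:
       erase: M~_t(i) = M_{t-1}(i) * (1 - w_t(i) e_t)
       add:   M_t(i)  = M~_t(i) + w_t(i) a_t
    """
    erased = memory * (1.0 - np.outer(w, erase))  # erase: (M,) values in [0, 1]
    return erased + np.outer(w, add)              # add:   (M,) add vector

# Toy usage: write [1, 2, 3] into row 2 of an empty 4x3 memory, then read it back
M = np.zeros((4, 3))
w = np.array([0.0, 0.0, 1.0, 0.0])
M = ntm_write(M, w, erase=np.ones(3), add=np.array([1.0, 2.0, 3.0]))
print(ntm_read(M, w))  # [1. 2. 3.]
```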
2
votes
1 answer

Has deep learning discovered new algorithms?

Has deep learning discovered any heretofore unknown algorithms? Goodfellow et al. give an example of learning XOR, and the Universal Approximation Theorem seems to imply that deep learning might be able to represent any algorithm (or Turing…
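
The XOR example the excerpt refers to is the hand-constructed network in section 6.1 of Goodfellow et al.'s Deep Learning textbook; a minimal NumPy rendering of it (weights taken from the book):

```python
import numpy as np

# One hidden ReLU layer suffices to represent XOR,
# which no single linear layer can.
W = np.array([[1.0, 1.0],
              [1.0, 1.0]])   # input-to-hidden weights
c = np.array([0.0, -1.0])    # hidden biases
w = np.array([1.0, -2.0])    # hidden-to-output weights

def xor_net(x):
    h = np.maximum(0.0, W @ x + c)  # ReLU hidden activations
    return w @ h

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", xor_net(np.array(x, dtype=float)))  # 0.0, 1.0, 1.0, 0.0
```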
2
votes
0 answers

What is location-based addressing in a neural Turing machine?

In the neural Turing machine (NTM), content-based addressing and location-based addressing are used for memory addressing. Content-based addressing is similar to an attention-based model, weighting each row of memory, which shows the importance…
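
As a rough illustration of the two modes the excerpt contrasts, here is a NumPy sketch of content-based addressing (cosine-similarity softmax) and the location-based steps (interpolation, circular shift, sharpening) described in the NTM paper; names and shapes are mine, not the paper's:

```python
import numpy as np

def content_addressing(memory, key, beta):
    """w_c(i) ∝ exp(beta * cosine(key, M(i))): weight rows by similarity to the key."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    e = np.exp(beta * sims - np.max(beta * sims))   # numerically stable softmax
    return e / e.sum()

def location_addressing(w_content, w_prev, g, shift, gamma):
    """Interpolate with the previous weighting, circularly shift, then sharpen."""
    w_g = g * w_content + (1.0 - g) * w_prev   # gate content vs. previous focus
    n = len(w_g)
    w = np.zeros(n)
    for i in range(n):                          # circular convolution with the
        for j, s in enumerate(shift):           # shift kernel over offsets,
            offset = j - len(shift) // 2        # e.g. -1, 0, +1 for a length-3 kernel
            w[i] += w_g[(i - offset) % n] * s
    w = w ** gamma                              # sharpening counteracts blurring
    return w / w.sum()
```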
1
vote
1 answer

Reasoning behind the performance improvement with Hopfield networks

In the paper "Hopfield Networks is All You Need", the authors mention that their modern Hopfield network layers are a good replacement for pooling, GRU, LSTM, and attention layers, and tend to outperform them on various tasks. I understand that they…
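
The retrieval rule behind that claim fits in a few lines; a minimal NumPy sketch of the paper's continuous Hopfield update ξ ← X softmax(β Xᵀ ξ), one step of which coincides with attention over the stored patterns:

```python
import numpy as np

def hopfield_retrieve(X, xi, beta=8.0, steps=1):
    """Modern Hopfield update: xi <- X @ softmax(beta * X.T @ xi).

    X:  (d, N) matrix whose columns are the stored patterns
    xi: (d,) state/query vector; each step is attention with the
        stored patterns serving as both keys and values.
    """
    for _ in range(steps):
        a = beta * (X.T @ xi)
        a = np.exp(a - a.max())        # numerically stable softmax
        xi = X @ (a / a.sum())
    return xi

# Retrieve a stored pattern from a noisy query
X = np.array([[1.0, -1.0],
              [1.0,  1.0],
              [-1.0, 1.0]])            # two 3-dimensional patterns as columns
noisy = X[:, 0] + np.array([0.15, -0.1, 0.05])
print(hopfield_retrieve(X, noisy))     # close to column 0: [1, 1, -1]
```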