
In spike-based neural networks, there is a learning rule called STDP (Spike-Timing-Dependent Plasticity). It is a fully unsupervised, online rule: each synapse adapts continuously as data flows through the network, based only on the relative timing of pre- and postsynaptic spikes.
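To make the comparison concrete, here is a minimal sketch of the standard pair-based STDP update (my own illustration; the parameter names `a_plus`, `a_minus`, `tau_plus`, `tau_minus` are conventional, not from a specific library):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate if the presynaptic spike precedes
    the postsynaptic spike, depress otherwise. Times are in ms."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post -> strengthen (causal pairing)
        dw = a_plus * np.exp(-dt / tau_plus)
    else:        # post fired before pre -> weaken (anti-causal pairing)
        dw = -a_minus * np.exp(dt / tau_minus)
    return np.clip(w + dw, 0.0, 1.0)  # keep the weight in [0, 1]
```

Note that the update needs only locally available information (the two spike times and the current weight), which is exactly the property I'd like to keep.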

I've been trying to find learning rules that work like STDP but can be applied to a conventional rate-based network with multiple layers and complex connections (e.g., a multi-layer perceptron). I found some interesting techniques such as Principal Component Analysis, Kohonen's Self-Organizing Map, competitive learning, Oja's rule, and the Hebbian rule.
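For example, Oja's rule is Hebbian learning with a decay term that keeps the weight norm bounded, and for a single linear unit it converges to the first principal component of the input stream. A minimal sketch (my own illustration, with an arbitrary anisotropic toy distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                      # learning rate
w = rng.normal(size=3)
w /= np.linalg.norm(w)          # start from a random unit vector

# Oja's rule: dw = eta * y * (x - y * w)
# The -y^2 * w decay term normalizes ||w|| toward 1, so w converges
# to the direction of maximum variance (the first principal component).
for _ in range(5000):
    x = rng.multivariate_normal(np.zeros(3),
                                np.diag([3.0, 1.0, 0.2]))  # variance concentrated on axis 0
    y = w @ x                   # unit's linear output
    w += eta * y * (x - y * w)
```

This is exactly the kind of local, online update I want, but it describes a single unit; stacking such layers naively does not give a useful deep representation, which is the problem stated below.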

But none of these seems to scale to a deeper neural network with multiple layers and more complex connectivity.

I'm looking for a learning rule that works unsupervised and online, similar to STDP, but that can be applied to traditional neural networks.

