Say I have some inputs to my model that are continuous variables. What is a good way of representing them in input neurons so that the network can represent and learn various functions effectively?
A standard approach is to simply re-scale the input so that it has mean=0 and stddev=1.
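In NumPy terms (assuming `x` is a 1-D array of raw values for one feature), that re-scaling would be something like:

```python
import numpy as np

x = np.random.randn(1000) * 5 + 3        # some raw continuous feature
x_scaled = (x - x.mean()) / x.std()      # now has mean ~0, stddev ~1
```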
I came up with this alternative scheme.
Use $n$ input neurons associated with fixed grid values $x_1 < x_2 < \ldots < x_n$. An input $x = x_i$ is represented by setting neuron $i$ to 1 and all other neurons to 0. When $x_i < x < x_{i+1}$, set neurons $i$ and $i+1$ to values between 0 and 1 so as to linearly interpolate between the representations for $x_i$ and $x_{i+1}$: neuron $i$ gets $\frac{x_{i+1} - x}{x_{i+1} - x_i}$, neuron $i+1$ gets $\frac{x - x_i}{x_{i+1} - x_i}$, and everything else stays at 0.
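For concreteness, here is a minimal sketch of the encoding I have in mind (the helper name `encode` and the evenly spaced grid are just for illustration):

```python
import numpy as np

def encode(x, grid):
    """Encode scalar x as a length-n vector: 1 at a grid point,
    linear interpolation between the two nearest grid points otherwise."""
    grid = np.asarray(grid, dtype=float)
    v = np.zeros(len(grid))
    if x <= grid[0]:              # clamp below the grid
        v[0] = 1.0
    elif x >= grid[-1]:           # clamp above the grid
        v[-1] = 1.0
    else:
        i = np.searchsorted(grid, x) - 1          # grid[i] < x <= grid[i+1]
        t = (x - grid[i]) / (grid[i + 1] - grid[i])
        v[i], v[i + 1] = 1.0 - t, t
    return v

print(encode(2.5, [0, 1, 2, 3, 4]))   # -> [0.  0.  0.5 0.5 0. ]
```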
It seems like this would allow the network to treat different ranges of $x$ differently in a more direct way, and thus learn various non-linear functions of $x$ more easily. It also automatically keeps the inputs in the range $[0, 1]$ without any re-scaling.
Does this scheme make sense? Does it have a name?
Keywords like "binning" and "triangular kernels" seem relevant, but I haven't found a reference that actually describes this scheme.