I learned that when CNN filters are defined, they are initialized with random weights and biases (I'm not sure about the biases).
Then, as training goes on, the weight values change and each filter produces its own feature map.
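To make the randomness concrete, here is a minimal NumPy sketch of a typical He/Kaiming-style initialization (this is an assumption about the scheme; frameworks differ in the exact distribution, and some simply zero the biases):

```python
import numpy as np

rng = np.random.default_rng(42)
in_channels, k = 3, 3           # 3 input channels, 3x3 kernel
fan_in = in_channels * k * k    # inputs feeding each output unit

def init_filter():
    # Weights drawn from N(0, 2 / fan_in) (He-style); the bias here is
    # drawn from a small uniform range, though zero-init is also common.
    w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(in_channels, k, k))
    b = rng.uniform(-1.0 / np.sqrt(fan_in), 1.0 / np.sqrt(fan_in))
    return w, b

w1, b1 = init_filter()
w2, b2 = init_filter()

# Two independently drawn filters are almost surely not identical,
# which is what lets them learn different features (symmetry breaking).
print(np.allclose(w1, w2))  # → False
```

Since the draws are continuous, the chance that two filters start out identical is effectively zero.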
What I don't understand is: if the filters are initialized with random values, is there any chance that
- different filters produce the same feature map, or
- a feature map comes out different every time training is repeated?
It seems a little inefficient to initialize the weights of every filter randomly. More precisely, I think that (in most networks) the number of filters is too small to reliably capture meaningful features.
Is the second case the reason CNNs involve randomness?