I am using the following perceptron formula $\text{step}\left(\sum(w_ix_i)-\theta \right)$.
Is $\theta$ supposed to be updated in a perceptron, like the weights $w_i$? If so, what is the formula for this?
I'm trying to make the perceptron learn AND and OR, but without updating $\theta$ I don't see how it can learn the case where both inputs are $0$. In that case the weighted sum is $0$ regardless of the weights, so the output is $\text{step}(-\theta)$, meaning $\theta$ (which starts at a random value) alone determines the output.
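To make the question concrete, here is a minimal sketch of what I have in mind (my own illustration, not from any library): the standard perceptron rule $w_i \leftarrow w_i + \eta(t - y)x_i$ applied to the weights, with $\theta$ updated with the opposite sign since it is subtracted inside the step function. The learning rate, initial values, and number of epochs are arbitrary choices for the sketch.

```python
def step(z):
    # Heaviside step function: fires when the weighted sum reaches the threshold
    return 1 if z >= 0 else 0

def train(samples, eta=0.1, epochs=100):
    w = [0.0, 0.0]
    theta = 0.5  # arbitrary initial threshold (stands in for the random value)
    for _ in range(epochs):
        for x, target in samples:
            y = step(sum(wi * xi for wi, xi in zip(w, x)) - theta)
            err = target - y
            # standard perceptron update for the weights ...
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            # ... and the opposite-sign update for theta, since it is subtracted
            theta -= eta * err
    return w, theta

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, theta = train(AND)
print([step(sum(wi * xi for wi, xi in zip(w, x)) - theta) for x, _ in AND])
# → [0, 0, 0, 1]
```

With the `theta -= eta * err` line removed, the $(0,0)$ case stays fixed at $\text{step}(-\theta)$ no matter how long training runs, which is exactly the problem I'm describing. Is this opposite-sign update the correct formula, or is $\theta$ normally folded in as a bias weight on a constant input?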