This is actually an insightful question. It is indeed not obvious why the following does not hold:
$$ \color{red}{\vec{\omega} \times \vec{\omega} = \dot{\vec{\omega}}}\ . $$
The answer is that the notation $(\vec{\omega}\ \times)$ implies that we don't treat $\vec{\omega}$ exactly like any other vector: it is better understood as a linear operator, and the vector $\vec{\omega}$ is just a convenient way to represent that operator.
So let's denote this operator explicitly by $\boldsymbol{\Omega}$. It is defined so that it sends each basis vector $\hat{n}_i$ to its time derivative:
$$ \boldsymbol{\Omega}\ \hat{n}_i = \dot{\hat{n}}_i $$
Now, it just so happens that in $\mathbb{R}^3$ this linear operator can be written as a skew-symmetric $3\times 3$ matrix. This in turn implies that its action can be expressed through a unique vector $\vec\omega$ such that:
$$ \boldsymbol{\Omega}\ \hat{n}_i = \vec{\omega} \times \hat{n}_i $$
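This correspondence between a skew-symmetric matrix and a cross product is easy to check numerically. Here is a minimal sketch (the helper name `skew` and the sample vector are my own, for illustration only):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix Omega such that Omega @ v == np.cross(w, v)."""
    return np.array([[ 0.0,  -w[2],  w[1]],
                     [ w[2],  0.0,  -w[0]],
                     [-w[1],  w[0],  0.0]])

omega = np.array([0.3, -1.2, 0.7])   # an arbitrary angular velocity
Omega = skew(omega)

# The operator acts on each basis vector exactly as the cross product does:
for i in range(3):
    n_i = np.eye(3)[i]
    assert np.allclose(Omega @ n_i, np.cross(omega, n_i))
```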
Let's further note that, for a general vector $\vec{v} = \sum_i v^i\hat{n}_i$, the equation:
$$ \dot{\vec{v}} = \vec{\omega}\times\vec{v} $$
is, by the above definition, only true for a vector undergoing a pure rotation, i.e. one whose basis vectors $\hat{n}_i(t)$ change with time while its components stay fixed. The equation does not hold if the components also change: the full product rule then gives an extra term, $\dot{\vec{v}} = \sum_i \dot{v}^i\hat{n}_i + \vec{\omega}\times\vec{v}$.
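A quick numerical sketch of the pure-rotation case, assuming a frame rotating at a constant rate about the $z$-axis (the specific $R(t)$ and sample values here are illustrative, not from the answer):

```python
import numpy as np

omega_z = 2.0
omega = np.array([0.0, 0.0, omega_z])

def R(t):
    """Rotation about z by angle omega_z * t; its columns are the moving basis."""
    c, s = np.cos(omega_z * t), np.sin(omega_z * t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

v0 = np.array([1.0, 2.0, 3.0])   # fixed components in the rotating frame
t, h = 0.4, 1e-6

# Finite-difference derivative of v(t) = R(t) v0, a purely rotating vector:
v_dot = (R(t + h) @ v0 - R(t - h) @ v0) / (2 * h)

# It agrees with omega x v, since the components never change:
assert np.allclose(v_dot, np.cross(omega, R(t) @ v0), atol=1e-6)
```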
The trouble is that when we write $\dot{\vec{\omega}}$ we want just the opposite: in the case of $\vec{\omega}$, only the components change, and we don't want to differentiate its basis vectors. This becomes clear once we express $\boldsymbol{\Omega}$ in components $\Omega_{ij}$:
$$ \left(\dot{\boldsymbol{\Omega}}\right)_{ij} = \dot{\Omega}_{ij}\ , $$
which gives us another linear operator. Now, it's easy to see that the time derivative of a skew-symmetric matrix remains skew-symmetric, so in $\mathbb{R}^3$ it again corresponds to a unique vector such that:
$$ \dot{\boldsymbol{\Omega}} \equiv \dot{\vec{\omega}}\ \times \equiv \vec{\alpha} \times $$
So in $\mathbb{R}^3$ the components $\dot{\Omega}_{ij}$ again correspond to the components of a unique vector $\vec{\alpha}$, which, as is easy to check, satisfies $\vec{\alpha}=\dot{\vec{\omega}}$. I hope this helps explain why we apply the product rule in this apparently inconsistent manner when computing the second time derivative of a position vector (as textbooks often do to derive the Centrifugal, Coriolis and Euler terms).
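The claim $\vec{\alpha}=\dot{\vec{\omega}}$ can also be checked numerically: differentiating the skew-symmetric matrix of a time-varying $\vec{\omega}(t)$ gives exactly the skew-symmetric matrix of $\dot{\vec{\omega}}(t)$. A sketch (the particular $\vec{\omega}(t)$ below is an arbitrary example of mine):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[ 0.0,  -w[2],  w[1]],
                     [ w[2],  0.0,  -w[0]],
                     [-w[1],  w[0],  0.0]])

def omega(t):
    # An arbitrary time-varying angular velocity, chosen for illustration.
    return np.array([np.sin(t), t**2, np.cos(2 * t)])

def omega_dot(t):
    # Its analytic time derivative, component by component.
    return np.array([np.cos(t), 2 * t, -2 * np.sin(2 * t)])

t, h = 0.7, 1e-6

# Finite-difference derivative of the operator Omega(t) = skew(omega(t)):
Omega_dot = (skew(omega(t + h)) - skew(omega(t - h))) / (2 * h)

# It is again skew-symmetric ...
assert np.allclose(Omega_dot, -Omega_dot.T)
# ... and its associated vector alpha is just the componentwise omega_dot:
assert np.allclose(Omega_dot, skew(omega_dot(t)), atol=1e-6)
```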
I also highly recommend this excellent answer by @peek-a-boo, which gives a very rigorous treatment of the same subject.