In the proof of the No-Free-Lunch theorem in the book Understanding Machine Learning: From Theory to Algorithms (Cambridge University Press, pp. 37–38), the authors write:
Let $C$ be a subset of the domain set $\mathcal{X}$ of size $2m$.
... There are $k = (2m)^{m}$ possible sequences of $m$ examples from $C$.
I don't understand where the number $k = (2m)^{m}$ comes from. My own calculation gives:
$$\begin{align}
C(2m, m) &= \frac{(2m)!}{m!\,(2m-m)!} \\
&= \frac{(2m)!}{m!\,m!}
\end{align}$$
This is nowhere near $(2m)^{m}$. Where does my reasoning go wrong?
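
To quantify the gap I'm describing, here is a quick sanity check I put together (my own sketch, not from the book): it enumerates the length-$m$ tuples over a set of size $2m$, which number $(2m)^{m}$, alongside the $m$-element subsets that my $C(2m, m)$ counts, for a few small values of $m$. The name `domain` is just a stand-in I chose for $C$.

```python
# My own sanity check (not from the book): compare the number of length-m
# tuples over a set of size 2m with the number of m-element subsets of it.
from itertools import product, combinations
from math import comb

for m in range(1, 5):
    domain = range(2 * m)  # stand-in for C, with |C| = 2m
    n_tuples = sum(1 for _ in product(domain, repeat=m))   # ordered, repetition allowed
    n_subsets = sum(1 for _ in combinations(domain, m))    # unordered, no repetition
    print(f"m={m}: tuples={n_tuples} vs (2m)^m={(2 * m) ** m}, "
          f"subsets={n_subsets} vs C(2m,m)={comb(2 * m, m)}")
```

Already at $m = 4$ the tuple count is $4096$ while my subset count is only $70$, so the two formulas cannot be counting the same thing, which is exactly what confuses me.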