There are inconsistencies across the literature in the definition of an ergodic Markov chain, whereas the concepts of irreducibility, aperiodicity and recurrence are well grounded.
Thus, I think it is better not to lean on the term too much, and to keep the word "ergodic" for the well-known theorem.
According to the paper you cite,
> An MDP is ergodic if the Markov chain induced by any policy π is both irreducible and aperiodic, which means any state is reachable from any other state by following a suitable policy.
So what can we say about that?
Let $\mathcal{X}$ be the state space.
If the chain is irreducible, then either all states are recurrent or all states are transient.
But if $|\mathcal{X}| < \infty$, the second case is impossible: the chain must visit at least one state infinitely often, and by irreducibility that state cannot be transient.
Thus, if the chain is irreducible and $|\mathcal{X}| < \infty$, then the chain is recurrent.
Now, if the chain is irreducible and recurrent, there exists a stationary measure $\mu$, unique up to a normalisation factor; when $|\mathcal{X}| < \infty$, it can be normalised to a stationary probability distribution.
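To make the stationary measure concrete, here is a small numerical sketch for a finite chain. The transition matrix below is a made-up example (not from the cited reference); for a finite irreducible chain, the normalised stationary measure solves $\mu P = \mu$ with $\sum_x \mu(x) = 1$.

```python
import numpy as np

# Example irreducible chain on 3 states (illustrative matrix, chosen arbitrarily).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# The stationary distribution mu satisfies mu P = mu, i.e. (P^T - I) mu^T = 0,
# together with the normalisation sum(mu) = 1. Stack both constraints and
# solve the (consistent) overdetermined system by least squares.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
mu, *_ = np.linalg.lstsq(A, b, rcond=None)

# mu is invariant under P and is a genuine probability distribution.
assert np.allclose(mu @ P, mu)
assert np.isclose(mu.sum(), 1.0)
```

Uniqueness (up to normalisation) is what makes this linear system well posed: for an irreducible finite chain, the solution space of $\mu P = \mu$ is one-dimensional.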
Also, if the chain is recurrent, irreducible and aperiodic, and $|\mathcal{X}| < \infty$, then there is an $n_0$ such that for any $n > n_0$, $P_n(x, y) > 0$ for all states $x, y \in \mathcal{X}$, where $P_n$ is the $n$-step transition probability kernel. (The finiteness matters: in the countably infinite case, the threshold in general depends on the pair $(x, y)$.) This means any state $x \in \mathcal{X}$ is reachable from any other state $y \in \mathcal{X}$.
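One can observe this $n_0$ numerically. The matrix below is again a hypothetical example: it is irreducible and aperiodic (it has self-loops) but has zero entries, so $P_1$ is not everywhere positive; iterating the kernel finds the first $n$ at which every entry of $P_n$ is positive.

```python
import numpy as np

# A finite, irreducible, aperiodic chain whose one-step kernel has zeros
# (illustrative matrix, not taken from the cited reference).
P = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
    [0.5, 0.0, 0.5],
])

# Find the smallest n such that P_n(x, y) > 0 for ALL pairs (x, y).
# Such an n exists precisely because the chain is finite, irreducible
# and aperiodic; afterwards every larger power is also positive.
Pn = P.copy()
n = 1
while not (Pn > 0).all():
    Pn = Pn @ P
    n += 1

print(n)  # -> 2: already P_2 has all entries strictly positive
```

For this chain $n_0 = 1$ works in the statement above ($P_n > 0$ entrywise for every $n > 1$), which is exactly the "reachable from any other state" property in the quoted definition.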
Sources: [1]

[1]: J.-F. Le Gall, *Intégration, Probabilités et Processus Aléatoires* (2006)