Short answer: It always behooves one to learn the classical version of a theory before the quantum one. Many, many people (including, but certainly not limited to, OP) could see their problems vanish by following this advice.
Medium answer (Wikipedia links): Error propagation is always done using Jacobian matrices containing all of the partial derivatives of the new parameters with respect to all of the old ones. The Fisher information matrix also transforms using a Jacobian, but the one containing the derivatives of the old parameters with respect to the new ones, which nicely agrees with the inversion in the Cramér–Rao bound. This is all classical, but of course it carries forward to the quantum case too, and parts of it are even in the Liu paper in question.
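If it helps to see the bookkeeping explicitly, here is a minimal numpy sketch of those two transformation rules; the polar-to-Cartesian reparametrization and the function name are just my own illustrative choices:

```python
import numpy as np

# Illustrative reparametrization (my choice): old parameters (r, phi),
# new parameters (x, y) = (r*cos(phi), r*sin(phi)).
def jacobian_new_wrt_old(r, phi):
    """d(x, y)/d(r, phi)."""
    return np.array([[np.cos(phi), -r * np.sin(phi)],
                     [np.sin(phi),  r * np.cos(phi)]])

r, phi = 2.0, 0.3
A = jacobian_new_wrt_old(r, phi)

# Error propagation: covariances transform with the Jacobian of new w.r.t. old.
cov_old = np.diag([0.01, 0.04])            # Cov(r, phi)
cov_new = A @ cov_old @ A.T                # Cov(x, y)

# Fisher information transforms with the Jacobian of old w.r.t. new,
# which here is the inverse of A.
F_old = np.linalg.inv(cov_old)             # pretend the bound is saturated
B = np.linalg.inv(A)                       # d(r, phi)/d(x, y)
F_new = B.T @ F_old @ B

# The two rules agree with the inversion in the Cramer-Rao bound:
assert np.allclose(np.linalg.inv(F_new), cov_new)
```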
Long answer: One should never expect a measurement with a single expectation value $\langle \hat{M}\rangle$ to be useful for estimating two or more parameters $\boldsymbol{\theta}=(\theta_1,\theta_2,\cdots)^\top$. Instead, one uses a measurement with multiple possible outcomes, given in quantum theory by a POVM with elements $\hat{M}_i$ and outcome probabilities given by the Born rule $$p_i(\boldsymbol{\theta})=\mathrm{Tr}(\rho_{\boldsymbol{\theta}}\hat{M}_i).$$ These can be used to compute the classical Fisher information matrix for a particular measurement, or one can use the world of quantum Fisher information matrices to get a bound on the Fisher information matrix that may or may not correspond to a physically attainable measurement (discussed in many places, including the Liu paper).
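For concreteness, here is a small numpy sketch of that pipeline for a single qubit: Born-rule probabilities from a made-up four-outcome POVM, then the classical Fisher information matrix $F_{jk}=\sum_i \partial_j p_i\,\partial_k p_i/p_i$ via finite differences. The state, the POVM, and the step size are my own illustrative choices, not anything specific from the Liu paper:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rho(theta):
    """Qubit state whose Bloch vector is set by two angles (theta[0], theta[1])."""
    t1, t2 = theta
    n = np.array([np.sin(t1) * np.cos(t2), np.sin(t1) * np.sin(t2), np.cos(t1)])
    return 0.5 * (I2 + n[0] * sx + n[1] * sy + n[2] * sz)

# A four-outcome POVM: halves of the projectors onto the Z and X eigenbases.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)
povm = [0.5 * np.outer(v, v.conj()) for v in (zero, one, plus, minus)]

def born_probs(theta):
    """p_i(theta) = Tr(rho_theta M_i)."""
    return np.real([np.trace(rho(theta) @ M) for M in povm])

def classical_fim(theta, eps=1e-6):
    """F_jk = sum_i (d_j p_i)(d_k p_i) / p_i, derivatives by central differences."""
    p = born_probs(theta)
    grads = []
    for j in range(len(theta)):
        dt = np.zeros(len(theta))
        dt[j] = eps
        grads.append((born_probs(theta + dt) - born_probs(theta - dt)) / (2 * eps))
    grads = np.array(grads)                  # shape (n_params, n_outcomes)
    return np.einsum('ji,ki,i->jk', grads, grads, 1.0 / p)

print(classical_fim(np.array([0.7, 0.4])))
```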
When there are more parameters than measured quantities, as with $\theta_1$ and $\theta_2$ when only $M_1=\langle \hat{M}_1\rangle$ is measured, we can compute the elements of the Jacobian matrix
$$J_{ij}=\frac{\partial M_i}{\partial \theta_j} \quad\Rightarrow J=\begin{pmatrix}\frac{\partial M_1}{\partial \theta_1}&\frac{\partial M_1}{\partial \theta_2}\\
\frac{\partial M_2}{\partial \theta_1}&\frac{\partial M_2}{\partial \theta_2}\end{pmatrix}=\begin{pmatrix}\frac{\partial M_1}{\partial \theta_1}&\frac{\partial M_1}{\partial \theta_2}\\
0&0\end{pmatrix},$$ where the second row vanishes because we don't actually have any other measurement outcome, so we must make up a fictitious quantity "$M_2$" that takes the same value regardless of the actual parameters. The Fisher or quantum Fisher information matrix expressed in terms of the measurement outcomes $\mathbf{M}=(M_1,M_2,\cdots)$ transforms to the one expressed in terms of $\boldsymbol{\theta}$ via
$$F(\boldsymbol{\theta})=J^\top F(\mathbf{M})J.$$ Try to invert that matrix and you hit a singularity! The Cramér–Rao bound says
$$\mathrm{Cov}(\boldsymbol{\theta},\boldsymbol{\theta}^\top)\geq F(\boldsymbol{\theta})^{-1}=J^{-1}F(\mathbf{M})^{-1} J^{-\top}.$$ But $J$ has no inverse here (its second row is zero), so the bound diverges. This means that the variance of any $\theta_i$ and the covariance between any pair $\mathrm{Cov}(\theta_i,\theta_j)$ will be infinite, because there is no possible method for determining any single parameter when $M_1$ depends on multiple parameters and the other $M_i$ depend on none of them (being nonexistent is the same thing as existing and not depending on any of the parameters).
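A toy numerical check of that blow-up (the numbers in $F(\mathbf{M})$ and $J$ are arbitrary, chosen only so that the second row of $J$ is zero):

```python
import numpy as np

# A well-behaved Fisher information matrix in terms of the measured quantities M
F_M = np.array([[4.0, 1.0],
                [1.0, 3.0]])

# Jacobian when only M1 is actually measured: the fictitious "M2" row is all zeros.
J = np.array([[0.8, 0.5],
              [0.0, 0.0]])

F_theta = J.T @ F_M @ J
print(np.linalg.matrix_rank(F_theta))   # 1: singular, so F_theta has no inverse
print(np.linalg.eigvalsh(F_theta))      # one eigenvalue is (numerically) zero

# Along the zero-eigenvalue direction the Cramer-Rao bound is infinite:
# no finite covariance matrix for (theta1, theta2) can satisfy it.
```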
Going in the other direction is no problem: if you measure $\theta_1$ and $\theta_2$ and want to infer $M_1$, life is good. Or, if you measure two quantities $M_1$ and $M_2$ that each depend on $\theta_1$ and $\theta_2$ in functionally independent ways, the Jacobian will not be singular and you will be happy (a quick numerical contrast follows below). You can read papers about singularities in quantum Fisher information matrices, which can occur either because $J$ is singular (not invertible) or because $F(\mathbf{M})$ itself is singular. The former implies you're looking at the wrong parameters; the latter implies your state or your measurement strategy is insufficient.
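For contrast with the toy example above, filling in the second row of $J$ with a functionally independent second observable (same arbitrary numbers, my own) gives a perfectly finite bound:

```python
import numpy as np

F_M = np.array([[4.0, 1.0],
                [1.0, 3.0]])

# Now M2 genuinely depends on the parameters, and not as a multiple of M1's row.
J = np.array([[0.8, 0.5],
              [0.3, 0.9]])

F_theta = J.T @ F_M @ J
cov_bound = np.linalg.inv(F_theta)       # finite Cramer-Rao bound
print(np.diag(cov_bound))                # finite variances for theta1 and theta2
```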
Now, there is much more you can do once you know the properties of the quantum Fisher information matrix. Yes, Fisher information matrices are additive over independent repetitions (and the central limit theorem still holds), so repeating a measurement $N$ times gives an overall factor of $N$ in the Fisher information matrix, such that every covariance between the parameters of interest goes down by $1/N$. And yes, you can find quantum strategies that make each of those go down by $1/N^2$. It is possible to find states for which some elements of the QFI scale with $N$ and others with $N^2$, so all extremes and middle grounds are possible; a quick sketch of the $N$-versus-$N^2$ point follows below. The canonical examples of multiparameter estimation in quantum theory are multiphase estimation and rotation sensing. Hopefully this answer is sufficient to get you started on the rest!
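Here is that sketch (the standard single-phase example with a collective rotation generator $G=\tfrac{1}{2}\sum_k \sigma_z^{(k)}$; the helper functions are my own). For a pure state under $e^{-i\theta G}$ the QFI is $4\,\mathrm{Var}(G)$, so a product probe gives QFI $=N$ while a GHZ probe gives QFI $=N^2$:

```python
import numpy as np
from functools import reduce

def qfi_pure(psi, G):
    """QFI of a pure state |psi> under exp(-i*theta*G): 4 * Var(G)."""
    Gpsi = G @ psi
    mean = np.real(np.vdot(psi, Gpsi))
    return 4.0 * (np.real(np.vdot(Gpsi, Gpsi)) - mean**2)

def collective_z(n):
    """Generator G = (1/2) * sum_k sigma_z acting on qubit k of n qubits."""
    sz = np.diag([1.0, -1.0])
    eye = np.eye(2)
    G = np.zeros((2**n, 2**n))
    for k in range(n):
        ops = [sz if j == k else eye for j in range(n)]
        G += 0.5 * reduce(np.kron, ops)
    return G

plus = np.ones(2) / np.sqrt(2)
for n in (2, 3, 4):
    product = reduce(np.kron, [plus] * n)     # |+>^n probe: QFI = n
    ghz = np.zeros(2**n)
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)         # GHZ probe:   QFI = n^2
    G = collective_z(n)
    print(n, qfi_pure(product, G), qfi_pure(ghz, G))
```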