Consider a single-phase estimation problem in a quantum photonics experiment (related post). For example, take a 3-photon quantum circuit (such as a Mach-Zehnder interferometer, whose phase-shift operator encodes a parameter $\theta$) followed by a photon-counting measurement with two detectors at the end of the circuit, with measurement probabilities:
- $P(0,2)$: no photons detected in Detector 1, 2 photons in Detector 2.
- $P(1,1)$: 1 photon detected in each detector.
- $P(2,0)$: 2 photons detected in Detector 1, none in Detector 2.
Suppose that in a given run we carry out $\nu$ measurements in total. We obtain some set of measurement outcomes $\{m_{02},\, m_{11},\, m_{20}\}$, where $\nu = m_{02}+m_{11}+m_{20}$. We can define the corresponding likelihood function $L(\theta\mid\text{data})$ by: $$L(\theta\mid\text{data}) := P(0,2)^{m_{02}}\,P(1,1)^{m_{11}}\,P(2,0)^{m_{20}}.$$
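As a concrete illustration, here is a minimal Python sketch of this likelihood on a discretized $\theta$ grid. The function `outcome_probs` is a hypothetical stand-in (a toy model of my own, not derived from any particular circuit) for the true probabilities $P(0,2)$, $P(1,1)$, $P(2,0)$:

```python
import numpy as np

# Toy outcome probabilities, for illustration only: the true P(0,2), P(1,1),
# P(2,0) depend on the specific 3-photon circuit. The model is normalized,
# i.e. the three probabilities sum to 1 for every theta.
def outcome_probs(theta):
    p02 = 0.5 * np.cos(theta / 2) ** 2
    p20 = 0.5 * np.cos(theta / 2) ** 2
    p11 = 1.0 - p02 - p20
    return np.array([p02, p11, p20])

def log_likelihood(theta_grid, counts):
    """log L(theta|data) = m02*log P(0,2) + m11*log P(1,1) + m20*log P(2,0)."""
    m02, m11, m20 = counts
    p02, p11, p20 = outcome_probs(theta_grid)
    log = lambda p: np.log(np.clip(p, 1e-300, None))  # guard against log(0)
    return m02 * log(p02) + m11 * log(p11) + m20 * log(p20)
```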
Using Bayes' rule, given some assumed prior $P(\theta)$ and the likelihood function $L(\theta \mid \text{data})$ above, we can update the posterior (our knowledge about the distribution of the phase $\theta$) iteratively from the measurement outcomes, via $$P(\theta\mid \text{data}) = \frac{L(\theta\mid \text{data})\,P(\theta)}{P(\text{data})},$$ where $P(\text{data})$ is the normalization constant. We can instead consider many (of the order of 100) unnormalized log-posterior updates of the form $$\log P(\theta\mid \text{data}) = \log L(\theta\mid \text{data}) + \log P(\theta),$$ where after each update round $\log P(\theta\mid \text{data})$ replaces $\log P(\theta)$ for the next update round. This is commonly done for more efficient numerical simulation. We can then consider the MAP estimator (to estimate the encoded phase $\theta$), which is defined as $$\hat{\theta} := \arg\max_{\theta}\, \log P(\theta\mid\text{data}).$$
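Continuing the sketch above, the iterative unnormalized log-posterior update and the MAP estimate might look as follows (assuming a flat prior, so $\log P(\theta)$ starts out constant; the grid range, true phase, and shot numbers are arbitrary illustrative choices):

```python
rng = np.random.default_rng(seed=1)

theta_grid = np.linspace(0.05, np.pi - 0.05, 2000)  # discretized theta values
theta_true = 1.0          # phase actually encoded in the circuit
nu = 5000                 # total measurements, split over the update rounds
rounds = 100
shots_per_round = nu // rounds

log_post = np.zeros_like(theta_grid)   # flat prior: log P(theta) = const = 0
for _ in range(rounds):
    # simulate shots_per_round measurement outcomes at the true phase
    counts = rng.multinomial(shots_per_round, outcome_probs(theta_true))
    log_post += log_likelihood(theta_grid, counts)  # log L + log(old prior)
    log_post -= log_post.max()         # rescale to avoid numerical underflow

theta_map = theta_grid[np.argmax(log_post)]  # MAP estimate of theta
```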
Now consider the sample variance $(\Delta \hat{\theta})^2$ of the estimates of the true $\theta$ obtained by carrying out this process $\mu$ independent times (with the $\nu$ measurements spread over the 100 update rounds), yielding a set of $\mu$ different estimates. Am I correct that, based on the Cramér-Rao bound, we expect that for $\mu \to \infty$ the sample variance of this set of estimates should, for an efficient estimator, converge to $\frac{1}{\nu \cdot \text{FI}}$, where FI is the (per-measurement) Fisher information? And that an inefficient estimator should yield some sample variance $\geq \frac{1}{\nu \cdot \text{FI}}$? Are these expectations valid?
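For what it's worth, this expectation can be checked numerically with the toy model above: repeat the whole estimation $\mu$ times and compare the sample variance of the MAP estimates against $1/(\nu \cdot \text{FI})$, with the per-shot Fisher information $\text{FI} = \sum_k (\partial_\theta p_k)^2 / p_k$ computed by finite differences (again only a sketch under the assumptions above):

```python
def fisher_information(theta, eps=1e-6):
    # classical per-shot Fisher information of the outcome distribution,
    # FI = sum_k (dp_k/dtheta)^2 / p_k, via a central finite difference
    p = outcome_probs(theta)
    dp = (outcome_probs(theta + eps) - outcome_probs(theta - eps)) / (2 * eps)
    return np.sum(dp**2 / p)

mu = 500                        # independent repetitions of the estimation
estimates = np.empty(mu)
for j in range(mu):
    log_post = np.zeros_like(theta_grid)
    for _ in range(rounds):
        counts = rng.multinomial(shots_per_round, outcome_probs(theta_true))
        log_post += log_likelihood(theta_grid, counts)
        log_post -= log_post.max()
    estimates[j] = theta_grid[np.argmax(log_post)]

crb = 1.0 / (nu * fisher_information(theta_true))
print(f"sample variance: {estimates.var(ddof=1):.3e}")
print(f"1/(nu*FI)      : {crb:.3e}")
```

Thanks for your time and assistance.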