11

I'm looking for a proof of the fact that the real parts of the eigenvalues of a Lindbladian (Lindblad superoperator) are always non-positive. So far I have only found handwavy arguments such as "things should not blow up in the long-time limit".

glS

5 Answers

3

Given a Lindblad superoperator $$ \mathcal L\, \bullet = -\mathrm i [H, \bullet] + \sum_\mu \gamma_\mu \bigl( L_\mu \bullet L_\mu^\dagger - \{L_\mu^\dagger L_\mu, \bullet\} / 2 \bigr) $$ with $\gamma_\mu \geq 0$, we want to show that all of its eigenvalues have a non-positive real part. The following proof is based on certain matrices, sometimes called Kossakowski matrices, that were introduced in Refs. [1, 2].
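Before the proof, a quick numerical sanity check may be useful (it is not part of the argument). The sketch below builds the matrix of $\mathcal L$ in the column-stacking vectorization convention, $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$, for an arbitrarily chosen Hamiltonian, jump operators and rates, and prints the largest real part in its spectrum; it should come out $\leq 0$ up to rounding (and $0$ itself always occurs, because trace preservation gives $\mathcal L$ a left eigenvector with eigenvalue $0$).

```python
import numpy as np

# Hedged numerical sanity check (not part of the proof): build the matrix of the
# Lindblad superoperator in the column-stacking convention, vec(A X B) = (B^T kron A) vec(X),
# for randomly chosen H, jump operators and rates, and look at its spectrum.
rng = np.random.default_rng(0)
d = 4                                    # Hilbert-space dimension (arbitrary choice)
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (H + H.conj().T) / 2                 # Hermitian Hamiltonian
jumps = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(3)]
gammas = [1.0, 0.5, 0.2]                 # non-negative rates

I = np.eye(d)
Lmat = -1j * (np.kron(I, H) - np.kron(H.T, I))                     # -i[H, .]
for g, L in zip(gammas, jumps):
    LdL = L.conj().T @ L
    Lmat += g * (np.kron(L.conj(), L)                               # L . L^dagger
                 - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I)))     # -{L^dagger L, .}/2

print(np.max(np.linalg.eigvals(Lmat).real))   # <= 0 up to rounding; 0 is always in the spectrum
```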


For a resolution of the identity $\{ P_r \}$ (i.e., $P_r$ are mutually orthogonal projection operators summing to one), we define the Kossakowski matrix $$ M_{rs} = \operatorname{tr}[P_r (\mathcal L P_s) ] . $$ Lemma: $M$ is a stochastic matrix (i.e., $M_{rs} \geq 0$ for $r \neq s$ and $\sum_r M_{rs} = 0$).
Proof: The second condition follows immediately from $\sum_r M_{rs} = \operatorname{tr}[\mathcal L P_s] = 0$. For the first condition, we use the properties of the projection operators $P_r$ to find that $$ M_{rs} = \sum_\mu \gamma_\mu \operatorname{tr}[ (P_r L_\mu P_s)^\dagger (P_r L_\mu P_s) ] \geq 0 . \tag*{$\blacksquare$} $$
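A hedged numerical check of this lemma, for an arbitrary example (a qubit with one jump operator, all numbers made up): we build the projectors onto the eigenvectors of a random Hermitian matrix and verify that $M$ has non-negative off-diagonal entries and columns summing to zero.

```python
import numpy as np

# Hedged numerical check of the lemma for an arbitrary qubit example:
# M_rs = tr[P_r L(P_s)] should have non-negative off-diagonal entries
# and columns that sum to zero.
rng = np.random.default_rng(1)
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)   # some Hermitian Hamiltonian
L = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)    # jump operator |0><1|
gamma = 0.7

def lindblad(rho):
    LdL = L.conj().T @ L
    return (-1j * (H @ rho - rho @ H)
            + gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)))

X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
_, vecs = np.linalg.eigh((X + X.conj().T) / 2)                     # random Hermitian operator
P = [np.outer(vecs[:, r], vecs[:, r].conj()) for r in range(2)]    # resolution of the identity

M = np.array([[np.trace(P[r] @ lindblad(P[s])).real for s in range(2)] for r in range(2)])
print(M)            # off-diagonal entries >= 0, each column sums to ~0
```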

Let now $\rho$ be a self-adjoint operator. Using the spectral theorem, we may write $\rho = \sum_r c_r P_r$, where $\{ P_r \}$ is a resolution of the identity and $c_r \in \mathbb R$. Since $M$ is a stochastic matrix, it is negative semi-definite. Therefore, $$ \operatorname{tr}[\rho \mathcal L \rho] = \vec c \cdot (M\, \vec c) \leq 0 , $$ where $\vec c$ is the real vector of the coefficients $c_r$. We have thus shown that $\langle \rho, \mathcal L \rho \rangle \leq 0$ for all self-adjoint $\rho$, where $\langle \cdot, \cdot \rangle$ denotes the Hilbert-Schmidt inner product.

Note that the space of self-adjoint operators is a real Hilbert space $V$ and the space of all operators is the complexification of $V$. The statement we are trying to show follows from the lemma below, which was not immediately obvious to me:

Lemma: Let $V$ be a real Hilbert space and $A$ a linear operator on $V$, and suppose $\langle x, Ax\rangle \leq 0$ for all $x \in V$. Then every eigenvalue $\lambda$ of $A$ (which may be complex!) satisfies $\Re(\lambda) \leq 0$.
Proof: Let $y \in \mathbb C \otimes V$ be nonzero with $Ay = \lambda y$ and note that $A \bar y = \bar\lambda \bar y$. Let $y_R, y_I \in V$ denote the real and imaginary parts of $y$. Then, $$ \Re(\lambda) = \frac 1 2 \Bigl( \frac{\langle y, Ay\rangle}{\langle y, y\rangle} + \frac{\langle \bar y, A\bar y\rangle}{\langle \bar y, \bar y\rangle} \Bigr) = \frac{\langle y_R, Ay_R\rangle + \langle y_I, Ay_I\rangle}{\langle y_R, y_R\rangle + \langle y_I, y_I\rangle} \leq 0 . \tag*{$\blacksquare$} $$
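This lemma is also easy to test numerically. A real matrix satisfies $\langle x, Ax\rangle \leq 0$ for all real $x$ exactly when its symmetric part $A + A^T$ is negative semi-definite, so the sketch below (dimension and matrices are arbitrary choices) builds such a matrix and checks that its spectrum lies in the closed left half-plane.

```python
import numpy as np

# Hedged numerical illustration of the lemma: a real matrix A with x.(Ax) <= 0 for all
# real x (equivalently, A + A^T negative semi-definite) has eigenvalues with
# non-positive real part.
rng = np.random.default_rng(2)
n = 6
B = rng.normal(size=(n, n))
S = -B @ B.T                      # symmetric negative semi-definite part
K = rng.normal(size=(n, n))
A = S + (K - K.T)                 # adding an antisymmetric part does not change x.(Ax)

assert np.max(np.linalg.eigvalsh(A + A.T)) < 1e-10   # indeed x.(Ax) <= 0 for all x
print(np.max(np.linalg.eigvals(A).real))             # <= 0 (up to rounding)
```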


[1] Kossakowski, Bull. Acad. Pol. Sci. Ser. Math. Astr. Phys. 20, 1021 (1972).
[2] Gorini, Kossakowski, and Sudarshan, J. Math. Phys. 17, 821 (1976).

Noiralef
2

I think some of these answers are a bit complex and veer off into tangents, so I hope I can give a more direct answer. We start by assuming that Lindbladian evolution sends density matrices to density matrices (it preserves positivity and trace), which you can prove using textbook methods. We then take an eigenmatrix $\rho$ of the Lindbladian whose eigenvalue has $\Re(\lambda)>0$ and use it to construct a density matrix that evolves into something that is not a density matrix (not positive, or not trace one). This gives a direct contradiction.

Let $\rho$ be some matrix with $L(\rho)=\lambda\rho$ and $\lambda=a+ib$ with $a>0$. Note that taking the dagger of both sides of the equation gives $L(\rho^\dagger)=\lambda^*\rho^\dagger$. We consider the two Hermitian operators $(\rho+\rho^\dagger)$ and $i(\rho-\rho^\dagger)$. At least one of these must be nonzero, otherwise $\rho=0$. Without loss of generality, assume $(\rho+\rho^\dagger)\neq 0$.

Choose an $\epsilon$ sufficiently small such that the state $\phi:=\frac{I+\epsilon(\rho+\rho^\dagger)}{\text{Tr}[I+\epsilon(\rho+\rho^\dagger)]}$ is a density matrix, i.e. positive with trace one. This is always possible, because if $\epsilon(\rho+\rho^\dagger)$ has eigenvalues of magnitude $<1$, then $\phi$ is a positive matrix. The time evolution of $\phi$ is $$ \phi(t) = \frac{I(t)+e^{at}\epsilon(\rho e^{ibt}+\rho^\dagger e^{-ibt})}{\text{Tr}[I+\epsilon(\rho+\rho^\dagger)]}, $$ where $I(t)$ denotes the time evolution of the operator that starts as the identity matrix. For convenience, we'll consider $\phi(t)$ only at times $t=2\pi n/b$, so that we can ignore the oscillations introduced by $b$ (if $b=0$, any sequence of times works).

Because $(\rho+\rho^\dagger)\neq 0$ is hermitian, there exists some $|\psi\rangle$ such that $\langle\psi|(\rho+\rho^\dagger)|\psi\rangle\neq 0$. But then we have $$ \langle\psi|\phi(t)|\psi\rangle = \frac{\langle\psi|I(t)|\psi\rangle+e^{at}\epsilon\langle\psi|(\rho +\rho^\dagger)|\psi\rangle}{\text{Tr}[I+\epsilon(\rho+\rho^\dagger)]}. $$

Because the evolution is positivity and trace preserving, we know $\langle\psi|I(t)|\psi\rangle$ stays bounded (between $0$ and $\operatorname{Tr}[I]$) for all $t$. We should also have $0\leq\langle\psi|\phi(t)|\psi\rangle\leq 1$ for the same reason. However, because $\langle\psi|(\rho+\rho^\dagger)|\psi\rangle\neq 0$, the term $\epsilon e^{at}\langle\psi|(\rho+\rho^\dagger)|\psi\rangle$ approaches $\pm\infty$, with the sign set by $\langle\psi|(\rho+\rho^\dagger)|\psi\rangle$, so $\langle\psi|\phi(t)|\psi\rangle$ eventually leaves $[0,1]$. This is a contradiction, so no such $\rho$ can exist.
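To see the construction in action (only an illustration, not part of the proof), here is a sketch for a concrete qubit Lindbladian, a detuned qubit with amplitude damping; every number is an arbitrary choice. It picks an eigenmatrix $\rho$ of $L$ with a complex eigenvalue, builds $\phi$ as above, evolves it with $e^{Lt}$, and confirms that $\phi(t)$ remains a density matrix, consistent with $\Re(\lambda)\leq 0$ for every eigenvalue (so there is nothing to blow up).

```python
import numpy as np
from scipy.linalg import expm

# Hedged illustration of the construction above for a concrete qubit Lindbladian
# (detuning plus amplitude damping; all numbers are arbitrary choices).
H = np.array([[0.5, 0.0], [0.0, -0.5]], dtype=complex)
S = np.sqrt(0.3) * np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # jump |0><1|, rate 0.3

I2 = np.eye(2)
SdS = S.conj().T @ S
Lmat = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))        # column-stacking convention
        + np.kron(S.conj(), S)
        - 0.5 * (np.kron(I2, SdS) + np.kron(SdS.T, I2)))

evals, evecs = np.linalg.eig(Lmat)
k = np.argmax(evals.imag)                    # pick an eigenvalue with b = Im(lambda) != 0
rho = evecs[:, k].reshape(2, 2, order="F")   # the corresponding eigenmatrix
print("lambda =", evals[k])                  # its real part a is negative here

eps = 0.1
phi0 = I2 + eps * (rho + rho.conj().T)
phi0 = phi0 / np.trace(phi0)
for t in [0.0, 5.0, 20.0]:
    phi_t = (expm(Lmat * t) @ phi0.reshape(4, order="F")).reshape(2, 2, order="F")
    # trace stays 1 and the eigenvalues stay in [0, 1]: phi(t) remains a density matrix,
    # i.e. the epsilon piece decays instead of blowing up, consistent with a <= 0.
    print(t, np.trace(phi_t).real, np.round(np.linalg.eigvalsh((phi_t + phi_t.conj().T) / 2), 6))
```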

Jahan Claes
1

This may not be completely general, but I understand the positivity properties of the Lindbladian through unravelings of the master equation into quantum trajectories, in which the initial density matrix evolves under an effective non-Hermitian Hamiltonian between incoherent jumps.

Main reference: Section IV.C of "The quantum-jump approach to dissipative dynamics in quantum optics" by Plenio and Knight, published in Reviews of Modern Physics.

A Dyson-like series for the master equation

Consider a general master equation in the Lindblad form, given by \begin{align} \frac{d\hat{\rho}}{dt} &= \frac{1}{i\hbar}\left( \hat{H}_{\rm eff}\hat{\rho} - \hat{\rho}\hat{H}_{\rm eff}^{\dagger} \right) + \sum_{j}\hat{S}_{j}\hat{\rho}\hat{S}_{j}^{\dagger}, \end{align} where \begin{align} \hat{H}_{\rm eff} &= \hat{H}_{0} - \frac{i\hbar}{2}\sum_{j}\hat{S}_{j}^{\dagger}\hat{S}_{j} \equiv \hat{H}_{0} - i\frac{\hbar}{2}\hat{D}. \end{align} We will derive a Dyson-type expansion for this equation. Rewrite the master equation as \begin{equation} \frac{d\hat{\rho}}{dt} = \mathcal{L}_{0}\hat{\rho} + \mathcal{L}_{J}\hat{\rho}, \tag{1} \end{equation} where $\mathcal{L}_{0}$ and $\mathcal{L}_{J}$ are linear operations on the linear space of density matrices defined by \begin{align} \mathcal{L}_{0}\hat{\rho} &= \frac{1}{i\hbar}\left( \hat{H}_{\rm eff}\hat{\rho} - \hat{\rho}\hat{H}_{\rm eff}^{\dagger} \right), \\ \mathcal{L}_{J}\hat{\rho} &= \sum_{j}\hat{S}_{j}\hat{\rho}\hat{S}_{j}^{\dagger} \equiv \sum_{j}\mathcal{L}_{j}\hat{\rho}. \end{align} We move to an interaction picture via the transformation \begin{equation} \tilde{\rho}\left( t\right) = e^{-\mathcal{L}_{0}t}\hat{\rho}\left( t\right), \end{equation} in which the master equation reduces to \begin{equation} \frac{d\tilde{\rho}\left( t\right)}{dt} = e^{-\mathcal{L}_{0}t}\mathcal{L}_{J}e^{\mathcal{L}_{0}t}\tilde{\rho}\left( t\right). \end{equation} We formally integrate this equation, yielding \begin{equation} \tilde{\rho}\left( t\right) = \tilde{\rho}\left( t_{0}\right) + \int_{t_{0}}^{t}dt_{1}\, e^{-\mathcal{L}_{0}t_{1}}\mathcal{L}_{J}e^{\mathcal{L}_{0}t_{1}}\tilde{\rho}\left( t_{1}\right). \end{equation} Iterating this integral equation once yields \begin{equation} \tilde{\rho}\left( t\right) = \tilde{\rho}\left( t_{0}\right) + \int_{t_{0}}^{t}dt_{1}\, e^{-\mathcal{L}_{0}t_{1}}\mathcal{L}_{J}e^{\mathcal{L}_{0}t_{1}}\tilde{\rho}\left( t_{0}\right) + \int_{t_{0}}^{t}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1}\, e^{-\mathcal{L}_{0}t_{2}}\mathcal{L}_{J}e^{\mathcal{L}_{0}\left( t_{2}-t_{1}\right)}\mathcal{L}_{J}e^{\mathcal{L}_{0}t_{1}}\tilde{\rho}\left( t_{1}\right). \end{equation} If this iteration is carried on indefinitely, we obtain the Dyson expansion, \begin{align} \tilde{\rho}\left( t\right) &= \tilde{\rho}\left( t_{0}\right) + \sum_{n=1}^{\infty}\int_{t_{0}}^{t}dt_{n}\int_{t_{0}}^{t_{n}}dt_{n-1}\cdots\int_{t_{0}}^{t_{3}}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1} \nonumber \\ &\quad\mbox{}\times e^{-\mathcal{L}_{0}t_{n}}\mathcal{L}_{J}e^{\mathcal{L}_{0}\left( t_{n}-t_{n-1}\right)}\mathcal{L}_{J}\cdots\mathcal{L}_{J}e^{\mathcal{L}_{0}\left( t_{2}-t_{1}\right)}\mathcal{L}_{J}e^{\mathcal{L}_{0}t_{1}}\tilde{\rho}\left( t_{0}\right). \end{align} Moving out of the interaction picture and inserting the explicit form of the jump Liouvillian, this becomes \begin{align} \hat{\rho}\left( t\right) &= e^{\mathcal{L}_{0}\left( t-t_{0}\right)}\hat{\rho}\left( t_{0}\right) + \sum_{n=1}^{\infty}\sum_{j_{1},\dots,j_{n}}\int_{t_{0}}^{t}dt_{n}\int_{t_{0}}^{t_{n}}dt_{n-1}\cdots\int_{t_{0}}^{t_{3}}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1} \nonumber \\ &\quad\mbox{}\times e^{\mathcal{L}_{0}\left( t-t_{n}\right)}\mathcal{L}_{j_{n}}e^{\mathcal{L}_{0}\left( t_{n}-t_{n-1}\right)}\mathcal{L}_{j_{n-1}}\cdots\mathcal{L}_{j_{2}}e^{\mathcal{L}_{0}\left( t_{2}-t_{1}\right)}\mathcal{L}_{j_{1}}e^{\mathcal{L}_{0}\left( t_{1}-t_{0}\right)}\hat{\rho}\left( t_{0}\right). \end{align}

Quantum trajectories

The quantity inside all of the sums, \begin{equation} \hat{\rho}_{t_{1},j_{1};t_{2},j_{2};\dots;t_{n},j_{n};t_{0}}\left( t\right) = e^{\mathcal{L}_{0}\left( t-t_{n}\right)}\mathcal{L}_{j_{n}}e^{\mathcal{L}_{0}\left( t_{n}-t_{n-1}\right)}\mathcal{L}_{j_{n-1}}\cdots\mathcal{L}_{j_{2}}e^{\mathcal{L}_{0}\left( t_{2}-t_{1}\right)}\mathcal{L}_{j_{1}}e^{\mathcal{L}_{0}\left( t_{1}-t_{0}\right)}\hat{\rho}\left( t_{0}\right), \tag{2} \end{equation} has a simple physical interpretation. In order to get at this interpretation, we make a few observations.

There are two types of operations that go on in this expression. One is the free evolution under the effective Hamiltonian, i.e. \begin{equation} \hat{\rho}\left( t\right) = e^{\mathcal{L}_{0}\left( t-t^{\prime}\right)}\hat{\rho}\left( t^{\prime}\right), \end{equation} which is the solution of equation (1) with $\mathcal{L}_{J}$ set to zero and initial condition $\hat{\rho}\left( t^{\prime}\right)$. The other is the application of a jump operator to the density matrix, i.e. \begin{equation} \mathcal{L}_{j}\hat{\rho} = \hat{S}_{j}\hat{\rho}\hat{S}_{j}^{\dagger}. \end{equation} If the density matrix is a pure state, \begin{equation} \hat{\rho}\left( t\right) = \left\vert \psi\left( t\right) \right\rangle \left\langle \psi\left( t\right) \right\vert, \end{equation} then evolution under the effective Hamiltonian is equivalent to a Schrödinger-type evolution, \begin{equation} \frac{d}{dt}\left\vert \psi\left( t\right) \right\rangle = \frac{1}{i\hbar}\hat{H}_{\rm eff}\left\vert \psi\left( t\right) \right\rangle, \end{equation} which can be proven by applying the product rule to $\frac{d}{dt}\left\vert \psi\left( t\right) \right\rangle \left\langle \psi\left( t\right) \right\vert$. The formal solution of this equation is \begin{equation} \left\vert \psi\left( t\right) \right\rangle = e^{-i\hat{H}_{\rm eff}t/\hbar}\left\vert \psi\left( 0\right) \right\rangle, \end{equation} which can be written in terms of density matrices as \begin{equation} \left\vert \psi\left( t\right) \right\rangle \left\langle \psi\left( t\right) \right\vert = \hat{\rho}\left( t\right) = e^{\mathcal{L}_{0}t}\hat{\rho}\left( 0\right) = e^{-i\hat{H}_{\rm eff}t/\hbar}\left\vert \psi\left( 0\right) \right\rangle \left\langle \psi\left( 0\right) \right\vert e^{i\hat{H}_{\rm eff}^{\dagger}t/\hbar}. \end{equation} This means that $e^{\mathcal{L}_{0}t}$ evolves a pure state into another (in general unnormalized) pure state according to the non-Hermitian Hamiltonian $\hat{H}_{\rm eff}$. The action of $\mathcal{L}_{j}$ also preserves the purity of a state, since $\mathcal{L}_{j}\left( \left\vert \psi \right\rangle \left\langle \psi \right\vert \right) = \hat{S}_{j}\left\vert \psi \right\rangle \left\langle \psi \right\vert \hat{S}_{j}^{\dagger}$. Thus, if the initial state is pure, then the quantity defined in equation (2) is at all times (up to normalization) a pure state.
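For readers who like to verify such identities numerically, here is a small hedged sketch (units with $\hbar = 1$; the qubit Hamiltonian and jump operator are arbitrary choices) confirming that $e^{\mathcal{L}_0 t}$ applied to a pure state reproduces the non-Hermitian evolution $e^{-i\hat H_{\rm eff}t}\vert\psi\rangle\langle\psi\vert e^{+i\hat H_{\rm eff}^\dagger t}$.

```python
import numpy as np
from scipy.linalg import expm

# Hedged numerical check (hbar = 1; arbitrary qubit numbers):
# exp(L_0 t) on a pure state equals exp(-i H_eff t) |psi><psi| exp(+i H_eff^dag t).
H0 = np.array([[0.5, 0.2], [0.2, -0.5]], dtype=complex)
S = np.sqrt(0.4) * np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # single jump operator
Heff = H0 - 0.5j * (S.conj().T @ S)

I2 = np.eye(2)
# vec(H_eff rho - rho H_eff^dag) = (I kron H_eff - conj(H_eff) kron I) vec(rho)
L0 = -1j * (np.kron(I2, Heff) - np.kron(Heff.conj(), I2))

psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())

t = 1.3
lhs = (expm(L0 * t) @ rho0.reshape(4, order="F")).reshape(2, 2, order="F")
U = expm(-1j * Heff * t)
print(np.max(np.abs(lhs - U @ rho0 @ U.conj().T)))   # ~ 1e-16
```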

Positivity

As I understand it, positivity is guaranteed by two things:

  • First, the effective Hamiltonian is the sum of a Hermitian matrix and $-i$ times a non-negative matrix, since $\hat{D}$, defined in the second equation above, is manifestly non-negative. The eigenvalues of the effective Hamiltonian therefore have non-positive imaginary parts; when this enters the corresponding Lindblad superoperator (see the first equation above), the additional factor of $-i$ turns these into non-positive real parts. Alternatively, one can note that under evolution with the effective Hamiltonian the norm of the state can only decrease, which also implies positivity (see the sketch after this list).
  • Second, the application of a jump operator also manifestly preserves positivity, as it transforms a pure state into another (unnormalized) pure state.
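A minimal numerical sketch of the first point, for an arbitrary qubit example ($\hbar = 1$): $\hat H_{\rm eff} = \hat H_0 - \frac{i}{2}\hat D$ with $\hat D \geq 0$ has eigenvalues with non-positive imaginary parts, and the norm of a state evolved with $e^{-i\hat H_{\rm eff}t}$ never increases.

```python
import numpy as np
from scipy.linalg import expm

# Hedged illustration (hbar = 1, arbitrary qubit numbers): H_eff = H_0 - (i/2) D with
# D >= 0 has eigenvalues with non-positive imaginary part, and the norm of a state
# evolved with exp(-i H_eff t) never increases.
H0 = np.array([[1.0, 0.4], [0.4, -1.0]], dtype=complex)
S = np.array([[0.0, 0.8], [0.0, 0.0]], dtype=complex)    # single jump operator
D = S.conj().T @ S                                        # manifestly non-negative
Heff = H0 - 0.5j * D

print(np.linalg.eigvals(Heff).imag)                       # both <= 0

psi0 = np.array([0.6, 0.8], dtype=complex)
print([round(np.linalg.norm(expm(-1j * Heff * t) @ psi0), 6) for t in np.linspace(0, 10, 6)])
# the printed norms never increase and stay non-negative
```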
march
1

EDIT: The following proof is wrong (the last inequality is not true as pointed out in the comments). See the answer by Noiralef for a correct proof.

There exist examples of Lindblad generators that do not satisfy $Re( \mathcal{L}) \leq 0$. Example: consider a single-qubit system with a single Lindblad operator defined by $L |1\rangle = |0\rangle$ and $L |0\rangle = 0$. Then the corresponding matrix for $\mathcal{L}$ has non-positive eigenvalues, but this is not true for the real (Hermitian) part of $\mathcal{L}$.

It is enough to prove that $Re( \mathcal{L}) \leq 0$. Since the Hamiltonian part is skew-adjoint, it does not matter for the proof of dissipativity. We can treat each Lindblad operator individually: \begin{align*} \mathcal{D}( \rho) = L \rho L^* -  \frac{1}{2} \{ L^* L , \rho\}. \end{align*}  Then \begin{align*} \mathcal{D}^*( \rho) = L^* \rho L -  \frac{1}{2} \{ L^* L , \rho\} \end{align*}  and thus \begin{align*} 2Re( \mathcal{D}) (\rho) = L \rho L^* + L^* \rho L - \{ L^* L , \rho\} = L \rho L^* + L^* \rho L - L^* L \rho - \rho L^* L. \end{align*}  Proving dissipativity of $\mathcal{D}$ amounts to proving $Re(\langle \rho, \mathcal{D} \rho \rangle) = \langle \rho, Re(\mathcal{D}) \rho \rangle \leq 0 $. Thus, \begin{align*} \langle \rho, Re(\mathcal{D}) \rho \rangle = Tr( \rho^* (L \rho L^* + L^* \rho L - L^* L \rho - \rho L^* L) ) = Tr( - ( \rho L - L\rho) ( \rho L - L\rho)^* ) \leq 0, \end{align*}  where we used that any operator of the form $A A^* $ is positive.

This is described in https://arxiv.org/abs/2206.09879.

0

We recently posted a preprint on this issue, providing a direct algebraic proof that the real parts of $\mathcal{L}$'s eigenvalues are non-positive for finite-dimensional systems. Here is the arXiv link: https://arxiv.org/abs/2504.02256

I would also like to make a short comment on Noiralef's answer. There, the claim that $\langle x,\mathcal{L}(x) \rangle\leq0$ for arbitrary self-adjoint $x$ (with the Hilbert-Schmidt inner product) is incorrect. Here is a simple counterexample (a two-level system): $$\mathcal{L}(x)=\sigma^+ x \sigma^--\frac{1}{2}\sigma^-\sigma^+x-\frac{1}{2}x\sigma^-\sigma^+, \ \text{with } \sigma^+=\begin{bmatrix}0&1\\0&0\end{bmatrix} \ \text{and } \sigma^-=\begin{bmatrix}0&0\\1&0\end{bmatrix}.$$ Now, take $x=\begin{bmatrix}2&0\\0&1\end{bmatrix}$, which is self-adjoint. A direct computation yields $$ \sigma^+ x \sigma^--\frac{1}{2}\sigma^-\sigma^+x-\frac{1}{2}x\sigma^-\sigma^+= \begin{bmatrix}1&0\\0&-1\end{bmatrix}. $$ Thus, $\langle x,\mathcal{L}(x) \rangle=\text{Tr}\begin{bmatrix}2&0\\0&-1\end{bmatrix}=1>0$ in the Hilbert-Schmidt inner product.
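For completeness, the counterexample is easy to confirm numerically:

```python
import numpy as np

# Quick numerical confirmation of the counterexample (Hilbert-Schmidt inner product).
sp = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                          # sigma^-
x = np.diag([2.0, 1.0]).astype(complex)

Lx = sp @ x @ sm - 0.5 * (sm @ sp @ x + x @ sm @ sp)
print(Lx)                                     # diag(1, -1)
print(np.trace(x.conj().T @ Lx).real)         # 1.0 > 0
```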

In fact, the Kossakowski conditions do imply that $\langle x,\mathcal{L}(x) \rangle\leq0$; the subtlety is that the relevant inner product is not the Hilbert-Schmidt one. For a good reference, see Sec. 5.3 of Rivas & Huelga (https://arxiv.org/abs/1104.5242).

In our preprint, we also utilize the Kossakowski conditions, along with another useful inequality, $$\mathcal{L}^\dagger(A^\dagger A)\succeq \mathcal{L}^\dagger(A^\dagger)A+A^\dagger\mathcal{L}^\dagger(A),$$ and provide a purely algebraic proof.
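This inequality can also be spot-checked numerically. The sketch below assumes a single jump operator with unit rate and takes $\mathcal{L}^\dagger$ to be the Heisenberg-picture (adjoint) generator; the operator $\mathcal{L}^\dagger(A^\dagger A) - \mathcal{L}^\dagger(A^\dagger)A - A^\dagger\mathcal{L}^\dagger(A)$ should come out positive semi-definite.

```python
import numpy as np

# Hedged numerical spot check for a random qubit Lindbladian and a random operator A
# (single jump operator, unit rate), with L_dag the Heisenberg-picture (adjoint) generator.
rng = np.random.default_rng(3)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (H + H.conj().T) / 2
L = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

def L_dag(A):
    LdL = L.conj().T @ L
    return 1j * (H @ A - A @ H) + L.conj().T @ A @ L - 0.5 * (LdL @ A + A @ LdL)

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
gap = L_dag(A.conj().T @ A) - L_dag(A.conj().T) @ A - A.conj().T @ L_dag(A)
print(np.linalg.eigvalsh((gap + gap.conj().T) / 2))   # both eigenvalues >= 0
```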