
A good $[[n, k, d]]$ qLDPC code is one where $\lim_{n \rightarrow \infty} \frac{k}{n} > 0$ and $\lim_{n \rightarrow \infty} \frac{d}{n} > 0$, i.e. $k = \Theta(n)$ and $d = \Theta(n)$. A CSS code is one where we take two classical codes $C_1$ and $C_2$ with parity check matrices $H_1$ and $H_2$ such that $H_1 H_2^T = 0$. The intuition is that we can then use $C_1$ to correct $X$ errors and $C_2$ to correct $Z$ errors.
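
As a concrete sanity check of the CSS condition, here is a minimal numpy sketch (my own example, not from the question): it takes $C_1 = C_2$ to be the $[7,4,3]$ Hamming code, the classical ingredient of the Steane code, and verifies $H_1 H_2^T = 0$ over GF(2).

```python
import numpy as np

# Parity check matrix of the [7,4,3] Hamming code; taking C1 = C2 to be
# this code yields the classical ingredients of the Steane code.
H1 = H2 = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])

# CSS condition: H1 @ H2^T must vanish mod 2.
print((H1 @ H2.T) % 2)  # all zeros, so the pair defines a valid CSS code
```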

Suppose our classical codes are themselves good LDPC codes. Then $C_1$ has distance $d_1 = \Theta(n)$ and $C_2$ has distance $d_2 = \Theta(n)$, while $H_1$ and $H_2$ are supposed to be sparse. Now we run into a contradiction. Since $H_1 H_2^T = 0$, every row of $H_2$ lies in the kernel of $H_1$, i.e. every row of $H_2$ is a codeword of $C_1$. But $C_1$ has distance $\Theta(n)$, so its nonzero codewords cannot be sparse, which contradicts the sparsity of $H_2$.
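
To see the tension concretely, here is a small numpy check (continuing the Hamming example above): every row of $H_2$ is a codeword of $C_1$, so its weight is at least $d_1$. The Hamming code evades the contradiction only because $d_1 = 3$ is a constant; in a family with $d_1 = \Theta(n)$ the same rows would be forced to be dense.

```python
import numpy as np

H1 = H2 = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])

# H1 annihilates every row of H2, so each row of H2 is a codeword of C1,
# and hence each row weight is lower-bounded by the distance d1 = 3.
assert not ((H1 @ H2.T) % 2).any()
print(H2.sum(axis=1))  # row weights [4 4 4]: sparse only because d1 is constant
```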

I have seen this argument in a talk, but is it a formal no-go theorem? And if so, what is the intuition for how to work around this no-go and create good qLDPC codes? Some naive ideas are:

  1. Make CSS codes out of classical codes that are not both LDPC
  2. Don't try to make good qLDPC codes through the CSS route

I'm looking for the intuition behind the newer constructions in 2020 and later.

user1936752

3 Answers

  1. You don't want to do this. If $H_X$ and $H_Z$ are not sparse, then your quantum code is not LDPC.
  2. You could make your quantum LDPC codes non-CSS, but this doesn't help much. It turns out that a non-CSS code can be converted into a CSS code with the same distance and double the number of qubits, so this doesn't affect whether or not you can find a "good" code.

Here's the actual answer: You make your CSS code out of two classical LDPC codes with sparse parity check matrices $H_X$ and $H_Z$. These must be very bad classical codes, because they have to have low distance if we want $H_XH_Z^T=0$ with $H_Z$ sparse. Yet somehow, they must yield a high distance when combined into a quantum code. This is possible because the distance of a quantum code is not simply the classical distance of $H_X$ or $H_Z$. The distance of a quantum code is given by $$ d_Z = \min\{|c|: c\in \ker(H_X)\setminus\text{Im}(H_Z^T)\}\qquad d_X = \min\{|c|: c\in \ker(H_Z)\setminus\text{Im}(H_X^T)\} $$ In other words, the distance against $Z$ errors is not given by the minimum-weight codeword of $H_X$, but by the minimum-weight codeword of $H_X$ that is not a $Z$ stabilizer, and similarly for $d_X$.
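
To make the formula concrete, here is a brute-force sketch (numpy; my toy example, feasible only for tiny codes) that computes $d_Z$ for the Steane code, where $H_X = H_Z$ is the parity check matrix of the $[7,4,3]$ Hamming code, by minimizing over $\ker(H_X)\setminus\text{Im}(H_Z^T)$ directly.

```python
import numpy as np
from itertools import product

# Steane code: H_X = H_Z = parity check matrix of the [7,4,3] Hamming code.
HX = HZ = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])
n = HX.shape[1]

# Im(H_Z^T): every GF(2) combination of the rows of H_Z (the Z stabilizers).
z_stabs = {tuple(((np.array(c) @ HZ) % 2).tolist())
           for c in product([0, 1], repeat=HZ.shape[0])}

# d_Z = min weight over ker(H_X) \ Im(H_Z^T), brute-forced over all 2^n strings.
d_Z = min(sum(v) for v in product([0, 1], repeat=n)
          if not ((HX @ np.array(v)) % 2).any() and v not in z_stabs)
print(d_Z)  # 3 for the Steane code
```

For the Steane code $d_Z$ happens to coincide with the classical distance, but the point of the formula survives: low-weight vectors in $\ker(H_X)$ that happen to be $Z$ stabilizers do not count against the distance.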

It is very hard to generate bad low-distance codes that stitch together into high-distance CSS quantum codes, which is why the constructions for good quantum LDPC codes are rather involved.

Jahan Claes

I am assuming that you are referencing this argument from Nikolas Breuckmann's talk. There, if I recall correctly, he is talking about naive random constructions of CSS qLDPC codes from classical LDPC codes. You circumvent this issue by constructing the codes in a specific way (and there are many, many ways to do this).

You still use classical LDPC codes, and you still satisfy the CSS criterion.

Rohan Mehta

Yes, it is true that if you take a classical LDPC code with large distance, then by design every nonzero binary string in the null space of the parity check matrix has high weight, and therefore the rows of the $Z$ parity check in the CSS code must have high weight (so the checks are not low-density).

Roughly speaking, the way people get around this is by not simply setting the $X$ parity check to be a good classical LDPC code. Instead, you use a classical LDPC code as a starting point, but then fiddle around with it so that the null space contains some low-weight vectors, which can form your $Z$ parity check. The skill in that fiddling is to significantly add to the distance against $X$ errors without too significantly decreasing the distance against $Z$ errors, or killing the number of logical qubits you're storing. The standard starting point for such a construction is called a Tanner code. But then, to make the quantum code good, you need to add something else to the construction as well.
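
For a feel of what a classical Tanner code is, here is a minimal sketch under the textbook definition (my own toy example, not the specific construction alluded to above): bits live on the edges of a regular graph, and each vertex imposes the checks of a small inner code on its incident edges. On $K_4$ with a single-parity inner code this just gives the graph's cycle code, but the checks have constant weight by construction, which is the LDPC property.

```python
import numpy as np
from itertools import combinations

# Toy Tanner code: bits live on the 6 edges of the complete graph K4
# (3-regular). Inner code at each vertex: the single parity check "the 3
# incident edge bits sum to 0 mod 2". Stronger inner codes on expander
# graphs are what give good classical Tanner codes.
edges = list(combinations(range(4), 2))
H = np.array([[1 if v in e else 0 for e in edges] for v in range(4)])

print(H)              # each row touches exactly the 3 edges at one vertex
print(H.sum(axis=1))  # row weights [3 3 3 3]: constant, i.e. low-density
```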

DaftWullie