
Bivariate Bicycle (BB) codes were introduced in [1]. These families of QEC codes are defined via parity-check matrices with a two-block structure of the form $H_{X} = (A|B)$ and $H_{Z} = (B^{T}|A^{T})$. The matrices $A$ and $B$ are polynomials in two matrix variables, e.g. $A = x^3 + y^2 + y$, where $x = S_{l} \otimes \mathbf{I}_{m}$ and $y = \mathbf{I}_{l} \otimes S_{m}$, with $S_{k}$ the $k \times k$ cyclic shift matrix. Both $x$ and $y$ therefore have dimensions $lm \times lm$.

Counting the columns of $H_{X,Z}$, there are a total of $n = 2lm$ data qubits. Adding the rows of $H_{X}$ and $H_{Z}$, there are likewise $2lm$ checks. The number of logical qubits supported by a specific code is $k = n - \textrm{rank}(H_{X}) - \textrm{rank}(H_{Z})$. Assuming the parameters have been chosen so that the code hosts at least one logical qubit ($k > 0$), the ranks of $H_{X}$ and $H_{Z}$ must sum to strictly less than the number of checks, so the parity-check matrices $H_{X,Z}$ contain redundant (linearly dependent) rows.
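To make the counting concrete, here is a minimal numpy sketch, assuming the Gross-code choice $l = 12$, $m = 6$, $A = x^3 + y + y^2$, $B = y^3 + x + x^2$ quoted from [1] (the helper names are ad hoc); it builds $H_{X,Z}$ and evaluates the ranks over GF(2):

```python
import numpy as np

def cyclic_shift(k):
    """k x k cyclic shift matrix S_k."""
    return np.roll(np.eye(k, dtype=int), 1, axis=1)

def gf2_rank(mat):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    work = mat.copy() % 2
    rank = 0
    rows, cols = work.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if work[r, col]), None)
        if pivot is None:
            continue
        work[[rank, pivot]] = work[[pivot, rank]]        # swap pivot row into place
        for r in range(rows):
            if r != rank and work[r, col]:
                work[r] ^= work[rank]                    # eliminate the column mod 2
        rank += 1
    return rank

# Gross-code choice (assumed from [1]): l = 12, m = 6,
# A = x^3 + y + y^2,  B = y^3 + x + x^2.
l, m = 12, 6
x = np.kron(cyclic_shift(l), np.eye(m, dtype=int))
y = np.kron(np.eye(l, dtype=int), cyclic_shift(m))
A = (np.linalg.matrix_power(x, 3) + y + np.linalg.matrix_power(y, 2)) % 2
B = (np.linalg.matrix_power(y, 3) + x + np.linalg.matrix_power(x, 2)) % 2

H_X = np.hstack([A, B])          # lm rows, n = 2lm columns
H_Z = np.hstack([B.T, A.T])

n = H_X.shape[1]
k = n - gf2_rank(H_X) - gf2_rank(H_Z)
print(n, gf2_rank(H_X), gf2_rank(H_Z), k)   # expect 144 66 66 12: 6 redundant rows in each matrix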

Question:

Why are the matrices never reduced in the description of BB codes? In particular, for the Gross code $[[144,12,12]]$ example, a whole circuit is presented in [1], based on the structure of $H_{X,Z}$, which measures $n$ stabilizers. Is there any advantage in measuring more stabilizers than necessary? For example, does that make it easier to build the syndrome extraction circuit (or other structures associated with the code)? Does it make decoding easier? Is this something standard for qLDPC codes?

Refs:

[1] S. Bravyi, A. W. Cross, J. M. Gambetta, D. Maslov, P. Rall, and T. J. Yoder, "High-threshold and low-overhead fault-tolerant quantum memory", Nature 627, 778–782 (2024), arXiv:2308.07915.

2 Answers


The toric code also has a redundant X stabilizer and a redundant Z stabilizer. Removing them doesn't break the code... but you probably shouldn't do it.

First of all, although the redundant stabilizers are products of the other stabilizers, those products are huge. When measurements are noisy, this basically means you don't have access to the stabilizer via the big product. It doesn't matter that it should in principle be equal; in practice you have no real ability to reveal its value until after error correction, which means it doesn't help with the error correction.

Second of all, it breaks the translation symmetry of the code. There is a simplicity to the toric code, where every location looks like every other location. If you punch out a stabilizer, the locations suddenly differ from each other. If you're writing python code to generate the circuit, you'll need an extra condition for this stabilizer. If you're writing a decoder for the toric code, you'll need to be able to deal with some spacelike errors having one symptom instead of two. Everything gets just a little bit harder.

The same would be true of bivariate bicycle codes. If you removed the redundant stabilizers, you wouldn't actually have access to their values via the products that equal them, because those products involve too many noisy measurements. And all of a sudden part of the code would look different from the other parts and require special treatment.

Lastly: single-shot quantum codes will typically involve lots of highly cross-redundant measurements; otherwise a single measurement error could cause them to fail. So there are benefits to going even more redundant.
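As a concrete illustration of the first point above, here is a small numpy sketch (using one standard vertex/edge labelling of the toric code; the helper name is ad hoc) showing that the vertex stabilizers multiply to the identity, i.e. exactly one X check is redundant:

```python
import numpy as np

def toric_x_checks(L):
    """Vertex (X) check matrix of the L x L toric code, acting on 2*L*L edge qubits
    (horizontal edges indexed first, then vertical edges)."""
    H = np.zeros((L * L, 2 * L * L), dtype=np.uint8)
    for r in range(L):
        for c in range(L):
            v = r * L + c
            H[v, r * L + c] = 1                       # horizontal edge east of the vertex
            H[v, r * L + (c - 1) % L] = 1             # horizontal edge west of the vertex
            H[v, L * L + r * L + c] = 1               # vertical edge below the vertex
            H[v, L * L + ((r - 1) % L) * L + c] = 1   # vertical edge above the vertex
    return H

H = toric_x_checks(4)
# Every edge touches exactly two vertices, so the rows of H sum to zero mod 2:
# the product of all vertex stabilizers is the identity, and any one X check is
# the product of all the others.
assert not np.any(H.sum(axis=0) % 2)
```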

Craig Gidney

It's certainly much simpler to describe the parity-check matrices in terms of circulant matrices, and beyond BB codes there are other constructions that use the same technique to define quantum codes. The resulting parity-check matrices may contain redundant rows, but the formalism makes it much easier to define a family of quantum codes, and for analytic purposes it can be useful to keep the quasi-cyclic structure explicit.

If you wanted, you could just remove the redundant rows of the parity-check matrix, but there are reasons you might want to keep them. For one, the qLDPC condition tells you that the syndrome extraction circuit measuring all checks simultaneously parallelizes quite nicely. This means you can get multiple pieces of information corresponding to particular checks in constant time. This is quite nice because you tend to have to repeat syndrome extraction $O(d)$ times, so having some parallelized quantum checks gets you "multiple rounds" of syndrome data more quickly. This can go as far as defining so-called metachecks on some quantum codes, which are classical checks on the measurement outcomes of some quantum checks. If you have a very robust set of metachecks, you can sometimes get a property called "soundness," which lets you get away with single-shot syndrome extraction.
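As a rough sketch of how redundant rows become metachecks, the following toy numpy example (the function name is ad hoc and the 3-row matrix is made up for illustration) row-reduces a check matrix while tracking the row operations; every linear dependency among the checks yields a row $M$ with $M H = 0 \pmod{2}$, i.e. a classical parity constraint that noiseless syndromes must satisfy:

```python
import numpy as np

def gf2_metachecks(H):
    """Rows M with M @ H = 0 (mod 2): each one records a set of checks whose
    product is trivial, i.e. a classical parity constraint on their outcomes."""
    H = H.copy() % 2
    rows, cols = H.shape
    aug = np.hstack([H, np.eye(rows, dtype=H.dtype)])   # track the row operations
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if aug[i, c]), None)
        if piv is None:
            continue
        aug[[r, piv]] = aug[[piv, r]]                    # swap pivot row into place
        for i in range(rows):
            if i != r and aug[i, c]:
                aug[i] ^= aug[r]                         # eliminate the column mod 2
        r += 1
    # Rows whose check part reduced to zero encode the redundancies.
    return aug[r:, cols:]

# Toy example: the third check is the product of the first two (like the toric
# code's redundant stabilizer), giving a single metacheck [1 1 1].
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 0]], dtype=np.uint8)
M = gf2_metachecks(H)
print(M)                                  # [[1 1 1]]
assert not np.any((M @ H) % 2)
# In a noiseless round M @ syndrome = 0 (mod 2); a violated metacheck flags a
# measurement error, which is what single-shot / soundness arguments exploit.
```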

Rohan Mehta