
A Quantum Connection For Matrix Rank

June 4, 2019


A new paper with Chaowen Guan

Chaowen Guan is a PhD student at Buffalo. After a busy end to the Spring 2019 term at UB, we are getting time to write about our paper, “Stabilizer Circuits, Quadratic Forms, and Computing Matrix Rank.”

Today we emphasize new connections we have found between simulating special quantum circuits and computing matrix rank over the field {\mathbb{F}_2}.

The quantum circuits involved have been known to be solvable in polynomial time since 1998. They are not universal but form important building blocks of the quantum systems people intend to build. They bear on the problem of showing quantum circuits are more powerful than classical circuits—the quantum advantage problem—in terms of how much harder quantum machinery must be added to them.

The question is: How efficiently can we simulate these special circuits? Our answer improves the bound from order-{n^3} to {n^{\omega}}, where {\omega} here means the current best-known exponent for multiplying {n \times n} matrices (over {\mathbb{F}_2} or any field). Today {\omega} stands at {2.3728\dots}. The non-quantum problem of counting solutions to a quadratic polynomial {f(x_1,\dots,x_n)} modulo 2 is likewise improved from the {O(n^3)} shown by Andrzej Ehrenfeucht and Marek Karpinski to {O(n^\omega)}.

This comes at a price, however, because the matrix multiplication algorithms that optimize the exponent are galactic. In this post we’ll emphasize what is not galactic: reductions to and from the problem of computing matrix rank that run in linear time—meaning {O(n^2)} time for dense matrices—except for the need to check a yes/no condition in one of them. All this builds on the algebraic methods in our paper last year with Amlan Chakrabarti of the University of Calcutta.
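To fix ideas on the counting problem just mentioned, here is a brute-force counter of binary solutions to a quadratic polynomial mod 2. It runs in exponential time and is purely a baseline sketch (the function name and encoding are ours for illustration); the point of the {O(n^\omega)} result is to replace this enumeration entirely.

```python
from itertools import product

def count_mod2_solutions(n, linear, quadratic):
    """Count x in {0,1}^n with f(x) = 0 (mod 2).

    linear: set of indices i contributing a term x_i;
    quadratic: set of pairs (i, j), i < j, contributing a term x_i x_j.
    Brute force, 2^n time -- the baseline that rank-based methods beat.
    """
    count = 0
    for x in product((0, 1), repeat=n):
        val = sum(x[i] for i in linear) + sum(x[i] * x[j] for (i, j) in quadratic)
        if val % 2 == 0:
            count += 1
    return count

# f(x1,x2,x3) = x1 x2 + x2 x3 (mod 2): six of the eight assignments
# make an even number of the two cross terms equal to 1.
print(count_mod2_solutions(3, set(), {(0, 1), (1, 2)}))   # 6
```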

Chaowen has contributed a post and some other materials for this blog. His work first came up in a post three years ago that saluted Dick and Kathryn’s wedding. Today is their third anniversary—so this post also comes with happy anniversary wishes.

Strong Simulation Problems

We have covered quantum algorithms several times. We discussed stabilizer circuits in an early post on the work with Amlan and covered them more recently in connection with the work of Jin-Yi Cai’s group. Suffice it to say that stabilizer circuits—which extend Clifford circuits by allowing intermediate measurement gates—form the most salient case that classical computers can simulate in polynomial time.

The simulation time is sometimes cited as {O(n^2)}, going back to a 2004 paper by Scott Aaronson and Daniel Gottesman, but there is a catch: this is only for one measurement of one qubit. For general (non-sparse) instances, the various known algorithms all need order-{n^2} time to re-organize their data structures after each single-qubit measurement. This is so even if one merely wants to measure all {n} qubits in one shot: the time becomes {O(n^3)}. This is one case of what is generally called a strong simulation. It is precisely this time that Chaowen and I improved to {O(n^\omega)}.

In wider contexts, strong simulation of a quantum circuit {C} means the ability to compute the probability of a given output to high precision. When the input and output are both in {\{0,1\}^n} we may suppose both are {0^n} since we can prepend and append {\mathsf{NOT}} gates to {C}. Then strong simulation means computing the amplitude {\langle 0^n |C| 0^n \rangle} (or computing {|\langle 0^n |C| 0^n \rangle|^2} which is the output probability) to {n}-place precision. It doesn’t take much for this to be {\mathsf{NP}}-hard, often {\mathsf{\#P}}-complete. If we take the Clifford generating set

\displaystyle  \mathsf{H} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix},\quad \mathsf{CZ} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix},\quad \mathsf{S} = \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix},

then we can get universal circuits by adding any one of the following gates:

\displaystyle  \mathsf{T} = \begin{bmatrix} 1 & 0 \\ 0 & \sqrt{i} \end{bmatrix},\quad \mathsf{CS} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & i \end{bmatrix},\quad \mathsf{Tof} = \mathit{diag}(1,1,1,1,1,1,\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}).

In the last one we’ve portrayed the {8 \times 8} matrix of the Toffoli gate as being block-diagonal. We will later consider block-diagonal matrices permuted so that all {2 \times 2} “blocks” are at upper left.

There is much recent literature on trying to simulate circuits with limited numbers of non-Clifford gates, and on how many such gates may be needed for exponential lower bounds—even just to tell whether {\langle 0^n |C| 0^n \rangle \neq 0}. This plays against a wider context of efforts toward quantum advantage. Chaowen and I have been trying to apply algebraic-geometric techniques for new lower bounds at the high end, but this time we found new upper bounds at the low end.

From Matrix Rank to Quantum

It is not known how to compute the rank {rk(A)} of a dense matrix {A} in better than matrix-multiplication time, even over {\mathbb{F}_2}. We may suppose {A} is square and symmetric, since we can always form the block matrix

\displaystyle  A' = \begin{bmatrix} 0 & A^\top \\ A & 0 \end{bmatrix}

and then {rk(A) = \frac{1}{2}rk(A')}. In the case of {\mathbb{F}_2}, {A'} is the adjacency matrix {A_G} of an undirected bipartite graph {G}. The rank of {A_G} for any undirected graph {G = (V,E)} must be even. Whereas the rank of the {|V| \times |E|} vertex-edge incidence matrix always equals {n} minus the number of connected components of {G}, less is known about characterizing {rk(A_G)}. Our first main theorem brings quantum strong simulation into the picture. Let {N} stand for {n^2}.
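The symmetrization trick is easy to sanity-check with a small Gaussian-elimination rank routine over {\mathbb{F}_2} (the function names are ours; this is the {O(n^3)}-style baseline, not the fast method):

```python
def rank_gf2(rows, ncols):
    """Rank over F_2 by Gaussian elimination; rows are lists of 0/1."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(ncols):
        # find a pivot row with a 1 in this column
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def symmetrize(A):
    """Form A' = [[0, A^T], [A, 0]], so that rk(A') = 2 rk(A)."""
    m, n = len(A), len(A[0])
    top = [[0] * n + [A[i][j] for i in range(m)] for j in range(n)]  # rows of A^T
    bot = [A[i][:] + [0] * m for i in range(m)]
    return top + bot

A = [[1, 0, 1],
     [0, 1, 1]]                              # rank 2 over F_2
print(rank_gf2(A, 3), rank_gf2(symmetrize(A), 5))   # 2 4
```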

Theorem 1 Given any {A \in \mathbb{F}_2^{n \times n}} we can construct in {O(N)} time a stabilizer circuit {C} on {2n} qubits such that

\displaystyle  rk(A) = \log_2\left(\frac{1}{|\langle 0^{2n} |C| 0^{2n} \rangle|}\right).

One interpretation is that if you believe matrix rank is a “mildly hard” function (with regard to {O(N)}-time computability) then predicting the result of measuring all the qubits in a stabilizer circuit is also “mildly hard.” Such mild hardness would represent a gap between the {O(n^2)} time for weak simulation and the time for strong simulation. Such gaps have been noted and proved for extensions of stabilizer circuits but those are between “polynomial” and an intractable hardness notion.

One can also view Theorem 1 as a possible avenue toward computing matrix rank without doing either matrix multiplication or Gaussian elimination. This is the view Chaowen and I have had all along.

From Quantum to Rank

The distinguishing point of our converse reduction to the rank {r} is knowledge of normal forms that depend on {r} where one can use the knowledge to delay or avoid computing them explicitly. The normal forms are for polynomials {f_C} associated to quantum circuits {C} in our earlier work. Stabilizer circuits yield {f_C} as a classical quadratic form over {\mathbb{Z}_4}, the integers modulo {4}. That is, all cross terms {x_i x_j} in {f_C} have even coefficients—here, {0} or {2}. Thus quantum computing enters a debate that occupied Carl Gauss and others over two hundred years ago:

Should every homogeneous quadratic polynomial {f(x_1,\dots,x_n)} with integer coefficients be called a quadratic form, or only those whose cross terms {c_{i,j}x_i x_j} all have even coefficients {c_{i,j}}?

The point of even coefficients is that they enable having a symmetric {n \times n} integer matrix {S} such that

\displaystyle  f(x) = x^\top S x

for all {x}. Without that condition, {S} might only be half-integral. This old difference turns out to mirror that between universal quantum computing and classical, because the non-Clifford {\mathsf{CS}}-gate noted above yields circuits {C} whose {f_C} over {\mathbb{Z}_4} have terms {x_i x_j} and/or {3 x_i x_j}. While counting solutions in {\mathbb{Z}_4^n} for those polynomials is in {\mathsf{P}}, counting their binary solutions is {\mathsf{\#P}}-complete—an amazing dichotomy we expounded here.
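The point about even coefficients can be checked in a couple of lines: put {c_{i,j}/2} into both off-diagonal slots of a symmetric integer matrix {S} and the form is recovered exactly, while an odd cross term would force half-integer entries. The helper names here are ours, purely for illustration:

```python
def form_matrix(n, diag, cross):
    """Symmetric integer S for f(x) = sum diag[i] x_i^2 + sum cross[(i,j)] x_i x_j,
    assuming every cross coefficient is even (the 'classical' condition)."""
    S = [[0] * n for _ in range(n)]
    for i, c in diag.items():
        S[i][i] = c
    for (i, j), c in cross.items():
        assert c % 2 == 0, "odd cross term: S would only be half-integral"
        S[i][j] = S[j][i] = c // 2     # split the even coefficient symmetrically
    return S

def eval_form(S, x, mod=4):
    """x^T S x reduced mod 4 (the ring Z_4 used for stabilizer circuits)."""
    n = len(S)
    return sum(S[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % mod

# f(x) = 2 x1 x2 + 2 x2 x3 over Z_4 (the running example below, 0-indexed)
S = form_matrix(3, {}, {(0, 1): 2, (1, 2): 2})
print(S)                          # [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(eval_form(S, (1, 1, 0)))    # 2
```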

We hasten to add that for {k = 2} the classical forms coincide with those over {\mathbb{Z}_{2^k}} whose nonzero cross terms all have coefficient {2^{k-1}}. Those are called affine in the work by Jin-Yi and others noted above, and our above-mentioned post noted his 2017 paper with Heng Guo and Tyson Williams giving another proof of polynomial-time simulation of stabilizer circuits via {f_C} being affine. Our work improving the polynomial bounds, however, draws on a 2009 paper by Kai-Uwe Schmidt and further theory of classical quadratic forms. Schmidt's paper uses work going back to 1938 that decomposes a classical (affine) quadratic form {f} over {\mathbb{Z}_4} further as

\displaystyle  f(x) = f_0(x) + 2(x \bullet v) \quad\text{with}\quad f_0(x) = x^\top B x, \ \ \ \ \ (1)

for binary arguments {x}. Here {v} is a binary vector with {v_i = 1} if {S[i,i] = 2} or {S[i,i] = 3}, {v_i =0} otherwise, and the operations including the inner product {\bullet} are mod-2 except that the final {+} is in {\mathbb{Z}_4}. Then {f} is alternating if the diagonal of {B} is all-zero, non-alternating otherwise. Now take {r} to be the rank of {B}. The key normal-form lemma is:
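Decomposition (1) is easy to verify by brute force on small instances, taking {B = S \bmod 2} and reading {v} off the diagonal of {S} as just described. This is a throwaway check (function names ours), consistent with the worked examples later in the post:

```python
from itertools import product

def decompose(S):
    """Split a classical form over Z_4 as in (1): B = S mod 2,
    v_i = 1 iff S[i][i] is 2 or 3."""
    n = len(S)
    B = [[S[i][j] % 2 for j in range(n)] for i in range(n)]
    v = [1 if S[i][i] in (2, 3) else 0 for i in range(n)]
    return B, v

def check(S):
    """Confirm f(x) = f0(x) + 2(x . v) mod 4 on all binary x."""
    n = len(S)
    B, v = decompose(S)
    for x in product((0, 1), repeat=n):
        f  = sum(S[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % 4
        f0 = sum(B[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % 4
        dot = sum(xi * vi for xi, vi in zip(x, v)) % 2   # inner product mod 2
        assert f == (f0 + 2 * dot) % 4                   # final + is in Z_4
    return True

# g = 2 x1 x2 + 2 x2 x3 + 2 x1^2, the second worked example below
print(check([[2, 1, 0], [1, 0, 1], [0, 1, 0]]))   # True
```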

Lemma 2 There is a change of basis to {y_1,\dots,y_n} such that if {f} is non-alternating then {f_0} is transformed to

\displaystyle  f'_0(y) = y_1 + y_2 + \cdots + y_r,

whereas if {f} is alternating then {r} is even and {f_0} is transformed to

\displaystyle  f'_0(y) = 2y_1 y_2 + 2y_3 y_4 + \cdots + 2y_{r-1} y_r.

In either case, there is a binary vector {w} so that {f(y) = f'_0(y) + 2(y \bullet w)} for all {y}.

The point is that to evaluate the quantum circuit {C}, we don’t need to evaluate {f_C}, but can make inferences about the structure of the solution sets to {f_C(x) = a} for {a = 0,1,2,3 \pmod{4}}, where {x \in \{0,1\}^n}. Given the knowledge of {r}, the normal form goes a long way toward this. The vector {w} is also needed, but the fact of its having only {n} bits gives hope of finding it in {O(n^2) = O(N)} time. That—plus an analysis of the normal form {f'_0,w} itself of course—would complete an {O(N)}-time reduction from computing the amplitude to computing {r}.

The Needed Piece—For Now

Chaowen took the lead all through the Fall 2018 term in trying multiple attacks. In the non-alternating case, the change of basis converts {B} into a diagonal matrix {D} over {\mathbb{F}_2}. In the alternating case, the same process makes {D} a block-diagonal matrix of the kind we mentioned above. The conversion {D = Q B Q^\top} in both cases also yields {w}. Of course {Q} can be computed by Gaussian elimination in {O(n^3)} time, but this is what we wanted to avoid.

After poring over older literature on {n^\omega}-time methods, including a 1974 paper by James Bunch and John Hopcroft (see also this), we found a paper from last year by Jean-Guillaume Dumas and Clément Pernet that gives exactly what we needed: an LDU-type decomposition that yields {D} in {O(n^\omega)} time. We only needed to apply the change-of-basis analysis in Schmidt’s paper to this decomposition and combine with the normal-form analysis to establish our algorithm for computing the amplitude {\langle 0^n |C| 0^n \rangle}:

  1. Convert {C} to the classical quadratic form {f_C} with matrix {S} over {\mathbb{Z}_4} and associate the {n \times n} matrix {B} over {\mathbb{Z}_2} as above. This needs only {O(N)} time.

  2. Compute the Dumas-Pernet decomposition {B = PLDL^\top P^\top} over {\mathbb{Z}_2} where {P} is a permutation matrix, {L} is lower-triangular, and {D} is block-diagonal with blocks that are either {1 \times 1} or {2 \times 2}. Of course, this involves computing the rank {r} of {B}. Think of it as {D = QBQ^\top}. This takes {O(n^\omega)} time—indeed, {O(n^2 r^{\omega - 2})} time according to Dumas and Pernet.

  3. Compute {D' = Q S Q^\top} over {\mathbb{Z}_4}. This, too, takes {O(n^\omega)} time.

  4. If any diagonal {1 \times 1} block of the original {D} has become {2} in {D'}, output {\langle 0^n |C| 0^n \rangle = 0}. Else, {\langle 0^n |C| 0^n \rangle} is nonzero and we have enough information about {D} and {w} to find it—in only {O(n)} time, in fact.
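The zero-test in step 4 can be sketched in a few lines, assuming {D} and {D'} are given as 0-indexed lists of lists (the function name is ours):

```python
def zero_amplitude(D, Dprime):
    """Step 4: <0^n|C|0^n> = 0 iff some 1x1 block of D carries a 2 on the
    diagonal of D'. D is block-diagonal over Z_2 with 1x1 or 2x2 blocks;
    a 2x2 block announces itself by a 1 at position (i, i+1)."""
    n = len(D)
    i = 0
    while i < n:
        if i + 1 < n and D[i][i + 1] == 1:
            i += 2              # inside a 2x2 block: diagonal 2s in D' are harmless
        else:
            if Dprime[i][i] == 2:
                return True     # a 1x1 block became 2: the amplitude vanishes
            i += 1
    return False

# Matrices from the second worked example below: the 2 in the
# lower-right 1x1 block rules out 000 as a measurement outcome.
D  = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
Dp = [[2, 1, 2], [1, 0, 2], [2, 2, 2]]
print(zero_amplitude(D, Dp))   # True
```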

This proves our main theorem:

Theorem 3 For stabilizer circuits {C}, {\langle 0^n |C| 0^n \rangle} is computable in {O(n^\omega)} time. So is counting binary solutions to a classical quadratic form over {\mathbb{Z}_4}, or any quadratic polynomial mod 2.

Because we use the decomposition, the above is not a clean {O(N)}-time reduction to computing {r}, so it does not make Theorem 3 into a linear-time equivalence. By further analysis, however, we show that the only impediment is needing {D'} in step 4 of our algorithm to tell whether {\langle 0^n |C| 0^n \rangle = 0}. If we are promised that it is nonzero, then we obtain the probability {|\langle 0^n |C| 0^n \rangle|^2} in {O(N)} time from {r} alone. This is actually where the power of Chaowen’s analysis of the normal forms is brightest and neatest. We will devote further posts to this and to illuminating further connections in graph and matroid theory.

A Three-Part Example

Consider the following quantum circuit {C}. OK, this is a very low-tech drawing. Besides the six Hadamard gates it has two {\mathsf{CZ}} gates, which are shown as simple bars since they are symmetric:

By the rules given here, the three Hadamard gates at left introduce “nondeterministic variables” {x_1,x_2,x_3}. The three Hadamard gates at right also give nondeterministic variables, but they are immediately equated to the output variables {z_1,z_2,z_3} so we skip them. The polynomial {q_C} is

\displaystyle  2u_1 x_1 + 2u_2 x_2 + 2u_3 x_3 + 2x_1 x_2 + 2 x_2 x_3 + 2 x_1 z_1 + 2 x_2 z_2 + 2 x_3 z_3.

Upon substituting {0} for all of {u_1,u_2,u_3} and {z_1,z_2,z_3} this gives simply {f(x) = 2x_1 x_2 + 2 x_2 x_3}. This is an alternating form with

\displaystyle  S = B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix},

which is the adjacency matrix of the path graph of length 2 on {n = 3} vertices. Gaussian elimination does not need any prior swaps, so the permutation matrix {P} in the decomposition is the identity and we get

\displaystyle  Q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}, \quad\text{giving}\quad D = QBQ^\top = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

as the block-diagonal matrix over {\mathbb{Z}_2}. Now we re-compute the products over {\mathbb{Z}_4} to get

\displaystyle  Q S Q^{\top} \!\!=\!\! \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \!\cdot\! \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \!\cdot Q^\top \!=\! \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 2 & 0 \end{bmatrix} \!\cdot\! \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \!=\! \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 2 \\ 0 & 2 & 0 \end{bmatrix} \!=\! D'.

Now {D'} has entries that are {2} but they are off-diagonal, and hence cancel when {y^\top D' y} is computed in the {y}-basis. Since {w} is likewise the zero vector, this gives the transformed form as

\displaystyle  f'_0(y_1,y_2,y_3) = 2y_1 y_2.

It is easy to compute that {f'_0} takes the value {0} on six assignments and the value {2} on two, which gives the amplitude as the difference {6 - 2} divided by the square root of {2^6}, so {\frac{1}{2}}. The probability of getting {000} as the result of the measurement is {\frac{1}{4}}.

Now suppose we insert a {\mathsf{Z}}-gate {\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}} on the first qubit to make a new circuit {C_2}. Since {\mathsf{Z}} and {\mathsf{CZ}} are diagonal in the standard basis it does not matter where between the Hadamard gates it goes, say:



After substituting zeroes the form over {\mathbb{Z}} is {g = 2x_1 x_2 + 2 x_2 x_3 + 2x_1^2}. This gives

\displaystyle  S = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix},\quad v = (1,0,0), \quad B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}.

The matrix {B} is the same as in the first example, hence so are the matrices {Q} and {D} and the alternating status of {g}. The difference made by {v} and the resulting {w} makes itself felt when we re-compute over {\mathbb{Z}_4}:

\displaystyle  Q S Q^{\top} \!\!=\!\! \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \!\cdot\! \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \!\cdot Q^\top \!=\! \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 1 \\ 2 & 2 & 0 \end{bmatrix} \!\cdot\! \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \!=\! \begin{bmatrix} 2 & 1 & 2 \\ 1 & 0 & 2 \\ 2 & 2 & 2 \end{bmatrix} \!=\! D'.

Well, {D'} is far from diagonal—perhaps we shouldn’t use that name—but again the off-diagonal {2}s are innocuous so we really have

\displaystyle  D'' = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2 \end{bmatrix}.

The {2} at upper left does not zero out the amplitude, because it is within a {2 \times 2} block. The {2} at lower right, however, constitutes a {1 \times 1} block of {D''}, so it signifies that {000} is not a possible measurement outcome. Essentially what has happened is that in the {y}-basis the form has become

\displaystyle  g'(y) = 2y_1^2 + 2y_1 y_2 + 2y_3^2.

The isolated term in {y_3} contributes {+2} mod {4} to half the 0-1 assignments so as to cancel the other half, leaving a difference of {0} in the numerator of the amplitude.

For the third example, let us insert a phase gate {\mathsf{S}} after the {\mathsf{Z}} to make a circuit {C_3}:



The {\mathsf{ZS}} combination is the same as {\mathsf{S^*}}, the adjoint (and inverse) of {\mathsf{S}}. Now after substitutions we have {h_{C_3}(x) = 2x_1 x_2 + 2 x_2 x_3 + 3x_1^2}, giving:

\displaystyle  S = \begin{bmatrix} 3 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix},\quad v = (1,0,0), \quad B = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}.

Note that {B} is still a 0-1 matrix. This {B} has full rank. Again it helps our exposition that {B} is diagonalizable without swaps (and that the inverse of an invertible lower-triangular matrix is lower-triangular), so we can find {QBQ^\top = D = I} with

\displaystyle  Q = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix}.

In the {y}-basis we get {h'(y) = y_1 + y_2 + y_3 + 2(y \bullet w)} for some {w}. To test for zero amplitude—before we know what {w} is—we compute in {\mathbb{Z}_4}:

\displaystyle  Q S Q^{\top} \!\!=\!\! \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \!\cdot\!\begin{bmatrix} 3 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \!\cdot Q^{\top} \!=\! \begin{bmatrix} 3 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 2 & 1 \end{bmatrix} \!\cdot\! \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \!=\! \begin{bmatrix} 3 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 3 \end{bmatrix} \!=\! D'.

Again we can ignore the off-diagonal {2}s. There is no {2} on the main diagonal, so we know the amplitude is non-zero. To compute it, we only need the information on the diagonal, which tells us {h'_0(y) = y_1 + y_2 + y_3} and {w = (1,0,1)} in the transformed basis. Note that we could have written {h'_0(y)} down the moment we learned that {B} has rank {3} over {\mathbb{F}_2}, so {w} is the only rigmarole. The final analysis—using a recursion detailed in the appendix of our paper—gives the amplitude as

\displaystyle  \frac{2 - 2i}{8} = \frac{1 - i}{4},

and so the probability of the output {000} is {\frac{1}{8}}.
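All three amplitudes can be checked by brute force: sum {i^{f(x)}} over the binary assignments and divide by {\sqrt{2^6} = 8}, per the counting rule used in the first example. This is a throwaway sanity check, exponential in {n} (function name ours); note {x_1^2 = x_1} for binary arguments:

```python
from itertools import product

I4 = [1, 1j, -1, -1j]   # i^0, i^1, i^2, i^3 -- exact, avoids complex-pow rounding

def amplitude(f, n, hadamards=6):
    """Brute-force <0^n|C|0^n>: sum of i^{f(x)} over binary x, divided by
    sqrt(2^#Hadamards) (assumed even here)."""
    total = sum(I4[f(x) % 4] for x in product((0, 1), repeat=n))
    return total / (2 ** (hadamards // 2))

f1 = lambda x: 2*x[0]*x[1] + 2*x[1]*x[2]             # first circuit
f2 = lambda x: 2*x[0]*x[1] + 2*x[1]*x[2] + 2*x[0]    # with the Z gate
f3 = lambda x: 2*x[0]*x[1] + 2*x[1]*x[2] + 3*x[0]    # with Z then S (= S*)

print(amplitude(f1, 3), amplitude(f2, 3), amplitude(f3, 3))
# 0.5 0.0 (0.25-0.25j)
```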

We remark finally that {w} is generally not the same as {Qv}. To see where it comes from, let us now compute {Q B Q^{\top}} (not {QSQ^\top}) over {\mathbb{Z}_4} to get {QBQ^\top = D + 2U}. Then

\displaystyle  \begin{array}{rcl}  f(x) &=& x^\top B x + 2x^\top v = x^\top Q^{-1} (D+2U) (Q^\top)^{-1} x + 2x^\top (Q^{-1} Q) v\\ &=& y^\top (D + 2U) y + 2 y^\top Qv, \end{array}

where {y = (Q^\top)^{-1} x}. Now off-diagonal elements in {2U} will cancel when taking {2 y^\top U y} modulo 4, so we need only retain the diagonal {u} of {U} as a binary vector. Since {y} is binary, {y^\top \mathit{diag}(u) y = y^\top u}. This finally gives

\displaystyle  f(x) = y^\top D y + 2y^\top (u + Qv) = y^\top D y + 2(y \bullet w)

with {w = u + Qv}. In the third example we have {Qv = (1,1,1)} and

\displaystyle  QBQ^{\top} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \cdot Q^\top = \begin{bmatrix} 1 & 1 & 0 \\ 2 & 1 & 1 \\ 2 & 2 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 2 \\ 2 & 3 & 0 \\ 2 & 0 & 1 \end{bmatrix}.

The diagonal gives {u = (0,1,0)} and so {w = u + Qv \pmod{2} = (1,0,1)}. This agrees with what we read off above by comparing {D'} with {D}. There is a different worked-out example for the triangle graph on three vertices in the paper.
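For the third example this recipe can be checked mechanically: compute {QBQ^\top} over {\mathbb{Z}_4}, read {u} off its diagonal (here {D = I}, so {u_i} is the 2s bit of the diagonal entry), and add {Qv} mod 2. The helper functions are throwaway sketches, not from the paper:

```python
def matmul(X, Y, mod):
    """Matrix product with entries reduced mod `mod`."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % mod
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

Q = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
B = [[1, 1, 0], [1, 0, 1], [0, 1, 0]]
v = [1, 0, 0]

M = matmul(matmul(Q, B, 4), transpose(Q), 4)      # = D + 2U over Z_4
u = [(M[i][i] // 2) % 2 for i in range(3)]        # diagonal of U (since D = I)
Qv = [sum(Q[i][k] * v[k] for k in range(3)) % 2 for i in range(3)]
w = [(a + b) % 2 for a, b in zip(u, Qv)]          # w = u + Qv mod 2

print(M)          # [[1, 2, 2], [2, 3, 0], [2, 0, 1]]
print(u, Qv, w)   # [0, 1, 0] [1, 1, 1] [1, 0, 1]
```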

Looking Ahead

Chaowen and I continue to be interested in shortcuts to computing the amplitude and/or probability. Here we take a cue from how Volker Strassen titled his famous 1969 paper on matrix multiplication:

“Gaussian Elimination is not Optimal.”

We would like to find cases where we can say, “Matrix Multiplication is not Optimal.” In view of recent papers blunting efforts to show {\omega = 2}—see this post—the question may shift to which computations may not need the full power of matrix multiplication and be achievable in {O(n^2)} time after all. This applies to computing the rank (over {\mathbb{Z}_2}) itself, and the question extends to sparse cases like those considered in the paper, “Fast Matrix Rank Algorithms and Applications,” by Ho Yee Cheung, Tsz Chiu Kwok, and Lap Chi Lau.

The second circuit in the above example corresponds to a graph with a self-loop at node 1—or, depending on how one counts incidence of self-loops in undirected graphs, one could call it a double self-loop. It exemplifies circuits used to create quantum graph states, and those circuits are representative of stabilizer circuits in general. The third circuit can be said to have a “triple loop,” or maybe better, a “3/2-loop”—while if the original {\mathsf{Z}}-gate were a single {\mathsf{S}}-gate giving the form {x_1^2 + 2x_1 x_2 + 2 x_2 x_3}, we would face the ambiguity of calling it a “loop” or a “half-loop.” Sorting this out properly needs going beyond graph theory. In upcoming posts, Chaowen and I will say more about how all this yields new problems in graph theory and new connections between quantum computing and matroid theory.

Open Problems

What do our results say about the problem of computing the rank of a matrix, and possibly separating it from dependence on matrix multiplication?

We hope that we have begun to convey how our paper uncovers a lot of fun computational mathematics. We are grateful for communications from people we’ve approached (some acknowledged in our paper) about possible known connections, but there may be more we don’t know. Our next posts will say more about combinatorial aspects of quantum circuits.
