
New, Old, Ancient Results

February 27, 2021


Nonexistence theorems and attempts at lower bounds

Cropped from src

Joshua Grochow is an assistant professor in Computer Science and Mathematics at the University of Colorado at Boulder. He was a student of Ketan Mulmuley and Lance Fortnow at Chicago; his dissertation and some subsequent papers did much to widen the horizons of the “Geometric Complexity Theory” (GCT) program. He is also a gifted expositor.

Today we will highlight some of his work and some of his exposition of new and old theorems.

An ancient one is Stephen Mahaney’s famous theorem on the nonexistence of sparse {\mathsf{NP}}-complete sets (unless {\mathsf{NP = P}}). Grochow discusses a simpler proof of the theorem, due to Manindra Agrawal, and notes some further impacts on GCT.

A recent one with Youming Qiao is on an old topic and is in the 2021 Innovations in Theoretical Computer Science conference. It is titled, “On the Complexity of Isomorphism Problems for Tensors, Groups, and Polynomials I: Tensor Isomorphism-Completeness,” and grows out of a 2019 paper by the same authors.

This came to my attention through communications with Grochow’s student, Michael Levet. Indeed, Levet is the reason for my putting this all together. He raised through email some questions about an ancient result of mine on group isomorphism. I reported previously:

Long ago Bob Tarjan and Zeke Zalcstein and I made a simple observation: Group isomorphism could be done in time

\displaystyle  n^{\log_{2} n + O(1)}.

This relies on the easy-to-prove fact that every group of order {n} has a generating set of at most {\log_{2} n} elements. We have discussed this idea earlier here.
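To make the observation concrete, here is a minimal Python sketch (ours, purely illustrative, not the original write-up) of the generator-enumeration idea. It assumes each group is given by its {n \times n} multiplication table with the identity at index {0}; it greedily picks a generating set, which has size at most {\log_{2} n} because each new generator at least doubles the subgroup generated so far, writes every element as a word in those generators, and then tries all choices of images in the other group.

    from collections import deque
    from itertools import product

    def closure(mult, gens):
        """Subgroup generated by gens, given a multiplication table mult (identity = 0)."""
        seen, stack = {0}, [0]
        while stack:
            x = stack.pop()
            for g in gens:
                y = mult[x][g]
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    def small_generating_set(mult):
        """Greedy generators; each new one at least doubles the subgroup, so at most log2(n) are chosen."""
        gens, sub = [], {0}
        for x in range(len(mult)):
            if x not in sub:
                gens.append(x)
                sub = closure(mult, gens)
        return gens

    def words_for_elements(mult, gens):
        """BFS over the Cayley graph: a word in the generators for every element."""
        words, queue = {0: []}, deque([0])
        while queue:
            x = queue.popleft()
            for i, g in enumerate(gens):
                y = mult[x][g]
                if y not in words:
                    words[y] = words[x] + [i]
                    queue.append(y)
        return words

    def isomorphic(multG, multH):
        """Brute force over the n^k image tuples of the k <= log2(n) generators."""
        n = len(multG)
        if n != len(multH):
            return False
        gens = small_generating_set(multG)
        words = words_for_elements(multG, gens)
        for images in product(range(n), repeat=len(gens)):
            # Evaluate every element's word in H under the candidate images.
            phi = {}
            for x, w in words.items():
                y = 0
                for i in w:
                    y = multH[y][images[i]]
                phi[x] = y
            # Accept if the map is a bijective homomorphism.
            if len(set(phi.values())) == n and all(
                phi[multG[a][b]] == multH[phi[a]][phi[b]]
                for a in range(n) for b in range(n)
            ):
                return True
        return False

The outer loop over image tuples dominates, giving {n^{\log_{2} n}} iterations with polynomial work each, hence the {n^{\log_{2} n + O(1)}} bound.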

Levet raised an issue about related observations of mine—ones that were misleading at best. I think he has a good point and we are still trying to unravel exactly what I meant back then. I applaud him for reading ancient stuff, for trying to extend it, and for working on such problems. I wish him well.

No Three In a Row

While Levet and I work that out and think about Grochow’s paper on isomorphism problems with Qiao, Ken and I want to highlight a different expository paper by Grochow on news from 2016 that we covered then. Grochow’s paper appeared in the AMS Bulletin and is titled, “New Applications Of The Polynomial Method: The Cap Set Conjecture And Beyond.”

To lead in to the subject, here is a problem from 1917 by the English puzzlemaster Henry Dudeney titled, “A Puzzle With Pawns”:

Place two pawns in the middle of the chessboard, one at Q 4 and the other at K 5. Now, place the remaining fourteen pawns (sixteen in all) so that no three shall be in a straight line in any possible direction. Note that I purposely do not say queens, because by the words “any possible direction” I go beyond attacks on diagonals. The pawns must be regarded as mere points in space—at the centres of the squares.

Sixteen is obviously the maximum possible for a standard {8 \times 8} chessboard, because a seventeenth pawn would put three in some row (and in some column). For an {r \times r} board, the limit is {2r} by the same pigeonhole reasoning.

It is possible to achieve the maximum for all {r} up to {46} and then the only other cases known are {48}, {50}, and {52}. That’s it. Here are solutions for {r = 10} and {r = 52}. The latter was found by Achim Flammenkamp, whose page has encyclopedic information. On the former, the pieces are positioned on gridpoints like stones in Go, which seems a better context for this problem than chess.

Composite of src1, src2

The conjecture is not only that a solution of size {2r} exists for only finitely many {r}, but also that the maximum size for all sufficiently large {r} is bounded by {cr} with {c < 2}, indeed, with {c < 1.815}. It is known that {(1.5-\epsilon_r)r} stones can always be placed with no three collinear, where the {\epsilon_r} depends on the closeness of a prime to {\frac{r}{2}}.
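To see where the primes come in, here is a small Python check (ours) of a weaker but classical construction: for a prime {p}, the {p} points {(x, x^2 \bmod p)} have no three collinear, because a straight line through three of them would give the congruence {x^2 \equiv ax + b \pmod p} three roots. The choice {p = 11} is only for illustration.

    from itertools import combinations

    def collinear(p1, p2, p3):
        """True if the three grid points lie on one straight line of any slope."""
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        return (x2 - x1) * (y3 - y1) == (x3 - x1) * (y2 - y1)

    p = 11  # any prime works; 11 keeps the check instant
    pts = [(x, (x * x) % p) for x in range(p)]
    assert not any(collinear(a, b, c) for a, b, c in combinations(pts, 3))
    print(len(pts), "points on the", p, "x", p, "grid with no three in a line")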

The problem can be taken to dimensions {n \ge 3} that are beyond the plane. We can also extend what is meant by a “line” via various notions of wrapping-around. Then the question is how close the maximum size can stay to being linear in the size of the space—as {n} and/or the size {r} of an individual dimension increase.

Not As Easy As Tic-Tac-Toe

The theme of the no-three-in-a-line problem is fundamental to combinatorics. There are tons of problems of the form:

How many objects can one place, so that no pattern of some certain type exists?

In {n}-dimensional space the smallest {r} of interest is {r = 3}. This means playing on higher-dimensional versions of the {3 \times 3} grid and {3 \times 3 \times 3} cube. Then the only Euclidean lines are the kind we know from tic-tac-toe: straight across or down, or diagonal.

For dimension {n \geq 3} there are other kinds of diagonals, such as within a face or through the center of the cube, but they all win at {n}-dimensional tic-tac-toe. So the problem becomes: what is the maximum number of moves you can make by yourself without creating a win at tic-tac-toe? The cap-set problem adds a twist by extending the notion of what is a line. It is like playing tic-tac-toe on a floor of {3 \times 3} tiles where a play in one tile is replicated in all of them. Then you can make a line by playing in a corner and in the middle of the two opposite edges, as shown at left in the following diagram (original drawing).

The four orange O’s at right have no 3-in-a-line even with this extended notion of line. Note that the four blank cells in the top two rows also avoid putting 3 in a line. Four is the maximum, however: it is not possible to have a drawn game in extended {3 \times 3} tic-tac-toe.
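The claim that four is the maximum is small enough to verify by brute force. Here is a short Python check (ours) over the {3 \times 3} board viewed as {\mathbb{Z}_3^2}, where an extended line is three distinct cells summing to zero in each coordinate modulo {3}.

    from itertools import combinations, product

    cells = list(product(range(3), repeat=2))  # the 3 x 3 board as Z_3 x Z_3

    def is_line(a, b, c):
        """Extended (wrap-around) line: three distinct cells summing to 0 mod 3 coordinatewise."""
        return all((a[i] + b[i] + c[i]) % 3 == 0 for i in range(2))

    def line_free(s):
        return not any(is_line(a, b, c) for a, b, c in combinations(s, 3))

    best = max(k for k in range(1, 10)
               if any(line_free(s) for s in combinations(cells, k)))
    print("largest set with no extended three-in-a-line:", best)  # prints 4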

The theorem proved by Jordan Ellenberg and Dion Gijswijt in 2016 is that the maximum size of a cap set is not only a vanishing fraction of the size {3^n} of the space as {n} grows, it is bounded by {c^n} where {c < 3}. Namely:

Theorem 1 Every cap set in the {3^n}-cube has size at most {2.756^n}.

Using Polynomials

There is a simple way to express the extended notion of “line” that works for all dimensions {n}: Number the coordinates of each dimension {0,1,2}. Make the space {\{0,1,2\}^n} with addition modulo {q = 3}, that is, make it {\mathbb{Z}_3^n}. Then the condition for three points {A,B,C} to be in a line is simply

\displaystyle  A + B = 2C.
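As a quick sanity check (ours), the following brute force on the {3 \times 3} board confirms that this algebraic condition picks out exactly the wrap-around lines, described directly as triples {P, P+D, P+2D} for a nonzero direction {D}.

    from itertools import permutations, product

    n = 2
    points = list(product(range(3), repeat=n))

    def add(u, v):
        return tuple((a + b) % 3 for a, b in zip(u, v))

    def scale(k, u):
        return tuple((k * a) % 3 for a in u)

    def in_progression(A, B, C):
        """C = B + (B - A), i.e. the three points are P, P + D, P + 2D in this order."""
        D = add(B, scale(2, A))  # B - A, since -1 = 2 mod 3
        return D != (0,) * n and C == add(B, D)

    for A, B, C in permutations(points, 3):
        algebraic = add(A, B) == scale(2, C)  # the condition A + B = 2C
        geometric = any(in_progression(*order) for order in permutations((A, B, C)))
        assert algebraic == geometric
    print("A + B = 2C matches the wrap-around lines")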

It is easy to write polynomial equations over the field {\mathbb{F}_3} to express the property of a set having such a line. What was unexpected, until Ernie Croot, Vsevolod Lev, and Péter Pál Pach solved a related problem with {q = 4}, was that there would be

“an ingeniously simple way to split the polynomial[s] into pieces with smaller exponents, which led to a bound on the size of collections with no [lines].”

The quotation comes from a 2016 Quanta article by Erica Klarreich written right after the breakthrough. A 2016 AMS Feature Column by David Austin shows how to express that a set {S} of points is a cap set modulo 3:

\displaystyle  (S \uplus S) \cap 2S = \emptyset,

where we (not Austin) write {S \uplus S} to mean the set of sums {a + b} where {a,b \in S} and {b \neq a}. If there is an element {r} in the intersection then {a + b = r = 2c} for some distinct {a,b \in S} and some {c \in S}, and since {3c = 0}, we get {a + b + c = 0} with {a,b,c} in {S} and all distinct, contradicting the assumption that {S} is a cap set. (If {c = b} then {a + 2b = 0}, so {a = b} since {-2 \equiv 1 \bmod 3}; the case {c = a} is symmetric.) Let {m_d} stand for the number of monomials of degree at most {d} in the {n} variables, where each variable has individual degree at most {2} because {x^3 = x} over {\mathbb{F}_3}. The key first insight is:

Lemma 2 If a polynomial {p(x_1,\dots,x_n)} of degree {d} vanishes on {S\uplus S}, then {p(x) = 0} for all but at most {2m_{d/2}} points of {2S}.

One could first try to interpret this as saying that {S \uplus S} “looks like” {2S} to polynomials {p} of “low” degree {d}. However, if {d} stays low relative to {|S|} then the “if” part would hold vacuously, opposing the goal of bounding {|S|} and making the whole idea self-defeating. In fact, the important tension comes when {d} is intermediate: {d = (q-1)n/3}, which for {q = 3} makes {d/2 = n/3} and {d = 2n/3} neatly occupy the middle of the range {\{1,\dots,n\}}.
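To see where the constant in Theorem 1 comes from, here is a short calculation (ours, not the proof itself) of {m_d} at the intermediate degree {d = 2n/3}: each variable contributes a factor {1 + x + x^2} to the generating function, and summing coefficients up to degree {d} shows {m_{2n/3}^{1/n}} tending to roughly {2.755}, exponentially below {3}.

    def m(n, d):
        """Monomials x1^a1 ... xn^an with each ai in {0, 1, 2} and total degree at most d."""
        coeffs = [1]  # coefficients of (1 + x + x^2)^n, built by repeated convolution
        for _ in range(n):
            new = [0] * (len(coeffs) + 2)
            for i, c in enumerate(coeffs):
                new[i] += c
                new[i + 1] += c
                new[i + 2] += c
            coeffs = new
        return sum(coeffs[:d + 1])

    for n in (30, 60, 90, 120):
        md = m(n, 2 * n // 3)  # the intermediate degree d = 2n/3
        print(n, round(md ** (1 / n), 4), "versus 3 for the whole space")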

The proof also uses the trick that if a product of two monomials has degree {d} then one of them must have degree at most {\lfloor d/2\rfloor}. As I (Ken, writing these sections) wrote back in 2016, this is reminiscent of Roman Smolensky’s degree-halving trick in his celebrated 1987 theorem on lower bounds for mod-{p} versus mod-{q}. This trick, however, runs from {n} to {n/2} for all moduli.

In any event, the 2016 papers were a new form of the polynomial method that led to striking new results. What Grochow’s survey does for us now is bring out wider implications of this ingenuity.

Applications in Complexity

Grochow’s four application areas in section 4 of his survey are:

  1. Progress on various forms of “sunflower” conjectures.

  2. Barriers to attempts to show that the exponent of matrix multiplication is {2}.

  3. Removing edges to make graphs triangle-free.

  4. Matrix rigidity and lower bounds.

We say a little more about the last of these. For any {N \times N} matrix {A} and {r \leq N} define the rigidity {R_A(r)} to be the minimum number of entries in which {A} differs from some matrix of rank (at most) {r}. The highest possible rigidity for rank {r} is {(N - r)^2}: after permuting rows and columns so that the top-left {r \times r} block is nonsingular, one can overwrite the bottom-right {(N-r)\times(N-r)} block so that every remaining row becomes a linear combination of the first {r} rows, leaving a matrix of rank at most {r}. Sufficiently random matrices meet this upper bound with high probability, but the best lower bounds for explicit families of matrices are {\Omega(\frac{N^2}{r}\log\frac{N}{r})}, which is only quasi-linear when {r} is close to {N}. The question is whether we can inch this up to {\Omega(N^{1+\epsilon})} for some {\epsilon > 0}.
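Here is a small NumPy sketch (ours) of the {(N-r)^2} upper bound just described: for a generic matrix whose top-left {r \times r} block is invertible, overwriting only the bottom-right {(N-r)\times(N-r)} block with {A_{21}A_{11}^{-1}A_{12}} makes every remaining row a linear combination of the first {r} rows.

    import numpy as np

    def reduce_rank(A, r):
        """Change at most (N - r)^2 entries of A so the result has rank at most r.
        Assumes the top-left r x r block is invertible; in general one first permutes
        rows and columns to put a nonsingular r x r minor there."""
        A11, A12, A21 = A[:r, :r], A[:r, r:], A[r:, :r]
        B = A.copy()
        B[r:, r:] = A21 @ np.linalg.solve(A11, A12)  # bottom rows become combinations of the top r
        return B

    rng = np.random.default_rng(0)
    N, r = 12, 5
    A = rng.standard_normal((N, N))
    B = reduce_rank(A, r)
    print("entries changed:", np.count_nonzero(~np.isclose(A, B)), "<=", (N - r) ** 2)
    print("rank afterward:", np.linalg.matrix_rank(B), "<=", r)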

Definition 3 A family of matrices {A_N} is significantly rigid if there is an {\epsilon > 0} such that taking {r = \frac{N}{\log\log N}} makes {R_{A_N}(r) = \Omega(N^{1+\epsilon})}.

The interest in this definition comes from our lack of lower bounds on linear algebraic circuits computing natural families {A_N} of linear transformations, which seems even more extreme than our lack of super-linear lower bounds on Boolean circuits; nothing better than {\Omega(N\log N)} is known even for general algebraic circuits computing polynomials in {N} variables of degree {N^{O(1)}}. It is still consistent with our knowledge that every natural family {A_N} can be computed by linear algebraic circuits of {O(N)} size and {O(\log N)} depth. Leslie Valiant in 1977 proved the following sufficient condition to improve this state of affairs.

Theorem 4 No significantly rigid family {A_N} can be computed by linear algebraic circuits of linear size and logarithmic depth.

So for coming on half a century the question has been:

Can we construct a natural explicit family of significantly rigid matrices?

Beliefs that the Hadamard matrices provided such a family were refuted by Josh Alman and Ryan Williams at STOC 2017, and known results for Vandermonde matrices do not have {r} close enough to {N}.

One hope had been to derive such matrices from explicit functions {f_n(x_1,\dots,x_n)} over {\mathbb{Z}_p} for {p} prime by taking {N = p^n} and defining

\displaystyle  A_{f_n}[\vec{x},\vec{y}] = f_n(x_1 + y_1,\dots,x_n + y_n).
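Here is a minimal Python sketch (ours) of this construction for {p = 3} and {n = 3}, so {N = 27}, using an arbitrary random {f_n}; it builds {A_{f_n}} and computes its rank over {\mathbb{F}_3} by Gaussian elimination. Nothing beyond the definition of the matrix is taken from the literature.

    import random
    from itertools import product

    def rank_mod_p(M, p):
        """Rank over the prime field F_p by Gaussian elimination."""
        M = [row[:] for row in M]
        rank = 0
        for col in range(len(M[0])):
            piv = next((i for i in range(rank, len(M)) if M[i][col] % p), None)
            if piv is None:
                continue
            M[rank], M[piv] = M[piv], M[rank]
            inv = pow(M[rank][col], p - 2, p)  # Fermat inverse, p prime
            M[rank] = [(v * inv) % p for v in M[rank]]
            for i in range(len(M)):
                if i != rank and M[i][col] % p:
                    f = M[i][col]
                    M[i] = [(a - f * b) % p for a, b in zip(M[i], M[rank])]
            rank += 1
        return rank

    p, n = 3, 3
    domain = list(product(range(p), repeat=n))        # Z_p^n, so N = p^n = 27
    random.seed(1)
    f = {x: random.randrange(p) for x in domain}      # an arbitrary f_n: Z_p^n -> Z_p

    A = [[f[tuple((a + b) % p for a, b in zip(x, y))] for y in domain] for x in domain]
    print("N =", len(A), " rank over F_3 =", rank_mod_p(A, p))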

Unfortunately, the polynomial method for cap sets shows that no such attempt can work. Zeev Dvir and Benjamin Edelman proved that no matter how {f} and {\epsilon} are chosen, there is {\delta > 0} such that for all large enough {n},

\displaystyle  R_{A_{f_n}}(N^{1-\delta}) < N^{1+\epsilon}.

This means we cannot get {R_{A_{f_n}}(r) = \Omega(N^{1+\Omega(1)})} for {r = \frac{N}{\log\log N}}, indeed, far from it. What is most curious to us is that for matrix multiplication, the cap-set related technique frustrates a better complexity upper bound, whereas here it frustrates a better lower bound.

Open Problems

What further applications can we find for the polynomial method?

Comments
  1. February 28, 2021 2:14 am

    I cannot resist pointing out that Josh Grochow was a postdoc at the Santa Fe Institute.

