
What Is The Object?

September 22, 2012


A tale of two approaches to solving open problems

Jeff Kahn and Gil Kalai are among the top problem solvers in the world, and each has won notable prizes in his great career. Gil just recently visited Georgia Tech, and gave a wonderful talk on the interplay of Boolean functions and harmonic analysis. Gil also is involved in our ongoing debate on the feasibility of quantum computers.

Today Ken and I wish to talk about two approaches to problem solving. Neither is better than the other, but they are fundamentally different.

I thought about this topic while listening to Gil’s talk. My favorite among his solved problems is the one he did with Jeff, resolving the Borsuk Conjecture. In 1932 Karol Borsuk asked the following question for which he had proved “yes” in the case {n=2}:

Die folgende Frage bleibt offen: Lässt sich jede beschränkte Teilmenge {E} des Raumes {\mathbb{R}^n} in {(n + 1)} Mengen zerlegen, von denen jede einen kleineren Durchmesser als {E} hat?

Ken finds Wikipedia’s translation to be optimal:

The following question remains open: Can every bounded subset {E} of the space {\mathbb{R}^n} be partitioned into {(n + 1)} sets, each of which has a smaller diameter than {E}?

Gil and Jeff showed that the widely-believed positive answer {n+1} was off by more than a bit—the correct answer grows exponentially in {\sqrt n}: it is at least {1.2^{\sqrt n}} for all large enough {n}.
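In symbols, writing {b(n)} (our shorthand, not theirs) for the least number of pieces that always suffice in {\mathbb{R}^n}:

    % b(n) = least number of parts that always suffice in R^n (our shorthand).
    % Borsuk's question:  is b(n) <= n + 1 ?
    % Kahn--Kalai:        b(n) >= 1.2^{\sqrt{n}} for all large enough n.
    \[
      b(n) \;\ge\; 1.2^{\sqrt{n}} \;\gg\; n + 1 .
    \]

For scale (an illustrative computation of ours, not a claim about where "large enough" kicks in): at {n = 10^4} the bound gives {1.2^{100} \approx 8.3 \times 10^7}, dwarfing {n + 1 = 10001}.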

The Guess

Suppose one wishes to solve some open problem X. There are many ways to attack a problem, as many ways as there are problems. One critical issue for all open problems is the guess: do you try to prove X is true or try to prove X is false? On exams we often say:

Consider the problem (…) Prove that it is true.

Unfortunately real-world problems are not so labeled. They are more like exam questions of the form:

Consider the problem (…) Prove that it is true or supply a counter-example.

These questions are much harder, and I usually avoid them on exams I create for my students. On an exam, if you guess the wrong way you may waste valuable time, which could make you miss other questions and perhaps fail the exam.

On an open problem—not an exam problem—you do need to make the guess. Are you going to try to prove X or try to disprove X? For some problems we all seem to believe that we know the right guess, but I am not so sure. Just be aware that a wrong guess means that you will never succeed in getting a proof. You can hope that even with a wrong guess your “failure” will still lead to insights that may advance the field in general, and perhaps lead to a resolution of X anyway.

We have previously blogged about cases where the math and theory community guessed wrong. The problems in question were solved only after a few very smart people guessed the right way, since initially almost all of the community believed in the other direction. For Gil and Jeff on the Borsuk conjecture it was important that they first guessed right. Yes they were clever and had a deep insight into the problem, but making the right guess strikes me as coming before having that insight.

Help In Guessing Right

As stated, the conjecture was about arbitrary subsets of {\mathbb{R}^n}. Ken and I know people with brilliant visualization of objects in {\mathbb{R}^n} for high {n}—for instance Harold Kuhn “saw” a tangency in 36-dimensional space from an example involving {6 \times 6} matrices in Ken’s Princeton senior thesis—but we are not among them, and perhaps Jeff and Gil are not either. However, as noted on the first page of their paper (linked from here), there was a re-formulation of a major case of Borsuk’s conjecture:

Conjecture. Let {K} be a family of {k}-subsets of {\{1, 2, ..., n\}} such that every two members of {K} have {t} elements in common. Then {K} can be partitioned into {n} parts so that in each part every two members have {(t + 1)} elements in common.

This is now stated in terms of the objects that a combinatorialist like Gil or Jeff understands well: discrete sets and systems of subsets and partitions, vectors and codes… Phrasing it this way primed their intuition that this was false. Once they found a construction for a strong counterexample, their proof was short. In the paper they quote Moby Dick on how they were led to it:

However contracted, that definition is the result of expanded meditation.

Then number theory, part of the landscape of any discrete mathematician, put a finishing touch on their exponential estimate by an application of the Prime Number Theorem.
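The reformulation can be made tangible with a toy brute-force checker (entirely our own sketch in Python, not anything from the Kahn–Kalai paper, and reading "have {t} elements in common" as "at least {t}"). It tests, for tiny parameters only, whether a family can be split into parts in which every two members share at least {t+1} elements:

    from itertools import combinations

    def pairwise_share(fam, t):
        """True if every two members of fam share at least t elements."""
        return all(len(a & b) >= t for a, b in combinations(fam, 2))

    def partitionable(fam, num_parts, t1):
        """Brute force: can fam be split into at most num_parts classes,
        each pairwise sharing at least t1 elements? Exponential time,
        so toy instances only."""
        fam, classes = list(fam), []
        def place(i):
            if i == len(fam):
                return True
            for c in classes:                  # try each existing class
                if all(len(fam[i] & s) >= t1 for s in c):
                    c.append(fam[i])
                    if place(i + 1):
                        return True
                    c.pop()
            if len(classes) < num_parts:       # or open a new class
                classes.append([fam[i]])
                if place(i + 1):
                    return True
                classes.pop()
            return False
        return place(0)

    # Toy demo: the 3-subsets of {1,...,5} that contain the element 1
    # pairwise share >= 1 element; do n = 5 parts suffice for sharing >= 2?
    n, k, t = 5, 3, 1
    fam = [set(c) for c in combinations(range(1, n + 1), k) if 1 in c]
    assert pairwise_share(fam, t)
    print(partitionable(fam, n, t + 1))        # True for this easy family

A counterexample hunter runs the search the other way: look for a family admitting no such split, which is what a construction in high dimension must achieve.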

Hence we will talk about what approach you take after you make your guess. This may come hand-in-hand with the insight, but importantly it also often comes before it. It may also be classed as a matter of attitude, of how much novelty one expects is required to reach the end. But we’ll reference the classic divide between “procedural” and “object-oriented” programming languages.

The Procedural Approach

Okay, so you have guessed that X is true, and you start out to try to prove X. One method is what I think of as the standard procedure. You bring to bear on X all the methods, lemmas, tricks, and ideas you know that seem related to X. If you are lucky you will eventually see a pattern that will lead to a proof.

The proof may be short, it may have modest length, it may be high-level, it may be long, it may be quite technical. The point is, you execute it as a series of actions based on objects you know and tools you try to sharpen. In any case you will be happy: you have solved your problem. Congrats. Time to move on to another question.

The Object Approach

Sometimes the existing tools are insufficient to solve a problem, even if you have guessed right. You might feel you should work on another problem. The object-based approach basically agrees: set the original problem aside for a while, and work instead on building something new.

In object-oriented programming, the initial emphasis is not so much to do what you want your ultimate program to do, but rather to describe and model all the data you will be dealing with. Often this means creating new or compound objects for the data, or making a new class hierarchy of which the concepts you initially considered are only a part. You may do a lot of extra work, coding objects that are not used in the final product. However, it is often easier to do modeling of information than to process it. Once you have modeled a lot of things, then you can start to think about exactly what you want to do with them.
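For a whiff of what we mean, in Python and with class names that are purely our own illustration, one might first model the raw material of Borsuk's problem and only later decide what to prove about it:

    from dataclasses import dataclass
    from itertools import combinations
    from math import dist

    @dataclass(frozen=True)
    class PointSet:
        points: tuple                  # each point is a tuple of coordinates

        def diameter(self) -> float:
            return max((dist(p, q) for p, q in combinations(self.points, 2)),
                       default=0.0)

    @dataclass(frozen=True)
    class Partition:
        parts: tuple                   # a tuple of PointSet objects

        def shrinks(self, whole: PointSet) -> bool:
            """Does every part have strictly smaller diameter than the whole?"""
            d = whole.diameter()
            return all(part.diameter() < d for part in self.parts)

    # The questions come only after the modeling, e.g. for the unit square:
    square = PointSet(((0, 0), (0, 1), (1, 0), (1, 1)))
    halves = Partition((PointSet(((0, 0), (0, 1))),
                        PointSet(((1, 0), (1, 1)))))
    print(halves.shrinks(square))      # True: 1.0 < sqrt(2)

The point is that the classes come first; the questions come later.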

In some applications, such as simulation and game design, this is called world-building. Ken and I are thinking about this because Shinichi Mochizuki’s new claimed proof of the ABC Conjecture—and much more—appears to be a huge example. Here is a quotation by Minhyong Kim from the New York Times article, which we linked from our own item:

“He’s taking what we know about numbers and addition and multiplication and really taking them apart,” Dr. Kim said. “He creates a whole new language—you could say a whole new universe of mathematical objects—in order to say something about the usual universe.”

An ABC of New Objects?

Many others commenting around the Web say they are having trouble because they cannot understand Mochizuki’s procedures—and this is because they have to learn his new objects first. Here is part of what Jordan Ellenberg says about them:

Mochizuki argues that it is too limiting to think about “the category of schemes over Spec Z,” as we are accustomed to do. … Mochizuki argues that [this is] insufficiently rich to handle certain problems in Diophantine geometry. He wants us instead to think about what he calls the “species” of schemes over Spec Z, where a scheme in this sense is not an abstract object in a category, but something cut out by a formula. In some sense this view is more classical than the conventional one, in which we tend to feel good about ourselves if we can “remove coordinates” and think about objects and arrows without implicitly applying a forgetful functor and referring to the object as a space with a Zariski topology or—ptui!—a set of points.

Well, nothing can be more “real” than a set of points, but so much conceptual building has been done on top of them already that they can be deprecated. Also when you take a first look at Mochizuki’s outlines, the ABC Conjecture is barely prominent—so many other things are defined and dealt with first, in 512 total pages.

Some Problems That Birthed New Objects

Let us think back to a few cases where a proof introduced a new kind of object that spurred research in itself, perhaps even eclipsing the problem the prover originally solved.

    • The problem: Show 5th-degree equations can/cannot be solved in closed form.
      The objects: Groups. (See the sketch just after this list.)

    • The problem: Show {\mathsf{NC^1}} can/cannot be simulated by bounded-width branching programs.
      The objects: Programs over groups.

    • The problem: Riemann Hypothesis extended to finite fields.
      The objects: Varieties over finite fields, and more…

    • The problem: Improve constant-degree expanders; de-randomize logspace?
      The objects: Zig-zag graph products.

    • The problem: Simulate uses of quantum interference in classical (random) polynomial time.
      The objects: Matchgates.
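For the first item, as promised above, here is a small self-contained Python sketch of our own (not anything from the historical record) of how the new object settles the old problem: the general quintic is unsolvable by radicals because the derived series of {S_5} stalls at {A_5} instead of shrinking to the trivial group. Brute force over permutations of 5 points suffices to see the stall:

    from itertools import permutations, product

    def compose(p, q):
        """Composition of permutations as functions: (p o q)(i) = p[q[i]]."""
        return tuple(p[q[i]] for i in range(len(q)))

    def inverse(p):
        inv = [0] * len(p)
        for i, v in enumerate(p):
            inv[v] = i
        return tuple(inv)

    def derived_subgroup(G):
        """Subgroup generated by all commutators a b a^-1 b^-1 of G."""
        H = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a, b in product(G, repeat=2)}
        frontier = set(H)
        while frontier:                    # close under composition
            new = {compose(x, y) for x in frontier for y in H} - H
            H |= new
            frontier = new
        return H

    S5 = set(permutations(range(5)))
    D1 = derived_subgroup(S5)              # the alternating group A_5
    D2 = derived_subgroup(D1)              # A_5 is perfect: [A_5, A_5] = A_5
    print(len(S5), len(D1), len(D2))       # 120 60 60 -- the series stalls

A solvable group's derived series reaches the trivial group; the printed "60 60" shows that {S_5}'s does not, which is exactly the obstruction that Galois theory converts into the unsolvability of the general quintic.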

We speculate that Andrew Wiles’ work on his brilliant solution of the Fermat Equation is probably not of this type. He used all the tools he could from advanced number theory, but did not really create new objects. Thus we are certainly not knocking the “procedural” approach, just noting that the expansive meditative alternative is sometimes needed.

Declarative or Functional or Logic-based?

Procedural and object-oriented are not necessarily opposed concepts; for instance C++ and related languages support both. In programming language theory, the concept most directly contrasted with procedural is declarative, which is allied with the idea of functional programming as cultivated in John Backus’s famous 1978 Turing Award lecture and article, and with logic programming languages such as Prolog.

The idea of these, however, is more that it should suffice to give a full logical specification of a problem, and then procedures for solving it will largely take care of themselves. There is some echo of this idea in specifications for PolyMath problems, and in the notion of a “Watson” for math. However, we preferred to focus on the imperative to create and describe new objects.
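As a toy illustration of that ideal (ours, assuming the third-party z3-solver Python package, with an arbitrary little equation standing in for a "problem"), one states only the what and lets the solver supply the how:

    # Declarative style: specify constraints; the search procedure is the
    # solver's business, not ours. Assumes `pip install z3-solver`.
    from z3 import Int, Solver, sat

    x, y = Int('x'), Int('y')
    s = Solver()
    s.add(x > 0, y > 0, x*x + y*y == 5**2)   # spec: a positive lattice point on a circle
    if s.check() == sat:
        print(s.model())                     # e.g. [x = 3, y = 4]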

Open Problems

What are some other good examples of problems that are most known today for creating new objects?

31 Comments
  1. September 22, 2012 10:12 pm

    A bit of a “Siamese Post” situation… by rights the other head should be Alexander Grothendieck, so as to preface the discussion of Mochizuki’s new work by defining Spec and schemes. He is after all the living master of mathematical object creation, and we intended to write about this. But that description in tandem with Dick’s point about “guessing” would have made this too long a post. Time and events also intervene—for instance the titles on “quantum” and “chocolate” last fall were supposed to segue into a post on a paper by Karl Svozil on “quantum chocolate balls”, but the “Omega” developments more-than-filled the required time to write it…

    • Serge permalink
      September 23, 2012 3:44 pm

      I’d love to read your account of this topic: being no specialists, you’d make it particularly clear. 🙂

      In computer science the emphasis is on algorithms. Maybe this is the reason why there are so many unsolved problems: lack of appropriate objects…

  2. John Sidles permalink
    September 23, 2012 9:12 am

    ★ The problem  Are two groups, each given as a multiplication table, isomorphic?

    ★ The objects  The mathematical attributes of naturality and universality.

    ★ The key references  Saunders Mac Lane and Samuel Eilenberg, Natural isomorphisms in group theory (1942) and General theory of natural equivalences (1945).

    ★ The sequelae  Alexander Grothendieck (and many others)/algebraic geometry, Vladimir Arnold (and many others)/geometric dynamics, and now Shinichi Mochizuki (and soon-to-be many others?)/diophantine geometry.

    ★ The confection  A highly-rated question on the MathOverflow stackexchange is “What does the adjective ‘natural’ actually mean?” In this regard, it is notable that Shinichi Mochizuki’s preprint employs the word “natural” and its derivatives on more than six hundred occasions. Amazing. Or perhaps, yikes! 🙂

    ★ A debatable proposition  A strategically crucial theme of the 21st century STEM enterprise is the extension of canonical 20th century texts, not by appending chapters of new technical findings, but by prepending chapters that restate 20th century postulates in the natural/universal/global language of the naturalized/universalized/globalized 21st century STEM enterprise.

    Within this worldview, researchers like Grothendieck, Eilenberg, Mac Lane, Arnold, Mochizuki (and many others) are regarded as visionary 20th century explorers of mathematical territories that tens of thousands of 21st century STEM researchers now are moving to settle, along pioneering trails of mathematical naturality and universality.

    ★ What does this mean?  It means that in coming decades, one extra year will be added to undergraduate STEM degree programs. During this year students will focus upon the mathematical ideals of naturality and universality, so as to empower students to construct for themselves the “missing first chapters” of the 20th century’s canonical STEM texts.

    • September 23, 2012 12:14 pm

      Dear John, let me raise my objection to adding one extra year to undergraduate programs of any kind, for any purpose.

      • John Sidles permalink
        September 23, 2012 3:11 pm

        Dear Gil, it is true that many universities that formerly had five-year STEM curricula — Cornell University, for example — have switched to four-year curricula … for reasons that you could articulate at least as clearly as I.

        Perhaps a reasonable resolution is to be found in a highly-rated on-line Amazon Review of Spivak’s Calculus On Manifolds

        The Mathematician’s Calculus

        When you are in college, the standard calculus courses will teach you the material useful to engineers. If you want to become a mathematician (pure or applied), you must pretty much forget the material in these courses and start over.

        That’s where you need Spivak’s “Calculus on Manifolds”.

        Spivak knows you learned calculus the wrong way and devotes the first three chapters in setting things right.

        Along the way he clears all the confusion arising from inconsistent notation between partial derivatives, total derivatives, Laplacians, and the like.

        Therefore, my modest proposal is that STEM degree curricula be universally reduced to three years, in which students study only the initial chapters of the 20th century’s canonical STEM textbooks … and those initial chapters are rewritten with a Spivak-style emphasis upon mathematical naturality and universality.

        Would naturality-centric three-year STEM curricula yield a better overall education than today’s four-year STEM curricula? Perhaps not. Yet plausibly, naturality-centric STEM curricula would be comparable in result … and be deeper and cheaper and faster!

        Oh yes … and more fun too!   🙂

      • September 23, 2012 7:26 pm

        Wow!

  3. September 23, 2012 10:52 am

    I am wondering if the procedural/object-oriented approaches are equivalent to the classification of mathematicians into problem solvers and theory builders by Tim Gowers?

    Problem solvers attack a problem from the “bottom-up”, by combining and pushing known mathematical tools to their limit.

    Theory builders will first develop a powerful and rather abstract theory. They will then trivialize the initial problem by showing that it is a special case of some lemma.

  4. September 23, 2012 2:03 pm

    Computational lower bounds -> Circuit models for computation.

  5. Uli Fahrenberg permalink
    September 23, 2012 5:19 pm

    Pip says it already, kind of, and also “Me” above, citing Tim Gowers: the distinction you make between the procedural and object-oriented approaches is the same one Grothendieck makes between “nut crackers” and “the rising sea”; see e.g. McLarty’s take on this here: http://www.math.jussieu.fr/~leila/grothendieckcircle/mclarty1.pdf

  6. Serge permalink
    September 24, 2012 6:45 am

    Very interesting article. In fact schemes owed as much to Grothendieck as to some of his colleagues: they were already in the air. But his mastery of functorial ideas made him the right man in the right place: he alone could make them the main object of algebraic geometry.

  7. anonymous permalink
    September 24, 2012 12:52 pm

    Zig-zag product does not improve constant-degree expanders. It gives a construction that is easy to analyze.

    • September 24, 2012 9:47 pm

      True, the bounds are not sharpened, but I meant “improve” in the sense of construction complexity (“explicitness”) and generally, per the tenor of Avi’s 2000 DIMACS talk on them, which I heard. Agreed with Nick Mancuso above; not sure about what “1ly1” says below—in any event “new way of thinking” goes with the idea of sculpting new objects. With Grothendieck there are also topoi.

      • September 25, 2012 1:06 pm

        I was saying that varieties were already used. Your post said the objects for your third question were “varieties over finite fields.” That is incorrect. The proper object was a Weil cohomology theory. I am unsure how you will frame a philosophy or line of thought into an object framework. If so, maybe for every question one can “search” for such an object?

      • September 25, 2012 1:09 pm

        One can learn the proofs of the Weil conjectures without knowing topoi.

  8. September 24, 2012 2:09 pm

    Things are not as simple as you are suggesting. For example, take the Riemann Hypothesis extended to finite fields. It is absolutely INCORRECT to call varieties over finite fields the relevant object. It is true that Weil worked with varieties and proved his conjectures for curves. But varieties were already defined in the 19th century, and the Zariski topology was already there by 1950. The Zariski topology was not fine enough. It was Grothendieck’s invention and definition of open sets via covers, and his relative point of view, which changed the status quo, paving the way for étale cohomology. It is not just a matter of plugging in a new object and the engine is created. It is a new way of thinking. That is why people call only a few teachers in the history of humanity a master of thinking, and Grothendieck was one of them.

    • Serge permalink
      September 24, 2012 3:02 pm

      I more or less agree regarding Weil’s viewpoint… but if you look at Serre’s definition of varieties – the first to make use of the structure sheaf – you might find out it wasn’t so difficult to make the next move by replacing finite type reduced algebras by arbitrary commutative rings. Moreover it was Cartier’s initial idea – not Grothendieck’s. In my humble opinion, his genius rather consisted in foreseeing immediately how much water he could draw from this well… and *that* was a hell of a lot!

      • September 24, 2012 3:51 pm

        Pardon me. I was under the assumption that it was Grothendieck who brought in the idea of using covers for open sets. In hindsight, maybe that was the only thing that could have been done with the technology at that time.

      • Serge permalink
        September 24, 2012 4:07 pm

        I agree with “master of thinking” to qualify Grothendieck. He’s almost the same age as Serre and their styles are quite complementary. Maybe we could compare them to Hardy–Littlewood… or at least Turing–von Neumann! 🙂

      • September 24, 2012 5:04 pm

        I would probably disagree with the comparison. The fields of endeavor also have to be taken into account. Before G, AG was a curiosity and an unfashionable subject done by selected mathematicians. After Weil, and most importantly G, the whole field got a lift. The techniques, scope, and depth of approaches are probably incomparable. G stands alone, at least until Voevodsky and maybe now Mochizuki.

      • Serge permalink
        September 24, 2012 5:49 pm

        I certainly am partial to S, as I was lucky enough to attend some of his lectures at the Collège de France. I promise you he didn’t usurp his Fields Medal. 🙂 He was on the way to proving Weil’s conjectures but Deligne was faster. He did have a long and fruitful collaboration with G on AG. However, it was G who actually created a new world.

      • September 25, 2012 1:33 am

        Lucky you.

  9. September 24, 2012 4:00 pm

    It is very pleasing to see the Borsuk problem mentioned here. Some years ago the problem and I were featured in a post on surprises in mathematics and theory, and I immediately showed it to my wife and children (and a few friends) who were, like me, very pleased. Unlike some other forms of recognition which require some tedious explanation, this was immediately clear. Scientific blogging is a new thing that we are examining, but maybe mathematical blogs can make it easier to get connected with your family about your work. (This sounds good, but then again, maybe it is instrumental for progress in mathematics that mathematicians cannot get connected with their families regarding their work?) Anyway, I thought I would use this opportunity to tell a few things about Borsuk’s problem itself. I write about the problem from time to time on my blog, but it will be nice to bring some problems to the attention of readers of this big-league blog.

    A very surprising feature of our disproof was how easy the proof is. I think we were a little disappointed, since we felt that we could have cracked the problem even if it had been a little harder. We had an earlier result: a disproof of a strong form of the case {t=1} of Larman’s conjecture (this strong form was conjectured by Seymour), which was much, much harder. Larman’s conjecture (the conjecture about set systems outlined in the post) is still open and very interesting for {t=1}. It is a dual of a sort to the famous Erdős–Faber–Lovász conjecture. Larman also asked if the assertion of Borsuk’s conjecture holds for 2-distance sets, and this question is also still open.

    Of course, our disproof of Larman’s (and hence Borsuk’s) conjecture relies on the Frankl–Wilson theorem, a wonderful result which has various remarkable applications. You can instead use the Frankl–Rödl theorem, which is also a wonderful result.

    So let me mention two questions which are still outstanding:

    Is there an exponential (really exponential) lower bound?

    Can we replace 1.2^{\sqrt d} by c^d for some c>1? A result of Schramm tells you that you can cover every convex set in R^d with (3/2)^{d/2} sets of smaller diameter (and Bourgain and Lindenstrauss showed that you can even take these sets to be balls). But we do not have a matching lower bound.

    One good direction for an exponential lower bound, following a suggestion by Yuval Peled and some discussion with him and Simon Litsyn, is based on algebraic-geometry codes. Algebraic-geometry codes were invented by Goppa and have the remarkable property of beating the Gilbert–Varshamov bound for large alphabets. They can also have exponentially many words of minimal weight, and most likely (although we need to check it) exponentially many words of maximum weight. To be in business we need an analog of the Frankl–Wilson theorem. This looks hard but perhaps not impossible.

    Can the Borsuk Conjecture be saved?

    We know that Borsuk’s conjecture is incorrect, but can we say that in some sense counterexamples are abnormal? There is a proposal for how to save the conjecture which is easier to describe for a finite set of points. Let A be a finite collection of points in R^d with maximum distance 1. We say that this is a genuine set of diameter 1 if for every perturbation of the distances between pairs of points at distance one, we can perturb the points to realize the perturbed distances. The saved Borsuk conjecture asserts that a genuine finite set of points in R^d of diameter 1 can be covered by d+1 sets of smaller diameter.

  10. September 28, 2012 3:23 am

    I think you exaggerate the extent to which a yes/no problem becomes harder if you don’t know the answer. Suppose, for example, that the problem is of the form, “Is it the case that Q(X) for every X such that P(X)?” and that the answer happens to be no. Does it matter if your initial hunch is that the answer is yes and you attempt to prove the statement? Often it hardly matters at all. You start trying to prove that P(X) implies Q(X), and in the course of doing so you find some relatively simple condition R(X) that you are not given and that seems to be needed to get the proof to work. And often it turns out that if you pick the most natural X you can think of such that R(X) does not hold (but P(X) does), then it will be a counterexample to the original problem. I know you allowed for this possibility in the post — I just feel that you downplayed it too much. I think I read somewhere Gil saying slightly provocatively that trying to prove a statement and trying to prove its negation are more or less the same thing. In many instances I agree with that.

Trackbacks

  1. Server Shopping; eBonds; AMD Smells like Freedom; German Traders like Rules; Panic of the Industry Pros; and Guessing « Pink Iguana
  2. Why We Lose Sleep Some Nights « Gödel’s Lost Letter and P=NP
  3. Mounting or Solving Open Problems « Gödel’s Lost Letter and P=NP
  4. Psst—It’s The Definition | Gödel's Lost Letter and P=NP
  5. Alexander Grothendieck 1928–2014 | Gödel's Lost Letter and P=NP
  6. Open Problems That Might Be Easy | Gödel's Lost Letter and P=NP
  7. How to Check a Proof | Gödel's Lost Letter and P=NP
  8. Michael Cohen 1992-2017 and Vladimir Voevodsky 1966–2017 | Gödel's Lost Letter and P=NP
