
Cook’s Class Contains Pi

March 15, 2009


Cook’s class and the BBP algorithm for Pi


Steve Cook won the Turing Award for his famous paper on SAT that started the P=NP problem. He has done many other wonderful things in many areas of computational complexity; I especially love his work on proof systems. I am proud that my Cook number is {1}.

Cook is also famous for having a complexity class named after him: {\mathsf{SC}}. Steve’s Class is the set of languages accepted in polynomial time and poly-logarithmic space. Think of {\mathsf{SC}} as the set of languages that can be recognized efficiently, but with very small space. Today, as the problems we study contain vast amounts of data and the objects we wish to understand keep growing, Steve’s Class has become even more important.

When I was an assistant professor at Yale University, David Dobkin and I would often argue about which is better: having {X} named after you or having {Y} named after you. For example, would you like a constant (Euler), an inequality (Jensen), or an equation (Schrödinger) named for you, and so on? We finally decided the best was a “universe”. If you do not know the universe we were thinking of: it is Herbrand’s Universe, named after the logician Jacques Herbrand. Having a complexity class named after you, as Steve has, is still pretty cool.

Computing {\pi}

Computing {\pi} fast involves two ideas: first, a fast-converging summation, and second, efficient algorithms for high-precision arithmetic operations. For example, the methods use series like the following, due to S. Ramanujan,

\displaystyle  \frac{1}{\pi} = \frac{2\sqrt 2}{9801} \sum_{k=0}^{\infty} \frac{(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}.

These and other remarkable formulas are the basis of modern computations of {\pi}. You might ask: how does one find such formulas? I have no idea; that is definitely not my area of expertise.

The problem with these fast algorithms is not speed: they run in time almost linear in the number of bits of {\pi} that you want. The problem is the amount of space that they require. Prior to 1995, algorithms used linear space. So if you wanted to compute the first trillion bits of {\pi}, the time cost was reasonable; however, the space cost was huge.

In 1995, David Bailey, Peter Borwein, and Simon Plouffe discovered a beautiful algorithm that changed all this. Their algorithm–called BBP–computes the {n^{th}} bit of the number {\pi=3.14\dots} without first computing any of the previous bits. Before their work, most thought that you needed to compute the earlier bits of {\pi} to get the later bits. They shattered that conventional wisdom. If you have read any other parts of my blog, then you know this is one of my main themes: we all guess wrong sometimes. What about P=NP?

From the viewpoint of complexity theory, they showed–almost, as we will see–that the problem of computing the {n^{th}} bit of {\pi} is in {\mathsf{SC}}. The BBP algorithm has an interesting history and some controversy surrounding it. See this for a view from Plouffe about it. I only point this out to show you that the story behind results can be quite interesting. Indeed.

The BBP Algorithm

There are two things about the BBP algorithm that I find intriguing and want to share with you:

  • The algorithm uses the exponential trick that we sometimes take for granted as computer scientists;
  • The algorithm has a bug: it is not really an algorithm. Well, it is almost an algorithm.

The first point is that the algorithm uses the following:

Lemma 1 The value of {a^{M}} modulo {N} can be computed in time polynomial in the number of bits in {a,M,N}.

This well-known fact–to computer scientists–is of central importance to many areas. It is used in cryptography, in number theory, and elsewhere. It is interesting that they felt the need to explain the exponential trick in some detail. They not only give the detailed code of this trick, they give the following example:

\displaystyle 3^{17} = 3((((3^{2})^{2})^{2})^{2})

“thus producing the result in only 5 multiplications, instead of the usual 16”. I would like to talk more about this important idea in a later post.
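
To see the trick in code, here is a minimal Python sketch of the square-and-multiply method behind Lemma 1 (my illustration, not the authors’ code; Python’s built-in pow(a, M, N) implements the same idea):

def pow_mod(a, M, N):
    """A sketch of Lemma 1: compute a^M mod N by repeated squaring.

    Only O(log M) multiplications are needed, each on numbers smaller
    than N, so the running time is polynomial in the bit lengths of a, M, N.
    """
    result = 1
    a %= N
    while M > 0:
        if M & 1:                      # low bit of the exponent is 1
            result = (result * a) % N
        a = (a * a) % N                # square, moving to the next bit
        M >>= 1
    return result

# The example above: 3^17 via 4 squarings and 1 extra multiplication.
assert pow_mod(3, 17, 10**6) == 3**17 % 10**6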

Let’s now turn to explaining how their algorithm works. Rather than compute the bits of {\pi}, it is easier to explain the algorithm for the simpler case of the number {\ln 2}. We first note that

\displaystyle  \ln 2 = \sum_{k=1}^{\infty} \frac{1}{k2^{k}}

is a well known summation–to those who know such sums. Their idea is this: to compute the {n^{th}} bit of a number {\alpha} one needs only to compute the fractional part of {2^{n}\alpha}. Denote the fractional part of {x} by {\{ x \}}. Suppose that {\beta = \{ 2^{n}\alpha \} }. Now let

\displaystyle  \alpha = m + \sum_{k=1}^{\infty} \frac{a_{k}}{2^{k}}

where {m} is an integer and {a_{k} \in \{0,1\}}. Then, {\beta} is equal to

\displaystyle  \{ \frac{a_{n+1}}{2} + \frac{a_{n+2}}{2^{2}} + \dots \}

since all the other terms are integers; here we use that {\{ x + 1 \} = \{ x \}}. If {\beta \le 0.5 - \epsilon}, then {a_{n+1} = 0}; if {\beta \ge 0.5 + \epsilon}, then {a_{n+1} = 1}. Here {\epsilon} will be selected later on.

Thus, to find the {n^{th}} bit of {\ln 2} we need only compute the fractional part of {2^{n}\ln 2}:

\displaystyle  \{ 2^{n} \ln 2 \} = \Big\{ \sum_{k=1}^{n} \Big\{ \frac{2^{n-k}}{k} \Big\} + \sum_{k=n+1}^{\infty} \Big\{ \frac{2^{n-k}}{k} \Big\} \Big\}.

Call the first sum {A} and the second {B}. The key insight is that each of these sums can be computed to high precision in polynomial time and small space.

The sum {A} is equal to,

\displaystyle  \sum_{k=1}^{n} \frac{a_{k}}{k}

for {a_{k} = 2^{n-k} \bmod k}. Note, this uses the fact that

\displaystyle  \{ \frac{x}{k} \} = \{ \frac{x \bmod k}{k} \}.

Clearly, this can be computed to {p} bits of precision in polynomial time and in space at most {p}. Note, this is where they need to use the exponential trick: it is important that the values of {a_{k}} are small and easy to compute. The sum {B} is even easier to compute. It is almost a geometric series, so the terms decay fast; thus, computing it to {p} bits of precision is easy in polynomial time and {p} space.

They then have the fractional part of {2^{n} \ln 2}: if it is larger than {0.5+\epsilon}, they know that the {n^{th}} bit is {1}; if it is smaller than {0.5-\epsilon}, they know that the {n^{th}} bit is {0}. A very pretty idea.
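
Here is the whole procedure for {\ln 2} as a small Python sketch (again my illustration, not their code; the tail length and the tolerance are arbitrary choices here, and double-precision rounding limits it to moderate {n}):

def lntwo_bit(n, tail_terms=48):
    """Sketch: the bit a_{n+1} of ln 2, via the fractional part of 2^n ln 2.

    Returns 0 or 1, or None when the result falls within epsilon
    of 0.5 -- exactly the ambiguous case discussed below.
    """
    frac = 0.0
    # Sum A: the terms (2^(n-k) mod k)/k, using the exponential trick.
    for k in range(1, n + 1):
        frac += pow(2, n - k, k) / k
        frac -= int(frac)              # keep only the fractional part
    # Sum B: the tail decays geometrically, so a few dozen terms suffice.
    for k in range(n + 1, n + tail_terms):
        frac += 2.0 ** (n - k) / k
    frac -= int(frac)
    eps = 2.0 ** -30                   # tolerance; an arbitrary choice here
    if frac >= 0.5 + eps:
        return 1
    if frac <= 0.5 - eps:
        return 0
    return None                        # too close to 0.5 to decide

Since {\ln 2 = 0.10110\dots} in binary, the calls lntwo_bit(0), lntwo_bit(1), and so on should return 1, 0, 1, 1, 0, and so on.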

A “small detail”: what are the values of {p} and {\epsilon}? Recall {p} is the precision of the calculation. In practice they have found that {p=128} is quite enough, with {\epsilon} set to about {2^{-p}}. From a complexity point of view we need to argue that making {p} poly-logarithmic in {n} suffices. They do not supply this argument. We will get back to this issue shortly; it is the reason the algorithm may not work.

What About {\pi}

The BBP method is the same for {\pi}: they replace the summation for {\ln 2} by the following summation, which yields the hexadecimal digits of {\pi}:

\displaystyle  \pi = \sum_{k=0}^{\infty} \frac{1}{16^{k}} \big( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \big).

Part of the exciting story of their algorithm is how they found this summation. It was not known to anyone; they had to go out and discover it. How they did this is a long story that I will talk about another day. They used guessing and a clever search method based on lattice algorithms.

Note, the structure of their summation is of the same type as that for {\ln 2}. In particular, it is of the form:

\displaystyle  \sum_{k=0}^{\infty} \frac{r(k)}{s(k)b^{k}}

where {r(k)} and {s(k)} are polynomials and {b>1} is an integer. All such sums can be used to get the {n^{th}} base-{b} digit of the value of the sum.
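
As an illustration, the same recipe in Python for {\pi} with {b=16} (my sketch; each hexadecimal digit packages four bits, and floating-point error again limits it to moderate {n}):

def pi_hex_digit(n, tail_terms=48):
    """Sketch: the (n+1)-st hexadecimal digit of pi after the point,
    read off the fractional part of 16^n * pi via the BBP summation."""
    def part(j):
        # Fractional part of 16^n * sum_k 1/(16^k (8k+j)).
        s = 0.0
        for k in range(n + 1):
            s += pow(16, n - k, 8 * k + j) / (8 * k + j)
            s -= int(s)                # keep only the fractional part
        for k in range(n + 1, n + tail_terms):
            s += 16.0 ** (n - k) / (8 * k + j)
        return s - int(s)
    x = 4 * part(1) - 2 * part(4) - part(5) - part(6)
    return int((x % 1.0) * 16)         # leading hex digit of the fraction

Since {\pi = 3.243F6A88\dots} in hexadecimal, the first few calls should return 2, 4, 3, 15 (F), and so on. The same ambiguity issue discussed next applies here, digit by digit.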

A Bug?

The beautiful algorithm of BBP does not actually work. The problem is simple. In a nutshell: what if the fractional part they get lies between {0.5 - \epsilon} and {0.5 + \epsilon}? Is the bit {0} or is it {1}? They cannot tell. This is the problem. Of course they could increase the precision and lower {\epsilon}, but how far do they have to go? In practice, as I said earlier, this issue seems to be a non-issue. However, to prove that their algorithm is in {\mathsf{SC}} requires a bound. They do not have one.

The problem has to do with long runs of {1}’s in the number whose bits they are computing. For example, suppose that {\ln 2} had a huge number of {1}’s in a row somewhere in its binary expansion. Then how could we tell what the correct bit is? We must determine whether the fractional part is larger or smaller than {0.5}: what happens if it is extremely close to {0.5}? This is the bug in the algorithm. The problem is this:

\displaystyle 0.0111111111111111111\dots

vs

\displaystyle 0.1000000000000000\dots

how do you tell them apart if the run of {1}’s is very long? The following remarkable quote is in Wikipedia:

“There is a vanishingly small possibility that a particular computation targets a section that is subject to small round-off errors being carried for many digits (e.g. in base 10, a long string of zeros or string of 9’s within the constant) and that the error will propagate to the most significant digit, but being near this situation is obvious in the final value produced.”

Excuse me, this is nonsense. The statement “vanishingly small possibility” is a statement of probability theory, and there is nothing random here at all. I agree that it seems the problem may never arise in practice, but that is different. The last point of the quote is correct: if the value of the fractional part is very close to {0.5}, we will know that we are in trouble. But that means that BBP is not really an algorithm.

Open Problems

The obvious open problem is to fix BBP and make it a real algorithm. I have a strong belief that the following should hold:

“Theorem” 2 Computing the {n^{th}} bit of {\pi} is in {\mathsf{SC}}.

The plan of a proof is to use their basic BBP method. However, when one detects a long run of {1}’s, use the fact that {\pi} satisfies a transcendence measure. This means that {\pi} satisfies:

\displaystyle  \Big| \pi - \frac{r}{s} \Big| > \frac{1}{s^{c}}

for integers {r,s} with {s} large enough and some {c>0}. I believe that this can be used to show that the above theorem holds. Roughly, a long run of {1}’s would violate the above inequality for {\pi}.
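
Here is the rough calculation I have in mind (only a sketch; the details need checking). Suppose bits {n+1} through {j} of {\pi} were all {1}. Then rounding {\pi} up at bit {n} gives an integer {r} with

\displaystyle  \Big| \pi - \frac{r}{2^{n}} \Big| \le \frac{1}{2^{j}}.

Taking {s = 2^{n}} in the measure gives {2^{-j} > 2^{-cn}}, that is, {j < cn}: a run starting at bit {n} has length at most linear in {n}. As the comments below point out, this alone yields a linear bound rather than the poly-logarithmic one needed for {\mathsf{SC}}, so more work is required.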

I may write a post working out the details of this argument, or you can give it a try and work it out yourself.

Comments
  1. Alex Maslin
    March 15, 2009 8:55 pm

    There’s a typo in the Ramanujan series for 1/π : the numerator in the sum should be (4k)!(1103 + 26390k) , not (4k!)(1103 + 26390k) .

    • rjlipton
      March 17, 2009 4:14 pm

      Thanks, will fix as soon as can…rjlipton (dick)

  2. March 15, 2009 9:36 pm

    You should definitely take out your frustrations on the offending page by editing it!

    From reading up about Irrationality Measure, and from assuming that the authors of the paper knew about it, it seems you would need another ‘trick’ to prove Theorem 2 in addition to any details. Do you have one in mind?

    Happy pi-boxing day!

  3. Jeremy H
    March 15, 2009 10:01 pm

    It seems that bounding the length of the runs by a transcendence measure is not sufficient (at least not in the straightforward way).

    Suppose that we want to find bit {n}, and the run of 1s goes from {n} to {j}. Then the error is roughly {2^{-j}}, but {s} has size roughly {2^{n}}. We therefore get that {cn > j}, which gives us linear space instead of the desired poly-logarithmic space.

    • rjlipton
      March 17, 2009 4:17 pm

      I left out the details. What I think may be true is this:

      Try BBP on bits n through n+m for large m. Then, if all of them are near 0.5, I think we will get a contradiction with the measure. The point is that you make polynomially many calls to their small-space algorithm. Does that make sense?

  4. Jeremy H
    March 17, 2009 4:56 pm

    That makes a lot of sense. Suppose that the first bit we can definitively decide is bit x. The algorithm can only “fail” if we get 011111…110… or 100000…001…, so either way, x must occur before the end of the run. But once we know the value of a bit within the run, we know which type of run it was. So b_n = 1 - b_x. And by the above calculations, the run is at most linear in length.

    I suppose I should be a bit more formal about x being in the run, but it just gets ugly to type. 🙂

  5. March 25, 2009 1:46 pm

    I read your approach by checking m different bits. The problem that I originally saw is that you need m to be on the order of n, so you have to check pseudopolynomially many values, not polynomially many.

    How do you get the nifty math mode in your comments?

    • rjlipton
      March 25, 2009 4:34 pm

      To get math mode in comments you must use latex. The “trick” is that you must type

      $ latex your math goes here$

      BUT leave no space between the first $ and the latex phrase.

  6. Jeremy H
    March 29, 2009 12:28 am

    daveagp: How does this result in pseudopolynomially many calls?

  7. March 29, 2009 11:13 am

    jeremy: the “polynomial” in poly-time algorithm refers to polynomial in the size of the input, measured in bits. so an algorithm that takes only one integer, n, as input, actually has input size \log(n).

    (e.g. you can add two k-bit numbers using O(k) operations. the value of the numbers is around 2^k… what I mean by this example is that poly-time in the input size is the same as polylog-time in the input value, at least if the input to the algorithm is one integer)

    the proposed algorithm takes time proportional to the input value, which is not polynomial in the input size – this is typically called a pseudopolynomial time algorithm. (another example: factoring a number into primes can be done in pseudopolynomial time using the naive approach of trying all factors one at a time. more info lives at http://en.wikipedia.org/wiki/Pseudo-polynomial_time) it would not be practical to use it to find, say, the 2^100th bit of \pi.

  8. March 29, 2009 11:24 am

    sorry, I should clarify the last sentence: the transcendence measure bound does not guarantee that, a priori, when you run the algorithm to compute the 2^{100}th bit of \pi, you will get definitive bits in any reasonable amount of time.

    that having been said, in actually running the algorithm, the definitive bits may usually come up pretty early – transcendence measure only provides an upper bound, not a lower bound. if we were to assume somehow that pi’s bits were “random,” then it is “likely” that for all inputs up to 2^{X}, you do not need to check more than O(X) bits. this might be what the sketchy wikipedia article was trying to say informally.

  9. Jeremy H
    March 29, 2009 11:28 am

    I had assumed we were encoding in unary to begin with, since the BBP algorithm was already using O(n) time.

  10. March 30, 2009 11:28 am

    Aha – yup, you’re right, I was mistaken on that point. Also, I found in my reading of a certain wiki-like encyclopedia that there is a (1/16)^n factor missing from the formula, which should be
    \displaystyle \pi=\sum_{n=0}^\infty \left(\frac{4}{8n+1}-\frac{2}{8n+4}-\frac{1}{8n+5}-\frac{1}{8n+6}\right)\left(\frac{1}{16}\right)^n
    and found that

    Pi Hex was a project to compute three specific bits of π using a distributed network of several hundred computers. In 2000, after two years, the project finished computing the five trillionth, the forty trillionth, and the quadrillionth bits. All three of them turned out to be 0.

  11. Robert
    February 5, 2010 7:21 pm

    Herbrand isn’t the only person to have a universe named after him, as Alexander Grothendieck also does.

    • rjlipton
      February 6, 2010 9:16 am

      Cool. Did not know—what a great thing.

  12. February 23, 2010 3:04 am

    > Roughly, a long run of 1’s would violate the above inequality for pi.

    Then you have solved a problem that was open for a long time and thought to be exceptionally hard, normality of pi base 2 (negatively). I really doubt it is that simple.
