The World is Digital

October 4, 2009


One of the biggest insights of computer science is that the world is digital


Alan Kay is a brilliant computer scientist who has won the Turing Award, along with many other honors. He is famous for creating the Dynabook concept, the idea that became the laptop, and for creating many programming languages, such as Smalltalk.

Today I want to talk about one of the biggest results in all of Computer Science. No, it is not the Halting Problem, it is not the P=NP question, it is not that Linear Programming is in P, nor any of the many other beautiful results of theory. It is, in my opinion, the realization that the world is digital (WID).

This summer I was at an NSF workshop on Computational Thinking—an idea that Jeannette Wing is behind. During the workshop I said:

In my opinion, one of the key insights in all of Computer Science is that the world is digital.

I also added that this was a relatively recent result, and it has had a tremendous impact on the world.

As soon as I said this, Bill Wulf and Alan Kay both jumped all over me, and continued their comments into the next coffee break. They must have thought that I was crazy or worse—perhaps they still do. Their position was that the WID had been known forever—back to the Greeks—and therefore it was not new, nor was it a deep insight. I disagreed. It’s hard to argue with the past president of the National Academy of Engineering, Bill, and with a Turing Award winner, Alan. But I tried. I did not get them to budge, not at all.

I still think I am right. I will explain why and I hope that you might agree with me, but either way let me know what you think.

The World is Digital

The more original a discovery, the more obvious it seems afterwards—Arthur Koestler, journalist, novelist, social philosopher, and science writer.

The fact that all things can be represented by sequences of zeroes and ones sounds like a simple idea, but I claim that it is deep and profound.

Look around you. You may see, as I do, beautiful trees outside in the backyard. You may hear wonderful sounds: birds chirping; music playing, whether classical, rock, hip hop, or another genre. You may be reading a book, or watching a show on an HD-TV and seeing images of people that are almost lifelike.

Yet no matter how beautiful the images are, no matter how complex the sounds are, they all can be reduced to sequences of bits. The world truly is digital. In a sense, 0 and 1 represent computer science’s basic atomic particles. Information consists of arrangements of 0’s and 1’s. Nothing else is needed. Here computer science differs from modern physics: while physics uses hundreds of “basic particles” to build the world, we need only two.

I believe that the discovery that all things can be viewed as digital is as revolutionary as the notion that the world is made of atomic particles. Yet the WID took half a century longer to discover than even the simplest atomic particles.

Electrons, for example, were discovered by Joseph Thomson in 1897, long before we knew that the WID. Some may scoff and say that it’s obvious that the WID. But I think I can argue that it is not so. Yes, particle physics is a deep theory; however, Computer Science’s insight that everything can be represented digitally is profound.

Who Knew What When?

One way to argue that the WID is profound, I believe, is that it is a recent discovery. If it were “obvious,” then it would have been known for a long time.

• The Ancient Greeks: Greats like Euclid did not know the WID. In all of his famous Elements, there is not one section on boolean functions.

Yet, Alan and Bill said to me that the ancient Greeks already knew that the WID. Of course, the Greeks knew that the square root of 2 was not a rational number. Somehow, Alan and Bill felt that this showed that they knew the WID. I think the Greeks knew many wonderful things, but that they were in the dark with respect to the WID—at least that is my opinion.

A kind of atomic theory was known, without proof, to the Greeks. That the world is made of discrete atoms is not the same, I would argue, as the WID. They did not make the key jump: that is, they did not realize that any object of any kind could be represented as a binary string. It seems obvious to us today, but it was not centuries ago.

• The Classic Mathematicians: The great mathematicians such as Augustin Cauchy, Joseph Fourier, Carl Gauss, David Hilbert, Pierre-Simon Laplace, and Isaac Newton did not know the WID. They mainly studied continuous systems, and when they did work on discrete systems, they worked with the integers. But I do not believe they ever thought about boolean functions, nor do I think that they knew the WID.

• Lewis Carroll: Greats like Lewis Carroll—Charles Dodgson—did not know that the WID. While he is famous for his Alice in Wonderland series of books, during his later years he spent a great deal of time working on symbolic logic. He was especially interested in the theory of syllogisms: a set of premises that lead to a conclusion. Here is one of his examples:

Babies are illogical
Nobody is despised who can manage a crocodile
Illogical persons are despised

He then concluded that

Babies cannot manage crocodiles

Lewis Carroll used a version of Venn diagrams to help his readers understand his logical examples; these diagrams are named after the mathematician John Venn, who introduced them in 1880. Carroll worked out a method of squares within squares that worked for up to 8 letters—where a letter is a “proposition.”

What I find interesting is that Carroll did not see that he was working with boolean logic. His examples and notation are fairly complex, and show, I believe, again that the central nature of bits and boolean values was not understood by him or his contemporaries.
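
Here, as a minimal sketch under my own encoding, is a brute-force check of Carroll’s syllogism in Python, where B, I, D, M stand for “is a baby,” “is illogical,” “is despised,” and “can manage a crocodile”:

    from itertools import product

    def implies(p, q):
        # Material implication: p -> q is false only when p is true and q is false.
        return (not p) or q

    # Try all 16 truth assignments for one person.
    for B, I, D, M in product([False, True], repeat=4):
        premises = (implies(B, I) and      # babies are illogical
                    implies(M, not D) and  # crocodile-managers are not despised
                    implies(I, D))         # illogical persons are despised
        if premises:
            assert implies(B, not M)       # babies cannot manage crocodiles

    print("Carroll's conclusion holds under every assignment.")

Carroll’s squares-within-squares were, in effect, this same exhaustive case analysis carried out by hand.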

• Gödel: Greats like Kurt Gödel did not know the WID. If he had, he would not have needed to invent his elaborate scheme for numbering proofs. If he had realized that proofs were only a finite collection of symbols, he could have assigned each symbol a unique binary string. That’s all he needed to do: no primes, no fancy facts about primes, just binary strings. Indeed, I never present his complex numbering system when I teach Gödel’s Theorem; I just explain that proofs and formulas are binary strings.
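
To see how little machinery this needs, here is a toy sketch in Python; the ten-symbol proof alphabet is my own made-up example, not Gödel’s actual system. A fixed-width code makes every formula a binary string that decodes uniquely, with no primes in sight:

    ALPHABET = ['0', 'S', '+', '*', '=', '(', ')', 'x', '~', '>']  # a toy proof alphabet
    CODE = {s: format(i, '04b') for i, s in enumerate(ALPHABET)}   # 4 bits per symbol

    def encode(formula):
        # Fixed width makes the encoding uniquely decodable.
        return ''.join(CODE[s] for s in formula)

    def decode(bits):
        symbols = {v: k for k, v in CODE.items()}
        return ''.join(symbols[bits[i:i+4]] for i in range(0, len(bits), 4))

    s = encode('S0=S0')
    assert decode(s) == 'S0=S0'
    print(s)  # 00010000010000010000 -- a formula as one binary string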

• Markov: Greats like the Russian mathematician Andrey Markov did not know the WID. He invented a model of computation that he called “normal algorithms.” Unlike Turing Machines, his model was based on string rewriting rules. I remember reading his 1950 book on his theory of algorithms. One of the striking parts of the book, to me, was the introduction. His model needed to restrict the alphabet to a finite set of symbols—otherwise, his rewriting rules would not be finite. Today, since we know that the WID, this is obviously no issue. However, when he wrote his book he was concerned about the restriction to a finite alphabet. He talked, at length, about the infinite number of ways that one might be able to write a script letter such as an “a”:

[Figure: handwritten variants of a script letter “a”]

But he reasoned that while there might be an infinite number of script letter “a”s, the restriction to a fixed-size alphabet was just fine.

That Markov felt, in 1950, that he had to discuss the restriction to a finite alphabet shows me that he was unaware that the WID. Today, if I had to argue why a finite alphabet suffices to describe any script letter, I would point out that we can just represent the letter as a digital image. Let the image be N by N pixels, where N is some huge number—if N is large enough, I would argue, then whenever a human can tell two script letters apart, they will have different digital representations.

How I Came to Know the World is Digital

I remember when it finally began to dawn on me that the world is digital. I would like to say that I always knew, or at least that it came to me in a flash one day. But that would be a lie. It took me years to realize it. It’s one of those ideas or notions that came slowly to me. However, once I saw it, I wondered how I could have not always known it.

At Carnegie Mellon, where I got my Ph.D., we were never taught that the WID. My professors, like Albert Meyer and Alan Perlis, never told me. Meyer was then, and is now, a brilliant researcher—he never told me the WID. Perlis was, at the time, the chair of the entire department of Computer Science, yet he never told me either.

When I was an assistant professor of Computer Science at Yale University in 1976, I got my first inkling that the world could be viewed as digital. I was on the phone with a friend, Bob Sedgewick, who at the time was an assistant professor at Brown University. He was telling me how a newly purchased printer worked.

Now you have to go back 30 years to recall that printers were once very different from the beautiful laser/ink-jet printers of today. In those days a printer was a typewriter on steroids. In case you have never used one, here is how a typewriter worked. There was a keyboard, much like a laptop’s today. But when you pressed a key, a piece of metal hit an ink ribbon and then hit your paper. In this way each letter was printed.

Printers did the same—again each letter had a piece of metal that was pressed onto the paper. The user was limited to the font and symbols which were built into the printer. Some printers had clever arrangements, so the printer could go fast. Some had the metal letters arranged on wheels; others had them arranged in even more complex configurations. In any event, typewriters and printers used the impact of a piece of metal onto an ink ribbon to leave a mark on paper. The only difference between printers and typewriters was speed.

Back to my story: Bob was telling me about a new kind of printer that used dots to represent each letter or symbol on the page. He explained how the page was coded as an array of 0’s and 1’s. Where a 1 occurred, the printer placed ink on the page; a 0 instructed the printer to leave that spot blank. The way the ink was placed onto the paper was not laser or ink-jet based, by the way. This early printer used a more primitive technology; however, the page was represented as an array of bits—it was digital.

I have to say, I was surprised. I knew about Gödel numbers. I knew about binary numbers. I knew a lot of computer science—I thought. However, I somehow was surprised that the printer was just placing bits onto a piece of paper. I did not see that printed words and images could be represented solely by dots. Really.

What Bob explained was that he had written a program that created an array of 0’s and 1’s. Then, his program sent the array to the printer which created a printed page. He explained, further, that the program could create text or images or anything in this manner. This was a major shift from pieces of metal striking a ribbon and putting ink onto a page.

Typewriters and printers had been limited to printing only the images that were built into the metal keys or balls. Now, with the digital approach of using bits to control the printing process, a whole new vista opened up. Any picture, any image, any font could, in principle, be represented, or printed, without changing the hardware.

This was a huge breakthrough.
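
As a minimal sketch of the idea, here is a toy reconstruction in Python; the 5 by 5 letter “H” is my own made-up glyph, not Bob’s actual data. An array of 0’s and 1’s is “printed” by inking a dot wherever there is a 1:

    # 1 means "place ink here"; 0 means "leave this spot blank."
    GLYPH_H = [
        [1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [1, 1, 1, 1, 1],
        [1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1],
    ]

    def print_bitmap(bitmap):
        for row in bitmap:
            print(''.join('#' if bit else ' ' for bit in row))

    print_bitmap(GLYPH_H)

The same array could just as well hold any picture or any font; that was the whole point.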

The Chicken and The Egg Argument

I always show these posts to Subruk before I post them; otherwise, they would be a mess. This time he raised an interesting point, and we decided that he should make it himself, so that I would not change my thoughts to conform to his insight. I am doing this partly because I do not completely agree with his point, and partly because I think it is a great insight that deserves to be made in his own words. The plan is that he will add a comment to this post once it is out there for you to read.

I would, however, like to address an issue that is related to his point, but please read his comment.

Mathematics is driven by practice and also drives practice. There are many examples of deep mathematics created to solve very practical problems. For example, the theory of Fourier Series was driven by the need to solve certain differential equations. Many other examples come to mind: geometry and the construction of temples, calculus and astronomy, probability theory and life insurance tables, and on and on.

On the other hand, many times mathematics has been ahead of practice. Godfrey Hardy was actually proud that his favorite area, number theory, was pure—that it had no possible applications. Of course we can only speculate what he would think of modern cryptography’s use of number theory as a basis of encryption. Other areas of mathematics were also developed because they were beautiful. Many examples come to mind: abstract measure theory, set theory, algebraic geometry, infinite dimensional geometry—I am sure the list could go on and on. Sometimes these areas have eventually been used to solve real problems, and sometimes they remain beautiful areas that seem to be without any practical application.

My point is simple. Even before computers existed mathematicians could have worked on binary strings, on boolean functions, and on many of the areas that we work on today. They did not, in my opinion, because they did not understand the power of bits; they did not know that the WID.

A final point. If they had worked on boolean functions years ago, one wonders where theory would be today. Curiously, the difficult foundational issues that plagued early analysis would have been completely moot. Early mathematicians struggled with the basic question of “what is a real number?” If they had worked on boolean problems, these difficult questions would have been avoided entirely.

Open Problems

Is the “world is digital” one of the great insights of Computer Science? Or do you agree with Kay and Wulf?

77 Comments
  1. subruk
    October 4, 2009 9:09 pm

    My comment is that, instead of saying that the WID, I would say that seeing the world in a digital manner makes things easy for us in practice.

    I think the WID is a consequence of the fact that we have very powerful processing and huge storage at relatively small cost. I think the processing power and the availability of digital storage help us afford discretization and quantization at a very, very fine level. This helps produce an image/sound or effect that is so close to reality that our senses cannot tell the difference.

    Of course, another advantage of seeing things in a digital way is that it helps us encode things in a manner that lets us reproduce data error-free with near certainty. We get CDs and DVDs that are not affected by fungus, and are much more robust against scratches and the like, because of the fantastic error-correcting codes which the digital world affords us. We can have the whole of Britannica on a bunch of DVDs and keep multiple copies at nearly zero cost, which protects it from getting eaten by termites or ruined by moisture.

    In a nutshell, I think it is all a consequence of the fantastic innovation in processing and storage, and the error-correcting algorithms, which have helped us build a digital view of the world that is so close to the real thing that we cannot really tell the difference.

  2. John the Scott
    October 4, 2009 10:33 pm

    What do you mean by bits? Do you mean stable, perceivable changes in electrical or quantum mechanical state, or do you mean bits to be literally (ptp) the symbols 0 and 1?

    I agree with Kay, although I would qualify by writing that humans have contemplated the discrete nature of reality for eons.

    Every generation describes reality in terms of the technology of the day.

  3. October 4, 2009 10:47 pm

    I suppose Boole’s work would, by definition, be a pure study of boolean functions before they were known to be useful in computing. And Charles Babbage’s difference engine and analytic engine are arguably examples of digital computing, at least in theory.

    But yes, the pure mathematics of boolean functions and digital computation before the actual invention of the computer is rather thin on the ground in comparison to similar situations (e.g. number theory before cryptography, Riemannian geometry before general relativity, etc.).

    • October 4, 2009 10:50 pm

      The abacus, by the way, is quite literally a digital computing device, but strangely it is one that did not spawn much in the way of further mathematical development (except for a few specialised algorithms for arithmetic, square roots, etc.).

    • rjlipton
      October 5, 2009 8:15 am

      I did look at their work. I agree that Babbage and others were close, but they somehow missed the key point. Do you have any idea why boolean functions could not have been studied hundreds of years ago? The mathematics of the basic definitions and simple theorems would have been much easier to do than some of the great early work. I find this puzzling. For example, proving that any boolean function is a polynomial would be a simple induction, or proving that NOT and OR are universal would be easy. The early mathematicians would also have avoided the slippery ideas of the infinite, convergence, measure, and so on that they struggled with for years.
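
      A quick sketch of the induction in question, for the curious: over the reals one can take

          NOT(x) = 1 - x,   AND(x, y) = xy,   OR(x, y) = x + y - xy,

      and induction on the structure of a formula then writes any boolean function as a polynomial; for example, XOR(x, y) = x + y - 2xy. Universality of NOT and OR is just De Morgan’s law: AND(x, y) = NOT(OR(NOT(x), NOT(y))).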

      • October 5, 2009 1:11 pm

        I think there are a couple reasons for this. In this post-Hilbert, post-Bourbaki era, we are completely happy with studying abstract structures such as algebras over F_2, but back in the 19th century, there was still a lingering preference for doing mathematics that corresponded somehow with “physical reality”. For instance, while mathematicians were aware of finite fields at the time, I don’t think anyone thought of vector spaces over such fields as being any sort of “geometry”, the way we do now. In the days before Hilbert and Noether, it might even have been controversial to think of working with these fields as “algebra”.

        The second thing is that the digital world is only an _approximation_ to the physical one, and the language for dealing with approximate mathematics (e.g. big-O notation, stability theory, numerical analysis, etc.) was largely only developed in the twentieth century. In the nineteenth century, people still held out hope of solving, say, the three body problem, by exact methods, and the issue of sensitivity to initial conditions was I think first raised by Poincare. One can also see this in the discrete mathematics of the time, which was far more concerned with enumerative combinatorics results than with asymptotic combinatorics. Finally, the probabilistic method (not just in combinatorics, but including such basic methods as Monte Carlo integration, introduced by von Neumann) was still far in the future; non-deterministic methods were either non-existent or highly controversial. Once one removes asymptotics, noise, and randomness from play, there actually isn’t that much to say about boolean functions that is of much interest… (except perhaps for Boole’s observation that any boolean function can be generated from the elementary logical operations).

  4. Shahab
    October 4, 2009 11:21 pm

    I wanted to make a comment close to the one Subruk made. I guess you should first show that the world is digital; only then would it make sense to argue whether the WID was discovered only recently or has been around for a while.

    I may accept the argument that the world is digital to any precision, but taking it to its extreme is not necessarily true to me. You would need really, really good evidence for me to accept that. For instance, while I understand that the ways we write ‘a’ are just an example, and you could replace it by any other example, I think it serves as a good example of the argument against the WID as well. The argument would be that however big your N may be, I can devise a new program that generates ‘a’ letters that are different (not to humans, but to some other program) and that your program fails to tell apart.

    I understand that you might use first-order compactness to prove me wrong, but my argument is that, after all, we may not even be able to describe the world in a logical system similar to FO. Indeed, to me this IS the case. As far as I can understand space, it’s continuous. I may be wrong, but you have to prove that to me, because all my intuitions tell me so.

  5. JuliaD
    October 4, 2009 11:35 pm

    Shahab, any program you can write, or any physical system you can put together, can be explained digitally by increasing the precision.

    It is thus impossible to disprove that the WID. On the other hand, perhaps someday the physicists will be able to show that the WID. Therefore you should believe it.

    (By the way, the same argument shows that you should believe in God. 🙂)

    • October 5, 2009 1:35 am

      JuliaD:
      Shahab, any program you can write, or any physical system you can put together, can be explained digitally by increasing the precision.

      Notice the key words (bolded): what we DO appears reducible to digital because it entails a finite amount of information, and our means of expression (languages of any sort) are necessarily discrete: words, symbols, distinct phonemes, bits, blueprints, whatever.
      A classic case of confusing the map and the territory.
      There is no evidence that the world itself is digital, i.e. that there is a potentially infinite “digital map” which would be an exact description of the whole world.
      I rather suspect that, on the contrary, the world is intrinsically irreducible to any digital map.

      • Kenny Evitt
        October 15, 2009 4:44 pm

        Isn’t the ‘quanta’ of quantum mechanics evidence that the universe is discrete? I think it’s hard for many people to seriously consider that the WID *because* our mathematical culture is so heavily steeped in the notion of continuity.
        I consider it far more likely that the WID than that the universe is truly continuous in the mathematical sense. Isn’t it far more plausible that the universe is just very fine-grained compared to our (unaided) perceptions? The Planck scale seems plenty small enough for bit storage.

      • October 16, 2009 12:28 am

        Isn’t the ‘quanta’ of quantum mechanics evidence that the universe is discrete?

        No, read the paper I already linked.

  6. John Moeller
    October 4, 2009 11:41 pm

    I tend to agree with Subruk. I think that where WID breaks down is in the measure of space and time. You must accept that space and time are quantized in order to really view them as digital. Even then, an element of space that small is so fuzzy that it’s difficult to describe as being digital.

    What we really work with to describe these measures are rational numbers with error bars. Subruk’s observation that the approximations are close bears this out. Image and sound recordings are perfect examples of this. The sampling may be good, but when you look closely enough, it’s just a simulation.

    That said, we can do quite a lot with our rational numbers. We will never be able to see the difference in some pair of Markov’s a’s. We will never observe a perfect circle. If this is the sense in which the WID, then ok, I buy it.

    • breakingkeyboards
      October 5, 2009 1:20 pm

      I think the circle does not exist; what really exists is a program that behaves like a circle. What you call a circle is a program; only programs exist.

  7. October 5, 2009 12:33 am

    I think you can use physics to make a case that WID is not just a useful abstraction. There are only finitely many spatial dimensions and varieties of fundamental particle. The current state of the art in theoretical physics states that it’s impossible to know anything about lengths shorter than the Planck length or times shorter than the Planck time. Therefore we can encode the full extent of our knowledge about the physical extent, composition, and duration of anything, and do so with no loss of precision.

  8. JeffE
    October 5, 2009 12:34 am

    What Subruk said. Thinking of the world as discrete/digital is useful in some contexts. Thinking of the world as continuous/analog is useful in some contexts. These alternatives are neither exhaustive nor truly exclusive.

    Or are you really claiming (like Fredkin, Zuse, and Wolfram) that the universe is actually made of bits?

    • rjlipton
      October 5, 2009 8:07 am

      No. I am making two points, I think. One, that the world is representable as digital. Two, that the study of bits/boolean objects is really important and fundamental, perhaps even more so than other parts of continuous mathematics.

    • breakingkeyboards
      October 5, 2009 12:41 pm

      …philosophy tries to explain why (but this is very difficult, and there are strong limits to the answers), physics tries to explain how (here we get more relevant results, but there are limits too), and computer science tries to explain only what (I hope we can go some steps forward).

  9. October 5, 2009 2:15 am

    I think it likely that the WID, but it would be nice to prove it. The sampling theorem was firmly established by 1949 (Wikipedia cites Russian and German versions predating Shannon by several decades), but the Church-Turing thesis could still stand fortification. Until we can reproduce continuous physics results using a discrete theory, it remains a possibility that analog machines could be constructed which could outpace any digital computer (or at least, any digital computer of comparable size and/or energy efficiency).

  10. Jeremy H
    October 5, 2009 2:36 am

    Rather than address whether I agree with WID, I want to propose a different insight in the same vein that I think is of far greater importance:

    Information is a fundamental “thing”. Furthermore, information processing is a fundamental process.

    Maxwell’s Demon is a classic example of this. Scott Aaronson showed that “P != NP” implies quantum mechanics, to a surprising extent (spread throughout various papers, such as http://www.scottaaronson.com/papers/pp.pdf). I believe that Seth Lloyd has also used quantum computation to derive various pieces of physics.

    I consider this to be far more profound than the fact that we can approximate the real world to arbitrary precision using 1s and 0s.

  11. woupiestek
    October 5, 2009 2:46 am

    I’ve seen centuries old tablecloths with images on them, made with a digital technique similar to that of the printer: embroidery.

  12. Ryan Williams
    October 5, 2009 2:49 am

    There is a third position here, between the claim that the digital view is a useful tool, and the claim that everything “truly” is digital. It is something like: “Every experience that we have can be represented in a digital format which to all of us is completely indistinguishable from the original experience.” According to this, there is no loss in thinking of everything as being in some digital format, because for us weak humans it’s just as good as the real thing, whatever that real thing may be.

  13. Aryeh Kontorovich
    October 5, 2009 4:36 am

    Markov was not alive in 1950:
    http://www.gap-system.org/~history/Biographies/Markov.html

    • rjlipton
      October 5, 2009 8:08 am

      I believe that I am talking about a different Markov. See this and this, and please let me know.

      Markov (Jr.) Andrei Andreevich
      (1903–1979)

  14. October 5, 2009 5:41 am

    That the world is “effectively” digital is a proposition I can subscribe to. However, whether the world is truly and fundamentally digital is, I think, a question for physics rather than computer science, and an unresolved one at that. In some approaches to the theory of quantum gravity, such as causal sets and loop quantum gravity, the world is fundamentally discrete and therefore digital. In others, such as string theory, continuity still reigns supreme at the fundamental level. A bias for fundamentally digital theories might lead one to reject things like string theory out of hand, but it would be a pretty flimsy reason for doing so.

    One argument against fundamental discreteness is that there are problems with Lorentz invariance. Imagine that instead of having a continuous space-time manifold, physics is formulated on a discrete lattice of points that can be embedded in such a manifold. Whilst the lattice may look uniform in one frame of reference, because of length contraction and time dilation it will look very non-uniform in other reference frames. On the other hand, the original continuous manifold looks the same under such transformations (we are assuming flat space-time here). Therefore, this sort of discreteness singles out a preferred reference frame, in violation of the spirit of Einstein’s relativity. Of course, there are ways around this sort of argument, e.g. choosing a random sprinkling of points rather than a uniform lattice, but it does show that finding a fundamentally discrete physics that is consistent with currently known laws is a difficult problem.

    You can probably tell from this that I am of the “information is physical” rather than the “physics is informational” school of thought.

    • Personne
      October 5, 2009 9:13 pm

      Like Matt Leifer, I think the question remains open. The universe may, or may not, be digital in nature. Or computable: same question. For those who think it’s obvious, Lorentz invariance seems a pain in the ass.

      rjlipton, don’t you think that’s a problem with your position?

  15. October 5, 2009 9:03 am

    zero and one, yin and yang?

  16. Anonymous
    October 5, 2009 9:18 am

    Even before computers existed mathematicians could have worked on binary strings, on boolean functions, and on many of the areas that we work on today. They did not, in my opinion, because they did not understand the power of bits; they did not know that the WID.

    Why on earth would they have wanted to? I don’t think you realize that there’s no great insight or power in the idea that the world is digital. Imagine if you had told Gutenberg that he could dispense with all the characters except 0 and 1 by encoding everything in terms of them. He wouldn’t have been impressed at all. He would have said it was obvious you could in principle (it’s no deeper than the fact that you can already reduce the full world of ideas to 26 letters plus punctuation), but that it would be pointless. Of course he would be quite right.

    The only reason we care nowadays is that binary turns out to be the native language of computers. Other than that, the fact that you can encode things in binary is of almost no importance at all. There’s nothing sacred or important about using the smallest number of symbols. This is why when Leibniz tried to convince people binary was a great idea, nobody really cared.

    If he had, he would not have needed to invent his elaborate scheme for numbering proofs. If he had realized that proofs were only a finite collection of symbols, he could have assigned each symbol a unique binary string.

    This is flat-out wrong. Goedel’s goal wasn’t just to prove that some mathematical system was incomplete, but rather that elementary number theory was, so he encoded everything into that form. (And the numbering system is not much more elaborate than any other – it just looks elaborate because it is fully written out. The only tricky part is the use of the beta function to encode sequences, and if you want to do everything in first-order number theory, you cannot replace that part with simple binary tricks.)

    That Markov felt, in 1950, that he had to discuss the restriction to a finite alphabet shows me that he was unaware that the WID.

    This argument says nothing about what Markov believed, but rather what he thought his readers might. Even today, most people (even most highly educated people outside science and engineering) have only a vague understanding that the world is digital. I.e., they have probably been told that everything in a computer is made up of 0’s and 1’s, but they have little or no idea how this is actually done.

    So in summary:

    (1) Mathematicians and scientists have known for a long time that you could in principle encode things into just two symbols. (Leibniz knew this explicitly, and it would not have surprised people before him.)

    (2) Before computers, there was no reason for anybody to care.

    (3) Now that we have computers, it’s no surprise that computer scientists all know and love this encoding.

    (4) Nevertheless, random people still don’t.

    • rjlipton
      October 5, 2009 10:53 am

      I do not buy this argument.

      Why on earth, to quote you, would people care about 5-dimensional spaces, since we live in 4-space (including time)?

      • Anonymous
        October 6, 2009 10:07 pm

        Why on earth, to quote you, would people care about 5-dimensional spaces, since we live in 4-space (including time)?

        I’m not sure what the relevance of your question is. The fact is that people do care about higher dimensions, and have for a long time. Not just scientists, but also ordinary people with little or no scientific or technical background. By contrast, it’s exceedingly rare for anyone to get terribly excited about binary except by playing with computers. When they do, it’s really the computers they are excited about, not the idea of representing things in binary.

        There are very few places in computer science where binary is truly essential. 90% of the time, all that’s required is a finite alphabet. 90% of the rest of the time, it’s a matter of algebraic constructions that work best for primes. Occasionally, characteristic 2 is really essential, but this is quite rare.

    • July 6, 2013 10:37 am

      “I don’t think you realize that there’s no great insight or power in the idea that the world is digital.”
      Surely understanding how life reproduces itself is a task requiring great insight, else some centuries of eminent biologists were imbeciles. Yet understanding this required the insight that reproduction is digital. If the binary view had been prevalent by, say, 1900, then understanding self-replication would not have taken so long. Once you could see how rapidly and accurately bacteria reproduce, you (the WID-oriented biologist) would have immediately recognized that digital chemical mechanisms must be involved, since the more familiar statistical chemical mechanisms could never explain such a low error rate.

      Likewise, with respect to “power”, as von Neumann argued, Turing’s separation of pure information from pure mechanism provided the power for high-fidelity reproduction. Since that (digital mechanism) is the only technique we know of by which Nature has managed to create life, it seems rather powerful to me.

      Whether the universe itself is continuous or discrete in some “objective” (is that possible?) sense, surely the insight that we can only interact with the world digitally (since our senses do not have infinite resolution) is a powerful one.

  17. October 5, 2009 9:27 am

    The fact that the World is Digital is definitely a groundbreaking idea; it allows us to use any theorem involving discrete symbols to prove properties of the world around us. I agree with you.

  18. October 5, 2009 9:27 am

    I think the WID is a model originated by a group of telecommunication engineers in the 1950s in the development of switching and telecommunication systems. On the computer science side, computer programming at almost the same period used the WID model in the implementation of algorithms. The common mathematical tool is Boolean algebra, so implicitly Boole’s work can be counted first in the WID.

  19. October 5, 2009 9:44 am

    The question of whether the world is “truly” digital or can just be viewed this way is to some extent like the proverbial tree that falls in the forest with no one seeing it: as long as you have a convenient and predictive model of reality, that’s the best science can do. (This is similar to asking whether quantum mechanics just predicts or really describes the world.)

    Perhaps what is more surprising is why continuous models have turned out to be so useful to model a digital world. (And indeed perhaps even cleaner and more useful than artificially enforcing discreteness.) But on the other hand, in computer science we use asymptotic notation and argue about inputs tending to infinity all the time, and we believe these results are useful to model a finite world. So sometimes things do get cleaner when you take them to the limits.

    Anyway, while people realized this to some extent, I don’t think they fully grasped the implications of being able to encode *anything* as discrete symbols until the 1930’s or so, which is why the math world found Godel’s and Turing’s works so surprising.

  20. October 5, 2009 9:48 am

    In many ways “Digital” is just an artifact of the way we want to represent things. There is something even more profound going on here, I think.

    At some point, we had a world view based on infinities and continuous functions. We saw the world around us stretching out to infinity in all directions, great and small. But then Einstein’s work put a cap on both large and small. Fractals and chaos theory reset our notions of determinism. Suddenly, we’re left in a world that may be both finite and discrete, pushing the infinities and continuous functions back into the realm of the abstract. Back into mathematics.

    If we know the size of the universe, and we know the particles that make it up, then we know both the smallest and largest scales at which we need precision. It is finite. We know — no matter how complicated — that the things ‘in’ the system cannot expand outside of the system. And in that way, we know that the “natural” representation of the world around us is actually a digital one. All of the other stuff is just a by-product of our limited way of thinking about the world around us. Just a way to allow our “intelligence” to manipulate our perspective.

    Paul.

  21. Nick
    October 5, 2009 9:56 am

    Here’s something that is only vaguely related, but I use it sometimes to scare my non-computer scientist friends. It’s probably been said many times before, it’s not deep or anything, but it’s a fun pub conversation.

    Having convinced someone that every photo stored in their digital camera is a sequence of dots, and that there is a number for each dot that defines its colour, it’s not too hard to convince them that each photo is actually just a number.

    Then you can tell them that when they think of a number (in practice, a very, very large number) there is a corresponding photo. So you can tell them that a particular number might really be, to a digital camera, a photo of them winning a gold medal in the Olympics or whatever.

    If the person is still interested, I usually try to explain that for fixed dimensions there are a finite number of photos. So a computer could just loop through them all. It would then have generated a photo of _everything_, e.g. you shaking hands with Turing, etc. (Of course, finding the ones that aren’t just random nonsense would take forever.)

    I guess a lot of people have thought of that before, but it is good for non-scientist friends.
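
    In code, the photo-as-a-number trick takes only a few lines; in this minimal sketch, pixels is a hypothetical list of 0–255 grayscale values standing in for a tiny photo:

        pixels = [17, 200, 3, 255]                              # a hypothetical 2x2 photo
        photo_as_number = int.from_bytes(bytes(pixels), 'big')  # the whole photo, as one integer
        print(photo_as_number)                                  # 298320895
        back = list(photo_as_number.to_bytes(len(pixels), 'big'))
        assert back == pixels                                   # same bits, same photo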

  22. Arnab Paul
    October 5, 2009 10:41 am

    Is WID a realization in the area of “Computer Science” only? Or is WID a truth?

    It is one thing how information truly exists vs. how we choose to represent it.

    I have no knowledge of where physics/math stand in terms of understanding/explaining if and how the fabric of space-time is really quantized; probably our science as a whole has not obtained closure on that?

    versus

    It is probably a matter of argument, and of checking the historical timeline, when science as a whole started taking it for granted that we can “represent” everything digitally.

    Probably this is more aligned with Subruk’s point.

  23. proaonuiq
    October 5, 2009 11:12 am

    “Or are you really claiming (like Fredkin, Zuse, and Wolfram) that the universe is actually made of bits?”

    “No. I am making two points, I think. One, that the world is representable as digital. Two, that the study of bits/boolean objects is really important and fundamental, perhaps even more so than other parts of continuous mathematics.”

    So Lipton’s WID proposition is epistemological; on the other hand, FREDKIN-ZUSE-WOLFRAM’s is ontological (ontological WID).

    Ontological WID is a variety of the Tegmark thesis: reality is mathematical. But if mathematics (including boolean objects) is a product of the human mind for representing the world, and the brain is ultimately made of matter, how could it be the case that physical reality or matter (which antecedes brains) follows a mathematical structure? That’s another egg-and-chicken problem.

    One set of solutions to this problem leads to Designer theories (religions); another set of solutions leads to Many Worlds theories (i.e. each mathematical structure is physically realized and generates its own world, where its own brains perceive it in different manners, which could be digital, continuous…).

    Another solution is that all mathematical theories are physically realized, but one and only one generates the world we live in, the unique possible world (that is, the other mathematical structures generate nothing but noise and heat), and human brain-minds, as universal simulators which through mathematics can simulate any possible world, can come to realize this fact (although apparently this does not happen as frequently as one might expect).

    According to this last solution, there is a unique reality generator whose manifestations in reality are: SM+GR in physics (yes, GR’s present formulation is continuous), the DNA code and cell dynamics in biology, social institutions such as languages and their dynamics in sociology… and what we want to fish for, in the sea of general boolean objects or systems, is the unique binary system which generates this STRUCTURED reality.

    • breakingkeyboards
      October 5, 2009 1:10 pm

      I think the position of FREDKIN-ZUSE-WOLFRAM is stronger than the WID; it is something like a deterministic digital world (a very wonderful world!).
      I think there is not a strict relation between digital and deterministic, but a deep investigation of this relation could, I think, be very interesting.

  24. breakingkeyboards
    October 5, 2009 12:35 pm

    I disagree with subruk because the continuity hypothesis does not rule out a digital world, while the digital hypothesis has the advantage of simplifying the rules; so, by Occam’s razor, I think there are no reasons to think the world is not digital.

    • October 5, 2009 2:21 pm

      I think there are no reasons to think the world is not digital.

      I “think” that you should read a bit more about physics instead of just regurgitating popular clichés; maybe Christoph Schiller.

      • rjlipton
        October 5, 2009 7:12 pm

        What was the popular cliché?

  25. proaonuiq
    October 5, 2009 4:52 pm

    A very interesting link about a “mathematician” who surely would not agree with your WID thesis:

    http://portail.mathdoc.fr/archives-bourbaki/.

    Who remembers today, 70 years later, that Bourbakist structuralism was the avant-garde? What will remain of today’s ideas in 7 years?

    I have spoken!

  26. October 5, 2009 9:04 pm

    The tagline has a typo:
    “One of the biggest insights of computer science is that the world is digial”

    digial should be digital

    • rjlipton
      October 5, 2009 10:49 pm

      Thanks… it seems to be the least of the issues with this post. Do you all hate it?

      • October 6, 2009 3:34 pm

        it seems to be the least of the issues with this post. Do you all hate it?

        I don’t. My reading is that your critics believe your statement is about mathematical knowledge, whereas your claim is cultural-historical.

        Frege, Russell, and to some extent their predecessors mentioned by other commenters, were interested in constructing a rigorous foundation for all mathematics. It is one thing to say, “All math can in principle be derived from these axioms, and represented by a binary alphabet.” It is quite another thing to say, “The entire universe is 0’s and 1’s” and engage in theory, experiment and engineering based on that principle. The difference is that the second statement is freighted with the understanding that it is useful to think of the world that way, not merely interesting or logically equivalent.

        I’ve seen “The universe is a computer” attributed to the physicist John Wheeler. I doubt Babbage thought the universe could be “reduced” to one of his machines.

        I believe this ideological shift (from digital-ness being merely “foundational” to critically useful) is due to the rise of computational tools in the physical sciences. It’s similar to the rise of projective geometry in the 1600’s. Once there were microscopes and telescopes, mathematicians revisited the ancient Greek results on perspective and geometry. Many of those theorems required specialized arguments that did not transfer to other theorems. One program of the 1600’s was to construct general arguments that could be applied to prove multiple theorems. The Greeks “could have” done this. For whatever reason, they didn’t. The new technology gave Kepler and others motivation to reinterpret and generalize those ancient works.

        So my position is perhaps a synthesis of the comments by Subruk and Tao. Once people understood boolean functions had a fundamental relation to physical reality, and not just a “formal” relation to mathematics, interest in them exploded.

      • October 7, 2009 10:36 pm

        Well now that the typo’s fixed, I can’t see anything else to complain about. =)

        Perhaps I could offer an interesting practical example from the graphics geometry community. Recently, Caltech’s group introduced this idea of discrete differential geometry: differential geometry for piecewise-linear meshes. The tagline for the effort has been to “discretize the theory” rather than view triangle meshes as approximations to continuous manifolds. This seems like a perfect example of the WID hypothesis at work. Plus, the approach seems to work wonders for a variety of simulation problems. However, the discrete differential geometry theory hasn’t been enthusiastically embraced by all graphics geometry researchers. In particular, researchers who study surface reconstruction from 3d scanner data benefit much less from the DDG theory, since they have to assess their reconstructions relative to a “real” object, which if discrete, is discrete at a much smaller scale than the sampling rate. Compound this problem with statistical error and WID is a much less useful hypothesis—for that problem.

        Perhaps there’s room for a competing World is Statistical (WIS) hypothesis. =) I bet you could get a lot of machine learners behind that one.

  27. October 6, 2009 6:59 am

    Is the WID principle really:

    — everything knowable can be expressed with information in a canonical form
    — anything which is expressible with information is expressible with zeroes and ones

  28. October 6, 2009 9:26 am

    (CNN) — Three scientists won the Nobel Prize in physics on Tuesday for two breakthroughs that led to two major underpinnings of the digital age — fiber optics and digital photography, the Royal Swedish Academy of Sciences said.

    Is the digital age the same as the WID?

  29. Alan Kay
    October 6, 2009 9:35 am

    For example, Leibniz not only was *a* discoverer of the fact that all numbers can be represented by using just two symbols (also used in the I Ching in China much earlier), he also made a philosophical leap that “to make all things, one needs just two principles.” He chose God and Void.

    (And this, via George Dyson, has one of many references to Leibniz’s writings)

    http://www.edge.org/discourse/schirrmacher_eurotech.html

    Charles Sanders Peirce, arguably the greatest American mind of the 19th century, was (as far as we know) the first to state and show (roughly around 1875) that logic (especially boolean logic) could be completely generated from a single principle (e.g. “not both”), which, because Peirce’s voluminous writings were not easily available, was rediscovered by Sheffer in the early 20th century (the “Sheffer Stroke,” ca. 1911).

    Peirce ca. 1880 also proposed that one could make logical machines based on these ideas (and Boole’s and de Morgan’s) — and remarked that electricity would be a very good way to do this (thinking of telegraph relays, etc.)

    Russell and Whitehead, building on Frege and Peano (and Peirce and Sheffer), showed how mathematics can be gotten out of logic, and thus all from “not both.”

    And, it’s worthwhile looking into what the Pythagoreans really thought about the relationship between the world and discrete number……

    Science does *not* know whether “the universe is digital”, but it represents our maps of the universe using mathematics, which can be derived from a single discrete idea (which is a kind of comparison).

    There is a wonderful character in Moliere’s play “Le Bourgeois Gentilhomme” who in the midst of it suddenly discovered that he had been “speaking prose” all of his life!

    Best wishes,

    Alan

  30. Markk
    October 6, 2009 9:43 am

    I think it is an interesting point that it has taken a generation who built digital logic to mentor another generation who have elaborated the mathematics of digital computation. It could have been done earlier, yes, but was that just contingent luck, or have we really learned enough to have a new way of looking at the world? I think the latter, and we still don’t have its full consequences. In a sense, I think CS is walking a roundabout way back to re-connecting with the physical world, as its theoretical structure could put constraints on physical theories, or give form to them. Just as Riemannian structure infused the physics of the beginning of the last century and helped form the basis of insights like Relativity.

    Today it might be possible that computational theory can constrain and give guidance to physical theory. Perhaps not in the grand way of GR, but the use of Shannon in models of spacetime, and extrapolating to how computations look at the limit of relativity, seems quite promising: examining when computations finish as points of spacetime, and what things like PSPACE turn into when there is a “distance cost,” as it were, or a speedup from outside a reference frame as more and more computation (and thus energy) is done there. I have no basis, but I feel like there are some physical principles that are connected to the structure of the computational hierarchy.

  31. Vijay
    October 6, 2009 10:33 am

    Dear Prof. Lipton,

    My instinctive reaction on reading your post was to wish you good luck. I do not have much of an opinion on the matter. Oddly enough, my attitude of eschewing opinions on topics on the border of science and philosophy was strongly influenced by reading your “Social Proofs and Processes” paper and the aftermath that followed.

    The humour in your writing tickles me. Thank you for writing with such dignity!

    Vijay.

  32. Suhail Shergill
    October 6, 2009 5:18 pm

    I would say the great insight of Computer Science isn’t that the world is digital, but rather that the world as we are able to represent it is inherently digital (the limit then isn’t on the underlying physical world, but rather on the power of our representations of it), and that for all practical intents and purposes this doesn’t pose any problems. In that light, I would say that Turing was perhaps the first to have made the connection.

  33. October 6, 2009 6:20 pm

    One thing that makes this column so great is that (pretty much) everything Dick posts is true, and (pretty much) the exact opposite is true too.

    So it must be true that the world is not digital. How can this be?

    Well, quantum mechanics (the way it is usually taught) sure makes the world look digital, with its “collapsing” and its “projecting” and its general algebraic look-and-feel.

    So let’s conceive quantum mechanics instead in terms of flows on state-spaces. There are two ways to accomplish this, and both begin by regarding ⟨H⟩ as a function on a state-space (technically called a Berezin symbol). Then the gradient one-form ∇⟨H⟩ determines the quantum trajectory in either of two reasonable ways: (1) via the traditional Schrodinger equation, via a metric G and a complex structure J on the state-space, or (2) as Hamiltonian flow, via a symplectic structure ω on the state-space.

    Physically speaking, the Schrodinger flow preserves the informatic “goodness” of quantum mechanics, and the Hamiltonian flow preserves the symplectic (thermodynamic) “goodness” of quantum mechanics.

    We want our (non-vector, non-binary) quantum dynamics to have *both* kinds of goodness. What constraints does this place on our three structures {G, J, ω}? The answer is not hard to derive: they must form a compatible triple, which is to say, the state-space must be a Kähler manifold (the Wikipedia page for almost complex structures has a nice discussion of compatible triples).

    So if we want, we can keep doing QM on the familiar binary-type, vector-type Hilbert space (which is among the simplest Kähler manifolds), but if we wish, we are free instead to pull back QM onto any of a vast array of non-vector-space quantum manifolds … and this approach has the advantage that the geometry of the state-space becomes one more parameter we can adapt to get the physics and/or engineering and/or informatic properties that we want.

    Of course, QM/QIT researchers already do this Kählerian pullback—onto tensor network state-spaces for example—but usually people do not think informatically or geometrically about the non-vector/non-binary state-spaces upon which (most) large quantum calculations are already conducted.

    The bottom line: “The World Is Not Digital; It Is Geometric.”

    Specifically, it is Kählerian, and full of “goodness.” 🙂

  34. Arnab Paul
    October 6, 2009 7:05 pm

    Engineering played an enormous role in pushing the boundaries of what can be accomplished “digitally.” Some credit for the WID realization should go to the engineers too.

    Some day far out in the future, the same engineering advancements may take us to where we can take free strides into the continuous state spaces of the quantum world with our computers. Some day we might even take advantage of the relativistic interactions underlying the motions of particles. Some computing era might one day belong to a ‘string theoretic’ way of thinking.

    For a non-entity like me, what’s utterly fascinating ‘today’ is to see the ranks of the greatest engineering minds and the sharpest mathematical minds sharing and differing on ideas in a little corner of the internet.

    I beg you all – please never stop blogging.

  35. Josh
    October 7, 2009 12:21 am

    1) To add to Alan Kay’s excellent historical remarks: in fact, Leibniz was one of the earlier people (the first?) to try to make mathematics (indeed, all ideas) algorithmic, and a major part of that was coming up with a discrete language to describe ideas.

    2) Obviously the fact that it’s 0 and 1 is irrelevant — any finite set would do. (In fact, we could even encode everything in unary, although this still requires a “blank” or “void” to distinguish where unary strings end…). What’s important is that everything digital can be encoded in a countable set, which in turn can be encoded as finite strings over a finite alphabet. Perhaps the strongest mathematical realization of this is the fact that every theory has a countable model (maybe this only applies to first order theories, I forget — but still, if you’ve never thought about this before, think about the fact that there is a countable model of the real numbers).

    3) Following up on my countable-model-of-the-reals, I’m surprised no one has mentioned Hermann Weyl’s philosophical writings on the continuum, and whether it was indeed “real.” I believe some of his musings boil down to the fact that most “real” numbers can never be described in any language ever, though I’ve never read the whole thing.

    4) Subruk mentioned that making things digital allows data to be passed around error-free. Indeed, this is why the “digital abstraction” (for anyone who’s ever taken a CS architecture course) is so important. In my opinion, it’s also why organisms with DNA won out over reproducing collections of molecules without it. But this is essentially the utility of digits in engineering, and not particularly about the nature of reality.

    5) Someone commented that, given the finite number of elementary particles and the Planck time and Planck distance, the world is quantized and therefore digital. But even despite all these quantizations, the amplitudes that appear in quantum superpositions are still continuous (or so the theory goes). Moreover, even if we can only measure distance down to the Planck scale, a particle could still be at any point of a continuum; we can just only tell its position to within an accuracy of the Planck scale. So the error bar can only be so small, but the center of the error bar can be anywhere.
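
    To make point 2 concrete, here is a minimal sketch in Python (illustrative names, not from any particular library): any string over a finite alphabet can be recoded as a fixed-width bit string and recovered exactly.

        import math

        def encode(s, alphabet):
            # Number the symbols, then write each index with a fixed bit width.
            width = max(1, math.ceil(math.log2(len(alphabet))))
            index = {ch: i for i, ch in enumerate(alphabet)}
            return "".join(format(index[ch], f"0{width}b") for ch in s)

        def decode(bits, alphabet):
            # Invert encode(): read fixed-width chunks back into symbols.
            width = max(1, math.ceil(math.log2(len(alphabet))))
            return "".join(alphabet[int(bits[i:i + width], 2)]
                           for i in range(0, len(bits), width))

        dna = "ACGTTGCA"
        bits = encode(dna, "ACGT")          # '0001101111100100'
        assert decode(bits, "ACGT") == dna  # round-trips exactly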

  36. Vinod Grover permalink
    October 8, 2009 1:56 am

    I wonder: if analog computers, as a technology, had had an impact comparable to digital computers, would we say the world is analog? A representation approximates some part of the world; it is hard to know whether the world really is digital.

    If you mean that digital representations have been the most successful, I would agree.

    In a prior era people thought the world was an engine, etc…

  37. proaonuiq permalink
    October 10, 2009 5:41 am

    “In a prior era people thought the world was an engine, etc…”

    I think this argument (that we now think WID is true just because we just discovered the power of digital engines) is flawed.
    Yes, it is well known that metaphors about reality depend on scientific and technological advances. But if we agree that science progresses (that is, that we know more truths now than two centuries ago), then we must agree that metaphors progress too (that is, our metaphors are now closer to the truth than in the era when people thought the world was an engine).

  38. October 10, 2009 12:23 pm

    I remember reading an article once that said that one of Shannon’s greatest contributions was the realization that it makes sense to represent signals digitally. At the time, this was considered extremely controversial.
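
    As a minimal sketch of what “representing signals digitally” amounts to (illustrative parameters, nothing Shannon-specific): sample an analog waveform at a fixed rate and quantize each sample to a few bits, and the signal becomes a finite bit string.

        import math

        # Sample a 1 kHz tone at 8 kHz and quantize each sample to 4 bits.
        RATE, FREQ, BITS = 8000, 1000, 4
        levels = 2 ** BITS

        samples = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(8)]
        codes = [min(levels - 1, int((s + 1) / 2 * levels)) for s in samples]
        bitstream = "".join(format(c, f"0{BITS}b") for c in codes)
        print(bitstream)  # 32 bits standing in for one millisecond of sound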

  39. TimothyAWiseman permalink
    October 15, 2009 1:41 am

    A very interesting post. First, just to be a stickler for details, WID in a strong form appears simply false. In the most trivial case, no finite string of 1’s and 0’s can represent e or pi. The closest you can come is representing an algorithm that generates their digits, but that is hardly the same thing.
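
    To see what “representing an algorithm” means here, a minimal sketch in Python (standard Machin formula, plain integer arithmetic; the function names are illustrative): the program below is itself a finite string of bits, and it emits as many digits of pi as requested, even though no finite output equals pi.

        def arctan_inv(x, digits):
            # arctan(1/x) via the Gregory series, scaled by 10**(digits + 10).
            scale = 10 ** (digits + 10)          # 10 guard digits
            term = scale // x
            total, n, sign = term, 1, 1
            while term:
                term //= x * x
                n += 2
                sign = -sign
                total += sign * (term // n)
            return total

        def pi_digits(digits):
            # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
            pi_scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
            return pi_scaled // 10 ** 10         # drop the guard digits

        print(pi_digits(20))  # 314159265358979323846, i.e. pi scaled by 10**20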

    Also, at least some facets of the world around us (space and time) seem to be continuous, so they can be approximated but never precisely described with any finite number of 1’s and 0’s.

    Now, in its slightly weaker form (that the world can be *approximated* in digital form), WID is indeed a deep insight. I am no historian, but I very much suspect from my limited readings that this would greatly surprise Plato, Euclid, and their compatriots.

    I also doubt that any of the greats in generations past were aware of it, though I think it would come as much less of a shock to those after Newton’s revolution in physics and the understanding that the world can be numerically approximated arbitrarily closely. It is a much smaller jump to put it all in base 2 and realize it can be applied to virtually everything, once you are accustomed to finding approximate numerical solutions to describe the reality around you.

    • Kenny Evitt permalink
      October 15, 2009 5:12 pm

      You’re assuming e and pi are real! But I have yet to find e or pi in a non-representational form. You’re correct that we can conceive of numbers that cannot be finitely represented, but does that make the WID false, any more than the fact that the integers are infinite does?
      I can’t even think of e or pi as things; they’re explicitly limits in my mind (the limits I have in mind are written out below). Notice that both are defined in terms of a constraint. Even the real numbers themselves are typically constructed in the same way, as limits of (or constraints on) sequences or sets of rational numbers!
      I think the evidence is nearly compelling, but in the opposite direction! There are fundamental discrete entities for both matter and energy. Even space-time is arguably discrete, in the sense that sufficiently small scales are beyond the workings of quantum mechanics.
      The wonderful book Programming the Universe had a section that particularly bothered me: at the end of a list of quantities that were (at least supposedly) discrete, the author made the aside that at least angles are continuous. What if they’re not?!
      Lastly, I just don’t buy that the universe is, in effect, computing continuous functions. They’re certainly nice approximations though!
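
      For concreteness, one standard way to write the limits being alluded to:

          e = \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^{n}, \qquad \frac{\pi}{4} = \lim_{N \to \infty} \sum_{n=0}^{N} \frac{(-1)^{n}}{2n+1}.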

      • TimothyAWiseman permalink
        October 16, 2009 8:52 am

        Several interesting points. As to e and pi, it depends on what you mean by real. If you mean the concepts are real, then they are real to the extent that the number 1 is real. If you mean that they are represented perfectly somewhere in the physical world in some fashion, then that goes directly to whether or not physical reality is discrete or continuous. If it is discrete then they absolutely cannot be, but if it is continuous then they almost certainly are.

        I will, however, point out that even in that sense, they remain precisely as real as sqrt(2). Saying that pi and e are not physically real in that sense effectively denies that there are any right triangles in physical reality with unit length on the non-hypotenuse sides.

        Also, pi and e are not normally defined as limits. The methods for calculating their exact values always involve limits, but that is a different thing. Pi is defined as the ratio of a perfect circle’s circumference to its diameter; the fact that its exact value is normally expressed as a power series is separate.

        I do not actually claim to know whether physical reality is discrete or continuous, but space and time certainly seem to be continuous. The fact that quanta exist is no evidence to the contrary; it simply means that certain quantities in the physical realm are discrete.

        Claiming WID in the weak sense of approximation is an interesting revelation, one that intuitive experience supports, and it has some significant implications for what can be done and achieved. Claiming WID in the strong sense contradicts some basic beliefs about how physics operates, and while it is entirely possible, I have yet to see any compelling reason to believe it.

  40. October 16, 2009 9:02 am

    As I said in my previous comment, I still don’t understand how the question of whether the “world is digital” could be answerable by science. What would count as conclusive evidence one way or the other?

    The only thing we can know is whether the cleanest theories we can come up with assume continuity or discreteness, and that perhaps says more about us than about the world (and can change with time). I do still find it interesting, if the world is discrete, that it is so well approximated by continuous theories. I guess the discreteness is “small enough”. Again, I think it’s somewhat analogous to the fact that in computer science we analyze a finite world with theories that assume the input length tends to infinity.

    –Boaz Barak

  41. Jiav permalink
    May 30, 2010 1:26 pm

    I suggested this post for the following contest:

    http://www.3quarksdaily.com/3quarksdaily/2010/05/richard-dawkins-to-judge-2nd-annual-3-quarks-daily-prize-in-science.html

    Good luck 🙂

    • rjlipton permalink*
      May 30, 2010 1:58 pm

      Thanks very much.

  42. slw permalink
    June 8, 2010 4:29 am

    I am surprised nobody has mentioned the Jacquard loom (http://en.wikipedia.org/wiki/Jacquard_loom) in the comments yet. The basic principle is pretty much the same as your printer example, but nearly 200 years earlier.
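
    A toy sketch of that shared principle (illustrative, of course; a real loom had hundreds of hooks per card): each row of a punched card is just a bit vector, with a hole raising a hook and lifting a warp thread.

        # 1 = hole punched (hook raised, warp thread lifted), 0 = no hole.
        card_rows = [
            [1, 0, 0, 1, 1, 0, 0, 1],
            [0, 1, 1, 0, 0, 1, 1, 0],
        ]

        for row in card_rows:
            # Each bit selects the threads lifted for one pass of the shuttle.
            print("".join("^" if hole else "_" for hole in row))  # e.g. ^__^^__^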

  43. Serge permalink
    July 7, 2013 6:12 pm

    > “I believe that the discovery that all things can be viewed as digital is as revolutionary as the notion that the world is made of atomic particles. But, the discovery that the WID took half a century longer to discover than even the simplest atomic particles.”

    These two discoveries shouldn’t be considered independently of each other. For, elementary particles being digital, the fundamental forces that glue them together are nothing but the result of some computational complexity. Thus, the striking analogy between integer factoring and nuclear fission is by no means accidental. On the contrary, an interpretation of the fundamental forces in terms of problem complexity could represent their long-sought unification.

    • Serge permalink
      July 11, 2013 5:05 pm

      Here’s what that entails:

      1 – every force is the expression of a computational complexity
      2 – every motion is a computation
      3 – the speed of light is related to the limit of computing speed
      4 – mass, energy and information are equivalent.

  44. Julian permalink
    January 8, 2016 11:47 am

    I thought if the smallest unit is a Planck length then we must be in a digital world. There is no analog. No analog clocks, no analog signals. Just high resolution digital. I also believe we don’t move; we teleport between Planck lengths. The speed to get from one Planck length to another might be based on the amount of energy you can build up in a time frame to perform that teleport. If you can build up and use energy faster, then there will be a shorter amount of time between teleports. That’s why someone running gets somewhere faster than someone walking. It’s kind of sucky that our world teaches us fiction as fact at school. I’m not saying what I said is fact (yet), but it seems to happen time and time again… how many planets were you taught there were at school? Their fiction-as-fact philosophy is what makes people stop thinking… instead they should teach that this is what is believed to date, not what is fact.

  45. Kevin Gaskill permalink
    September 19, 2017 2:06 pm

    This is all an illusion, and the entire known universe is a hoax. Everything is digital: what you see, hear, taste, smell, anything that affects your senses, is digital. I’ll never believe otherwise.
    Take the works of Tesla, for example. I believe he discovered the bubble’s edge, and science wants to call it some kind of reflection. I disagree, and believe that the signals bouncing back from an infinite universe are the outer edge of the digital bubble.
    As technology advances, and things become more realistic, holographic images are right around the corner. I look forward to the day when you can enter a room and make that room anything you can imagine. If you look at the advancements in technology, and the speed at which it has progressed, we are only a short time away from the day all reality can be altered. What you see today isn’t really there; it’s a digitally enhanced reality that will soon be able to change with a tap of your cell phone.
    For years people have been trying to figure out what happened to certain groups of people that once inhabited this world. Has anyone ever considered that they became aware of their surroundings and made themselves vanish to another reality? When everything is digital, anything is possible.
    For instance, the only way to make something infinite is to base it on a digital backdrop. For example, a weather man standing in front of a green screen can make a report from anywhere in the known world, and the people watching believe he’s really there. The illusion created by the green screen seems realistic, but it’s far from it, because the weather man is still standing in the same place; the only thing that changed is the backdrop. The same principles apply in this world, but there is no green screen, and the changes can’t be controlled, yet.
    In the not too distant future many things will rapidly be discovered. The technology we developed to help us understand the universe is nothing compared to the realization that this is all digital. When everyone has the ability to change reality, what is going to happen? Will we vanish like so many cultures before us, or will we forge forward into the digital unknown?
    See, it’s simple: the only way to create a universe that is viewed as infinite is to make it from a holographic image, one that continues on forever.
    Let me explain: when we finally have the technology to create a vision that isn’t really there, yet you can touch it, feel it, smell it, taste it, will it really be there? Or will it be exactly where we are today?
    Trust me, this is all digital, and it can be changed in the blink of an eye. Just like the future will change your perception of reality, in that same blink of an eye.
    We are in the era of instant communication, with anyone. Soon you will be able to transform your reality to suit your needs. It’s coming, and it’s coming fast, and only then will people understand the power of the digital universe.
    Good luck, my friends!

Trackbacks

  1. Mini Printers » Blog Archive » The World is Digital « Gödel's Lost Letter and P=NP
  2. Flow » Blog Archive » Daily Digest for October 8th - The zeitgeist daily
  3. Test Your Intuition (10): How Does “Random Noise” Look Like. « Combinatorics and more
  4. Flow » Blog Archive » The Current Flow, Mid-November 2009 - The zeitgeist daily
  5. What Is A Complexity Class? « Gödel’s Lost Letter and P=NP
  6. Thanksgiving Day, Pardons, and Turkeys « Gödel’s Lost Letter and P=NP