Author’s note: I wrote this in 2017 and it has languished in my drafts since then because I couldn’t think of a good conclusion. Really, that is typical of the subject matter, and all this piece offers is something to chew on, if you have an interest in it or have never before considered the question. At the time I was helping tutor a TLA⁺ seminar and had the opportunity to ask Leslie Lamport (who has thought about computation quite a bit!) what he thought of the question, with all his accumulated wisdom, and he replied “it’s one of those meaningless distinctions, like the difference between alive and not-alive.” So there you have it. Hopefully there are enough decent jokes in this post to make it worthwhile.
Most people will agree that computers are different from rocks in some way. Rocks, after all, do not appear next to CPUs on Newegg. You would be disappointed if you ordered a new computer and a rock arrived in the mail. Attempting to run Doom on a rock would undoubtedly end in failure, even though Thought Leaders predicting where Doom will and will not run are famously short-lived in their credibility. It is thus mysterious why someone would ask what, exactly, separates computers from rocks. Thankfully, philosophy is here to rescue us from the simplicity of our intuition.
When we have realized the obstacles in the way of a straightforward and confident answer, we shall be well launched on the study of philosophy - for philosophy is merely the attempt to answer such ultimate questions, not carelessly and dogmatically, as we do in ordinary life and even in the sciences, but critically after exploring all that makes such questions puzzling, and after realizing all the vagueness and confusion that underlie our ordinary ideas.
Bertrand Russell, The Problems of Philosophy (1912)
When asked to define a computer, programmers will mumble something about Turing Machines then look at you askance. They’re not wrong: the mathematical definition of computation has gone effectively unchallenged since Church and Turing independently formulated it in 1936. The Church-Turing Thesis claims that a function is computable (in the general philosophical sense) if, and only if, it is computable on a Turing Machine (or any of several equivalent mathematical models). This is a beautiful result, and endlessly useful, but it applies only to the pristine universe of mathematics. Our concern is with the physical world: what makes one physical object a computer, and another not?
We are trying to define the essence of a computer, in terms of necessary and sufficient conditions. Once we have found these conditions, we can use them to decide whether any given physical object is a computer. We want our definition to exclude things which are obviously not computers - like rocks - while including things which obviously are computers - like CPUs. We’d also like the definition to be “nice” - simple, precise, unambiguous, and avoiding arbitrary or disagreeable conditions.
Computers are artifacts humans use to speed up calculations. Are the brains of nonhuman animals not computers? Does a cell not compute when it reads a string of DNA to assemble a protein?
Computers are physical objects which use electric charges to transform inputs into outputs. Many kinds of non-electric computers exist, performing computation in media ranging from marbles to water to pulleys. Arguably, simple devices such as an abacus also perform computation.
Computers are physical objects which implement a Turing Machine (TM). Glossing over the kick-the-can-down-the-road word “implement” here, TMs require an unboundedly long tape. No physical computer can have unlimited memory, so no physical computer can implement a TM in a simple, literal way.
Okay, computers are physical objects which implement a Linear Bounded Automaton (LBA) - basically a Turing Machine whose tape is restricted to the cells holding the input, i.e. a Turing Machine with finite memory. Maybe. What is meant by “implement”?
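To make the object under discussion concrete, here is a minimal sketch of an LBA in Python. Everything here - the state names, the end-marker convention, the transition table - is invented for illustration; the only difference from a textbook Turing Machine is that the head never leaves the tape.

```python
# A minimal Linear Bounded Automaton sketch: a Turing Machine whose head
# is confined to the cells occupied by the (end-marked) input.

def run_lba(tape, transitions, start, accept, max_steps=10_000):
    """Run an LBA; transitions maps (state, symbol) -> (state, symbol, move)."""
    tape = list(tape)
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True, "".join(tape)
        key = (state, tape[head])
        if key not in transitions:
            return False, "".join(tape)  # halt: no transition defined
        state, tape[head], move = transitions[key]
        head = min(max(head + move, 0), len(tape) - 1)  # head stays on the tape
    return False, "".join(tape)

# Example machine: sweep right, flipping every bit, and accept at the
# right end marker.
flip = {
    ("scan", "["): ("scan", "[", +1),
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "]"): ("accept", "]", 0),
}

ok, out = run_lba("[0110]", flip, "scan", "accept")
# ok is True, out is "[1001]"
```

The tape here is finite from the start, so unlike a TM there is no appeal to unbounded memory; every question about “implementing” this machine is a question about finitely many states and transitions.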
I guess implement means every state of the LBA corresponds to some physical state of the computer. How do you define a physical state?
By measuring an arbitrary component of physical reality, like testing whether the voltage in a wire is above some threshold to indicate a 1 instead of a 0. Okay, assume we can do this. Is it enough to just have a set of physical states which correspond to the LBA states?
No, you also need corresponding state transitions. Good! How can we define a state transition in a physical computer?
Like first the physical computer is in one state, then time passes and it’s in another. Hmm, how would we be able to distinguish our computer from a random physical process which just happened to follow the right state transition path, like a series of dice rolls?
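The worry above can be made concrete. Here is a toy sketch (all thresholds, traces, and names invented) that decodes a sequence of voltage samples into abstract states and then checks whether every observed transition is permitted by a given transition relation - a check that a lucky run of dice rolls could pass just as well.

```python
# Toy sketch: decode analog "voltage" samples into abstract binary states,
# then check whether the observed run obeys a given transition relation.

THRESHOLD = 2.5  # volts: above -> bit 1, below -> bit 0

def decode(sample):
    """Map one analog measurement onto an abstract binary state."""
    return 1 if sample > THRESHOLD else 0

def follows(transitions, samples):
    """True iff every consecutive pair of decoded states is a legal transition."""
    states = [decode(v) for v in samples]
    return all((a, b) in transitions for a, b in zip(states, states[1:]))

# A machine that just alternates 0 -> 1 -> 0 -> ...
alternate = {(0, 1), (1, 0)}

trace = [0.3, 4.8, 0.1, 5.0]  # decodes to 0, 1, 0, 1
# follows(alternate, trace) is True - but a random process that happened
# to produce this trace would pass the very same check.
```

The check is purely extensional: it says the run happened to match the transition relation, not that it was caused to match it, which is exactly the gap the dialogue pokes at next.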
You run the computation again to see whether the same thing happens? See, that’s the problem with randomness…
Okay sure, but on the other hand no physical computer can re-run a program in the same way with complete certainty. There’s hardware failure, cosmic ray bit-flips, power loss, asteroid strikes… Fair point. I still think we can draw a distinction between a computer and a random physical process, though.
All right. A physical state transition is when a process transitions from one state to another, and there is a causal relationship between those two states. What is meant by a “causal relationship”?
Like if a physical process is in one state, the laws of physics cause it to then deterministically transition into the second state. Would the weather be a computer, then? It is subject to physical laws, and certainly there are enough physical weather states (combinations of humidity, heat, air pressure, etc.) to correspond to any Linear Bounded Automaton you care to define.
I can easily imagine using weather phenomena as a computing medium if we’re already using water, marbles, pulleys, etc.; however, for simplicity let’s say we also want to know that the physical process will behave a certain way ahead of time. Ah, this is called a “counterfactual” - we are no longer dealing with what is and what did happen, but rather what could be and what could have happened. We’ve moved from a single timeline into branching, parallel worlds.
Is that bad? I’d say it makes us less sure-footed. How could we know whether all our possible worlds function the way we think they would - that is, all our state transitions will work as expected if they are triggered?
Science, I guess. Let’s not even venture near that epistemological quagmire. Sure, whatever. Scientific predictions will give us the confidence we need in asserting the counterfactual correctness of a physical process in implementing a Linear Bounded Automaton.
So are we done, then? Not quite. Rocks are still computers, in this account.
What? Sure. What about the LBAs which have only a single state, or multiple states but no transitions out of the start state?
Those are useless! Who cares about those!? So we’re arbitrarily ruling out certain kinds of LBAs because their physical manifestation is inconvenient to our present goals?
Yes. Fair. Can we come up with generalizable rules for which LBAs are permitted and which are not?
This is going nowhere.
Perhaps it is impossible in principle to satisfactorily carve up continuous reality into discrete categories for human consumption. At the very least, demarcation via necessary & sufficient conditions is doomed to failure. Wittgenstein famously used the example of “games” as a category for which no single feature is common to all members (see family resemblance). Essences do not exist.
Why try, then, if all our attempts to comprehend the world in human terms are simple folly? For some there is an element of sport: see the spirited debate over whether a hot dog is a sandwich; it’s fun to try to come up with essences for things, and some people believe it makes for decent conversation (by the way - if you pivot a hot-dog/sandwich conversation into an extemporaneous lecture on language games you are, ironically, playing the wrong language game). For others it is a superego-driven search for capital-T Truth, screaming into the void of a world which admits nothing.
Or perhaps capital-T truth does not exist, philosophical thought is not worthwhile, and despair is all we have.
Or perhaps it is possible to comprehend the world with finality, and we just haven’t yet developed the right paradigm for doing so.
Or perhaps, again, capital-T Truth does not exist, and that is itself a good thing, because otherwise we’d live in a world of complete tyranny; our current freedom is both a wonderful gift and a terrifying price.
What matters, as far as anything can matter, is that you reason for yourself and hold yourself responsible for your own beliefs.
Sources & Further Reading
Author’s note: two of the links rotted since I wrote this in 2018; thank goodness for archive.org!
- Ghica, Dan (14 July 2015). “What things compute?”. The Lab Lunch. Archived from the original on 10 August 2017 - I read this several years ago; it planted the seed of confusion in my head, which grew until I tried to hash everything out in this blog post.
- Aaronson, Scott (8 August 2011). “Why Philosophers Should Care About Computational Complexity” (pdf) - See the section “Computationalism and Waterfalls” which inspired the dialogue about computing using the weather.
- Rapaport, William J. “Philosophy of Computer Science” (pdf). Archived from the original on 3 February 2021. - a wonderful, easy-to-read introductory textbook (still a work in progress), with an uncanny knack for answering questions as they occur to you. Includes a great survey of interesting philosophical topics!
- Piccinini, Gualtiero. “Computation in Physical Systems”. The Stanford Encyclopedia of Philosophy. Edward N. Zalta (ed.) - not much to say here; the SEP is a treasure. This article is more dense and technical than Rapaport’s work, but still accessible.