
The Brain as Computer/The Computer as Brain

By Charles Henry

Part 1

The metaphor of the brain as a kind of computer has been popular for decades. Recently, articles have appeared that nuance the metaphor, claiming that computers—or, more specifically, some incredibly powerful software applications that they run—might replace our brains, or at least some key functions of our thought and analytical processes. This blog post briefly explores some of these analogies, leavened with a few personal observations.

When did we discover the brain? For millennia our understanding of physical organs and their functions was mostly relegated to magical interpretation: what we could not see inside the body was often described in metaphysical terms such as humors and spirits. There is, though, an amazing exception to this tradition. The first occurrence of the word “brain” in any language is in the Edwin Smith Papyrus. Written probably around 3,500 years ago, it is remarkable not only for its naming rights but also for the “modern” sensibility of its 48 case studies of medical trauma and pathology. It describes the brain and some of its parts (fluid and casing), and also notes the causal relationship between brain damage and what we might now term neurological consequences. For most of our historical era, though, the Edwin Smith Papyrus is an outlier: it was not until the Renaissance and the rise of empirical science, with its attendant acceptance of autopsies, that the brain as a unique, and uniquely mysterious, organ began to be better understood.

The rise of computers in the twentieth century naturally gave philosophers and scientists a fresh metaphor as they continued the tradition of conceptualizing the brain and its functions by analogy: anchoring the ineffable to the known world. Earlier analogies had described the brain as a complex hydraulic system or, later, a telephone switchboard, so the emergence of the computer as a model is easily intuited. The brain as computer took on more literal implications later in the twentieth century. The rise of neurophilosophy, championed by Patricia and Paul Churchland among others, posited that the brain was not merely like a computer but could be described as a complex computational mechanism: a more sophisticated knowledge of the circuits and connections of the brain’s neurological apparatus would yield a more precise understanding of its methods and the execution of its “programs.” Churchland and Sejnowski’s The Computational Brain (MIT Press, 1992) offers an articulate and engaging framework for this approach.

Another prominent philosopher and cognitive scientist promulgating the brain-as-computer analogy has been Daniel Dennett, although a recent conversation with Dennett in Edge suggests an evolving, more shaded comparison: “The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon.” This suggests that the more formally reductionist arguments of the Churchlands and others need to be rethought: the brain is a computer, though a computer the likes of which we have not yet built.

An interesting turn of phrase—and metaphor—can be gleaned from the response to a powerful pattern recognition program developed at Cornell University, called Eureqa (pronounced “eureka”). According to its website, Eureqa is “a software tool for detecting equations and hidden mathematical relationships in your data. Its goal is to identify the simplest mathematical formulas which could describe the underlying mechanisms that produced the data.” The response to Eureqa has been positive, focusing on the possibility of understanding this software tool and its future avatars as replacements for aspects of the human brain.
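To make the idea concrete, the sketch below illustrates the flavor of what Eureqa does, in miniature. It is not Eureqa’s actual algorithm (which evolves candidate equations through genetic programming); the candidate formulas, synthetic data, and complexity penalty here are invented for illustration. The essential move is the same, though: search a space of formulas and prefer the simplest one that fits the data.

```python
# Toy sketch of Eureqa-style symbolic regression (hypothetical, not Eureqa's
# actual method): enumerate simple candidate formulas, score each by fit
# error plus a complexity penalty, and keep the simplest adequate one.
import numpy as np

# Synthetic data whose hidden law is y = 3x^2 + 2 (assumed for the demo)
x = np.linspace(-5, 5, 100)
y = 3 * x**2 + 2

# Candidate formula templates, each with a rough complexity cost
candidates = {
    "a*x + b":      (lambda x, a, b: a * x + b,          2),
    "a*x**2 + b":   (lambda x, a, b: a * x**2 + b,       3),
    "a*sin(x) + b": (lambda x, a, b: a * np.sin(x) + b,  3),
    "a*exp(x) + b": (lambda x, a, b: a * np.exp(x) + b,  4),
}

def fit(f, x, y):
    """Least-squares fit of (a, b) for a model of the form a*g(x) + b."""
    g = f(x, 1.0, 0.0)                       # basis function values g(x)
    A = np.column_stack([g, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b, np.mean((f(x, a, b) - y) ** 2)

# Pick the candidate minimizing error plus a small complexity penalty
best = min(
    ((name, *fit(f, x, y), cost) for name, (f, cost) in candidates.items()),
    key=lambda r: r[3] + 0.1 * r[4],
)
name, a, b, mse, cost = best
print(f"best formula: {name} with a={a:.2f}, b={b:.2f} (mse={mse:.4f})")
```

Scoring each formula by error plus a complexity penalty is what drives a tool like this toward “the simplest mathematical formulas” rather than merely the best-fitting ones.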

One article in Singularity Hub declared in its title, “Eureqa: Software to Replace Scientists,” arguing that the software is incredibly fast and efficient and can save scientists exceptional time. Another essay, “Move over, Einstein: Machines will take it from here,” in New Scientist, similarly touts the possibility of machines and their applications effectively substituting for some functions of the human brain. While these descriptions contain some degree of exaggeration, the re-mirroring of the brain-as-computer model arises in part from the new scale of data and the increased reliance on machines to read, interpret, and find meaning in it. Our brains cannot, given our lifespans and the knowledge we have inherited, do a credible job of finding patterns and nascent equations in exabytes of data. As our machines generate an unprecedented mass of raw information, we need the same machines to read it. The challenge is not limited to the sciences: the very large digital libraries of text, image, and sound now available to humanists are, in some respects, beyond our reach without the aid of software.

The shift of grammatical place—from “our brain is a computer” to “the computer is our brain”—entails a semantic repositioning of considerable importance. A subsequent blog post will explore the implications of this shift, including the possibility of machines making new discoveries that we cannot recognize or understand.
