I believe it was Warren McCulloch, an early researcher in neuroscience, who first proposed an analogy between the way a neuron fires and the functioning of early computers: a neuron is in one of two states, either at rest or, when it gets enough electrical input from its sources, firing in a burst. McCulloch was, I think, the first to draw an analogy between the on and off states of the neuron and the zero-one digital operation of early computers. And this analogy set off a metaphorical revolution, in which the working of the brain, and by extension the mind, was increasingly considered not just analogous to, but identical with, computation.
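The on-off neuron described above can be sketched as a simple threshold unit in the spirit of McCulloch and Pitts. This is only an illustration of the analogy, not anyone's actual model; the function name `mp_neuron` and the AND-gate example are my own choices.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts style unit: it 'fires' (returns 1) only when
    the weighted sum of its inputs reaches the threshold; otherwise
    it stays 'at rest' (returns 0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With a threshold of 2 and unit weights, the neuron behaves as an AND gate:
print(mp_neuron([1, 1], [1, 1], threshold=2))  # fires: 1
print(mp_neuron([1, 0], [1, 1], threshold=2))  # at rest: 0
```

The point of the analogy is visible in the return value: like a bit in an early computer, the unit is only ever 0 or 1, with nothing in between.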
McCulloch was part of an early interdisciplinary group, including Gregory Bateson and Norbert Wiener, that put together the first conferences on cybernetics, the Macy Conferences, which I believe were happening around 1950. The primary interest of Bateson, who came from anthropology, was in systems: feedback systems, recursive feedback loops, and how systems regulate themselves or go out of control. His basic metaphor for the functioning of the mind wasn't the computer so much as the thermostat. A thermostat measures the temperature. When it's too cold it turns the heat on, and when it gets too hot, it turns the heat off, and so it establishes a regular relationship with the environment and maintains something like homeostasis. He thought that was an important way of understanding the mind-body interaction with the environment.
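Bateson's thermostat is a classic negative-feedback controller, and the self-regulating behavior is easy to see in a small simulation. This is a minimal sketch under assumed numbers: the function name `thermostat_step`, the 19-21 degree comfort band, and the heating/cooling rates are all illustrative choices of mine.

```python
def thermostat_step(temp, heater_on, low=19.0, high=21.0):
    """Bang-bang control: turn the heat on below `low`, off above `high`,
    and otherwise leave the heater in its current state."""
    if temp < low:
        return True
    if temp > high:
        return False
    return heater_on

temp, heater = 15.0, False  # start in a cold room
for _ in range(30):
    heater = thermostat_step(temp, heater)
    temp += 0.5 if heater else -0.3  # heating vs. ambient cooling
# temp now hovers inside the comfort band: homeostasis
```

However cold or hot the room starts, the loop settles into oscillation around the set point; the feedback corrects each deviation rather than amplifying it.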
He also liked to use the analogy of two people with electric blankets to show how feedback systems could run amok. Picture a couple in bed. Husband and wife each have separate electric blankets, but in the middle of the night, they get the switches mixed up, so that he holds the switch to her blanket, and she holds the switch to his blanket. Well, when he's cold, he turns it up, so she gets hotter, and when she gets hotter, she turns hers down, so he gets colder. He gets colder, so he turns his up, and she gets hotter. She gets hotter, she makes him even colder. It's a runaway system of feedback. It's the opposite of homeostasis. You might see this picture of mutually-interacting feedback as a precursor to some of the things we're talking about in our discussions of intersubjectivity later on, because what we're looking at is mutually-interacting systems, mutually creating systems.
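The crossed-switches story can be sketched as two coupled controllers, each correcting its own discomfort by adjusting the other person's blanket. Everything here, starting temperatures, the shared target of 20 degrees, and the 0.6 adjustment gain, is an assumed illustration, but the structure is the point: the same corrective move that stabilizes a thermostat now amplifies instead.

```python
target = 20.0
his_temp, her_temp = 18.0, 22.0  # he starts cold, she starts warm

for minute in range(10):
    his_error = target - his_temp  # positive: he feels cold
    her_error = target - her_temp  # negative: she feels warm
    # Crossed switches: his adjustment changes HER blanket, and vice versa.
    her_temp += his_error * 0.6    # he turns "his" dial up -> she gets hotter
    his_temp += her_error * 0.6    # she turns "hers" down -> he gets colder
# The errors grow each round instead of shrinking: a runaway system.
```

With the switches crossed, each correction feeds the other person's error, so the gap between the two temperatures widens without bound, the exact opposite of the thermostat's homeostasis.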
The dilemma that arose with the computer model of the mind is that it basically dropped the systems picture and created this isolated computational mind. Now that's taken us in all sorts of weird directions. One of the weirdest I came upon recently was in the work of Ray Kurzweil, a self-styled futurist and AI developer, the inventor of a number of quite remarkable artificial intelligence machines and products. He developed one of the original optical scanners that could read texts. He developed early voice recognition systems. I think he invented the first really complete music synthesizer, one that could digitally recreate the sound of virtually any instrument.
But by taking this picture of brain-equals-mind-equals-computer to its extreme conclusion, he has gone on to say that he anticipates the brain will be completely reverse engineered by 2025. That means everything the brain can do will be reverse engineered computationally into something a computer can do.
That claim makes sense only if you think of what a mind does and what a brain does as purely instrumental. What does it mean to reverse engineer subjectivity? What would it mean to reverse engineer sadness, or joy, or zazen? What about all these human qualities and capacities of the mind? What does it mean to reverse engineer them? And he goes even farther, predicting that by 2045 an event he calls the singularity will occur. This means the ultimate interface between human and machine, so that the biological and the computational will completely merge. He calls it a singularity after the singularity at the heart of a black hole, hidden behind its event horizon, to say that we simply cannot imagine what could happen beyond it. This is an event that will be utterly transformative of what it means to be human.
When I've listened to his talks, there seems to be a bifurcation, though, in what happens when man merges with machine. On one branch, he's a big advocate of nanotechnology, and so he thinks that one of the things we'll do is inject ourselves with millions or billions of little micromachines that course through the body and constantly repair everything that's breaking down inside, so that we would never get old, we would never get sick, the body would become perfectly efficient, and there you get a kind of machine-assisted biological immortality. On the other branch is the idea that the mind will become totally downloadable, or uploadable to the cloud, on the premise that the mind is reducible to a whole collection of programs and algorithms. Somehow all of these will be able to be transferred up and out into a kind of immortal virtual reality.
When I've read reviews of Kurzweil's work by philosophers like John Gray, one of the things they tend to point out is how much the singularity has in common with ideas like the Rapture. It's a kind of end-of-history moment of total transcendence, where once and for all we will transcend our mortality and our physical bodies, where we will somehow be translated into a completely different realm. I would say that this is the ultimate curative fantasy. What you have -- and I think Kurzweil has talked about part of his drive in thinking this way and pursuing all of this -- is a kind of fear of death, a conviction that man must become immortal. And so there's a way in which this whole technologically-driven program remarkably overlaps with the whole set of religious ideas about eternal life and a soul that will somehow be independent of and transcend the body. For Kurzweil it's the mind that will be separable from the body and uploaded to the cloud. In religious terms, it's the soul that will be separable from the body after death and have its kind of immortality.
I think these ideas bear more than a little family resemblance to each other. I think that many of us who engaged in this practice early on tended to treat the idea of enlightenment as our own personal singularity -- there was a kind of equivalent fantasy of some event that would be utterly transformative, so that life would have this kind of before-and-after quality, and many of the early accounts of kensho had this kind of born-again, conversion-experience quality -- the sense that, having had this moment, everything was completely changed once and for all.
Lots of those moments, of course, do occur, but unfortunately they don't all play out the way we anticipate, and as one famous book title put it, "After the Ecstasy, the Laundry." We always seem to come back down to earth. But I think that once you get a sense for the form of these curative fantasies, you start seeing them everywhere, in all different shapes and sizes, and you can see in Kurzweil's project something that looks like yet another attempt to transcend what it is to be mortal, what it is to be human -- to think that the project of this life is not to become more fully human, but to somehow escape the confines of being human. And while I do not by any means pretend to be an expert on what's technologically possible in the future, I do think I'm an expert on what's crazy.