The Risk of Another Consciousness Winter

The term “consciousness winter” describes the period in the 20th century when it was effectively an academic death-sentence to study consciousness scientifically. It’s a play on the better-known “AI winter” of the 1980s, when funding and interest in AI dried up due to a string of high-profile over-claims and failures as the limitations of early hardcoded learning systems became clear. When I entered academia studying cognitive science, AI looked a lot like a dead field. It stayed that way until the deep learning revolution, which acted as the Cambrian explosion that led to successful modern AIs like GPT-4.

A similar long gap in progress occurred in our scientific understanding of consciousness, but rather than several smaller waves it was one big wave. Its historical origins? Behaviorism, the rise of the Vienna Circle and logical positivism, and other philosophical and even political currents meant that consciousness was effectively exiled from the purview of science in the mid-20th century. Personally, I’d peg the depths of the consciousness winter at around 1940; it lasted well into the 1960s and even ‘70s, and was only clearly over in the ‘90s.

Here’s a description of what this was like from my recent book, The World Behind the World, part of which covers the history of consciousness research:

All the way to the 1990s, research into consciousness was not viewed as “proper” science. Yet in their beginnings, both neuroscience and psychology claimed consciousness directly as the phenomenon they were interested in. William James’s 1890 field-inaugurating The Principles of Psychology coined the term “stream of consciousness.” But by the 1920s these approaches had become heavily criticized for their reliance on introspection. Destined to become the most prominent critic was Harvard student B. F. Skinner . . . Skinner soon became the most significant proponent of a radical form of behaviorism, arguing that organisms were merely input/output black boxes and any talk of the intrinsic perspective was unscientific, barely better than paganism or astrology.

If there is a villain for consciousness research, it is Skinner: failed novelist, rejecter of the intrinsic perspective. Due to the popularity of his approach consciousness became a pseudoscientific word and psychology was stripped of the idea of a “stream of consciousness,” stripped of everything intrinsic, for almost a century. In order to survive as a science, psychology only kept the reduced elements of consciousness—attention, memory, perception, and action—while throwing out the domain in which they exist, the very thing that gives them form, sets them in relation, and separates one from the other. . . 

I originally coined the term “consciousness winter” when replying to a recent letter signed by 124 scientists declaring Integrated Information Theory pseudoscience (a popular theory of consciousness I helped develop aspects of in graduate school). This unusual move, and the following debate, garnered a lot of attention in the press.

History can be dangerously cyclical. If a consciousness winter happened once, it can happen again. After all, ending the consciousness winter required, essentially, the lucky prestige of Nobel Prize winners from other disciplines, who then switched to studying consciousness.

It took two Nobel Prizes to break it into acceptable discourse, though neither prize had anything to do with the brain or its workings. The first prize was awarded for discovering the structure of DNA, and one of the recipients in 1962 was Francis Crick. The second prize was for discovering the structure of antibodies, and one of the recipients, following ten years behind in 1972, was Gerald Edelman. As their fields moved on, Francis Crick and Gerald Edelman became interested in other disciplines. The terrains they had been the first to truly traverse, molecular biology and immune function, had been drawn out, the boundaries established, and they, explorers, found themselves surrounded by mere cartographers. . .

In response to a skepticism that had governed entire generations of scientists, both Crick and Edelman fundamentally made the same argument: that consciousness is a natural phenomenon produced by the brain and therefore falls within the purview of science. 

While I think the contributions of Crick and Edelman are perhaps given too much credit for ending the consciousness winter, they did play a big part, and I’ve heard some version of this story from almost everyone I’ve talked to in consciousness research (often while remarking on how times have changed).

But some academics in the field seem to think there never was a consciousness winter. Such is the thesis of Jake Browning, a graduate student in philosophy at NYU. According to Browning (writing in reply to me):

There is a story that goes, “in the 20th century, you couldn’t talk about consciousness. Before Edelman, Crick, and Koch, it was a bad word that no one would touch.”

Browning says that story is not just overly simplistic, but wrong:

The Crick and Koch papers were indeed important for stimulating enthusiasm for research on consciousness and the brain in mainstream psychology and neuroscience. However, this was hardly the beginning of scientific interest in and research on consciousness. In the 1960s and 1970s, studies of split-brain, blindsight, and amnesia patients laid the conceptual foundations for later work on consciousness. . . Additionally, consciousness and the brain were the subject of a number of scientific conferences starting in the 1950s that were attended by leading researchers in psychology and brain science. Furthermore, theories about what consciousness is and how it relates to the brain were proposed by a number of prominent researchers long before the 1990s, including Karl Lashley, Wilder Penfield, Donald Hebb, Roger Sperry, Sir John Eccles, George Miller, Lord Brain, Michael Gazzaniga, Leon Festinger and coworkers, George Mandler, Tim Shallice, and Michael Posner and coworkers among others. . . 

Thus, while some people were shying away, there wasn't a genuine "winter."

Color me skeptical. What I think is going on here is that “laying the conceptual foundations for later work on consciousness” is doing most of the work for Browning. E.g., according to Browning, mere work with amnesia patients was “laying the foundations” for consciousness research. Yet many of those names never worked on consciousness explicitly, or did so only in the most oblique ways. And though a few were more explicit about consciousness (e.g., Sperry, Gazzaniga, Eccles), they represented a vanishingly small fraction of working, or even famous, neuroscientists of the era. They operated on the margins, protected by fame or highly particular subfields (like access to split-brain patients). And even in those cases, their most explicit work on consciousness came later, in the ‘60s and ‘70s, when the consciousness winter was already beginning to thaw.

As I argued in The World Behind the World, consciousness is the main function of the brain as an organ, so it’s almost impossible to study the brain without doing something that potentially bears on consciousness. At the same time, I also pointed out that this refusal to be explicit is why neuroscience is in such a messy state—we’ve been studying the main function of our subject of research from an angle, instead of head on. If you can’t talk about something, it makes it very hard to study, even if you also can’t avoid helping “lay conceptual foundations” around it! If you look directly at whether researchers were allowed to talk about consciousness explicitly in the period of time stretching from the ‘20s to the ‘80s, in the majority of cases I think the answer is a resounding no.

So, is there good evidence for a consciousness winter or not? Nowadays the field is engaged in an explicit search for what’s commonly called the “neural correlates of consciousness.” This term essentially does not exist prior to the ‘90s.

Pretty stark! Now, maybe “neural correlates of consciousness” is just our modern terminology. Maybe that massive take-off is just a linguistic rebranding of research that was already ongoing under another name? Someone might point out that early psychologists never used the phrase “neural correlates of consciousness” either—but the claim of a consciousness winter is not about terminology. It’s that psychology started out interested in consciousness (look, for instance, at how freely early psychologists like William James or Wilhelm Wundt discussed it), and that this interest was then suppressed for decades after the rise of behaviorism in the 20th century.

So we can’t necessarily rely on the modern terminology (although variations on similarly phrased queries all reveal the same sudden burst of interest in the ‘80s). How about just the word “consciousness” itself, across all books? How did its usage change over time?

I find this astounding. It lines up almost exactly with where I would begin and end the consciousness winter of the 20th century, putting the depths of the winter right around 1940.
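The frequency comparison behind charts like these is simple to sketch. The numbers below are made up for illustration (real Google Books ngram data provides per-year match counts and corpus totals); the point is just that a “winter” shows up as a mid-century dip in a word’s relative frequency.

```python
# Sketch: computing a word's relative frequency over time from
# ngram-style counts. Sample numbers are illustrative, not real data.

sample = {
    # year: (occurrences of "consciousness", total tokens in corpus that year)
    1900: (120, 1_000_000),
    1940: (40, 2_000_000),
    1995: (600, 4_000_000),
}

def relative_frequency(counts):
    """Map year -> occurrences per million tokens, so years with
    different corpus sizes are comparable."""
    return {year: hits / total * 1_000_000
            for year, (hits, total) in counts.items()}

freq = relative_frequency(sample)
# A "consciousness winter" appears as a dip: the mid-century value
# sits below both the early-century and late-century values.
assert freq[1940] < freq[1900] and freq[1940] < freq[1995]
```

Normalizing by corpus size matters here: raw counts rise over the century simply because more books were published, so only the per-million rate reveals the dip.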

What’s interesting too is that, since the word itself became less popular across all books, the excision of consciousness from the purview of science likely affected more than science alone.

And on reflection, it’s clear that society and culture—and here I’m talking literature, art, etc.—spent the middle of the 20th century, right in the depths of the consciousness winter, problematizing their relationship to consciousness. Perhaps some of these cultural changes were subtle downstream effects of removing consciousness from science’s purview. One could write an entire book about this thesis, but I’ll just give a few examples: during the years of the consciousness winter literature became dominated by postmodernism, and therefore more focused on the structure, textual metaphysics, and content of the text qua text. Authors like Pynchon, Gaddis, and Barthes replaced authors like Virginia Woolf and James Joyce in popularity. In turn, fictional characters became more obvious pawns of the author, more likely to play funny self-acknowledging games, less likely to be thought of as “real.” It’s as if authors became skeptical of the intentions and thoughts of their own characters, and could no longer really believe them or take them seriously.

Similarly, painters abandoned the psychological musings of modernism and embraced the abstract mechanics of postmodernism. Abstract Expressionism, and the rise of mechanical painters of fractured form like Jackson Pollock and Mark Rothko, replaced the older, more human-focused and perception-focused art of Pablo Picasso or Henri Matisse. During the consciousness winter art itself became more mechanical, more focused on novel techniques, less interested in representing consciousness; indeed, even suspicious of it.

Left: Matisse’s Dance from 1910. Right: Pollock’s Number 1 from 1948.

The cultural trickle-down theory of the consciousness winter fits well with one of the main hypotheses from The World Behind the World: that our civilizational conception of an “intrinsic perspective” (by which I mean the richness with which we think or talk about consciousness) rises and falls historically, gaining greater depth and clarity, or becoming hollowed out, due to historical events. E.g., in the book I argue that during the “dark ages” (an unpopular historical term but still a useful one) there is very little intrinsic perspective to be found in literature, whereas ancient Roman literature evinces a rich understanding of consciousness. Or, as Bryan Ward-Perkins wrote in The Fall of Rome and the End of Civilisation:

Almost all the references we have to writing in post-Roman times are to formal documents, intended to last (like laws, treaties, charters, and tax registers), or to letters exchanged between members of the very highest ranks of society. . . Most interesting of all is the almost complete disappearance of casual graffiti, of the kind so widely found in the Roman period.

With understanding of the depths of consciousness bled dry by mass illiteracy in the dark ages, the only place it remained was in discussions of religion by the educated clergy. All the way through the medieval era there is a noticeable lack of attention paid to everyday mental states. Similarly, during the 20th century there appears to be, if not a reversion of similar depth, at least a distancing from consciousness as postmodernist suspicion crept into the arts. I find it unsurprising that this happened when consciousness was exiled from science out into the cold.

The Warning Signs

While there are no immediate signs of another consciousness winter—it has not started snowing—there are several forces at work in the modern world that could spur on a creeping cold. The first is the lack of scientific progress on a well-accepted theory of consciousness itself. This is one of the most important problems in science, and yet few work directly on it. Those who do inevitably become controversial figures. Now that it’s been made acceptable (by frustrated figures within the field) to call even well-known and popular theories of consciousness pseudoscientific, funding, interest, and acceptance could experience a drawdown across the board, freezing a still-fledgling field.

More broadly, we are potentially poised for another consciousness winter due to the rise of AI. While there were early debates over whether AI counts as conscious, most people are assuming that GPT-4 is not conscious (likely, in my opinion, correctly). That is, there is nothing it is like to be GPT-4, as it were. However, this conclusion would mean that intelligence and consciousness can be orthogonal. You can have high intelligence with minimal consciousness. Obviously, that’s not true for humans. There is an inner fire of experience within all of us. We don’t know exactly how to reconcile that with the brute mechanical workings of our brain, but nature does—such reconciliation occurs at every millisecond, and in every human. But with the rise of intelligent artificial beings who lack consciousness, our own nature may be obscured. People will intuitively begin to think of humans as AIs, just biological Large Language Models. We might stop being able to tell the difference between us and them.

For all these reasons, especially the downstream cultural effects, we should worry at even the slightest glimpse of hoarfrost.
