In my last three posts on JTB+, I have defended a psycho-epistemic model of categorization on which Justified True Belief + No False Lemmas would be a viable definition of knowledge. As a reminder, the scenarios we’re trying to write off as non-counterexamples to this model are so-called Gettier Cases, which are characterized by a common theme of being right by happenstance. The prototypical examples I gave in my earlier posts are the deer in the yard example[1] and the fake barn county[2] example. My model writes these off as non-examples by insisting that, no, in the act of categorizing the objects we saw into the wrong category, we actually did invoke a false lemma, namely our unconscious belief about the proper way to categorize objects in that particular context. I showed that it doesn’t really matter which cognitive model of categorization we prefer; as long as the model has some plausible characteristics we would demand of any categorization schema, the psycho-epistemic move works to write off Gettier Cases.
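Schematically, in notation of my own devising (none of this symbolism appears in the earlier posts), the JTB+ claim is

$$K(S,P) \iff \underbrace{P}_{\text{true}} \;\land\; \underbrace{B(S,P)}_{\text{believed}} \;\land\; \underbrace{J(S,P)}_{\text{justified}} \;\land\; \underbrace{\neg\,\mathrm{FL}(S,P)}_{\text{no false lemmas}}$$

where $\mathrm{FL}(S,P)$ says that some lemma in $S$’s justification for $P$ is false, and where “lemma”, on the psycho-epistemic model, covers unconscious categorization beliefs as well as explicit ones.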
However, in these posts I smuggled in the assumption that there are, in fact, unconscious beliefs that we can appeal to in the psycho-epistemic model. And it’s not altogether clear that this should be an actual thing, or what sort of thing it would be if it existed.
In this post I’m going to be referencing the SEP article on belief to give a short primer on each major school of thought in philosophy of mind about how best to characterize belief, and on whether and how each one can accommodate the notion of unconscious belief that I need for my defense of JTB+ to go through. This will be less an argumentative thesis and more a sanity check for my own benefit, to make sure I’m not arguing for something manifestly at odds with all of established philosophy of mind.
Representationalism
It is common to think of believing as involving entities—beliefs—that are in some sense contained in the mind. When someone learns a particular fact, for example, when Kai reads that astronomers no longer classify Pluto as a planet, he acquires a new belief (in this case, the belief that astronomers no longer classify Pluto as a planet). The fact in question—or more accurately, a representation, symbol, or marker of that fact—may be stored in memory and accessed or recalled when necessary. In one way of speaking, the belief just is the fact or proposition represented, or the particular stored token of that fact or proposition; in another way of speaking, the more standard in philosophical discussion, the belief is the state of having such a fact or representation stored. (Despite the ease with which we slide between these different ways of speaking, they are importantly distinct: Contrast the state of having hot water in one’s water heater—the state of being “hot-water ready”, say—with the stuff actually contained in the heater, that particular mass of water, or water in general.)
So in some loose sense, Kai’s belief “Pluto isn’t a planet” is itself an unconscious belief; it isn’t on Kai’s conscious mind most of the time and needs to be prompted to come to the fore. He is, to use SEP’s turn of phrase, “hot-water ready”.
But obviously that’s not the sort of thing we mean when we refer to unconscious belief. So what’s the difference between Kai’s belief and our unconscious beliefs about categorization? Or, to use a more visceral, worldly sort of example, the unconscious beliefs of a rape victim who has thoroughly repressed the experience[3]?
Well, the primary obvious difference is just the amount of prompting required to get at the belief. If Kai is “hot-water ready” with regard to his beliefs about Pluto, he is far less ready with regard to beliefs about categorization or trauma that he never consciously examines. The representation is still stored, but in a harder-to-access area of memory than the one pedestrian beliefs about Pluto occupy.
It is also common to suppose that beliefs play a causal role in the production of behavior. Continuing the example, we might imagine that after learning about the demotion of Pluto, Kai naturally turns his attention elsewhere, not consciously considering the matter for several days, until when reading an old science textbook he encounters the sentence “our solar system contains nine planets”. Involuntarily, his new knowledge about Pluto is called up from memory. He finds himself doubting the truth of the textbook’s claim, and he says, “actually, astronomers no longer accept that”. It seems plausible to say that Kai’s belief about Pluto, or his possession of that belief, caused, or figured in a causal explanation of, his utterance.
Note that this is just Something Beliefs Do on the representationalist view, and not Something Beliefs Are, in the same way that tires are usually attached to vehicles, but not being attached to a vehicle doesn’t make a tire not a tire.
That aside, later in this post we will discuss models of belief on which the behavioral aspects just are the beliefs. These behavior-based models will suit unconscious beliefs the best, because in this regard unconscious beliefs are at least as good as conscious ones; it seems plausible to me that unconscious beliefs are in fact primary drivers of behavior. Notice how often our conscious beliefs and our unconscious ones are in conflict. If I’m a PTSD victim who has internalized the visceral unconscious belief that People Are Dangerous, then even if I consciously know that it’s wrong and can articulate that it’s wrong, the unconscious belief will often be the one that wins out and influences my behavior. Edginess aside, I’m reminded of a scene from the show Mr. Robot where the protagonist gives an internal monologue about daemons:
There's a saying -- 'The devil is at his strongest while we're looking the other way.' Like a program running in the background silently. While we're busy doing other shit. 'Daemons,' they call them. They perform action without user interaction. Monitoring, logging, notifications, primal urges, repressed memories, unconscious habits. They're always there, always active. You can try to be right, you can try to be good, you can try to make a difference. But it's all bullshit. 'Cause intentions are irrelevant. They don't drive us, daemons do.
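To push the analogy one step toward literalness, here is a toy sketch (entirely mine, not the show’s, and obviously not a model of the mind): a background daemon thread quietly overwrites the state that actually drives behavior while the “conscious” main loop is busy doing other things.

```python
import threading
import time

state = {"approach_people": True}  # the consciously endorsed belief

def daemon_belief():
    """The internalized 'People Are Dangerous' belief, running unattended."""
    while True:
        state["approach_people"] = False  # acts without user interaction
        time.sleep(0.1)

threading.Thread(target=daemon_belief, daemon=True).start()

for _ in range(3):
    state["approach_people"] = True   # conscious intention: be sociable
    time.sleep(0.2)                   # attention wanders elsewhere
    # By the time we act, the daemon has had the last word.
    print("approach people?", state["approach_people"])
```

Run it and the consciously endorsed value essentially never survives to the moment of action, which is the monologue’s point in miniature.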
So it seems like representationalism can account for unconscious belief. The representations are just tucked a bit further down, and they still do the stuff that ordinary beliefs ought to do. What form might these representations take?
One model is the language of thought hypothesis (the SEP passage below sets it up with the example of a robot that operates by manipulating formulae in machine language):
According to the language of thought hypothesis (see the entry on the language of thought hypothesis), our cognition proceeds rather like such a robot’s. The formulae we manipulate are not in “machine language”, of course, but rather in a species-wide “language of thought”. A sentence in the language of thought with some particular propositional content P is a “representation” of P.
Aha! So on this model, there’s a corresponding sentence in the language of thought that Kai has stored somewhere, and its content is the proposition “Pluto isn’t a planet.” Notice that we need to expend computational energy, on this model, getting from the language of thought to the ordinary language propositional content. It seems easy enough to imagine that how “close” a belief is to the conscious mind tracks how computationally cheap the retrieval-translation process is. Those beliefs that are at the fore of our mind at any given moment will be easily stated and not difficult to translate from language of thought to ordinary language; those beliefs that we don’t attend to consciously but which are in some sense pedestrian will also be relatively easily stated, but it may take a bit of prompting to get the person to articulate them, and the translation may be slightly more computationally intensive; and finally, those beliefs that we never examine, whether neglected or repressed, will take a lot of prompting to dredge up, i.e., a lot of energy to translate.
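To make the gradient concrete, here is a deliberately crude sketch (my illustration only; the tokens and costs are invented, and no claim about actual cognitive architecture is intended): beliefs stored as language-of-thought tokens tagged with a translation cost, where “unconscious” just means the cost exceeds ordinary prompting.

```python
from dataclasses import dataclass

@dataclass
class StoredBelief:
    lot_sentence: str      # stand-in for a language-of-thought token
    translation_cost: int  # energy needed to render it in ordinary language

# Invented costs encoding the three tiers described above.
memory = [
    StoredBelief("PLUTO(not-planet)", translation_cost=1),               # at the fore
    StoredBelief("CATEGORIZE(context-sensitive)", translation_cost=10),  # pedestrian, unexamined
    StoredBelief("PEOPLE(dangerous)", translation_cost=100),             # neglected or repressed
]

def articulate(prompting_energy: int) -> list[str]:
    """Return the beliefs that a given amount of prompting can dredge up."""
    return [b.lot_sentence for b in memory if b.translation_cost <= prompting_energy]

print(articulate(1))    # ['PLUTO(not-planet)']
print(articulate(100))  # all three, given enough prompting
```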
But there are other models of representationalism.
A number of philosophers have argued that our cognitive representations have, or can have, a map-like rather than a linguistic structure (Lewis 1994; Braddon-Mitchell and Jackson 1996; Camp 2007, 2018; Rescorla 2009; though see Blumson 2012 and Johnson 2015 for concerns about whether map-like and language-like structures are importantly distinct). Map-like representational systems are both productive and systematic: By recombination and repetition of its elements, a map can represent indefinitely many potential states of affairs; and a map-like system that has the capacity, for example, to represent the river as north of the mountain will normally also have the capacity to represent, by a re-arrangement of its parts, the mountain as north of the river.
But what happens to the map when, for example, conscious and unconscious belief disagree? It’s easy to imagine a repressed memories scenario (but again, if you find repressed memories controversial, pick your favorite unexamined background daemon belief): a person could believe consciously that they weren’t molested, but subconsciously know that they were. Our representational structure needs to be able to account for disagreements of this sort.
But I want to gesture toward the idea that this is a broader problem for representationalists, since even in ordinary circumstances, many (all?) of us hold some contradictory beliefs. I argued in a footnote of an earlier post on counterfactuals that justified beliefs do not compose, in the sense that A being a justified belief and B being a justified belief does not always imply that A&B is a justified belief; yet it seems plainly irrational to believe A and believe B while rejecting A&B.
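One quick way to see why composition ought to fail: assume, purely for illustration, a probabilistic picture on which a belief is justified whenever its probability clears some threshold $t$. Then

$$P(A) \ge t \ \text{ and } \ P(B) \ge t \quad\not\Rightarrow\quad P(A \land B) \ge t,$$

because the best general lower bound is $P(A \land B) \ge P(A) + P(B) - 1$. With $t = 0.9$ and $P(A) = P(B) = 0.9$, the conjunction can sit anywhere down to $0.8$, below the threshold.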
Indeed, SEP notes that a linguistic model might have the edge here; if we’re just manipulating linguistic strings mechanistically to structure our beliefs, we could easily end up with two strings whose propositional contents contradict each other. It’s less clear how the maps view could make sense of people not making sense. It’s also not clear to me how the maps view can accommodate unconscious belief. Maybe part of the map is smudged or blurry or something, but that seems like stretching the metaphor a bit far. It seems a whole lot more natural to appeal to the linguistic view if we are going to talk about unconscious belief.
I submit that representationalism can accommodate unconscious belief, but that it probably ought to stick with a language of thought model if it is going to. I’d like to remain agnostic insofar as that is possible in future JTB+ posts, but this is my favorite theory, personally.
Dispositionalism
The SEP section on dispositionalism starts with a thought experiment where we are prompted to imagine an alien named Rudolfo who has seamlessly integrated into human society and behaves just as we’d expect a human to behave—as if, in other words, he had an assortment of beliefs about the world.
Perhaps we can coherently imagine that Rudolfo does not manipulate sentences in a language of thought or possess internal representational structures of the right sort. Perhaps it is conceptually, even if not physically, possible that he has no complex, internal, cognitive organ, no real brain. But even if it is granted that a creature must have human-like representations in order to behave thoroughly like a human being, one might still think that it is the pattern of actual and potential behavior that is fundamental in belief—that representations are essential to belief only because, and to the extent that, they ground such a pattern. Dispositionalists and interpretationists are drawn to this way of thinking.
This seems wildly unintuitive to me. Dispositionalists deviate from what I said before about the behavioral aspects of belief being Something Beliefs Do, and instead claim that these behavioral aspects just are the belief. No doubt you’re already coming up with objections, because, like I said, it’s a view that strains credulity.
Often cited is the disposition to assent to utterances of P in the right sorts of circumstances (if one understands the language, wishes to reveal one’s true opinion, is not physically incapacitated, etc.).
Ah! So one of the primary behavioral factors is linguistic. Indeed, you might make a reductionist case that having a belief in a proposition P is equivalent, on a dispositionalist view, to the truth of the counterfactual “If you were in the right sorts of circumstances, you would utter or assent to an utterance of P.” The problem is that, while this approach will license a lot (all?) of the normal sorts of beliefs, you’d also be licensing the following sort of thing:
Mary doesn’t want to lie to her friend Sally, but Sally has asked her whether the dress she’s wearing makes her look fat (it does). Mary wishes to reveal her true opinion, but out of concern for her friend’s self-esteem she says, “No, of course not.” If our only behavioral criterion for belief is the disposition to utter or assent to utterances under the right circumstances, we have to admit that Mary believes the dress doesn’t make Sally look fat (which Mary doesn’t believe).
You may object that I’m playing fast and loose with what counts as a “right sort of circumstance” here. Does Mary really “wish to reveal her true opinion”? And the answer is: I don’t think it matters. The dispositionalist is cheating by smuggling the idea of a “true opinion” into the “right circumstances” criterion to begin with! What does it mean for an opinion O to be someone’s true opinion? You might say that’s obvious. But opinions are so closely related to beliefs—in fact, I claim opinions are beliefs—that I struggle to imagine how we could talk about true opinions without already having some idea of what makes something one’s true belief, and at that point we may as well just go with that definition and give up the dispositionalist project entirely. And if the criterion for what makes something one’s “true opinion” is the same as what makes something one’s “authentic belief”, then putting “…wishes to reveal one’s true opinion” into the “right circumstances” criterion is just circular!
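To put the circularity in symbols (my formalization, not SEP’s), the reductionist proposal from above is

$$\mathrm{Bel}(S,P) \iff \big(\, C(S) \;\Box\!\!\rightarrow\; \mathrm{Assent}(S,P) \,\big),$$

where $C(S)$ is “$S$ is in the right sort of circumstances” and $\Box\!\!\rightarrow$ is the counterfactual conditional. If $C(S)$ is unpacked to include “$S$ wishes to reveal $S$’s true opinion”, and having $P$ as one’s true opinion just is $\mathrm{Bel}(S,P)$, then $\mathrm{Bel}$ appears on both sides of the biconditional and the analysis never gets off the ground.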
So the dispositionalist additionally needs other sorts of behavioral cues; we can’t just reduce the truth conditions of “Alice believes proposition P” to one measly counterfactual and expect things to go smoothly.
Of course, there are ways to insert the right conditions into “right circumstances,” by fiat, but as SEP notes:
The second standard objection to traditional dispositional accounts of belief is to note the loose connection between belief and behavior in some cases—for example, in a recently paralyzed person, or in someone who wants to keep a private opinion (e.g., a Muscovite who believes, in 1937, that Stalin’s purges are morally wrong), or in matters of very little practical relevance (e.g., an American homebody’s belief that there is at least one church in Nice). Again, the traditional dispositionalist seems faced with a choice between oversimplifying (and thus mischaracterizing some people’s dispositions) and loading the dispositions with potentially problematic or unwieldy conditional antecedents (e.g., she’d get the umbrella if her paralysis healed; he’d speak up if the political climate changed).
It seems terribly unconvincing to have to tack so many caveats onto the model for it to make sense.
Nevertheless, if you are a dispositionalist, you will find it decidedly easy to incorporate notions of unconscious belief into your model. The “right circumstances” under which Alice will utter or assent to her unconscious beliefs might be a bit more elaborate or taxing, but they will exist. And as I briefly argued in the section on representationalism, unconscious beliefs are at least as good as conscious beliefs at driving behavior. Some findings in neuroscience arguably support this: one natural way to interpret reports that decisions can be predicted from brain activity up to seven seconds before subjects are aware of having made them is that the unconscious mind is the true decision maker and the conscious mind is only playing catch-up.
Interpretationism, Functionalism
The SEP article has sections on interpretationism and functionalism. These both rely on external behavioral cues, and I find them unconvincing for the same reasons as dispositionalism; if we want to incorporate a notion of unconscious belief into them, we need only rephrase the last paragraph on dispositionalism into the jargon of interpretationism or functionalism, and it will look much the same. That’s all I will say about these two schools of thought, but the article is linked above if you’re interested in reading about them.
Eliminativism, Instrumentalism
According to eliminativism, once folk psychology is overthrown, strict scientific usage will have no place for reference to most of the entities postulated by folk psychology, such as belief. Beliefs, then, like “celestial spheres” or “phlogiston”, will be judged not actually to exist, but rather to be the mistaken posits of a radically false theory. We may still find it convenient to speak of “belief” in informal contexts, if scientific usage is cumbersome, much as we still speak of “the sun going down”, but if the concept of belief does not map onto the categories described by a mature scientific understanding of the mind, then, literally speaking, no one believes anything.
So, on this view, there are no beliefs. But on this view there’s presumably no knowledge either; it won’t, therefore, be terribly relevant to epistemologists or the epistemological project I’m defending. If you’re an eliminativist, you didn’t even get to this point in my JTB+ project; you scoffed as soon as I outlined the project and noped out.
A slightly softer position is instrumentalism:
Instrumentalists about belief regard belief attributions as useful for certain purposes, but hold that there are no definite underlying facts about what people really believe, or that beliefs are not robustly real, or that belief attributions are never in the strictest sense true (these are not exactly equivalent positions, though they are closely related). One sort of instrumentalism—what we might call hard instrumentalism—denies that beliefs exist in any sense. Hard instrumentalism is thus a form of eliminativism, conjoined with the thesis that belief-talk is nonetheless instrumentally useful (e.g., Quine 1960, p. 221 [but for a caveat see p. 262–266]).
For hard instrumentalists, presumably knowledge attribution is just as instrumentally useful as belief attribution, so they might care about epistemology. A hard instrumentalist could easily incorporate notions of unconscious belief and JTB-adjacent issues into their folk-ontology as instrumentally valuable. Even if they don’t buy my JTB+ formulation as true, they might entertain it as instrumentally useful.
So we see that the main schools of thought on belief in philosophy of mind, apart from eliminativism, can accommodate notions of unconscious belief. Behavioral theories like dispositionalism can do it the easiest, but I think those are kind of crazy; we should therefore put in slightly more work to incorporate unconscious belief into representationalism, namely language of thought representationalism.
[1] I see something that looks like (but is not) a deer in my yard and form the justified belief that there is a deer in my yard, and unbeknownst to me there is in fact a deer somewhere else in my yard; I seem to be correct only by happenstance, and we don’t want to call my belief knowledge.
[2] I’m driving through a county which is, unbeknownst to me, full of fake barn facades, and I happen to see a real barn. You do the math.
[3] I acknowledge that the existence of repressed memories remains controversial in psychology. That’s a separate debate I don’t feel like having, and it’s not essential to the thrust of my argument; repressed memory is just a convenient illustrative example.