Part one of two.
Popular behavioural science books emphasise the influence of the unconscious on decision making. Are we really unable to accurately report what causes our behaviour? To what extent do we really know our own minds when making decisions?
Imagine this. Shoppers were approached in a department store and asked to choose one nightgown from four and then justify their choice. 12% of shoppers selected option 1 (leftmost), 17% selected option 2, 31% selected option 3 and 40% selected option 4 (rightmost). Almost all said ‘quality’ was the reason for their choice. The funny thing is, all the nightgowns were identical. Although shoppers tended to select the rightmost option, they never mentioned position as a reason for their choice. Even when probed directly, they denied position had any influence.
Nisbett & Wilson’s seminal research (the nightgown study was replicated with other consumer goods) is often the most commonly cited evidence in support of the claim that we have little or no introspective access to our decision-making processes. Or, put another way, that we have no idea why we do what we do.
Based on their own research and a review of existing evidence, Nisbett & Wilson drew three conclusions:
- We are unaware of the true causes of our behaviour
- Our retrospective verbal reports about what (causally) influences our judgements are inaccurate
- A priori causal theories, based on what we typically do (or should do), are used to explain responses to a stimulus.
They also proposed that although mental processes are inaccessible, mental content (the products of those processes – personal historical facts, emotions, knowledge of intentions and so on) is accessible, which misleads us into believing we also have access to process.
Why does this matter?
Popular behavioural science books such as Thaler & Sunstein’s Nudge and Kahneman’s Thinking, Fast and Slow emphasise the influence of the unconscious on decision making. Some researchers, like Daniel Wegner and Ap Dijksterhuis, even claim conscious thought has little or no influence on behaviour at all.
This matters! Conscious awareness of the reasons underlying our behaviour and, relatedly, the extent to which conscious thoughts cause our behaviour are critically important in the real world. Experts such as doctors and policy-makers are trusted to make evidence-based decisions free from bias. Individual accountability, autonomy and responsibility are the very foundations of our legal framework and society.
Before we take a closer look at the science, let’s first try and define consciousness.
What exactly is ‘consciousness’ and how do you assess it?
Although there is no agreed definition of consciousness, theories generally distinguish between two different states:
- phenomenal consciousness, experienced as having essentially subjective, first-personal qualities
- conscious thought, used in reasoning and reflection (i.e. awareness of having or being in that state), which is the focus of this article.
Assessing conscious awareness often relies on verbal self-report measures, which are notoriously unreliable and subject to social desirability bias and demand characteristics. Recognising that measuring conscious awareness will only be informative if measures are free from bias and error, Newell & Shanks developed four criteria for assessing the adequacy of awareness measures, which I’ll refer to throughout this blog:
- Reliability: assessments should be unaffected by factors that do not influence the behavioural measure (e.g. social desirability);
- Relevance: assessments should target only information relevant to the behaviour;
- Immediacy: assessments should be made as close to the behaviour as possible to avoid forgetting/interference; and
- Sensitivity: assessments should be made under optimal retrieval conditions.
Now let’s look at the science…
I’ll be blunt. There are several issues with Nisbett & Wilson’s (1977) conclusions. First, the hypothesis is unfalsifiable because both correct and incorrect reports are used to support the conclusions.
Second, the distinction between mental content and mental process lacks clarity (see Smith and Miller, 1978). Nisbett & Wilson offer no definition of process or content beyond listing what content includes. How should rules associated with higher cognition, like long division, be categorised? What about the active process of forming intentions, which does not fit neatly into the content category? And there is a risk of circular reasoning if consciousness itself is used to decide what counts as process and what counts as content.
Third, verbal self-report measures can be unreliable indicators of mental processes. Nisbett & Wilson acknowledge these issues but do not address them in their own research. Suppose participants misinterpret the experimenters’ questions about the cause of their choice. The measure would then fail the ‘relevance’ test and lead to erroneous conclusions.
As Newell & Shanks point out, the ‘position effect’ – the typical preference for the rightmost product seen in the studies – could simply be a consequence of the decision-making process participants use. That is, the decision rule might be to examine the nightgowns from left to right, mentally comparing each nightgown to the best one seen so far. The nightgown currently in view is considered ‘the best’ if it is at least equal to the previous ‘best’. This decision rule predicts a preference for the rightmost nightgown given they are all identical. In this view, it is valid for participants to deny position – which only influenced how the products were sampled – and report quality as the reason for their choice.
Accurate introspection is possible when appropriate measures are used
Research on ‘choice blindness’ suggests accurate introspective access to our decision making is possible when more sensitive elicitation measures are used. Petitmengin and colleagues presented participants with two photos of female faces and asked which one was more attractive. Unbeknown to participants, their chosen photo was swapped before they were asked to verbally justify their choice. Consistent with Nisbett & Wilson’s findings, 80% did not detect the manipulation and provided an explanation of a choice they did not make.
But when an ‘elicitation interview’ (see Vermersch) was used to obtain a description of participants’ choices, 80% detected the manipulation. Elicitation interviews focus on ‘how’ and ‘what’ questions. Interviewers avoid ‘why’ questions which lead people to justify rather than describe what they have done.
Let’s look at another example. This gets a bit technical so bear with me…
The multiple cue probability learning (MCPL) literature avoids the issues associated with ‘why’ questions by using statistical methods to infer explicit knowledge of internal mental states. It examines whether and how people learn the objective probabilistic relationships between the information at hand (the ‘cues’) and the outcome being judged, and the extent to which people are aware of how they integrate that information into a single judgement. For example, HR professionals assess prior experience, test results and interview performance to predict a candidate’s future job performance.
Previous MCPL research (e.g. Evans, Clibbens, Cattani, Harris, & Dennis, 2003; Gluck, Shohamy & Myers, 2002; Slovic & Lichtenstein, 1971) typically finds people can learn which cues are more predictive of the judgement outcome, but lack explicit knowledge of how they achieve this. Or, put another way, that there is a dissociation between learning and insight. But research from Lagnado & Newell (2006) and Speekenbrink & Shanks (2010) undermines the standard assumption that MCPL is inaccessible to conscious awareness. Using a novel task and innovative statistical methods (‘rolling regression’ – see paper for detail) to analyse individual learning, they found that participants’ self-reported knowledge of the cues they used corresponded closely to how they were actually using the cues.
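To make the rolling-regression idea concrete, here is a minimal Python sketch. It is a generic illustration on simulated data, not the actual analyses reported in those papers: the cue structure, window size and variable names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MCPL-style data (hypothetical): on each trial the learner
# sees three binary cues and gives a judgement. We fake a learner whose
# reliance on cue 1 grows over time, cue 2 stays constant, cue 3 is ignored.
n_trials, n_cues = 200, 3
cues = rng.integers(0, 2, size=(n_trials, n_cues)).astype(float)
true_weights = np.column_stack([
    np.linspace(0.1, 0.8, n_trials),   # cue 1: increasing reliance
    np.full(n_trials, 0.4),            # cue 2: constant reliance
    np.full(n_trials, 0.0),            # cue 3: ignored
])
judgements = (cues * true_weights).sum(axis=1) + rng.normal(0, 0.1, n_trials)

def rolling_regression(X, y, window=50):
    """Fit an ordinary least-squares regression of judgements on cues
    within a sliding window of trials, returning one estimated weight
    vector per window position. A generic sketch of the rolling-regression
    idea, not the exact model used in the papers cited above."""
    weights = []
    for start in range(len(y) - window + 1):
        Xw = X[start:start + window]
        yw = y[start:start + window]
        beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        weights.append(beta)
    return np.array(weights)

w = rolling_regression(cues, judgements)
print(w[0].round(2), w[-1].round(2))  # early vs late estimated cue weights
```

The trial-by-trial weight estimates trace how a learner is actually using each cue as learning unfolds; self-insight can then be assessed by comparing these estimates with the cue ratings participants report at the same points in the task.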
So, let’s sum up so far. Despite making a good story, it seems inaccurate to claim the reasons underlying our decision making are (always) inaccessible to conscious awareness.
But what about the more implicit claim that the causal role of conscious thought on our behaviour has been vastly overstated? We’ll look at this question in part two, out very soon.