If you have a brain, you’re automatically biased. Here are ways to keep those biases from affecting your decision-making.
In the past few years, a lot has been written about Silicon Valley’s lack of diversity and about the steps organisations such as Google and Intel have taken to address the problem.
Companies are devising more innovative ways to build a pipeline of diverse applicants, which is a great place to start, but the pipeline won’t help you if you can’t retain diverse workers. And that’s the main issue: you can’t fix retention until you understand how our brains see people who are different and how bias works.
Psychologists have been studying how we make sense of other people (an area of research called ‘person perception’) for the better part of a century. It turns out that, almost entirely below our awareness, our brains transform disparate pieces of information (how someone sounds, what they look like, their behaviors, what they say) paired with what we know, or think we know, about them into a cohesive impression.
We feel like we see people as they are, but nothing could be further from the truth. Impressions are just as much about the perceiver (their assumptions, expectations, and memories) as they are about the perceived. Beauty is in the eye of the beholder, and so is everything else.
It’s no surprise that the process of perception is a biased one. There are lots of biases hardwired into our brains: preferences for people who are very similar to us or who are in our group; wariness of those who are different; and a tendency to save mental energy by using shortcuts like stereotypes to fill in the blanks about others.
These unconscious biases may have helped keep us alive in our hunter-gatherer days, but in the modern workplace and world, they can cause a lot of trouble.
To address the problem, everyone needs to understand that we are ALL biased.
The good news is that a lot of companies are accepting this fact and have been implementing unconscious-bias training programs to raise awareness. These mainly involve teaching people how biases work and giving them experiences that reveal some of their own biases, similar to Harvard’s online Implicit Association Test (IAT).
The bad news: although it’s a good first step, teaching people about unconscious bias doesn’t actually break bias.
This is because bias is still unconscious, a word that needs to be taken very seriously. Even when you know biases exist, it’s impossible to access the mental processes that create them.
As Daniel Kahneman, the world’s foremost expert on bias, wrote, “it’s difficult for us to fix what we can’t see”.
So any strategy that requires you to catch yourself being biased in the moment and correct it is bound to fail. Instead, we need strategies that break bias even when we don’t think we’re biased. The good news: those strategies have been found.
In the last decade, neuroscientists and psychologists have learned a great deal about fighting bias. They’ve found that different biases rest on different cognitive and neural underpinnings, and each needs a different type of strategy to break it.
For instance, biases caused by mental laziness can be corrected by giving people simple step-by-step processes to use when making decisions, which push them to be more thorough and deliberate.
Biases motivated by unconscious negativity toward people who seem different (like women in male-dominated tech companies) can be broken by focusing on shared goals or similarities. A study by neuroscientists Jay Van Bavel and Will Cunningham is a great example. In it, participants were asked to quickly categorize words as “good” (e.g., happiness, kitten) or “bad” (e.g., sadness, war). Before each word, the face of a white or black male was briefly shown on the computer screen. The idea is that seeing something positive before a positive word makes it easier to categorize, while seeing something positive before a negative word leads to more mistakes. The same logic holds for seeing something negative before a word.
Participants’ performance showed the typical effect of unconscious bias: they had a harder time categorizing negative words shown after white faces, but not after black ones. In other words, because they unconsciously saw white faces, not black faces, as positive, their performance on negative words suffered.
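The priming logic at work here can be sketched as a toy model. This is not the researchers’ code, and the valence numbers are illustrative assumptions that simply encode the result described above (white faces primed positively, black faces did not):

```python
# Toy model of evaluative priming: a congruent prime/word pair is easier
# to categorize, an incongruent pair is harder. Valences are illustrative
# assumptions encoding the reported finding, not measured values.

PRIME_VALENCE = {"white_face": 1, "black_face": 0}  # white primed positive
WORD_VALENCE = {"kitten": 1, "happiness": 1, "war": -1, "sadness": -1}

def trial_difficulty(prime: str, word: str) -> str:
    """Return how hard a categorization trial is, given the prime's valence."""
    v = PRIME_VALENCE[prime] * WORD_VALENCE[word]
    if v > 0:
        return "easy"     # congruent: positive prime before positive word
    if v < 0:
        return "hard"     # incongruent: positive prime before negative word
    return "neutral"      # prime carries no unconscious valence
```

Under these assumptions, a white face before “war” yields a hard trial, while a black face before “war” leaves the trial unaffected, matching the pattern in the data.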
Next, the researchers took another set of participants through the same task, only this time, they were shown the faces before it began and were told that some of the white and black faces belonged to students who would be on their team on a later task, while the other faces were from the opposing team.
White participants showed the same positivity bias toward the black faces of fellow team members that had previously appeared only for white faces. This illustrates how knowing someone is on your team completely wipes out the unconscious biasing effect of race.
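The follow-up result can be sketched as a toy rule (illustrative only, not from the study’s materials): team membership, rather than race, sets a face’s priming valence, with non-teammates modeled as neutral as a simplification:

```python
# Toy rule for the follow-up study: a teammate's face primes positively
# regardless of race. Treating non-teammates as neutral is an illustrative
# simplification, not a reported finding.

def prime_valence(face_race: str, teammate: bool) -> int:
    """Team membership, not race, determines the prime's valence."""
    return 1 if teammate else 0

def word_ease(face_race: str, teammate: bool, word_valence: int) -> str:
    """Classify a trial as easy, hard, or neutral given prime and word."""
    v = prime_valence(face_race, teammate) * word_valence
    if v > 0:
        return "easy"
    if v < 0:
        return "hard"
    return "neutral"
```

Under this rule, a black teammate’s face before a negative word produces a hard trial, just as a white face did in the first study, capturing how shared group membership overrides race.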