Why Putting On Blinders Can Help Us See More Clearly

Even if your organization doesn’t have a “blinding” policy for hiring and other people evaluations, it’s possible to reap some of the benefits.


Would you decide which job candidates to interview based on their names — or which ventures to fund based on entrepreneurs’ gender or physical attractiveness? Few managers would admit doing so, even to themselves. But research shows that decision makers are in fact susceptible to exactly this type of bias. Identical resumes sent in response to job postings are less likely to generate a callback for an interview if the name at the top suggests the candidate is Black.1 And female entrepreneurs face harsher questions from potential investors and are less likely to have their ideas funded than men (particularly attractive men).2

Generally, this body of research demonstrates that the fairness of social evaluations — such as whom to hire, invest in, or promote — can be adversely affected by irrelevant and seemingly innocuous attributes, like name or appearance, because of the biases they evoke. How might these judgments be made more equitably? One way to reduce the potential for bias and increase objectivity is to adopt a decision-making strategy called blinding — that is, limiting the information that can be considered in an evaluation. The logic is straightforward: An evaluator cannot be biased by irrelevant information about a target of evaluation (for instance, a job candidate’s name) if that information is hidden from view. It is for this reason that Justice is typically depicted wearing a blindfold: The blindfold ensures the impartiality of her decision-making.

Over the past several years, we have studied both the benefits of and the barriers to blinding in the context of organizational evaluations like hiring decisions and performance reviews. More specifically, we have explored the factors that might influence whether evaluators will choose on their own to use a strategy of blinding in their evaluations. In the absence of organizationwide blinding policies that strictly limit the information people can incorporate into their decisions — policies that are rare and sometimes hard to implement — these personal preferences are important to understand. We have found that managers and other evaluators in organizations can make fairer and more accurate assessments by proactively blinding themselves to potentially biasing information about a target of evaluation. In this article, we briefly summarize some of the evidence that blinding works, look at the motivational and cognitive forces that can determine whether decision makers are willing to use this strategy, and suggest some ways that leaders might apply our findings to encourage blinding in their organizations.

Evidence That Blinding Works

By ensuring that only relevant information is available to the evaluator, blinding reduces the likelihood that an evaluator’s judgment can be contaminated or distorted by stereotypes and other forms of unconscious bias.3 In a classic example, major symphony orchestras in the United States used blinding to counteract the prevailing negative stereotypes about women in the music world and increase gender diversity. Through the end of the 1950s, almost all orchestra members were men. Then, beginning in the 1960s, most symphony orchestras adopted new audition procedures: People auditioning for positions had to do so from behind a screen, obscuring their identity and gender. As a result, negative stereotypes about women could no longer contaminate how a (biased) listener judged the quality of the music, and the percentage of women hired for new positions rose steadily. By the 1990s, women made up approximately 25% of the membership of these orchestras (up from about 5% in the 1950s) and represented about 50% of new hires.4

Blinding strategies can also be used in other organizational settings to increase the fairness and objectivity of evaluations that may otherwise be susceptible to bias and distortion. For instance, some experiments have assessed the impact of anonymous hiring — a policy that conceals candidates’ names from hiring managers — on hiring outcomes. This research demonstrates that if job applications are stripped of identifying information, members of underrepresented social groups (ethnic minorities and women) become more likely to advance to the interview stage and, in certain cases, ultimately receive job offers.5 Inspired by these findings, some boutique companies have emerged that offer services to help organizations implement blinded hiring practices.6

Blinding as a Personal Choice

The research demonstrating the benefits of blinding shows what happens when it’s adopted as policy. But as we’ve noted, blinding policies in organizations are relatively rare.7 This may be largely due to managers’ and evaluators’ resistance to any restrictions that would govern their decision-making. Other barriers may reside in elements of organizational design: For example, if the HR function in an organization is decentralized, as might make sense for a company with widely varied units or locations, management may be hesitant to institute a one-size-fits-all policy.8 So, in our research, we have framed blinding as a personal choice, asking, “When might managers and other evaluators in organizations opt to blind their own judgment?”

Before exploring that question, though, we wanted to get a clearer sense of just how rare blinding policies are. We surveyed 828 HR professionals from a range of organizational settings in the U.S. to gauge their familiarity and experience with blinding policies in hiring and other domains. On average, our respondents had worked 13.72 years in human resources and reported making more than 300 hiring decisions in their careers. Over 95% had direct experience making hiring decisions, and more than 58% indicated that hiring was one of their central tasks in their current job. Since research has clearly established the benefits of blinding policies, we were looking for insights into why they haven’t risen to the level of best practices. We thought that our sample of participants, with their deep backgrounds in HR and hiring, would be more likely than most to have at least some experience with blinding. But we were surprised by how little exposure they’d had.

For each participant, we first provided a brief description of blinding policies in hiring. We explained how organizations that implement them generally withhold certain information about job applicants from hiring managers until after an interview invitation or other recruiting decision is made, with the goal of reducing the risk of bias and increasing objectivity. Next, we asked participants about their familiarity with such policies in hiring and other evaluative domains: Overall, 59% indicated that they were familiar with blinding policies. However, only 19% were currently working at an organization that used blinding policies in hiring, and only 18% indicated that they had previously worked for such an organization. Moreover, only 20% indicated that they had received training related to blinding. Experience with blinding policies was consistently low among respondents working in businesses and nonprofit organizations; it was slightly higher among respondents working in the government sector.9

This data suggests that blinding policies are not commonly adopted in U.S. organizations, which underscores the importance of understanding individuals’ preferences regarding blinding. To return to the example of hiring, evaluators generally have (and prefer to have) considerable freedom in what information they incorporate into their decision-making processes.10 And studies have shown that hiring decisions can be biased by the information that hiring managers seek out beyond the credentials included in an application, such as social media profiles or personal websites.11 So we have examined the circumstances under which evaluators might choose to blind themselves — or not — to such potentially biasing information about a target of evaluation.

Our research suggests that curiosity is a key factor for individuals who do not opt to blind their decision-making. People tend to experience curiosity as a strong, in-the-moment impulse and a potent driver of behavior.12 In one study, we had subjects make a performance evaluation: They were asked to view a video of someone completing a pattern recognition task and to estimate the quality of that performance. We offered subjects the opportunity to view a “profile” of the performer — containing irrelevant and potentially biasing background information, such as name, photo, and hometown — before watching the video and making their estimates. We found that roughly 50% of subjects chose to view the profile. When asked why, they rated curiosity as a strong motivator. In contrast, we asked a separate group of subjects whether they thought they should see the same profile if their goal was to make an unbiased estimate. Roughly 90% of those participants indicated that they did not think they should see it. These subjects reported much weaker curiosity-based motivation; their motivation shifted instead toward concerns about fairness and accuracy. Thus, people have the insight that it is better to avoid certain information, but they need to trigger this insight by reflecting on what they should do.13

Another of our studies explored the role of curiosity in blinding preferences related to a mock hiring decision. Subjects were tasked with judging a mock candidate’s fit for a job and had the option to receive the candidate’s name and photograph in addition to a standard summary of work experience and education to help inform their judgment. Unlike our other studies, which forced a choice — to see potentially biasing information about a target before an evaluation or not at all — in this study, we gave a subset of subjects a third option: to first judge the job candidate based on credentials alone and then see that person’s name and photo (with the option to revise the initial judgment). We found that those who were presented with this third option — to see the name and photo later in the decision process — were more inclined to form an initial judgment blind compared with subjects forced to make the stark choice between seeing the name and photo in advance of their initial judgment or not at all.14 In short, this study demonstrated that offering to satisfy evaluators’ curiosity at some point (after an initial blind evaluation) can reduce their immediate drive to give in to it. Notably, few subjects — under 20% — ultimately chose to revise their initial, blind judgment after receiving the candidate’s name and photo. In other words, with their curiosity satisfied, most subjects chose not to adjust their assessments based on the potentially biasing information.

Besides curiosity, people may also choose not to self-blind because they honestly, but incorrectly, believe biasing information to be useful or helpful. Real-world examples abound. For instance, managers stubbornly believe in the usefulness of unstructured interviews — which are open to biased questions and random tangents — even though structured interviews are known to be better predictors of job performance.15 Managers give too much credence to their “gut” ability to discern a job candidate’s likely value and too little to the potential for bias in their hiring decisions.16 In one of our studies, managers took part in a mock hiring task similar to the one described above. Part of the task involved choosing whether to learn the job candidate’s race and gender in advance. The managers also separately indicated whether they thought knowing that information was useful when making hiring decisions. The vast majority of our managerial subjects — about 85% — indicated that they did not think a candidate’s race and gender were useful information to have when making a hiring decision. However, of the remaining 15% who did think it useful, almost three-quarters chose to receive that information rather than be blind to it. Similarly, we asked our managerial subjects whether learning a job candidate’s race and gender could bias their hiring decisions. Those who elected to view the information in the mock hiring task not only perceived it to be useful to their decision but also disagreed that it could bias their judgments.17

It is possible that some subjects in this study believed that receiving race and gender information before evaluating a job candidate would help them meet diversity goals, a topic we’ll return to shortly. Nevertheless, it is alarming that they also felt the information was not likely to distort their hiring judgments — that they were somehow impervious to the effects of unconscious bias. Indeed, a large collection of careful studies demonstrating the impact of predictable biases on interview callbacks, job offers, and salary offers — associated with knowledge of candidates’ names and social categories like race and gender — suggests otherwise.18

Of course, people may also choose not to self-blind simply because they do not recognize the biasing content delivered by seemingly innocent information. In one study, we provided some managerial participants with the option of seeing a job candidate’s professional headshot (along with credentials) and gave others the option of learning a candidate’s race and gender. Even though a person’s photograph is very likely to reveal that information, we reasoned that the explicit option to learn a job candidate’s “race and gender” would be more likely to cue reflection about potential decision bias than the option to see a candidate’s “professional headshot.” Indeed, whereas 45% of managers chose to see the headshot when presented with that option, only 20% chose to see the race and gender information.19 Certain information, like a candidate’s name, headshot, or college graduation year, may fail to cue a desire to self-blind because the underlying, potentially biasing content — race, gender, age, and so on — is not immediately brought to mind.

In sum, our research suggests that evaluators who can overcome or delay a curiosity-driven impulse to receive potentially biasing information about a target — and who understand that having such information tends to hurt rather than help decision-making — are more likely to choose to blind their own evaluations. Self-blinding can take many forms in practice. Here’s one example: Aaron Weyenberg, in his former role as director of R&D at TED Conferences (the company that hosts TED Talks), was concerned that implicit biases might contaminate evaluations of job candidates he viewed on LinkedIn. He was specifically worried that seeing profile photos could skew his assessment of the listed credentials. To get around this issue, Weyenberg engaged in an impressive exercise in self-blinding: He developed a Google Chrome browser extension that automatically replaces people’s photos on LinkedIn with photos of dogs.20

Not everyone can develop their own web browser extension, but simpler options exist. For instance, hiring managers could make a commitment not to search for the social media profiles of job applicants or Google their names. Even better, they could ask an assistant to remove candidate names from applications, resumes, and so on. Small choices like these could offer big rewards in terms of decision accuracy and fairness.
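As an illustration, even the name-removal step can be partially automated rather than delegated to an assistant. The sketch below is a minimal, hypothetical example (the candidate name and resume text are invented for demonstration, not drawn from our studies): it strips a known candidate's name from application text before an evaluator reads it.

```python
import re

def redact_names(resume_text, candidate_names, placeholder="[CANDIDATE]"):
    """Replace each occurrence of a candidate's name (case-insensitive)
    with a neutral placeholder, so the evaluator sees credentials only."""
    for name in candidate_names:
        # Word boundaries (\b) keep us from clobbering substrings of other words.
        pattern = re.compile(r"\b" + re.escape(name) + r"\b", re.IGNORECASE)
        resume_text = pattern.sub(placeholder, resume_text)
    return resume_text

# Hypothetical resume snippet; note the name also appears in the email address.
resume = "Jordan Lee\n10 years of supply-chain experience.\nContact: jordan.lee@example.com"
print(redact_names(resume, ["Jordan Lee", "Jordan", "Lee"]))
```

A real pipeline would need to handle nicknames, initials, and names embedded in URLs or file names, but even a crude pass like this shifts the blinding burden off the evaluator's willpower and onto the process.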

Help Others Blind Their Decisions

We’re not only encouraging individuals to blind their own judgments; we’re also urging organizational leaders to prod employees in that direction. In the absence of formal blinding policies, that’s where we see the greatest potential to make evaluations more equitable overall in organizations — which can lead to a more diverse, inclusive workforce and level the playing field for learning and advancement opportunities.

But how do you do that? We have two suggestions for applying our findings to capture the benefits of blinding without having to adopt a formal policy.

Nudge deliberative thinking. Leaders can foster deliberative thinking by training people — and then prompting them in practice — to pause and reflect when conducting an evaluation. As we have mentioned, idle curiosity is a consistent, powerful motivator for evaluators who seek out irrelevant and potentially biasing information in advance. But it is experienced more as a transient impulse than as a lasting drive.21 So nudging evaluators to approach their decisions in a careful, slow, and deliberative manner can help address the problem.

To test this idea, we ran an experiment, putting managers through another mock hiring task. Some participants were simply asked whether they would like to see the job candidate’s name and photograph in addition to credentials. Others were first asked to indicate whether they thought they should see the name and photograph if their goal was to make an unbiased estimate of the candidate’s value. Then we had them make the same choice as the first group — whether to see or be blind to the candidate’s name and photograph. About half of the first group of managers, who made a straightforward blinding choice alone, opted to see the name and photograph. In the second group, only 15% did so.22 In other words, by asking managers to first consider what they should do, we nudged them to approach their second decision — whether to see the name and photo — with a more careful, deliberative mindset. This nudge reduced the impact of curiosity on the managers’ blinding preferences and encouraged them to self-blind.

The fact that the simple question “What should you see?” is enough to encourage self-blinding is a positive, useful sign for creating change in organizations. In the spirit of prompting reflective or deliberative thinking, managers can be trained to approach evaluations by first asking themselves, “What information should I have if my goal is to be fair and accurate?” This question shifts their focus away from curiosity and toward concerns about objectivity, thus encouraging self-blinding. One benefit of this type of intervention is that it does not reduce managers’ autonomy, like a blinding policy would; managers are merely nudged to approach evaluations in a different manner.

Of course, interventions that encourage deliberative thinking will not affect blinding decisions driven by the incorrect belief that potentially biasing information is useful to have. However, such beliefs could be corrected through training. As mentioned previously, many hiring managers may not know why learning a job candidate’s name, for instance, could contaminate their decisions, since names seem like innocent information. Similarly, they may not be aware of the full suite of biases that could be triggered by seeing a candidate’s photo — ranging from more obvious biases related to race and gender to less obvious ones related to age, attractiveness, weight, and other factors. Evidence-based training that illustrates these traps may change their views of their own susceptibility and increase their desire to self-blind.

It also bears noting that people can be poor judges of their general susceptibility to bias and often believe that others are more susceptible to bias than they are.23 In short, people may choose to receive potentially biasing information about a target of evaluation even when they correctly identify it as potentially biasing, due to the mistaken impression that, though such information may bias others, it will not adversely affect their own judgments. Indeed, in one of our studies, we asked one group of managers whether they would choose to provide a job candidate’s race and gender to someone else, were that other person making a hiring decision. We asked a separate group of managers whether they would like to receive the candidate’s race and gender themselves. Though only 5% elected to provide someone else with the race and gender information, 19% chose it for themselves.24 This finding suggests that training programs that aim to illuminate the biasing content in seemingly innocent information should also remind trainees that they are no less susceptible to unconscious bias than anyone else. It’s part of being human.

Change the order of information. Research shows that people feel a strong desire to protect their decision-making autonomy. For instance, managers push back on diversity initiatives in organizations when they feel that those initiatives are imposed on them and constrain their hiring decisions.25

Still, it’s possible to protect some of managers’ decision autonomy while capturing the benefits of blinding. You can do this by changing the order in which information is received and evaluated rather than restricting the amount of information people get. For instance, managers can be asked to first perform a blind evaluation (such as evaluating an anonymized resume) and then receive the information that was hidden from view (the job candidate’s name, college graduation year, hobbies, and so on), with the option to revise their initial blind evaluation. With the order of information evaluation structured in this way, cognitive pressure to be consistent with the initial evaluation should reduce the extent to which managers’ final evaluations (incorporating any revisions) are distorted by potentially biasing information.26 In a series of studies, we found some evidence that evaluations performed in this manner are less susceptible to bias than evaluations that incorporate potentially biasing information from the beginning. As described above, across these studies, less than 20% of participants chose to revise an initial, blind evaluation after receiving a mock candidate’s headshot.27

Managers and others in evaluative roles may view decisions that are structured in this fashion (which we call a fair order) more favorably than evaluations that are fully blinded by policy, because their freedom to see all available information — and incorporate it into evaluations if desired — is preserved. This approach could be useful in many evaluative domains. For instance, entrepreneurial pitches could be presented in two parts: first, a written description of the idea being pitched (without any identifying information about the entrepreneur), and then a video or in-person presentation of the actual pitch. Investors who first read and evaluate the blind version of the idea — the written description — and then see the pitch with the option to update their evaluation may be less likely to be swayed by the gender or the attractiveness of the entrepreneur in their final evaluation than those who learn about the idea from the pitch alone.

A fair order strategy could also help to resolve some of the apparent tensions between blinding policies and diverse-hiring goals. For instance, hiring managers or recruiters seeking to advance individuals from underrepresented groups to the interview stage couldn’t possibly do so if entirely blind to identifying characteristics such as names and demographics. But they could perform blind evaluations of credentials, forming an initial impression, and then look at names and demographics, with the option to revise initial impressions. In this way, the bias-reducing benefits of blind evaluations could be reaped along with the benefits of giving members of disadvantaged social groups an unblinded look.

Of course, blinding is not a panacea, and we want to note a few complications with regard to achieving diversity goals. On their own, blind evaluations shift all focus to credentials, which may not fully reflect candidates’ potential. They also give applicants from dominant social groups a leg up, since those candidates are more likely than members of underrepresented groups to have fancy pedigrees and impressive recommenders, given systemic inequalities in access to resources and education. In other words, while blinding on its own may help minority job applicants by masking their identity during evaluations of their credentials, it may also hurt those same applicants by obscuring the role that discrimination played in shaping their credentials. These concerns are especially pertinent for highly credentialed positions, such as those requiring an advanced degree. In such cases, the benefits of blinding in hiring may not be truly unleashed without careful attention paid to widening the pool of credentialed minority applicants or widening the scope of acceptable credentials (broadening the span of “target” schools, perhaps, or the types of experience that are deemed valuable).

Although blinding strategies can improve the fairness and accuracy of almost any evaluation involving people in the workplace — including performance reviews, project proposals, and entrepreneurial pitches — hiring is a reasonable and fruitful place to start. A handful of tech companies are beginning to adopt blind hiring to reduce the impact of bias.28 As these efforts gain traction and attention, other employers may follow.

But for now, while companywide blinding strategies and policies are still rare and decision makers are keen to preserve their autonomy, the most practical approach is to encourage self-blinding. By nudging deliberative thinking and changing the order of information presented, organizations can help managers and other evaluators overcome their initial curiosity about irrelevant and noisy data, recognize their own susceptibility to bias, and size people up more effectively.



1. M. Bertrand and S. Mullainathan, “Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination,” American Economic Review 94, no. 4 (September 2004): 991-1013; and M. Bertrand and E. Duflo, “Field Experiments on Discrimination,” in “Handbook of Economic Field Experiments,” vol. 1, ed. A. Banerjee and E. Duflo (Amsterdam: North-Holland, 2017), 309-393.

2. D. Kanze, L. Huang, M.A. Conley, et al., “We Ask Men to Win and Women Not to Lose: Closing the Gender Gap in Startup Funding,” Academy of Management Journal 61, no. 2 (April 2018): 586-614; and A.W. Brooks, L. Huang, S.W. Kearney, et al., “Investors Prefer Entrepreneurial Ventures Pitched by Attractive Men,” Proceedings of the National Academy of Sciences 111, no. 12 (March 2014): 4427-4431.

3. T.D. Wilson and N. Brekke, “Mental Contamination and Mental Correction: Unwanted Influences on Judgments and Evaluations,” Psychological Bulletin 116, no. 1 (July 1994): 117-142.

4. C. Goldin and C. Rouse, “Orchestrating Impartiality: The Impact of ‘Blind’ Auditions on Female Musicians,” American Economic Review 90, no. 4 (September 2000): 715-741.

5. O. Åslund and O.N. Skans, “Do Anonymous Job Application Procedures Level the Playing Field?” ILR Review 65, no. 1 (January 2012): 82-107; A. Krause, U. Rinne, and K.F. Zimmermann, “Anonymous Job Applications in Europe,” IZA Journal of European Labor Studies 1, no. 1 (December 2012): 1-20; and M. Bøg and E. Kranendonk, “Labor Market Discrimination of Minorities? Yes, but Not in Job Offers,” Munich Personal RePEc Archive, paper no. 33332 (Munich: Munich University Library, 2011).

6. GapJumpers (www.gapjumpers.me) is one such company; Applied (www.beapplied.com) is another.

7. D. Bortz, “Can Blind Hiring Improve Workplace Diversity?” HR Magazine, March 20, 2018, www.shrm.org.

8. J. Dooney, “Huh? We’re Switching Back Again? How Centralized and Decentralized HR Department Structures Influence HR Metrics” (Alexandria, Virginia: Society for Human Resource Management, 2016), www.shrm.org.

9. S. Fath and S. Zhu, “Preferences for, and Familiarity With, Blinding Among HR Practitioners,” Social Science Research Network, Jan. 17, 2021, https://papers.ssrn.com.

10. K.I. van der Zee, A.B. Bakker, and P. Bakker, “Why Are Structured Interviews So Rarely Used in Personnel Selection?” Journal of Applied Psychology 87, no. 1 (March 2002): 176-184; and J. Dana, R. Dawes, and N. Peterson, “Belief in the Unstructured Interview: The Persistence of an Illusion,” Judgment and Decision Making 8, no. 5 (September 2013): 512-520.

11. A. Acquisti and C. Fong, “An Experiment in Hiring Discrimination via Online Social Networks,” Management Science 66, no. 3 (March 2020): 1005-1024; and V. Bartoš, M. Bauer, J. Chytilová, et al., “Attention Discrimination: Theory and Field Experiments With Monitoring Information Acquisition,” American Economic Review 106, no. 6 (June 2016): 1437-1475.

12. G. Loewenstein, “The Psychology of Curiosity: A Review and Reinterpretation,” Psychological Bulletin 116, no. 1 (July 1994): 75-98.

13. S. Fath, R.P. Larrick, and J.B. Soll, “Blinding Curiosity: Exploring Preferences for ‘Blinding’ One’s Own Judgment,” Academy of Management Proceedings 2020, no. 1 (August 2020).

14. Fath, Larrick, and Soll, “Blinding Curiosity.”

15. van der Zee et al., “Why Are Structured Interviews So Rarely Used.”

16. L.A. Rivera, “Hiring as Cultural Matching: The Case of Elite Professional Service Firms,” American Sociological Review 77, no. 6 (December 2012): 999-1022.

17. S. Fath, R.P. Larrick, and J.B. Soll, “Encouraging Self-Blinding in Hiring,” unpublished manuscript.

18. Bertrand and Duflo, “Field Experiments on Discrimination.”

19. Fath, Larrick, and Soll, “Encouraging Self-Blinding in Hiring.”

20. The Profile of Dogs browser extension is available on the Google Chrome web store. We are agnostic about the possibility that people may be biased in favor of or against certain dog breeds.

21. Loewenstein, “The Psychology of Curiosity.”

22. Fath, Larrick, and Soll, “Blinding Curiosity.”

23. Wilson and Brekke, “Mental Contamination and Mental Correction”; and E. Pronin, D.Y. Lin, and L. Ross, “The Bias Blind Spot: Perceptions of Bias in Self Versus Others,” Personality and Social Psychology Bulletin 28, no. 3 (March 2002): 369-381.

24. Fath, Larrick, and Soll, “Encouraging Self-Blinding in Hiring.”

25. F. Dobbin and A. Kalev, “Why Diversity Programs Fail,” Harvard Business Review 94, no. 7 (July-August 2016): 1-20.

26. I. Bohnet, A. van Geen, and M. Bazerman, “When Performance Trumps Gender Bias: Joint vs. Separate Evaluation,” Management Science 62, no. 5 (May 2016): 1225-1234.

27. Fath, Larrick, and Soll, “Blinding Curiosity.”

28. R. Feintzeig, “The Boss Doesn’t Want Your Resume,” The Wall Street Journal, Jan. 5, 2016, www.wsj.com.
