Most businesses understand that they must attract star performers — and compete fiercely for them — to thrive in the marketplace. What they struggle with is how to do it well. The perennial challenge of finding the right people and matching them with the right roles has become even more complex now that AI and robotics are rapidly changing jobs and in-demand technical skills are in short supply. While most organizations still rely on traditional hiring methods such as résumé screenings, job interviews, and psychometric tests, a new generation of assessment tools is quickly gaining traction and, we argue, making talent identification more precise and less biased.
Certain things have remained constant and are unlikely to change anytime soon. When sizing up candidates, managers try to predict job performance while assessing cultural fit and capacity to grow. Studies show that managers look for three basic traits: ability, which includes technical expertise and learning potential; likability, or people skills; and drive, which amounts to ambition and work ethic.1
What we need from talent identification tools and methods — old or new — has also stayed the same. To assess their effectiveness, we must look for a strong correlation between candidates’ scores and subsequent job performance. This may sound obvious, but we’ve found in our work with recruiters and hiring managers that many of them use tools based instead on ease and familiarity — and rarely correlate them to results.
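To make that correlation concrete: the validity of an assessment tool is typically summarized as a Pearson correlation coefficient between hiring-stage scores and later performance ratings for the same hires. Here is a minimal sketch in Python, using entirely made-up numbers for six hypothetical hires:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: assessment scores at hiring time, and performance
# ratings collected a year later for the same six hires.
assessment_scores = [62, 75, 80, 55, 90, 70]
performance_ratings = [3.1, 3.8, 4.0, 2.9, 4.5, 3.5]

r = pearson_r(assessment_scores, performance_ratings)
print(f"validity coefficient r = {r:.2f}")
```

A coefficient near zero would suggest the tool adds little beyond chance; running this kind of check on real hiring data is exactly the step that, in our experience, many organizations skip.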
Emerging assessment methods can be grouped into three broad categories: gamified assessments, digital interviews, and candidate data mining. What they have in common is their ability to detect new talent signals (that is, new indicators of performance potential).2 Here we’ll explain how each of these methods works, along with its strengths and limitations.
Gamified Assessments

A new breed of psychometric tests for recruitment focuses on enhancing the candidate experience. These tools apply gamelike features, such as real-time feedback, interactive and immersive scenarios, and shorter modules, which make the test taking more enjoyable. The catch is that users’ choices and behaviors are mined by algorithms to identify suitability for a given role.
For example, HireVue’s MindX employs gamified cognitive-ability tests by asking users to play sleek games — think Nintendo’s Brain Age — that predict IQ. Pymetrics has done something similar with classic psychological tests such as the Balloon Analogue Risk Task, which evaluates candidates’ impulsivity and risk-taking by examining how far they allow self-inflating balloons to expand before they burst (bigger balloons mean more rewards, but there are no rewards if they burst).
Arctic Shores, which is often used for evaluating college graduates, puts candidates through what feels like a series of 1990s arcade games and correlates their choices to standard personality traits and competencies. As these types of tools are used more widely in high-volume hiring environments, tool providers gather enough data to demonstrate significant links between candidates’ scores on the games and their job performance.3
In addition, many companies are designing their own gamified assessments, which they position at the interface between hiring and marketing. For instance, Red Bull’s Wingfinder is available to the general public and used to attract candidates through the drink company’s social media channels. Candidates are provided an extensive report on their strengths and weaknesses, regardless of whether they are formally considered for a position.
Despite the branding and marketing benefits of gamification, as well as the obvious appeal of providing a more enjoyable assessment experience — which can attract a larger pool of candidates — this approach to talent identification has two disadvantages. First, there is a natural tension between fun and accuracy. The more interesting and enjoyable the assessment experience, the less predictive it tends to be, not least because getting a comprehensive picture of a candidate’s background requires longer testing time, and time is the enemy of fun. Second, delivering a “cool” assessment experience, particularly one that is branded and comparable to the games people play purely for fun, significantly increases costs. It is one thing to design a standard Q&A type of self-report and another to create immersive gamelike experiences for candidates — and talent acquisition budgets are generally quite limited when it comes to assessment tools.
Digital Interviews

The second major development in talent identification is the widespread use of digital interviews. On the surface, these tools look like any other videoconferencing technology, but they provide a couple of added advantages.
For one thing, interviewers or hiring managers can post their questions on the platform to create a structured (consistent and repeatable) interview protocol for stakeholders to use in their conversations with candidates, which helps them make fair, accurate comparisons. For another, algorithms can be used to flag and interpret relevant talent signals (facial expressions,4 tone of voice, emotions5 such as anxiety and excitement, language, speed, focus, and so on), replacing human observations and intuitive inferences with data-driven sorting and ranking.
Research has long suggested that job interviews are most predictive when they are highly standardized — that is, when they put all interviewees through the same process and have a predefined scoring key to make sense of the answers. Given that insight, video interviews can increase the accuracy of the job interview findings while reducing costs and enabling hiring organizations to operate at scale (in our conversations with HR executives, we’ve learned that companies such as JP Morgan Chase and Walmart do thousands of video interviews each year).
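A structured protocol amounts to a fixed question list plus a predefined scoring key. The sketch below, with illustrative questions and rating anchors that are not drawn from any real platform, shows the basic idea: every candidate is rated against the same behavioral anchors, so totals are directly comparable:

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    # Behaviorally anchored rating scale: allowed score -> answer quality
    anchors: dict[int, str]

# Every candidate gets the same questions, in the same order.
PROTOCOL = [
    Question("Describe a time you resolved a conflict on your team.",
             {1: "No concrete example", 3: "Example, vague outcome",
              5: "Example with a measurable outcome"}),
    Question("How did you learn a new skill for a past role?",
             {1: "No concrete example", 3: "Example, little reflection",
              5: "Clear method and result"}),
]

def score_interview(ratings):
    """Average per-question ratings given against the shared anchors."""
    assert len(ratings) == len(PROTOCOL), "every question must be rated"
    assert all(r in q.anchors for r, q in zip(ratings, PROTOCOL))
    return sum(ratings) / len(ratings)

print(score_interview([5, 3]))  # → 4.0
```

Because the scoring key is fixed in advance, two interviewers rating the same answers should arrive at the same total, which is the property that makes standardized interviews more predictive.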
One concern about such platforms is their tendency to replicate and reinforce the biases inherent in any interviewing process. That’s a real limitation. If the people responsible for making hiring decisions are themselves biased, we should not expect AI to erase that problem. To complicate matters further, if those same people are then tasked with evaluating new hires’ performance, their biases will be masked. From a statistical standpoint, they may have correctly predicted future performance with their candidate selections — but to an extent, that prophecy is self-fulfilling.
Clearly, if you’re making biased decisions about which outcomes to measure to gauge performance, that won’t change with machine-learning models (though you will get faster results). One way to address this issue is to focus less on individual ratings and more on group outcomes such as productivity numbers and revenues. For managers, 360-degree reviews can also be useful, because they crowdsource performance evaluations, mitigating individual biases. Another option is to “train” algorithms to ignore the signals that predict human bias but not job performance (such as gender, age, social class, and race). Tool providers like HireVue tell us they are doing this to eliminate the impact of skin color on hiring decisions, for example.
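Training algorithms to ignore protected signals starts, at its simplest, with excluding those attributes from the model’s inputs. This hypothetical sketch (the field names and values are illustrative, not from any vendor’s system) drops protected fields from a candidate’s feature set before scoring:

```python
# Signals the model may not use, even if they correlate with human ratings.
PROTECTED = {"gender", "age", "social_class", "race"}

def strip_protected(candidate_features: dict) -> dict:
    """Return a copy of the feature set with protected signals removed."""
    return {k: v for k, v in candidate_features.items() if k not in PROTECTED}

# Illustrative candidate record mixing job-relevant and protected fields.
raw = {"structured_interview": 4.2, "work_sample": 0.78,
       "age": 29, "gender": "f"}
print(strip_protected(raw))  # only the job-relevant features remain
```

Note that stripping the fields is only a first step: other features can act as proxies for the protected ones, which is why model outputs also need to be tested for group differences.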
Candidate Data Mining
The third new approach, passively mining candidate data and analyzing people’s digital footprints, is growing fast as well. While it has mostly been used to serve up targeted consumer messages in marketing and advertising, it is equally applicable to talent identification in HR. Online behavior can reveal information about individuals’ interests, personalities, and abilities, which in turn predicts their suitability for particular jobs or careers. For example, many hiring managers now investigate candidates’ reputation, followership, and level of authority on networking websites such as LinkedIn and Facebook, and they use that information to rate and rank people. LinkedIn and Entelo provide tools that do this automatically for recruiters, by giving them a range of scores to help evaluate candidates. While there is a big difference between popularity metrics and actual potential, networking sites represent true peer feedback, so recruiters find them very predictive.
Passive data scraping has been extensively examined in studies highlighting consistent links between people’s social media activity and key job-related qualities.6 For example, a team of researchers showed reliable connections between the groups people like on Facebook and their broad character traits, such as whether they are more or less extroverted or agreeable.7 Given that these traits have been systematically associated with strong performance across different jobs, the findings suggest that Facebook data can provide useful information to employers about a person’s potential fit for a job or role. Furthermore, the very character traits extracted from Facebook behavior and other social media signals — such as the words people use on Twitter or in blogs or emails — are markers for abilities, likability, and drive.
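The underlying technique in such studies is usually a supervised model that maps digital-footprint features, such as page likes, to trait scores. A toy linear version is sketched below; the pages and weights are hand-picked purely for illustration, whereas in the cited research the weights would be estimated from labeled data:

```python
# Illustrative weights linking liked pages to an extraversion score.
# Real studies estimate such weights from large labeled datasets.
EXTRAVERSION_WEIGHTS = {
    "party planning": 0.8,
    "public speaking": 0.6,
    "solo hiking": -0.4,
}

def trait_score(liked_pages: set, weights: dict) -> float:
    """Linear score: sum the weights of the pages a user has liked."""
    return sum(w for page, w in weights.items() if page in liked_pages)

score = trait_score({"party planning", "solo hiking"}, EXTRAVERSION_WEIGHTS)
print(f"extraversion score = {score:.1f}")
```

Even this toy version makes the privacy stakes clear: a handful of public likes is enough input to generate a character inference about someone who never took a test.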
There’s a dark side to this capability, though, because it exposes candidates’ personal lives to intense scrutiny (particularly those in the Generation Z cohort, many of whom have been using social media practically since birth). Organizations must think about how they’ll respect people’s privacy while getting the information they need to make smart hiring choices. Even if the boundaries between private and public life have eroded, ethics demands that people be made aware of how their data is used.
The Ethics of Talent Identification
Of course, it is essential that any tools for talent assessment and recruitment meet ethical guidelines. Although legal regulations, such as the EU’s General Data Protection Regulation, are context — and especially market — dependent, two basic considerations are critical across the board.
- Promoting consent and awareness: There is now quite a big difference between what candidates believe employers know about them and what employers really know — and hiring organizations have an ethical obligation to do what they can to close that gap. Do candidates opt in to all parts of the talent identification process? Do they understand what is being done with their data? Do they have opportunities to provide or withhold consent for the data being mined? If employers (and tool providers) aren’t transparent throughout the process, their brands could suffer tremendous harm.
- Fostering fairness: It’s also important to consider the degree to which hiring tools may stack the deck against certain groups of candidates, particularly people of color, women, and individuals at a socioeconomic disadvantage. This has long represented a problem for talent identification, but scientifically defensible assessment tools (like thoroughly validated psychometric tests) go to great lengths to increase predictive accuracy while reducing the risks of discrimination.8 Newer tools and methods must be scrutinized with the same lens. For example, video or speech signals identified as markers of talent can also reflect social class and educational background; selecting people based on such signals may result in a more homogeneous workforce. Organizations should be aware it is possible to make meritocratic hiring decisions that undermine social fairness, because the best candidates on paper may also be the most privileged candidates — those who enjoyed an elite education and benefit from expansive networks.
Millions of people look into changing jobs every year — and employers must evaluate those candidates. As new tools for assessing talent mature, their costs will decrease, and more companies will be able to adopt them to improve the process and the yield of good hires.
Are organizations ready to use these tools effectively and responsibly? We think so, as long as HR and business leaders spend time carefully evaluating the issues raised here.
Hiring the right person is probably the most important decision a manager makes. If machines can make this process more accurate and less biased, every business can see tremendous benefits.
1. R. Hogan, T. Chamorro-Premuzic, and R.B. Kaiser, “Employability and Career Success: Bridging the Gap Between Theory and Reality,” Industrial and Organizational Psychology 6, no. 1 (March 2013): 3-16.
2. T. Chamorro-Premuzic, D. Winsborough, R.A. Sherman, and R. Hogan, “New Talent Signals: Shiny New Objects or a Brave New World?” Industrial and Organizational Psychology 9, no. 3 (September 2016): 621-640.
3. J. Bersin, “HR Technology Disruptions for 2018: Productivity, Design, and Intelligence Reign,” Bersin by Deloitte, 2017.
4. N. Perveen, N. Ahmad, M. Abdul Qadoos Bilal Khan, R. Khalid, and S. Qadri, “Facial Expression Recognition Through Machine Learning,” International Journal of Scientific and Technology Research 5, no. 4 (March 2016): 91-97.
5. C.P. Latha and M.M. Priya, “A Review on Deep Learning Algorithms for Speech and Facial Emotion Recognition,” International Journal of Control Theory and Applications 9, no. 24 (January 2016): 183-204.
6. G. Park, H.A. Schwartz, J.C. Eichstaedt, M.L. Kern, M. Kosinski, D.J. Stillwell, L.H. Ungar, and M.E.P. Seligman, “Automatic Personality Assessment Through Social Media Language,” Journal of Personality and Social Psychology 108, no. 6 (June 2015): 934-952.
7. G. Farnadi, G. Sitaraman, S. Sushmita, F. Celli, M. Kosinski, D. Stillwell, S. Davalos, M.F. Moens, and M. De Cock, “Computational Personality Recognition in Social Media,” User Modeling and User-Adapted Interaction 26, no. 2-3 (June 2016).
8. L.M. Hough, F.L. Oswald, and R.E. Ployhart, “Determinants, Detection, and Amelioration of Adverse Impact in Personnel Selection Procedures: Issues, Evidence, and Lessons Learned,” International Journal of Selection and Assessment 9, no. 1-2 (March 2001): 152-194.