Can We Really Test People for Potential?

We need a more nuanced approach to predicting job performance.


Editor’s note: This article is part of a new MIT SMR series about people analytics.

Have you ever taken an aptitude or work personality test? Maybe it was part of a job application, one of the many ways your prospective employer tried to figure out whether you were the right fit. Or perhaps you took it for a leadership development program, at an offsite team-building retreat, or as a quiz in a best-selling business book. Regardless of the circumstances, the hope was probably more or less the same: that a brief test would unlock deep insight into who you are and how you work, which in turn would lead you to a perfect-match job and heretofore unseen leaps in your productivity, people skills, and all-around potential.

How’s that working out for you and your organization?

My guess is that results have been mixed at best. On the one hand, a good psychometric test can easily outperform a résumé scan and interview at predicting job performance and retention. The most recent review of a century’s worth of research on selection methods, for example, found that tests of general mental ability (intelligence) are the best available predictors of job performance, especially when paired with an integrity test. Yet, assessing candidates’ and employees’ potential presents significant challenges. We’ll look at some of them here.

People Metrics Are Hard to Get Right

For all the promise these techniques hold, it’s difficult to measure something as complex as a person for several reasons:

Not all assessments pass the sniff test. Multiple valid and reliable personality tests have been carefully calibrated to measure one or more character traits that predict important work and life outcomes. But countless other tests offer little more than what some scholars call “pseudo-profound bullshit” — the results sound inspiring and meaningful, but they bear little resemblance to any objective truth.

People often differ more from themselves than they do from one another. Traditional psychological assessments are usually designed to help figure out whether people who are more or less something (fill in the blank: intelligent, extraverted, gritty, what have you), on average, do better on whatever outcomes the organization or researcher is most interested in. In other words, they’re meant to capture differences among people. But several studies have found that, during a two-week period, there can be even more variation within one individual’s personality than there is from person to person. As one study put it, “The typical individual regularly and routinely manifested nearly all levels of nearly all traits in his or her everyday behavior.” Between-person differences can be significant and meaningful, but within-person variation is underappreciated. (A short sketch after this list of challenges illustrates the distinction.)

People change — and not always when you expect them to. The allure of aptitude, intelligence, and personality tests is that they purport to tell us something stable and enduring about who people are and what they are capable of. Test makers (usually) go to great lengths to make sure people who take the test more than once get about the same score the second time around. Yet compelling evidence suggests that we can learn how to learn, sometimes in ways we didn’t anticipate. We can also shift our personalities in one direction or another (at least to some degree, though not always without cost) for both near-term benefits and longer-term goals. Interestingly, one recent study with more than 13,000 participants found that people tend to become more conscientious right before getting a new job, which is conveniently around the time a hiring manager would be trying to figure out how hard they would work if they landed the role.

The nature of the task can matter more than the nature of the person. Most of us have heard the theory that we each have a preferred learning style, and the more we can use the one that fits, the more we’ll remember. Unfortunately, virtually no evidence supports that theory. That doesn’t mean that all approaches to studying are equally effective — it’s just that the strategy that works best often depends more on the task than on the person. Similarly, different parts of our personalities can serve different types of goals. We act extraverted when we want to connect with others or seize an opportunity, and we become disciplined when we want to get something done or avoid mistakes. In one study, conscientiousness especially emerged when the things that needed to get done were difficult and urgent — even for people who were not especially organized and hardworking in general.
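
To make the second challenge concrete, here is a minimal sketch, using simulated data, of how an analyst might separate between-person and within-person variation when the same trait is measured repeatedly over two weeks. The trait, scale, and numbers are hypothetical; the point is only that the two kinds of variance are different quantities and can be told apart once you have repeated measures.

```python
# Minimal sketch with simulated (hypothetical) data: separating
# between-person and within-person variation in a repeatedly measured trait.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_people, n_days = 50, 14

# Each person has a stable average level of the trait (between-person
# differences), but daily behavior swings around that average
# (within-person variation).
person_means = rng.normal(loc=3.0, scale=0.5, size=n_people)
daily = person_means[:, None] + rng.normal(0.0, 1.0, size=(n_people, n_days))

ratings = pd.DataFrame(daily).stack().rename("rating").reset_index()
ratings.columns = ["person", "day", "rating"]

between_var = ratings.groupby("person")["rating"].mean().var()  # variance of people's averages
within_var = ratings.groupby("person")["rating"].var().mean()   # average day-to-day variance per person

print(f"Between-person variance: {between_var:.2f}")
print(f"Within-person variance:  {within_var:.2f}")
```

With numbers like these, the within-person variance can easily dwarf the between-person variance, which is the pattern the studies cited above report.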

One way to read this list of challenges is to come away convinced that people analytics is a fool’s errand. But that would ignore the fact that each of these caveats has been uncovered through rigorous analysis of people data.

Instead, it’s probably more constructive to remember what personality psychologist Brian Little, while channeling psychologist Henry Murray and anthropologist Clyde Kluckhohn, says in his popular TED Talk: “Each of us is … in certain respects, like all other people, like some other people, and like no other person.” People analytics, in other words, needs to include better person analytics.

What It Would Take to Go Granular

What would better person analytics look like?

For starters, we would consider the context. Companies selling off-the-shelf assessments often tout the many thousands of diverse professionals who have already taken their survey to prove that it can work for all kinds of people and circumstances. A large validation effort can be a sign of an invaluable general-purpose tool, but that doesn’t mean it’s right for every job. Sometimes the situation calls for customization.

Consider a project that the Wharton People Analytics research team did with Global Health Corps (GHC), a leadership development organization aimed at improving health equity. Each year, GHC screens thousands of applications to find the most promising candidates for yearlong fellowships, and the management team had developed a hunch that a certain personality trait might be predictive of a fellow’s job performance. So we devised multiple methods to measure it. The first was a general measure of the trait that had previously been developed, validated, and published in a peer-reviewed journal, while the second was a new situational judgment test (SJT) we developed with GHC so that we could look for evidence of this trait in how people responded to a number of job-relevant scenarios. We also tried a more advanced linguistic analysis to flag indicators of this trait in candidates’ application essays. The established measure had the best evidence behind it, and the linguistic analysis was the most technically sophisticated, but in the end the situational judgment test was the only significant predictor of candidates’ job performance.

When considering why this worked best, we think it’s not just because the SJT took the organization’s unique context into account but also because it captured the extent to which this trait showed up in many different situations, not just on average. Custom measures are not always the answer, but sometimes the context really is important.
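
For readers who want to see the shape of that kind of comparison, here is a hedged sketch. The data, column names, and effect sizes are invented for illustration; this is not GHC’s data or our exact model. It simply shows how one might put an established scale, a custom SJT score, and a linguistic measure into the same regression and check which of them predicts rated job performance.

```python
# Illustrative sketch only: hypothetical data and column names, not the
# GHC study's actual analysis. Three candidate measures are compared as
# predictors of rated job performance in one multiple regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
candidates = pd.DataFrame({
    "general_scale": rng.normal(size=n),  # established, published measure
    "sjt_score": rng.normal(size=n),      # custom situational judgment test
    "essay_score": rng.normal(size=n),    # linguistic analysis of application essays
})
# Simulate an outcome in which only the SJT carries real signal.
candidates["performance"] = 0.5 * candidates["sjt_score"] + rng.normal(scale=1.0, size=n)

X = sm.add_constant(candidates[["general_scale", "sjt_score", "essay_score"]])
model = sm.OLS(candidates["performance"], X).fit()
print(model.summary().tables[1])  # which coefficients are statistically significant?
```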

Next, we would design new measures with variability in mind. Given the findings mentioned above about how much people’s behavior can change from one situation to the next, it might seem paradoxical to even try to find something enduring about a person’s character. But just because personality is dynamic does not mean it is undiscoverable. Some researchers have proposed using if-then questionnaires to detect nuanced patterns in each person’s personality profile, although such techniques have yet to be well-tested in the workplace. A better approach might be to take repeated measures from the same employees over time. That is often easier said than done, given the challenges many organizations face in getting employees to fill out even a single survey. If the participation problem can be overcome, however, repeated measures can lead to insights — about what people are like in general and the ways in which they vary — that onetime surveys simply can’t generate.

Earlier this year, for example, George Mason University researcher Jennifer Green and her colleagues took a novel approach to understanding the relationship between employees’ personalities and their organizational citizenship behaviors (those often-underappreciated extra ways that employees support their colleagues and organizations over and above their job duties). By using an experience sampling methodology in which they collected multiple reports from more than 150 employees over the course of 10 workdays, they were able to show that employees with more consistent personalities were, in turn, more consistent in going beyond the call of duty — even after controlling for their general dispositions. For jobs where consistency is key to success, these researchers argued, repeated measures offer a chance to find stability in the variability of employees’ personalities.
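
As a rough illustration of that logic (not the authors’ exact model), the sketch below simulates daily experience-sampling reports, summarizes each employee’s average trait level and day-to-day consistency, and then tests whether personality consistency predicts consistency in citizenship behavior after controlling for the average disposition. All names and numbers are hypothetical.

```python
# Simplified, hypothetical sketch of a consistency analysis from
# experience-sampling data; not the published study's actual model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_employees, n_days = 150, 10

trait_mean = rng.normal(3.5, 0.5, size=n_employees)   # general disposition
trait_sd = rng.uniform(0.2, 1.2, size=n_employees)    # lower = more consistent personality
daily_trait = trait_mean[:, None] + rng.normal(size=(n_employees, n_days)) * trait_sd[:, None]

# Citizenship behavior tracks the trait and is steadier for steadier people.
daily_ocb = 2.0 + 0.3 * daily_trait + rng.normal(size=(n_employees, n_days)) * trait_sd[:, None]

summary = pd.DataFrame({
    "trait_level": daily_trait.mean(axis=1),
    "trait_consistency": -daily_trait.std(axis=1),  # sign flipped so higher = more consistent
    "ocb_consistency": -daily_ocb.std(axis=1),
})

X = sm.add_constant(summary[["trait_level", "trait_consistency"]])
print(sm.OLS(summary["ocb_consistency"], X).fit().params)
```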

Finally, we would give people their own data in ways that would help them develop. Although most employees won’t have the skills or even the interest to track and analyze their own data, that doesn’t mean they wouldn’t be able to use it if it were summarized well and presented clearly.

Some tools have been designed expressly for that purpose. Microsoft MyAnalytics, for example, is an add-on to Office 365 that aims to reduce the pain of collaborative overload by sending you reports about your schedule and communication patterns. While there are nudges and recommendations built in, the basic premise behind the service is that providing you with a summary of your own data will help you identify your own strategies for making your work life better. In a similar vein, Ambit Analytics received a pre-seed round of funding in early 2018 for its technology that uses real-time voice analysis to coach managers in the moment on their communication skills. While the long-term viability and utility of both of these tools remain to be seen, both point to the potential for giving individuals more opportunities to learn from their own data.

Taking a more granular approach to people analytics does have its risks, of course. For one thing, because individuals can be more easily identified by their data, privacy may be an even larger worry than it normally is. It’s a valid concern — one that underscores the need for vigilance. Organizations must develop robust policies and practices to govern ethical data collection, access, and use. They must also be transparent with employees not only about what kinds of data are being collected but also about what the data says about them. Open and ongoing dialogues about the costs and benefits of more personalized analytics should be as common as the legalese-filled privacy statements people too often just click through.

A bigger-picture concern is the risk of hypercustomization. Organizations are prone to an often inaccurate uniqueness bias in which they assume that no other group of workers has ever been quite like them. That can lead to situations like the one I found myself in a few years ago, when senior leaders from two organizations independently — in the same week — asked me about the idea of measuring their employees’ levels of grit. When I explained that grit is usually measured as passion and perseverance for long-term goals, both were quick to say that they were defining it differently for their context. One said it was really about resilience and tenacity at her nonprofit; the other insisted that ambition was at its core. They may each have identified important traits for their respective organizations, but the problem was that they both contended they wanted to measure something called “grit.” If bespoke measurements with identical names start to proliferate, it’ll become much harder for all of us to talk with and learn from one another.

And learning, after all, is the raison d’être for people analytics. Organizations invest in it because they hope it will tell them something about their current or future employees that will increase the odds of forming productive long-term relationships with them. Employees engage with it when they have a reasonable expectation that it might reveal something about themselves and who they might yet become in their careers. Both of these goals will be better served if we pursue a finer-grained understanding of human potential.

Comments (3)
Kwesi da-Costa Vroom
Introduction:
I have been part of different work environments for at least a decade; for convenience, I will refer mainly to my time with multinationals. The dynamics that come into play in predicting performance are as interesting as they are deserving of a much broader scope of further research.

Discussion:
“The typical individual regularly and routinely manifested nearly all levels of nearly all traits in his or her everyday behavior.” (Source: this article.) That is a very complex statement. It encapsulates all the traits relevant to favorable performance prediction, just as the reverse is true.

Perhaps the scale below would be of great business interest in measuring performance, and the same could form an interesting basis for predicting future performance:
1. Prediction perspectives:
a. Financial objective perspective
b. Customer satisfaction perspective
c. Internal business operations perspective
d. Motivation and asset development perspective

It is important to note that those hired, whether full time or part time, have expectations, just as the employer does. Performance prediction has become the new needle point of assurance for retention, used to justify investment decisions in new hires and even old hands. Having such a complex “specimen” or “sample,” with the characteristics described in the quote above, would obviously pose a challenge to prediction.

Take a typical 9-to-5 worker who commits eight hours daily. For purposes of discussion, let’s say our sample works seven days a week and is a new staff member. If he or she stays on the job for six years, then technically he or she may have spent two of those years literally living on the job, a third of the time. A great deal goes on in this person’s life during the other two-thirds, the four years outside the job, and those happenings would be a huge contributing factor to the variability described in the quote above. Remember that roughly half of that two-thirds is spent asleep, so the staff member has effectively split the active, waking part of life into two equal halves with the employer.

Concerns:
To what extent must an employer delve into the employee’s one-third of time (eight hours), in the interest of accurate performance predictions, without infringing on privacy?

What level of commitment is sufficient to assure the employer of high future performance?

These questions are relevant for planning and strategic development.

Conclusion:
All of these tests eventually give valuable insights that traditional approaches may not offer. However, intuition and powerful human instincts are highly recommended for reviewing computer-aided test results before drawing conclusions to inform business decisions.
Deanna Brown
Thank you for a lucid and thoughtful article.  However, I suggest that your description of the raison d’être or purposes of the tests and analytics is not quite accurate for either employers or employees.  As with your "grit" example, the notion of relationships and of careers means something very different to each party.  And "people analytics" are geared toward employer needs;  after all, employers pay for the development, administration, etc.  Indeed, employees may not even have access to interpretations of the data.  To meet the purposes you describe, analytics, both instruments and process, need a lot more thought.
Barry Deutsch
My experience over 30 years of hiring and performance management consulting, and over 1,000 executive search projects, is that there is NO test that is predictive of future performance.

The intellectual tests do not predict future performance. They simply give you a perspective on raw intelligence - the ability to logically and rationally process information.

The personality assessments do not predict performance - they simply give you insight into a person's preferred behavior/communication style in a work setting. Most of the tests are easily manipulated - candidates answer questions or check the words the hiring manager wants to hear, rather than revealing the real candidate.

Both of these are still useful tools and insights even with their flaws.

However, the ONLY way to predict future performance is to conduct a performance- or success-based structured interview that correlates with the outcomes desired in the role. Adding role plays, homework, and working/practical sessions or real case studies - along with deep and intrusive reference checking - can boost interview accuracy from what most of the studies show is basically a 50/50 success rate into the 80-90 percent range.