‘Moneyball’ for Professors?

Using analytics to improve hiring decisions has transformed industries from baseball to investment banking. So why are tenure decisions for professors still made the old-fashioned way?
How can we use information about an employee’s current performance to predict future performance? That’s one of the key questions addressed by the ever-growing use of predictive analytics as a human resources (HR) tool. The application of predictive analytics to the management of baseball teams — brought to life in the book (2003) and then movie (2011) “Moneyball” — made vivid the ways that data-based modeling can be used for more accurate talent acquisition and deployment. The idea that metrics could guide strategy by supplanting purely intuitive decision making has created a boom in the use of predictive analytics in the HR industry.
Ironically, one of the places where predictive analytics hasn’t yet made substantial inroads is in the place of its birth: the halls of academia. Tenure decisions for the scholars of computer science, economics, and statistics — the very pioneers of quantitative metrics and predictive analytics — are often insulated from these tools.
That may soon change. A study we conducted with Dimitris Bertsimas and Shachar Reichman, published in Operations Research, finds that data-driven models can significantly improve tenure decisions at academic institutions. In fact, the scholars recommended for tenure by our model had better future research records, on average, than those who were actually granted tenure by the tenure committees at top institutions.
Tenure decisions have impacts that ripple far outside of university campuses. The choices of which scholars are offered permanent posts are the key HR judgments made by academic institutions in the United States. These decisions affect not just the scholars’ careers but also the funding of universities and the overall strength of scientific research in private and public organizations. On an individual level, a tenured faculty member at a prestigious university will receive millions of dollars in career compensation. At a broader scope, these faculty will bring funding into the universities that house them. The National Science Foundation, for instance, provided $5.8 billion in research funding in 2014, including $220 million specifically for young researchers at top universities.
Despite these factors, academic decision-making processes rely mainly on subjective assessments of candidates. We believe, though, that if analytics is given the opportunity to complement the tenure decision-making process by offering improved predictions about candidates’ future performance and scholarly research, businesses and the public will be better served by the academic community. Given the stakes, we think it is time for a “Moneyball moment” in academia.
Bringing predictive analytics to any new industry means identifying metrics that often have not received a lot of focus. It also means measuring how those metrics correlate with a measurable definition of success. In the case of academics, we identified as candidate metrics not only the impact of scholars’ past research but also details about whom they collaborate with and how their research fits into existing work. We defined future success as the volume and impact of a scholar’s future research, and we examined how well our selected metrics could predict the impact of that future research.
To prove the concept, the four of us developed a method based on quantitative metrics that can predict research performance better than earlier models. Our models use a concept called “network centrality.” This measures how connected a given scholar is in the networks that help define how successful their research is: the citation network, the coauthorship network, and a dual network combining the first two. (See “The Academic Dual Network.”) By building models using data from more than 130,000 scholars who had published papers in the field of operations research, we found that this approach significantly outperformed simple predictive models based on citation counts alone, which is the more commonly used approach.
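To make the centrality idea concrete, here is a minimal sketch of how a citation-based centrality score can be computed. The graph, the scholar names, and the simple PageRank-style scoring below are illustrative assumptions, not the study’s actual models or data; the published models combine several centrality measures over much larger citation and coauthorship networks.

```python
# Toy illustration of citation-network centrality as a predictive feature.
# All names and edges are hypothetical; this is a sketch, not the study's model.

def pagerank(graph, damping=0.85, iters=50):
    """Basic PageRank over a directed citation graph.

    graph: dict mapping each scholar to the list of scholars they cite.
    Returns a dict of centrality scores that sum to 1.
    """
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for src, targets in graph.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node: spread its rank evenly across all nodes.
                for v in nodes:
                    new[v] += damping * rank[src] / n
        rank = new
    return rank

# Hypothetical citation network: an edge points from citing to cited scholar.
citations = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["B", "C"],
}
rank = pagerank(citations)
best = max(rank, key=rank.get)  # scholar C is cited by all others
```

A citation-count baseline would rank scholars by raw in-degree alone; a centrality score like the one above additionally weights citations by how central the citing scholars themselves are, which is the intuition behind the dual-network approach.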
Applied to a hand-curated data set of 54 scholars who obtained doctorates after 1995 and held assistant professorships at top-10 operations research programs in 2003 or earlier, the statistical models made different decisions than the tenure committees for 16 (30%) of the candidates. Specifically, these new criteria yielded a set of scholars who went on to produce more papers in the top journals and research that was cited more often than the scholars who were actually selected by tenure committees. (Top journals were defined as the publications Management Science, Mathematical Programming, Mathematics of Operations Research, and Operations Research.)
Of course, we need to consider some of the limitations we encountered. While we are encouraged by our ability to better forecast future research success, other criteria need to be measured in tenure decisions. For example, the proposed models do not account for scholars’ service to their universities or their personalities — criteria that cannot be easily quantified. Tenure committees must rely on imprecise measures when evaluating candidates on these factors. The analysis has other limitations too, including the relatively small number of scholars in the data set and the focus on only one field.
Nonetheless, we expect that similar models could be used in a variety of academic contexts, such as hiring new professors, evaluating candidates for grants and awards, and hiring scholars who previously held tenure-track positions at other institutions. More experimentation is needed to assess the usefulness of predictions of future research impact in making these decisions.
Going forward, for prediction models to be most useful to academic tenure committees, they need to be implemented and separately calibrated for a broad range of academic disciplines using a large-scale database. One possibility is to develop and distribute the models as a complementary service to an existing bibliometric database like Google Scholar or Web of Science. Models would need to be updated periodically, as patterns of publication change over time.
Though further evaluation is needed, the demonstrated effectiveness of these predictive analytic models in the field of operations research suggests that data-driven analysis can be helpful for academic personnel committees. Subjective assessments alone no longer need to rule the day.