Experiments in Open Innovation at Harvard Medical School

What happens when an elite academic institution starts to rethink how research gets done?


Harvard Medical School seems an unlikely organization to open up its innovation process. By most measures, the more than 20,000 faculty, research staff and graduate students affiliated with Harvard Medical School are already world class and at the top of the medical research game, with approximately $1.4 billion in annual funding from the U.S. National Institutes of Health (NIH).

But in February 2010, Drew Faust, president of Harvard University, sent an email invitation to all faculty, staff and students at the university (more than 40,000 individuals) encouraging them to participate in an “ideas challenge” that Harvard Medical School had launched to generate research topics in Type 1 diabetes. Eventually, the challenge was shared with more than 250,000 invitees, resulting in 150 research ideas and hypotheses. These were narrowed down to 12 winners, and multidisciplinary research teams were formed to submit proposals on them. The goal of opening up idea generation and disaggregating the different stages of the research process was to expand the number and range of people who might participate. Today, seven teams of multidisciplinary researchers are working on the resulting potential breakthrough ideas.

In this article, we describe how leaders of Harvard Catalyst — an organization whose mission is to drive therapies from the lab to patients’ bedsides faster and to do so by working across the many silos of Harvard Medical School — chose to implement principles of open and distributed innovation. As architects and designers of this experiment, we share firsthand knowledge about what it takes for a large elite research organization to “innovate the innovation process.”

Harvard Catalyst’s Experiment

Harvard Catalyst, the pan-university clinical translational science center situated at Harvard Medical School, wanted to see if “open innovation” — now gaining adoption within private and government sectors — could be applied within a traditional academic science community. Many insiders were highly skeptical. The experiment risked alienating top researchers in the field, who presumably know the most important questions to address. And there was no guarantee that an open call for ideas would generate breakthrough research questions.



