Better Ways to Green-Light New Projects

Organizations can make better choices about which R&D projects gain funding by managing bias and involving more people.

In early 1962, an unknown band from Liverpool auditioned for Decca Records. The label rejected the band, saying, “We don’t like their sound, and guitar music is on the way out.” A little over a year later, the Beatles released their first album.1 The rest is history.

The business world is full of anecdotes about businesses that passed on an idea that later became a huge success. The reverse is also true; in some cases, companies invest in promising ideas that prove disastrous. A famous example is Iridium Communications, a former division of Motorola that sought to market satellite phones broadly. After the company sent satellites into orbit in 1998, a host of issues prevented the business from gaining traction with customers, and the company filed for bankruptcy the next year. (Iridium was restructured and is still around; its technology is used by the U.S. military.)2

Selecting innovative new projects for further investment and development is critical — and hard. The best R&D projects can renew an organization’s product lines, processes, and services, improving its performance and competitiveness. But deciding which new ideas are winners and which are duds is tough, because new initiatives are characterized by fundamental technological and market uncertainty. And our research shows that at many companies, bias and process issues can imperil good decisions.

To improve their track record of choosing the right innovations to bring forward, leaders must first understand where R&D selection panels go wrong. Based on our research, we have identified five main categories of such issues. We suggest specific steps that leaders can take before, during, and after the selection process in order to make more objective, fact-based decisions about which new ideas to green-light. While nothing can eliminate all risk from an inherently speculative endeavor, improving the process can tip the odds in companies’ favor.

Common Problems in How Companies Select Projects

Many organizations have created expert panels that invest significant time and effort in reviewing project pitches and deliberating on their merits. Because these panels are usually composed of senior members of the organization, they are an expensive resource. But research shows that these expert panels can be highly problematic for five principal reasons.

First, panels tend to show a strong bias against highly novel ideas, even though generating them is the explicit goal of innovation efforts.3 Decision makers often reject such ideas, even when they claim to want breakthrough innovations, because they are uncomfortable with the risk involved in pursuing them. We conducted a study in a leading professional services firm and found that project review panels were more likely to fund projects with intermediate levels of novelty. Some degree of novelty increased the chance of funding, but too much reduced those odds.4

Second, a broad range of studies has found that expert panels suffer from a lack of diversity. Organizations commonly staff panels with “the usual suspects”: highly senior men. The inherent uncertainty of green-lighting innovation pushes panelists to revert to established thinking about people and their backgrounds, favoring projects from people who look and sound like themselves rather than basing a decision on the merits of the idea itself.5 Biases may manifest as a preference for men over women, people with familiar names as opposed to those with “foreign-sounding” names, people with greater experience within the organization, people from a particular location, or people with high-status affiliations (such as a famous university). These biases can become self-reinforcing over time among homogeneous groups of people — all the more reason for organizations to increase diversity on innovation panels.

A recent study examining the selection of startups applying to the MassChallenge accelerator program found that when judges evaluated startups alone — without consulting with other judges, and on the basis of a purely textual description of the venture — they were less likely to be influenced by the founder’s gender and where they had earned their university degree. In contrast, when judges performed their assessments after watching a short pitch and Q&A session, gender and educational background became more important in their evaluations.6 When individual characteristics are more apparent, they affect the outcome of the decision-making process. This effect is compounded when an evaluation must be performed quickly, suggesting that fatigue and high workloads could lead to more biased choices.

The lack of diversity in expert panels is also problematic given that people with different demographic backgrounds have different experiences and consequently different opinions about which ideas to pursue.7 Thus, leaving out women and/or people of different backgrounds and heritages may lead to missed opportunities or failed investments.8

Third, technology companies usually staff expert panels with scientists and engineers, who tend to focus on the technical aspects of an idea without sufficiently considering the business opportunities and challenges. Although some expertise is required, having only experts on a panel can be problematic.9 Panel members may have a bias for ideas originating from their own field of expertise, and experts are prone to systematic errors in assessing truly novel ideas.

Fourth, the panel decision-making process itself may also lead to inferior outcomes. As in many other areas of collective decision-making, applications are often introduced by a panel member, akin to an informal sponsor. This person frames the discussion around the issues they believe the group should consider, creating an artificial consensus in support of that person’s stance. Even if sponsors strive to be objective, their own views on the project may be reflected in their tone and presentation, telegraphing biases and shaping the views of others on the panel.

Finally, the timing of the process can also yield inferior decisions. For example, we know that the timing of meals affects judges’ sentencing decisions, such that the percentage of favorable rulings gradually drops before, and then increases after, session breaks.10 In addition, the order in which projects are reviewed shapes outcomes. Looking at unique data from a professional services firm, we found that the decision to fund one project makes it unlikely that the next project will be funded. This occurs even when the sequence is random.11

Making Smarter Decisions Before, During, and After the Selection Process

There are a number of ways to reduce these biases and improve outcomes before, during, and after selecting innovation projects for further investment. Although these practices may require some resources and effort to deploy, they likely cost less than the traditional model of selection, which heavily taxes senior managers’ and technologists’ time and effort.

Before Selection

Before projects are evaluated, companies can take steps to ensure that they get a fair assessment based on their merits, primarily by revising the process for submitting ideas for consideration.

Remove names and demographic information. To combat latent biases, organizations should mask or remove the names and key demographic characteristics of creators behind any ideas under consideration. Some research has found that masking submissions in science increases the likelihood of women receiving grants.12 One simple, low-cost experiment that an organization can run is to mask the identity of the idea creator to test whether there is a bias for certain types of individuals in the organization.
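
As a rough illustration, masking can be built into the submission intake itself. The sketch below assumes a simple Python-based intake step; the field names (author_name, alma_mater, and so on) are hypothetical, not drawn from any particular system.

```python
# Minimal sketch: strip creator-identifying fields before proposals reach the panel.
# Field names are illustrative; adapt them to your own intake form.

IDENTIFYING_FIELDS = {"author_name", "gender", "tenure_years", "location", "alma_mater"}

def mask_submission(submission: dict) -> dict:
    """Return a copy of the submission with creator-identifying fields removed."""
    return {key: value for key, value in submission.items() if key not in IDENTIFYING_FIELDS}

idea = {
    "title": "Self-healing coating for turbine blades",
    "summary": "Polymer additive that seals micro-cracks while in service.",
    "author_name": "J. Doe",
    "alma_mater": "Famous University",
}
print(mask_submission(idea))  # only the title and summary reach reviewers
```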

Notably, although masking submissions is a start, companies will likely need to take additional steps to identify biases. A study concerning grant proposal acceptance by the Gates Foundation found that blind reviews do not always comprehensively eliminate gender biases.13 Gender-based differences can arise in the writing styles of the project proposers, which suggests that organizations need to continuously and proactively gauge whether they are falling prey to bias.

Standardize submissions. It’s critical that ideas competing against each other are comparable. This can be achieved by providing a detailed, standardized template for submissions. At the Defense Advanced Research Projects Agency (DARPA), program managers must frame their proposals following the Heilmeier Catechism, a set of questions first formulated by George Heilmeier, director of the agency in the mid-1970s.14 It establishes the criteria used by selectors to decide whether to fund a project and also acts as a screening device to help project managers judge whether their ideas have a chance of being approved. (See “The Heilmeier Catechism.”)

Another advantage of standardizing submissions is that companies can more easily build up searchable repositories of accepted and rejected ideas that can be easily compared, making the process more transparent and objective. A company we studied posts all project pitches and decisions on an internal webpage so that people can learn about what has worked in the past. Like pre-publication review in science, this open model also allows anyone in the organization to comment on pending applications, ensuring that the panel has access to the views of others in the organization before making its decisions.

Amazon also requires employees to present their ideas using a standardized approach. The basic criteria for evaluation are similar to those of the Heilmeier Catechism: what the estimated market potential of the idea is, whether Amazon can build it, and whether customers will love it.15
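
As an illustration of what a standardized template can enforce, the sketch below defines a pitch with required sections loosely modeled on the criteria described above; the field names are assumptions for illustration, not DARPA’s or Amazon’s actual formats.

```python
from dataclasses import dataclass, fields

@dataclass
class ProjectPitch:
    # Required sections, loosely modeled on Heilmeier-style questions (illustrative only).
    objective: str          # What are you trying to do?
    current_practice: str   # How is it done today, and what are the limits?
    novelty: str            # What is new in this approach?
    market_potential: str   # Who cares, and how big is the opportunity?
    feasibility: str        # Can we build it with the resources we have?
    risks_and_costs: str    # Key risks, estimated cost, and timeline

def is_complete(pitch: ProjectPitch) -> bool:
    """Reject submissions with empty sections so that competing pitches are comparable."""
    return all(getattr(pitch, f.name).strip() for f in fields(pitch))
```

A shared schema like this also makes it straightforward to store every accepted and rejected pitch in the kind of searchable repository described above.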

During Selection

The selection process itself is the stage most ripe for rethinking. There are several measures that organizations can take to improve outcomes.

Seek diverse voices from inside the company. Research has highlighted the importance of diversity across multiple dimensions — not only demographic characteristics but also professional expertise and backgrounds. A more diverse selection panel helps in two ways. First, it counters biases against women and people of different backgrounds and heritages and leads to products that appeal to people with a wider range of needs and interests. Second, greater knowledge diversity increases the odds that more novel projects will be funded. One relatively easy way to make selection teams more diverse is to include people with both technical and nontechnical backgrounds. This ensures that projects are evaluated not solely on technical aspects but also on market potential, business planning, strategic fit, and financing. For instance, many pharmaceutical companies involve experts in different therapeutic areas, as well as marketing representatives, in assessing drugs in development.

In addition, companies can impanel selection juries or create citizen assemblies among employees. If companies pursue this approach, they should also ensure that panel members feel free to express their views, especially in the presence of senior managers and technologists. The benefits of diversity are realized only when people speak up and give voice to their different perspectives.

Use crowdsourcing principles, both internally and externally. The basic principle of crowdsourcing is that the collective wisdom of a large group can sometimes lead to a better outcome than a decision by a small number of experts. Companies can use this approach both internally and externally. In internal crowdsourcing, companies provide a fictional currency to all employees or a large group of individuals and let them “invest” that currency in the idea that they think has the most potential. BMW, for example, has experimented with giving shop floor workers a limited budget of blue buttons to prioritize among project proposals collected from the workforce.16 Siemens has used a prediction market to evaluate new ideas suggested by its employees.17 An interesting facet of these methods is that they sometimes point to conclusions that run counter to those of company executives.
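
A minimal sketch of the fictional-currency mechanic, assuming each employee receives the same token budget to spread across proposals; the budget size and tallying rule are assumptions, not descriptions of BMW’s or Siemens’s actual systems.

```python
from collections import Counter

BUDGET_PER_EMPLOYEE = 10  # assumed number of tokens each employee may invest

def tally_investments(allocations: dict) -> list:
    """Sum the tokens invested in each idea, ignoring employees who overspend their budget."""
    totals = Counter()
    for employee, bets in allocations.items():
        if sum(bets.values()) <= BUDGET_PER_EMPLOYEE:
            totals.update(bets)
    return totals.most_common()  # ideas ranked by total tokens invested

allocations = {
    "alice": {"idea_A": 7, "idea_B": 3},
    "bob": {"idea_B": 10},
    "carol": {"idea_A": 4, "idea_C": 6},
}
print(tally_investments(allocations))  # [('idea_B', 13), ('idea_A', 11), ('idea_C', 6)]
```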

Some companies also involve external crowds in innovation selection. Lego Ideas, a well-known crowdsourcing platform, lets people around the world share their ideas for new Lego sets, vote on proposed ideas, and indicate how much they would pay for them. From Lego’s perspective, this reduces demand uncertainty and indicates which ideas may be successful. Since the platform’s inception in 2014, Lego has received more than 100,000 ideas. By letting the crowd prefilter the best ideas, Lego has had to carefully evaluate only a small subset, which has reduced the selection burden and allowed the company to focus its attention on the most promising ideas and their fit with Lego’s business model. Some products now on store shelves were originally developed through the platform.18

Use a workshop approach. Another way to assess innovation projects is to use workshops that convene experts from different fields to evaluate proposals collaboratively. The approach arose in the U.K., where government-funded research councils wanted to break away from the limitations of anonymous peer review, which tends to be conservative and intolerant of interdisciplinary projects.19 (In the U.K., these workshops are called “sandpits.”) They bring together scientists working in a particular area for a weeklong intensive retreat, where participants discuss their research ideas, get feedback from other experts, and collectively select which projects will be funded.

Leave it up to chance. It may sound heretical, but some organizations are incorporating randomness into R&D decisions, particularly when choosing among projects of midlevel quality. (It is typically easier to find agreement about outlier projects — those that are either extremely promising or extremely weak.) For example, New Zealand’s Health Research Council created a lottery system for randomly allocating scientific funding. Scientists prepare a full proposal, and every proposal that meets a set of basic requirements is entered into the lottery. Hence, chance decides which initiatives get funding.20

Given the difficulty of predicting outcomes for midlevel projects, random selection is likely to be as effective as educated guesswork. Recently, the Swiss National Science Foundation took up this practice. When reviewers cannot agree on a ranking of two or more research projects, rank is determined via a random drawing.21
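
A minimal sketch of the lottery mechanism described above: proposals that clear a basic quality bar enter a random draw for a fixed number of awards. The eligibility check and number of awards are assumptions for illustration.

```python
import random

def lottery_selection(proposals, meets_requirements, n_awards, seed=None):
    """Fund n_awards proposals drawn at random from those meeting basic requirements."""
    rng = random.Random(seed)
    eligible = [p for p in proposals if meets_requirements(p)]
    return rng.sample(eligible, k=min(n_awards, len(eligible)))

proposals = [
    {"title": "Gene-therapy screening platform", "complete": True},
    {"title": "New assay protocol", "complete": True},
    {"title": "Half-finished draft", "complete": False},
]
winners = lottery_selection(proposals, lambda p: p["complete"], n_awards=1, seed=42)
print([p["title"] for p in winners])
```

The same primitive can break ties: when a panel cannot agree on the ranking of two proposals, their order can be drawn at random rather than argued to exhaustion.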

Early research shows that these approaches can generate science of equal or better quality than the formal selection process.22 Of course, such methods are unlikely to fully replace other means of selection, but they can provide useful antidotes to conventional thinking.

Stage head-to-head comparisons. When companies need to rank a set of ideas, they can pit them against each other in head-to-head competitions. This mirrors the Elo rating system used in chess — named for Arpad Elo, the physics professor who developed it — though other ranking methodologies exist as well. In chess, beating a strong player raises your rating far more than beating a low-ranked one; the strength of the competition matters. Organizations can apply that same logic to select novel ideas. By comparing just two ideas at a time and repeating those comparisons across other combinations, companies can develop a ranking of ideas from best to worst. This approach simplifies the process of assessing a potentially overwhelming set of options.23 For example, German auto manufacturer Smart used an Elo approach to help it select among several thousand designs for automobile “skins” from people in 110 countries.24
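
A minimal sketch of an Elo-style update for pairwise idea comparisons appears below; the K-factor and starting rating are conventional chess defaults, not values used by Smart or Shell.

```python
def elo_update(winner_rating: float, loser_rating: float, k: float = 32.0):
    """Update two ratings after a head-to-head comparison won by the first idea."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser_rating - winner_rating) / 400.0))
    delta = k * (1.0 - expected_win)  # beating a highly rated idea yields a larger gain
    return winner_rating + delta, loser_rating - delta

ratings = {"idea_A": 1500.0, "idea_B": 1500.0, "idea_C": 1500.0}
for winner, loser in [("idea_A", "idea_B"), ("idea_C", "idea_A"), ("idea_C", "idea_B")]:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

print(sorted(ratings.items(), key=lambda item: item[1], reverse=True))  # best to worst
```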

Similarly, Shell used a head-to-head approach in an annual innovation competition that it ran from 2013 to 2018 with the goal of improving energy, water, and food sustainability. In 2018, after receiving over 1,100 ideas submitted from 140 countries, the company selected 72 of the most promising and then had them assessed by master reviewers and an external panel, which used head-to-head comparisons until a final winner was chosen.

After Selection

Finally, once organizations have determined which projects to green-light, they can take actions that help innovators develop more successful proposals, and help decision makers make better choices the next time.

Provide feedback on proposals. Selection panels should provide specific feedback on all proposals and make it accessible across the organization. This kind of feedback should help idea creators to better structure future submissions, alleviate the potential demotivating effect of having an idea rejected, and increase trust that the decision process is fair. And, critically, requiring this feedback creates an accountability mechanism that will encourage selectors to consider what motivated them to reject or accept a given project.

Ericsson has set up an online system called IdeaBoxes for collecting and managing employees’ ideas. Critically, the system includes moderators who are responsible for providing feedback within one month of an idea’s submission and explaining why it has been selected or rejected. Because these comments are visible to the entire organization, employees can learn from their own and others’ experiences.25

Similarly, Bristol-Myers Squibb has made significant changes to its selection process in order to increase fairness and the accountability of decision makers. Committees in charge of managing the company’s portfolio of drugs and deciding which compounds to progress through the pipeline are now required to communicate their decisions and provide feedback to R&D project leaders within hours. Most important, R&D project leaders are asked to complete a survey evaluating not only the selection process but also each of the individual members of the committee.26

Track and learn from failures. Too few organizations conduct a systematic, quantitative review of their R&D selection process. In particular, organizations fail to capture sufficient information on their failures. A big part of cultivating a tolerance for failure is making failures visible and learning from them. Without such information, it is difficult to know whether selected projects met expectations or whether the organization missed an opportunity that another entity ultimately pursued.

By monitoring outcomes of the selection process — both successes and failures — it is possible to assess how well the current selection process works. Such analysis will reveal not only whether the process can identify high-value ideas but also whether the pool of projects and/or creators fully reflects the wide range of talents and skills of the organization, and whether selection panels are overly risk-averse.
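
As a rough illustration of this kind of quantitative review, a simple log of past decisions and their outcomes can yield a few basic metrics; the field names and the two metrics here are hypothetical examples, not a standard framework.

```python
def selection_metrics(records: list) -> dict:
    """Summarize how often funded projects met expectations (hit rate) and how often
    rejected ideas were later pursued successfully elsewhere (missed opportunities)."""
    funded = [r for r in records if r["funded"]]
    rejected = [r for r in records if not r["funded"]]
    return {
        "hit_rate": sum(r["met_expectations"] for r in funded) / max(len(funded), 1),
        "missed_opportunity_rate": sum(r["pursued_elsewhere"] for r in rejected) / max(len(rejected), 1),
    }

decision_log = [
    {"funded": True, "met_expectations": True, "pursued_elsewhere": False},
    {"funded": True, "met_expectations": False, "pursued_elsewhere": False},
    {"funded": False, "met_expectations": False, "pursued_elsewhere": True},
]
print(selection_metrics(decision_log))  # {'hit_rate': 0.5, 'missed_opportunity_rate': 1.0}
```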

No one wants to be the decision maker who passed on a good investment — like the senior partner of venture capital firm Bessemer Venture Partners, who counseled one of Facebook’s founders, “Kid, haven’t you heard of Friendster? Move on. It’s over!”27 Likewise, the Bic executives who green-lighted a pen for women that unsurprisingly failed in the marketplace almost certainly had regrets.28 But there will always be some hits and some misses in the process of assessing innovation projects for funding. Our research shows that by understanding the potential pitfalls and improving the process, companies can make smarter decisions and generate better outcomes. Innovation will always be tough to assess, but by creating a process that is more open, fluid, and collaborative, organizations can spot the true gold nuggets among the rocks.

References

1. H. Davies, “The Beatles: The Authorized Biography” (New York: McGraw-Hill, 1968).

2. J. Bloom, “Eccentric Orbits: The Iridium Story” (New York: Grove Atlantic, 2016).

3. S. Harvey and J.S. Mueller, “Staying Alive: Toward a Diverging Consensus Model of Overcoming a Bias Against Novelty in Groups,” Organization Science 32, no. 2 (March-April 2021): 293-314.

4. P. Criscuolo, L. Dahlander, T. Grohsjean, et al., “Evaluating Novelty: The Role of Panels in the Selection of R&D Projects,” Academy of Management Journal 60, no. 2 (April 2017): 433-460.

5. M. Reitzig and O. Sorenson, “Biases in the Selection Stage of Bottom-Up Strategy Formulation,” Strategic Management Journal 34, no. 7 (July 2013): 782-799; and Criscuolo et al., “Evaluating Novelty,” 433-460.

6. D. Fehder and F. Murray, “Evaluation of Early-Stage Ventures: Bias Across Different Evaluation Regimes,” Academy of Management Proceedings 108, no. 1 (August 2018): 477-482.

7. R. Koning, S. Samila, and J. Ferguson, “Inventor Gender and the Direction of Invention,” AEA Papers and Proceedings 110 (May 2020): 250-254.

8. R. Cao, R.M. Koning, and R. Nanda, “Biased Sampling of Early Users and the Direction of Startup Innovation,” working paper 28882, National Bureau of Economic Research, Cambridge, Massachusetts, June 2021.

9. K.J. Boudreau, E.C. Guinan, K.R. Lakhani, et al., “Looking Across and Looking Beyond the Knowledge Frontier: Intellectual Distance, Novelty, and Resource Allocation in Science,” Management Science 62, no. 10 (October 2016): 2765-2783.

10. S. Danziger, J. Levav, and L. Avnaim-Pesso, “Extraneous Factors in Judicial Decisions,” Proceedings of the National Academy of Sciences of the United States of America 108, no. 17 (April 2011): 6889-6892.

11. P. Criscuolo, L. Dahlander, T. Grohsjean, et al., “The Sequence Effect in Panel Decisions: Evidence From the Evaluation of Research and Development Projects,” Organization Science 32, no. 4 (July-August 2021): 987-1008.

12. R. Tamblyn, N. Girard, C.J. Qian, et al., “Assessment of Potential Bias in Research Grant Peer Review in Canada,” Canadian Medical Association Journal 190, no. 16 (April 2018): E489-E499.

13. J. Kolev, Y. Fuentes-Medel, and F. Murray, “Is Blinded Review Enough? How Gendered Outcomes Arise Even Under Anonymous Evaluation,” working paper 25759, National Bureau of Economic Research, Cambridge, Massachusetts, April 2019.

14. “Innovation at DARPA,” PDF file (Washington, D.C.: Defense Advanced Research Projects Agency, 2016), https://www.darpa.mil.

15. C. Bryar and B. Carr, “Working Backwards: Insights, Stories, and Secrets From Inside Amazon” (New York: St. Martin’s Press, 2021).

16. C.H. Loch, F.J. Sting, N. Bauer, et al., “How BMW Is Defusing the Demographic Time Bomb,” Harvard Business Review 88, no. 3 (March 2010): 99-102.

17. K.R. Lakhani, R. Hutter, S.H. Pokrywa, et al., “Open Innovation at Siemens,” Harvard Business School case no. 613-100 (Boston: Harvard Business School Publishing, June 2013).

18. “What Is Lego Ideas About?” Lego, accessed June 28, 2021, https://ideas.lego.com.

19. “Sandpits,” Engineering and Physical Sciences Research Council, accessed June 28, 2021, https://epsrc.ukri.org.

20. S. Avin, “Mavericks and Lotteries,” Studies in History and Philosophy of Science Part A 76 (August 2019): 13-23.

21. D.S. Chawla, “Swiss Funder Draws Lots to Make Grant Decisions,” Nature, May 6, 2021, www.nature.com.

22. D. Adam, “Science Funders Gamble on Grant Lotteries,” Nature 575, no. 7785 (November 2019): 574-575.

23. J. Herlocker, J.A. Konstan, L.G. Terveen, et al., “Evaluating Collaborative Filtering Recommender Systems,” ACM Transactions on Information Systems 22, no. 1 (January 2004): 5-53.

24. J. Füller, K. Möslein, K. Hutter, et al., “Evaluation Games: How to Make the Crowd Your Jury,” in “Proceedings of the Service Science—Neue Perspektiven für die Informatik. Lecture Notes in Informatics (LNI): Proceedings, Series of the Gesellschaft für Informatik, Vol. P-175,” ed. K.-P. Faehnrich and F. Bogdan (Leipzig, Germany: Springer, 2010), 955-960.

25. M. Beretta, “Idea Selection in Web-Enabled Ideation Systems,” Journal of Product Innovation Management 36, no. 3 (January 2018): 5-23.

26. P. Tollman, V. Panier, D. Dosik, et al., “Unlocking Productivity in Biopharmaceutical R&D: The Key to Outperforming,” PDF file (Boston Consulting Group and Bristol-Myers Squibb, 2016), www.bcg.com.

27. “The Anti-Portfolio,” Bessemer Venture Partners, accessed July 15, 2021, www.bvp.com.

28. C. Sieczkowski, “Bic Pens ‘For Her’ Get Hilariously Snarky Amazon Reviews,” HuffPost, Aug. 30, 2012, www.huffpost.com.
