Companies embarking on AI and data science initiatives in the current economy should strive for a level of economic return higher than that achieved by many companies in the early days of enterprise AI. Several surveys suggest a low level of returns thus far, in part because many AI systems were never deployed: A 2021 IBM survey, for instance, found that only 21% of 5,501 companies said they had “deployed AI across the business,” while the remainder said they were exploring AI, developing proofs of concept, or using pre-built AI applications. Similarly, a VentureBeat analysis suggests that 87% of AI models are never put into production. And a 2019 MIT Sloan Management Review/Boston Consulting Group survey found that 7 out of 10 companies reported no value from their AI investments. This makes sense: If there is no production deployment, there is no economic value.
But other companies have achieved economic return on their AI investments. Their strategies for finding value include establishing close relationships between the data group and interested business units, selecting projects with tangible value and a clear path to production, lining up trust from key stakeholders in advance of development, building reusable AI products, selectively employing “proof of concept” projects, and establishing a management pipeline or funnel leading projects toward production implementation. We describe each of these approaches below.
Six Strategies Toward Value
AI projects are typically led by the company’s data science group, which is tasked with both executing the projects and taking responsibility for their outcomes. These six strategies can help guide the data science team toward a greater chance of success in these cross-unit projects.
Focus on partnerships with AI-friendly business units. It is widely known that any technology project benefits from partnerships with the business functions or units that will use the new system. With AI projects, however, it is important to work with business unit leaders who understand the technology and its potential. Indicators of likely support include leaders who are familiar with data and analytics, have curated data, and even have an analytical team within their organization.
At BMO Financial Group, where one of us (Ren Zhang) is the chief data scientist, support for AI within the bank’s different business units is closely correlated with how much data is available within that unit. The bank’s digital unit, for example, has large volumes of clickstream data from customers and welcomes AI and analytics to make sense of the data and to personalize customer interactions. The bank’s financial crimes unit also has data on customer and employee behaviors and is always interested in using the latest AI tools to identify and stop criminal activity. Both of these departments also are subject to industry trends that help prioritize the adoption of AI: For the digital unit, it’s the increasing customer demand for personalized experiences, and for the financial crimes unit, it’s the rise of cyberattacks and digital fraud. These groups have been enthusiastic internal customers of AI.
But other groups within the bank are naturally more conservative in their embrace of AI. The commercial bank, for example, serves fewer customers than the consumer bank and prefers a personal touch over more automated processes and interactions. Executives in the credit risk function are supportive of using data and analytics for better credit decisions, but that aspect of the business is heavily regulated. Complex machine learning models may become more attractive once their transparency challenges are resolved to regulators’ satisfaction and their complexity is justified by clearer prospects of incremental gain.
Still, the more reticent internal organizations should by no means be ignored by the data science team. They can be educated about AI and possible use cases in their units, with an eye toward greater AI usage over the longer term. Small pilot projects may also be undertaken in organizations where stakeholders are still exploring the idea of AI. But large-scale production deployments should probably be planned elsewhere.
Select projects with tangible value and a clear path to production. AI groups should always describe projects in terms of the business case and the expected outcomes. Demonstrating value helps build credibility and opens the door to future use cases in other groups. Joint planning with the business team also clarifies how the AI will be used in practice, providing extra assurance that the solution will actually be implemented.
AI and analytics projects vary considerably in their ability to create tangible value, and any AI team needs some low-hanging fruit to balance out more ambitious efforts. For example, projects involving task automation, such as those employing robotic process automation (RPA), are relatively inexpensive to develop and often provide rapid payback, particularly when replacing outsourcing arrangements. The Spanish telecommunications company Telefonica O2, for example, adopted RPA several years ago for back-office administrative tasks and achieved ROIs of between 650% and 800%.
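To make ROI figures like these concrete: ROI is simply net benefit relative to cost. The sketch below uses purely hypothetical dollar amounts (the article does not disclose Telefonica O2’s actual costs or savings) to show how the arithmetic works.

```python
def roi_pct(total_benefit: float, total_cost: float) -> float:
    """Return on investment as a percentage: net gain relative to cost."""
    return (total_benefit - total_cost) / total_cost * 100

# Hypothetical illustration only: an automation program costing $100,000
# that yields $850,000 in savings lands at a 750% ROI --
# squarely inside a 650%-800% range.
print(roi_pct(850_000, 100_000))  # 750.0
```

The same calculation run in reverse shows why such ranges are plausible for RPA: even modest labor savings dwarf the comparatively small cost of scripting routine back-office tasks.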
Other projects that typically create clear value are those that involve acquisition of new customers, prevention of customer attrition or churn, or prevention of credit risk or default. In manufacturing companies, improvements in quality (as at Seagate in wafer manufacturing) or reductions in supply chain inventory are relatively easy to value. Machine learning projects that improve decisions can be measured and valued by comparing their results with those of the previous decision-making approach.
Other AI or analytics projects are more difficult to measure and justify. Applications intended to inform executives, whether simple dashboards or AI-enabled environmental scanning platforms, are difficult to put a value on because of the challenge of attributing executive decisions to them. Segmentation of customers, a classic marketing analytics activity, offers little value in itself and only becomes valuable when different customers are treated differently and cross-selling or upselling takes place successfully.
In a webinar last summer on the challenges of achieving a return on AI investments, several companies said they had established partnerships with the CFO or finance team to evaluate projects before and after implementation. Others said they minimize the discussion of AI with business stakeholders to avoid some of the hype associated with the field.
Foster stakeholder trust and sponsorship in advance of development. Production deployments of AI are time-consuming and expensive because they involve extensive testing, integration with existing systems architectures, redesign of business processes, and upskilling or reskilling of employees. As a result, it is important to have strong sponsorship from the leaders of the functions or units in which the systems will be implemented, including collaboration on cost estimates and expectations for necessary changes in the business.
For example, we spoke to Vipin Gopal, the chief data and analytics officer at the global pharmaceutical company Eli Lilly and Co. One of his first activities when he joined the company was to interview senior business leaders across the organization. After the interviews, he recommended three areas to focus on for use cases, with an articulation of the costs and benefits of each. He also presented the use case ideas to the entire senior executive team. The projects were all endorsed and moved ahead successfully. He credits the early buy-in and alignment of business leaders as a significant enabler of the value these use cases generated.
Build reusable AI products to drive scale. AI projects are expensive to develop, and it’s only logical to reuse them (or parts of them) whenever possible. At BMO, for example, the data science group created a machine learning-based pricing offering for internal business banking clients. After that project was completed, the team created a generic pricing capability and now reuses it for other projects. Looking for reusable components helps to both reduce cost and speed implementation of future projects, and makes effective use of scarce data science talent. It helps avoid the potential pitfall of investing in capabilities with limited usage.
A health care provider told us that its data science team created a natural language processing (NLP) capability that was initially used for surgical appointments but later applied to other types of appointments, revenue cycle inquiries, and even COVID-19 test results. While wise AI leaders wait for demand and then create a solution for a particular customer, they also invest a bit in thinking about how to generalize that solution later for broader application.
Use PoCs selectively, but create a path to implementation. Given the cost of a full deployment of an AI effort, many stakeholders need to see evidence of value before proceeding. A proof of concept (PoC) project can be used to measure and demonstrate the value, and shape an effective solution. It also allows for identification of solution components that are costly but don’t add much value. PoC projects are a great way to gain traction with stakeholders.
DBS Bank in Singapore, for example, developed a PoC of an anti-money laundering system that used more data than the previous rules-based system and led to many fewer false-positive alerts. The bank’s head of account surveillance estimated that with the system, surveillance analysts could be one-third more productive and that 50% fewer customers would need to be sent anti-money laundering notices. He was enthusiastic about a full deployment of the solution because it exceeded the estimates during the PoC phase.
As we noted at the beginning of this article, in many organizations PoCs never deliver their potential value because they are never fully deployed. But the default assumption should be that if a PoC demonstrates the estimated level of value to the business, it will move into production.
Manage the project pipeline toward full implementation. As with sales leads or new product development projects, data science projects should have clear intake criteria and be viewed in terms of progress down a pipeline or funnel, with idea conception and PoC at the beginning and production deployment at the end. Not every project will make it to full deployment, but that should be the goal.
Simply establishing such a pipeline will help to achieve value, but it should be accompanied by management mechanisms like stakeholder identification and communications, resource requirement analysis for various stages, and change management analysis. Data science teams should have regular — usually monthly — pipeline reviews of the benefits, resource consumption, and business partner commitment levels for both PoCs and production deployments. At these meetings, all of the tactics described above can be discussed with regard to individual AI projects.
AI is a somewhat unfamiliar topic to many businesspeople, but the usual disciplines for achieving financial benefits substantially in excess of costs still apply.