A gap already exists between companies’ ability to collect data and managers’ skills at putting it to use. Will AI increase the divide?

The use of artificial intelligence in the criminal justice system offers a stark example of the contrast between knowing how to produce results and knowing how to consume them intelligently. Systems recommend bail and sentencing decisions but offer little transparency about the basis for those recommendations, leaving the humans who act on them potentially underinformed.

What if we knew as little about the production of the food we eat as we do about these systems? As it stands, we know more about what we put into our mouths than what we put into our minds.

Are Organizations Biting Off More Analytics Than They Can Chew?

In 2015, we observed a growing gap between organizations’ ability to produce analytics and their ability to consume them. The article “Minding the Analytics Gap” describes how organizations struggle to consume the analytics results they produce. Worse, the gap grew, rather than shrank, as organizations got better at analytics.

Yes, organizations were rapidly improving their ability to produce analytical results. They were gathering more and more data. They were building digital infrastructures to process these vast quantities of data. They were developing (or acquiring) the talent required to develop complex models of market behavior. When these pieces all came together, organizations could create sophisticated analytical results.

Unfortunately, managers and executives in those organizations often did not have the expertise to consume the analytics results that the organization was able to produce. Just having the analytics results available wasn’t enough. The organizational ability to develop business insight and strategy based on those analytical results was more limited.

The difficulty lies in the differing rates of improvement in production abilities and consumption abilities. As organizations matured analytically, they improved their analytics production capabilities more quickly than their consumption abilities. As a result, maturing organizations found that, even though their consumption abilities were improving, they were able to consume less and less of what they produced. The analytics gap worsens as organizations improve, the opposite of what leaders would hope and expect.

And yet this may have just been the tip of the iceberg. When it comes to artificial intelligence in business, the divergence and resulting gap between production and consumption of data analytics may be an even bigger concern.

Artificial Intelligence Widens the Analytics Gap

Artificial intelligence in business builds on an analytics foundation. (Stay tuned: we have much more coming on this in our forthcoming report on artificial intelligence and business strategy this fall.) As a result, organizations will similarly experience a growing gap between artificial intelligence production and artificial intelligence consumption. What’s worse, the AI production-consumption divide stands to grow faster than the one we’ve observed with standard data analytics. Everything hinges on the relative rates of change in the sophistication of AI production versus AI consumption.

AI production sophistication seems poised to grow rapidly. AI builds quickly on what organizations have learned from analytics. As new techniques are developed, tools incorporate them quickly; the scarce resource for most AI is data, not algorithms. Algorithms, by definition, are software; they are easily and perfectly copied. At the extreme, complex AI algorithms can be incorporated into production processes without data scientists even understanding their details; they simply use the library or tool. The result is a rapid increase in the sophistication of AI within an organization.

Conversely, managers and executives may find that their understanding of AI output improves slowly. As complex as analytical models can be, managers and executives likely have at least some basic statistics background to build from, so they have a starting point. With artificial intelligence models, managers probably have less background: machine learning is rarely part of the core business curriculum.

Moreover, many of the algorithms themselves are “black boxes,” particularly when offered by vendors that want to protect their development investments. Deep learning neural networks can be trained on an organization’s data to yield high predictive accuracy. But unlike many analytical models, which attach coefficients to observable input measures, these networks encode what they have learned in a large number of weights on nodes in hidden layers — not the sort of description that makes AI models accessible for easy consumption.
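The contrast is easy to see in code. The following is a minimal, illustrative sketch (not from the article; the data and model sizes are invented) comparing a classical regression, whose coefficients map directly onto observable inputs, with a small one-hidden-layer neural network, whose parameters do not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a response driven by two observable inputs.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Classical analytical model: ordinary least squares.
# Each fitted coefficient reads as "effect per unit of one input."
coefs, *_ = np.linalg.lstsq(np.c_[X, np.ones(200)], y, rcond=None)
print(coefs[:2])  # approximately [3.0, -1.5]

# A neural network with one hidden layer of 50 units on the same
# two inputs has far more parameters, and none of them is tied to
# a single observable input.
hidden = 50
n_params = (2 * hidden + hidden) + (hidden * 1 + 1)  # weights + biases
print(n_params)  # 201 internal weightings, with no direct reading
```

Two coefficients a manager can interpret versus 201 hidden weights no one can read off directly: the predictive machinery grows, but the consumable explanation does not.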

As a result, the divergence between the production and consumption of artificial intelligence in organizations may increase even more quickly than it has for analytics. Managers then may find that their organizations’ AI models work, yet not understand why.

Peeking Inside the Black Box

Robert M. Pirsig’s 1974 novel Zen and the Art of Motorcycle Maintenance remains relevant today because it contrasts romantic and rational relationships with technology. On the narrator’s motorcycle road trip, maintenance was inevitable. Treating the machine as a black box (romantically, in Pirsig’s terms) led to frustration, breakdowns, and an unhealthy reliance on others; inauthenticity stems from a lack of knowledge. A rational approach, one that puts in the effort to understand the machine, led to independence, stability, and even pleasure in working with the technology.

Without understanding how AI works, we lose the ability to think critically about where the results are strong and where they are weak. We lose the ability to understand how changes outside the scope of the model will adversely affect the model. We lose the ability to know where the AI will fail before it fails. We lose the ability to repair it ourselves when it does inevitably fail.

Stopping gains in artificial intelligence isn’t the right approach, even if it were possible. Instead, managers need to work to close the gap by learning more about AI, by opening the black box, by learning enough to be better managers in a future that relies on AI. Success depends on rational problem-solving approaches to AI, not romantic reliance.

2 Comments On: Romantic and Rational Approaches to Artificial Intelligence

  • Peter Evans | June 20, 2017

    Sounds a lot like my elementary school teacher explaining why calculators have not made memorizing multiplication tables obsolete. –PE

  • Nik Zafri Abdul Majid | June 25, 2017

I do feel that we should know “what to apply” and “when to apply.” It is not possible to apply everything. It’s like having so many systems but not being able to implement all of them.

Thus, managers and executives will soon feel that this is another means of “control” (both negative and regulative), and they will experience AI as a constraint on their performance, just as when an organization attempts to apply too many systems.

We should be asking ourselves: “Do we need everything, or are we simply following the trend?”

In my humble experience consulting on and implementing AI in organizations in Southeast Asia, what organizations need to understand is “cognitive technologies”; these are already around them, but how to make use of or upgrade them is another issue.

When I say these are already around them, I am referring, for example, to customer relationship management (CRM) and how to upgrade the current system by integrating cognitive technologies. All my customers want is to “make things easier than before,” for example in marketing, service, and sales (the ability to predict clients’ needs and multi-demographic classification). Cognitive technologies include, among others, cloud, automated email readers that send standard replies after scanning keywords and cross-referencing a specific database, image classification, auto-scoring of sales leads, automated voice interactions, etc. All of these “value-added” elements will depend on the organization’s capacities in analytics (e.g., models based on automated generation).

Start small, take it one step at a time, learn from past lessons, and hopefully end big.

For the other part, such as machine learning or robotic technology (especially for manufacturing), partial responsibility for understanding the client’s needs and specs lies with the vendors. If the vendor is not effective, the system will end up ineffective as well. As a consultant, I would likely follow the “IBM style”: start customers out with a “Cognitive Value Assessment,” which requires help and input from other subject matter experts.

Eventually, once the transfer of technology from the vendors has reached maturity, the organization can start building on its own. There are so many open source cognitive-based applications and software packages, and they are cheap.
