How Managers Can Enable AI Talent in Organizations

Leading a successful AI-enabled workforce requires key hiring, training, and risk management considerations.

Reading Time: 8 min 

Topics

The AI & Machine Learning Imperative

“The AI & Machine Learning Imperative” offers new insights from leading academics and practitioners in data science and artificial intelligence. The Executive Guide, published as a series over three weeks, explores how managers and companies can overcome challenges and identify opportunities by assembling the right talent, stepping up their own leadership, and reshaping organizational strategy.

Brought to you by

AWS
See All Articles in This Series

Recent technical progress in machine learning, particularly in deep learning, has driven an accelerating trend of businesses adopting AI technologies into their processes and workflows over the past decade.1 Some of these advances, such as Google DeepMind’s AlphaGo and OpenAI’s GPT-2 and GPT-3 models, have demonstrated expert-level performance in domains previously held up as examples of areas where machines would be incapable of challenging human abilities.2

With respect to business outcomes, most of the exciting developments involve using deep learning for supervised learning problems. Supervised learning is a form of machine learning where you have input and output variables and use an algorithm to learn the function that relates input to output. The algorithm is “supervised” because it learns from training data where input and output are known in advance. These deep learning algorithms enable a different kind of software development — where instead of explicitly writing a recipe in code to complete a task, a model is trained with data to learn how to complete the task on its own. These algorithms are also especially useful for a wide range of prediction tasks.3
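The supervised learning setup described above can be sketched in a few lines of Python. The linear model and toy data here are illustrative, not from the article; the point is simply that the algorithm learns the input-to-output function from labeled examples rather than having it written out explicitly:

```python
# A minimal supervised learning sketch: the training data pairs known
# inputs with known outputs, and the algorithm learns the function that
# maps one to the other. Here we fit y = w*x + b by ordinary least squares.

def fit_linear(xs, ys):
    """Learn slope w and intercept b from labeled (input, output) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Training data generated from the (hidden) rule y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]

w, b = fit_linear(xs, ys)
print(round(w, 2), round(b, 2))  # the learned function recovers w=2, b=1
```

Deep learning follows the same recipe, only with far more flexible functions and far more data: show the model labeled examples, and let training recover the mapping.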

Finding and enabling talented individuals to succeed in engineering these kinds of AI systems can be a daunting challenge for companies. Building organizational AI/machine learning capabilities requires a fundamental reengineering of existing business processes. These efforts naturally include hiring or training technical talent.4 Effective AI management, however, is perhaps even more critical. Ultimately, managers are responsible for shaping the design and direction of the organization’s strategy to maximize the returns of any new technology. With this comes the responsibility to manage the associated risks of building AI systems. Done properly, effective AI management can drive faster productivity growth and provide companies with a competitive advantage.

Hiring and Training Considerations for Managers

The first requirement for leaders in building a successful AI system is hiring and training the right talent. An AI team is effectively a type of data science team, but it builds a different suite of products. For example, instead of running experiments to determine the effect of a new ad campaign, an AI team might build a product image classifier to determine how store shelves are organized. These teams use many of the same tools, including common programming languages like Python and R, cloud-based computing environments, and database technologies. Provisioning a team to build machine learning models requires familiarity with the organization’s technology stack. Questions for leaders and teams to keep in mind include the following:

  • Is there a way to access a lot of computational power quickly? Running production-quality AI systems is often best handled with cloud services, but building out a data center can be a better option for some companies. Either way, AI engineers are going to need access to the right machines.
  • Is there technical talent supporting the stability of the computational systems? Stability of data infrastructure and computational resources is key to building out systems that scale. That means hiring IT talent that can make it easy for data science and AI engineers to produce reliable models.
  • Is data collected, cleaned, and accessed in a reliable and compliant way? Professional data engineers can make sure that the raw data inputs are available in the format and quality needed to maximize AI value while minimizing risks.

AI, like other forms of IT, requires a lot of preexisting investment in various other assets, such as technical expertise, business processes, data, and culture, to be productive and provide value in a new context.5 Early on, all of this additional complementary investment and change management can make it seem like AI (and data science as well) is a drag on productivity. After all, more resources are committed to generate some of the same outcomes. However, in time, what may have looked like initial dips in measured productivity will pay off with real returns. My research colleagues and I refer to this phenomenon as the Productivity J-Curve, and our research supports the idea that these up-front investments help organizations move toward the objectives stakeholders want to reach.6

In my own work partnering with LinkedIn’s Economic Graph Research and Insights team, I found that a major portion of the business value of AI talent is reflected in these complementary assets. This makes sense given that many of these intangible assets, such as new processes, provide more value when AI skills become easier to acquire.

New tooling and platforms, such as Google’s TensorFlow and Facebook’s PyTorch open-source machine learning libraries, have made it easier to train deep learning models and build skills more quickly on AI teams. In my research, I used LinkedIn data to track the prevalence of AI skills across companies and found that the market value of publicly traded companies that were already using AI increased by as much as 3% to 7% after TensorFlow came into the market at the end of 2015.7

These types of open-source solutions allow companies to accelerate machine learning initiatives without undertaking the cost of building out new development frameworks themselves. Meanwhile, the technical requirements needed to support production systems written with TensorFlow or PyTorch are already integrated into the major cloud providers. For managers, the best bet is to hire people who either know one or more of these frameworks or can learn them quickly. On the training side, these frameworks emphasize concepts over difficult programming syntax. That means existing employees in software engineer and data analyst roles can quickly learn the skills they need to be AI engineers. Programs such as Deeplearning.ai and Fast.ai offer ways to pick up these additional frameworks through online instruction.

Developing Effective AI Management

Even with a strong technical team in place, every AI-powered organization needs to successfully invest in organizational complements to maximize the return of AI. There are many perils and pitfalls to using AI systems. Managing these risks requires designing an effective management and reporting structure.8 Organizational design choices play an important role here. For instance, are the AI engineers developing products for internal clients, or are they part of those client teams? Some companies prefer a hub-and-spoke model, where a core analytics team supports many different internal groups, while others might embed data scientists within each of those groups. The same organizational models can be applied to AI. When AI developers aren’t embedded within internal groups, some of those clients might be worried about AI replacing them or challenging their position in the company. They might stand in the way of implementing a new process if it’s perceived to be a threat.9 In these instances, managers need to prioritize buy-in. Effective communication and education about AI is therefore paramount. AI is only (very) useful for a subset of the tasks that people do in the workforce.10 Managers can assuage internal fears about AI with clear planning for how work will change with the adoption of the new technology.

Another technique involves arming senior management with enough information to motivate subordinates to be data-driven. With AI training for senior executives and information pipelines giving these executives a granular view of their businesses, others in the organization will need to get on board with the new technology to keep up. If the organization is forecasting sales using a new technique, for instance, AI leaders should send those reports to the top of the organization for executives to reference in their meetings with midlevel managers. In the scenario where AI teams are spread out throughout the business, management is more responsible for the big-picture view of where new technology investments should happen. In either case, the organization is better off if decision makers understand which problems AI can solve and which problems are better tackled with other tools.

Even with a strong technical team in place, every AI-powered organization needs to successfully invest in organizational complements to maximize the return of AI.

Another area that managers must handle aggressively involves bias. With machine learning, it can be difficult to interpret the “why” behind a model’s predictions. For instance, with black-box models, determining the reasons someone’s credit score went up or down can be tough to do. This goes beyond biased data sets leading to ineffective and inappropriate model outputs in production contexts. Algorithms are designed by humans; choices made by biased human designers or within complex social systems can also lead to outcomes in conflict with the organization’s goals and values.11 Managers need to closely monitor how their organization ingests, processes, and exports data. Whenever possible, systems should be proactively audited to make sure they serve the right purposes.
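As a hypothetical illustration of what a proactive audit might look like (the group labels, data, and tolerance threshold are all invented for this sketch), a simple check could compare a black-box model’s approval rates across groups and flag large gaps for human review:

```python
# Hypothetical audit sketch: compare a model's approval rates across
# groups and flag gaps that exceed a tolerance, so humans can
# investigate the "why." Data and threshold are illustrative.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions, tolerance=0.1):
    """Flag the audit if any two groups' rates differ by more than tolerance."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "flagged": gap > tolerance}

# Illustrative decisions from a black-box credit model.
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 5 + [("B", False)] * 5

print(audit(decisions))  # gap of 0.3 exceeds the 0.1 tolerance -> flagged
```

A flagged gap is not proof of bias by itself, but it turns a vague monitoring obligation into a concrete trigger for investigation.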

Lastly, managing the risks of AI systems requires the ability to recognize the difference between correlation and causation. Machine learning is most often used for predictive purposes in supervised learning. What is the model meant to do? It might not matter why a cat is recognized in a picture — that’s a prediction (effectively, a correlation). It does matter why a given product line’s customer acquisition costs are going up. That’s a question about cause and effect. Managers in both cases need to think like social scientists: Develop a hypothesis, find the right tool kit and data to make an assessment, and then make decisions armed with better information.
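The correlation-versus-causation point can be made concrete with a small, invented simulation: two metrics driven by a shared hidden cause are strongly correlated, yet the correlation disappears once the common cause is controlled for. All variable names here are illustrative:

```python
# Two metrics driven by a shared hidden cause ("season") are strongly
# correlated -- fine for prediction, misleading for decisions.

import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def residuals(ys, xs):
    """Remove the part of ys linearly explained by xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return [y - (w * x + b) for x, y in zip(xs, ys)]

random.seed(0)
season = [random.gauss(0, 1) for _ in range(1000)]      # hidden common cause
ad_views = [s + random.gauss(0, 0.3) for s in season]   # tracks the season
sales = [s + random.gauss(0, 0.3) for s in season]      # also tracks the season

# The raw correlation is strong -- good enough for prediction...
print(correlation(ad_views, sales))

# ...but it vanishes once the shared cause is controlled for, so raising
# ad_views would not raise sales. Cause-and-effect questions need more
# than a predictive model.
print(correlation(residuals(ad_views, season), residuals(sales, season)))
```

A model trained on this data would happily use ad views to predict sales, and for the cat-in-a-picture kind of question that is enough. Deciding whether to buy more ads is the customer-acquisition kind of question, and it requires the hypothesis-driven, social scientist approach described above.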

Incorporating AI and machine learning into organizational workflows is risky, yet the returns are potentially very high if the right complementary investments are made. As with previous waves of information technology, AI requires management to grow with the new capabilities of the organization. AI talent is becoming more abundant across the globe. It is up to a new breed of managers with complementary managerial talent to bring out the best from their technical engineering and research teammates.


References

1. R. Perrault, Y. Shoham, E. Brynjolfsson, et al., “Artificial Intelligence Index 2019 Annual Report,” Human-Centered Artificial Intelligence Institute (Stanford, California: Stanford University, December 2019).

2. D. Silver, A. Huang, C.J. Maddison, et al., “Mastering the Game of Go With Deep Neural Networks and Tree Search,” Nature 529, no. 7587 (Jan. 28, 2016): 484-489; A. Radford, J. Wu, R. Child, et al., “Language Models Are Unsupervised Multitask Learners,” OpenAI (2019): 9; and T.B. Brown, B. Mann, N. Ryder, et al., “Language Models Are Few-Shot Learners,” arXiv, June 5, 2020, https://arxiv.org.

3. A. Agrawal, J. Gans, and A. Goldfarb, “Prediction Machines: The Simple Economics of Artificial Intelligence” (Boston: Harvard Business Review Press, 2018).

4. C. Cornwell, I.M. Schmutte, and D. Scur, “Building a Productive Workforce: The Role of Structured Management Practices,” discussion paper no. 1644, Centre for Economic Performance, London, August 2019.

5. P. Tambe, “Big Data Investment, Skills, and Firm Value,” Management Science 60, no. 6 (June 2014): 1452-1469.

6. E. Brynjolfsson, D. Rock, and C. Syverson, “The Productivity J-Curve: How Intangibles Complement General Purpose Technologies,” American Economic Journal: Macroeconomics, forthcoming.

7. D. Rock, “Engineering Value: The Returns to Technological Talent and Investments in Artificial Intelligence,” unpublished working paper, MIT Sloan School of Management, Cambridge, Massachusetts, May 2019.

8. S. Helper, R. Martins, and R. Seamans, “Who Profits From Industry 4.0? Theory and Evidence From the Automotive Industry,” NYU Stern School of Business, New York, Jan. 31, 2019.

9. A. Goldfarb, B. Taska, and F. Teodoridis, “Artificial Intelligence in Health Care? Evidence From Online Job Postings,” AEA Papers and Proceedings 110 (May 2020): 400-404.

10. E. Brynjolfsson, T. Mitchell, and D. Rock, “What Can Machines Learn, and What Does It Mean for Occupations and the Economy?” AEA Papers and Proceedings 108 (May 2018): 43-47; E.W. Felten, M. Raj, and R. Seamans, “A Method to Link Advances in Artificial Intelligence to Occupational Abilities,” AEA Papers and Proceedings 108 (May 2018): 54-57; and M. Webb, “The Impact of Artificial Intelligence on the Labor Market,” unpublished working paper, Stanford University, Stanford, California, January 2020.

11. B. Cowgill and C.E. Tucker, “Algorithmic Fairness and Economics,” Journal of Economic Perspectives, forthcoming; and A. Lambrecht and C. Tucker, “Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads,” Management Science 65, no. 7 (July 2019): 2966-2981.
