How Managers Can Enable AI Talent in Organizations

Leading a successful AI-enabled workforce requires key hiring, training, and risk management considerations.

Reading Time: 8 min 

Topics

The AI & Machine Learning Imperative

“The AI & Machine Learning Imperative” offers new insights from leading academics and practitioners in data science and artificial intelligence. The Executive Guide, published as a series over three weeks, explores how managers and companies can overcome challenges and identify opportunities by assembling the right talent, stepping up their own leadership, and reshaping organizational strategy.

Brought to you by

AWS

Recent progress on the technical side of machine learning, particularly within deep learning, has driven an accelerating trend of businesses adopting AI technologies into their processes and workflows over the past decade.1 Some of these advances, such as Google DeepMind’s AlphaGo and OpenAI’s GPT-2 and GPT-3 models, have demonstrated expert-level performance in domains previously held up as examples of areas where bots would be incapable of challenging human abilities.2

With respect to business outcomes, most of the exciting developments involve using deep learning for supervised learning problems. Supervised learning is a form of machine learning in which you have input and output variables and use an algorithm to learn the function that relates input to output. The algorithm is “supervised” because it learns from training data where input and output are known in advance. Deep learning algorithms enable a different kind of software development — instead of explicitly writing a recipe in code to complete a task, a model is trained with data to learn how to complete the task on its own. These algorithms are especially useful for prediction tasks.3
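The contrast between writing a recipe in code and learning a function from data can be illustrated with a minimal, purely illustrative sketch. Here the “task” is recovering a simple linear rule from labeled examples by gradient descent; the data, the true rule (y = 2x + 1), and the learning rate are all assumptions chosen for the example, and real AI systems use far richer models and libraries.

```python
# Supervised learning in miniature: the algorithm sees input/output pairs
# and learns the function relating them, rather than having the rule
# written by hand. (Illustrative only; data generated by y = 2x + 1.)

inputs = [0.0, 1.0, 2.0, 3.0, 4.0]   # known inputs
outputs = [1.0, 3.0, 5.0, 7.0, 9.0]  # known outputs (the "supervision")

# Model parameters to be learned, not hand-coded.
w, b = 0.0, 0.0
lr = 0.02  # learning rate
n = len(inputs)

for _ in range(5000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(inputs, outputs)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(inputs, outputs)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # parameters close to the true rule y = 2x + 1
```

The same train-on-labeled-data pattern, scaled up to images and deep neural networks, underlies products such as the image classifier described below.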

Finding and enabling talented individuals to succeed in engineering these kinds of AI systems can be a daunting challenge for companies. Building organizational AI/machine learning capabilities requires a fundamental reengineering of existing business processes. These efforts naturally include hiring or training technical talent.4 Effective AI management, however, is perhaps even more critical. Ultimately, managers are responsible for shaping the design and direction of the organization’s strategy to maximize the returns of any new technology. With this comes the responsibility to manage the associated risks of building AI systems. Done properly, effective AI management can drive faster productivity growth and provide companies with a competitive advantage.

Hiring and Training Considerations for Managers

The first requirement for leaders in building a successful AI system is hiring and training the right talent. An AI team is effectively a type of data science team, but it builds a different suite of products. For example, instead of running experiments to determine the effect of a new ad campaign, an AI team might build a product image classifier to determine how store shelves are organized.



References

1. R. Perrault, Y. Shoham, E. Brynjolfsson, et al., “Artificial Intelligence Index 2019 Annual Report,” Human-Centered Artificial Intelligence Institute (Stanford, California: Stanford University, December 2019).

2. D. Silver, A. Huang, C.J. Maddison, et al., “Mastering the Game of Go With Deep Neural Networks and Tree Search,” Nature 529, no. 7587 (Jan. 28, 2016): 484-489; A. Radford, J. Wu, R. Child, et al., “Language Models Are Unsupervised Multitask Learners,” OpenAI (2019): 9; and T.B. Brown, B. Mann, N. Ryder, et al., “Language Models Are Few-Shot Learners,” arXiv, June 5, 2020, https://arxiv.org.

3. A. Agrawal, J. Gans, and A. Goldfarb, “Prediction Machines: The Simple Economics of Artificial Intelligence” (Boston: Harvard Business Review Press, 2018).

4. C. Cornwell, I.M. Schmutte, and D. Scur, “Building a Productive Workforce: The Role of Structured Management Practices,” discussion paper no. 1644, Centre for Economic Performance, London, August 2019.

5. P. Tambe, “Big Data Investment, Skills, and Firm Value,” Management Science 60, no. 6 (June 2014): 1452-1469.

6. E. Brynjolfsson, D. Rock, and C. Syverson, “The Productivity J-Curve: How Intangibles Complement General Purpose Technologies,” American Economic Journal: Macroeconomics, forthcoming.

7. D. Rock, “Engineering Value: The Returns to Technological Talent and Investments in Artificial Intelligence,” unpublished working paper, MIT Sloan School of Management, Cambridge, Massachusetts, May 2019.

8. S. Helper, R. Martins, and R. Seamans, “Who Profits From Industry 4.0? Theory and Evidence From the Automotive Industry,” NYU Stern School of Business, New York, Jan. 31, 2019.

9. A. Goldfarb, B. Taska, and F. Teodoridis, “Artificial Intelligence in Health Care? Evidence From Online Job Postings,” AEA Papers and Proceedings 110 (May 2020): 400-404.

10. E. Brynjolfsson, T. Mitchell, and D. Rock, “What Can Machines Learn, and What Does It Mean for Occupations and the Economy?” AEA Papers and Proceedings 108 (May 2018): 43-47; E.W. Felten, M. Raj, and R. Seamans, “A Method to Link Advances in Artificial Intelligence to Occupational Abilities,” AEA Papers and Proceedings 108 (May 2018): 54-57; and M. Webb, “The Impact of Artificial Intelligence on the Labor Market,” unpublished working paper, Stanford University, Stanford, California, January 2020.

11. B. Cowgill and C.E. Tucker, “Algorithmic Fairness and Economics,” Journal of Economic Perspectives, forthcoming; and A. Lambrecht and C. Tucker, “Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads,” Management Science 65, no. 7 (July 2019): 2966-2981.
