Executive Scholar Exchange
created for Oracle Cloud

The Dawn of the Intelligent Enterprise

Artificial Intelligence and Machine Learning Power the New Workforce

The content on this page was commissioned by our sponsor, Oracle Cloud.

MIT SMR Connections

MIT SMR Connections was an independent unit within MIT Sloan Management Review. Operating separately from the editorial group, it created high-quality, sponsor-funded content. Connections no longer produces new content, but previously published sponsor-funded projects remain available on the MIT SMR website.

AI-powered systems are increasingly joining humans in the workforce. The following perspectives from an industry executive and a scholar explore how these systems will change the enterprise and require new strategies for process and organizational design.

Executive Perspective

An executive perspective by Dain Hansen, VP Cloud Business product marketing, Oracle

Artificial intelligence (AI) is transforming many aspects of our personal and professional lives, from logistics systems that select the fastest shipping routes to digital assistants that unlock doors, turn on lights, and get to know our shopping preferences. The most advanced AI systems use machine learning technology to analyze current conditions and learn from experience. Within the workplace, these self-directed agents are giving rise to the intelligent enterprise: organizations where people make decisions with the help of intelligent machines.

AI is no longer the far-out stuff of science fiction. Autonomous agents already enhance the routine decisions people make every day. They are ideally suited for analyzing real-time conditions to optimize business activities, such as pricing products based on shifting demand, replenishing inventory as warehouse stocks are depleted, and flagging financial transactions that appear to be fraudulent. According to De’Onn Griffin, research director at Gartner, CIOs need to prepare over the next 10 years for this “killer combo” of people plus technology.1 The digital component of most jobs will accelerate, she predicts, putting an emphasis on “digital dexterity” within the workforce: the ability to use technology for better business outcomes.

AI is having a particular impact on information technology (IT) operations such as cybersecurity, database management, and business process automation. In all three cases, the whole — people plus technology — is quickly becoming greater than the sum of its parts.

Business Process Automation

AI is reshaping the business applications we use every day. For example, an HR department can use AI to identify the best possible candidates for open positions. If a recruiting manager is filtering graduates from nearby colleges and universities, an autonomous agent can help identify ideal candidates, such as multidisciplinary students who combine a degree in the sciences with strong communication skills.

Thanks to machine learning techniques, the more data these intelligent agents ingest and the more people who interact with them, the more accurate and personalized their responses become, allowing HR professionals to quickly narrow the field to the most promising candidates. As the system filters prospective employees, it will gradually become aware of relevant trends, perhaps noting schools that produce greater numbers of graduates with engineering majors and liberal arts minors. This insight could direct the company to host job fairs on those campuses or in the towns where those schools are located, maximizing the use of its recruiting resources.
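
To make the idea concrete, here is a minimal Python sketch of that kind of candidate filtering. The fields, weights, threshold, and school roll-up are illustrative assumptions, not a description of any actual recruiting product.

```python
# A minimal sketch of the candidate filtering described above. The fields,
# weights, threshold, and school roll-up are illustrative assumptions.
from collections import Counter

candidates = [
    {"name": "A. Rivera", "school": "State U", "degree": "Biology",
     "minor": "Communications", "communication_score": 0.9},
    {"name": "B. Chen", "school": "Tech Institute", "degree": "Physics",
     "minor": "Philosophy", "communication_score": 0.8},
    {"name": "C. Okafor", "school": "State U", "degree": "History",
     "minor": None, "communication_score": 0.7},
]

SCIENCE_DEGREES = {"Biology", "Chemistry", "Physics", "Computer Science"}

def score(candidate):
    """Favor multidisciplinary profiles: a science degree plus strong communication."""
    total = 0.0
    if candidate["degree"] in SCIENCE_DEGREES:
        total += 0.5
    if candidate["minor"]:                        # any minor outside the major field
        total += 0.2
    total += 0.3 * candidate["communication_score"]
    return total

shortlist = [c for c in candidates if score(c) >= 0.8]

# A simple "trend" the system might surface: which schools produce the most
# shortlisted candidates, which could guide where to host job fairs.
by_school = Counter(c["school"] for c in shortlist)
print([c["name"] for c in shortlist])
print(by_school.most_common())
```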

Cybersecurity

Today’s security operations teams struggle to make sense of a relentless barrage of alerts, covering everything from system and application logs and user session activity to how sensitive IT resources are being accessed and how security configurations are being changed. AI can help cybersecurity professionals correlate events and apply heuristics to detect patterns, trends, and anomalies in the data, then forward the insights to a human analyst who can intervene if necessary. Ideally, these self-learning systems get smarter over time, to the point where the machines can secure themselves without assistance. The more users they get to know and the more applications that come under their purview, the better they can identify rogue or suspicious behavior, such as a finance user trying to access an HR database or an employee who works in Canada suddenly appearing to log in from Ukraine.

Thanks to machine learning, these security algorithms gradually learn to distinguish normal from abnormal behavior, a capability known as adaptive response, and automatically detect and fix problems. For example, an autonomous database can temporarily lock out users, issue second-level security challenges, and escalate issues to a human agent to determine whether legitimate account credentials have been hijacked or compromised. It can take fast action to stop a security breach or minimize its impact. These self-securing and self-repairing capabilities can have a tremendous effect on a company by averting breach costs, reputational damage, and lost revenue.
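
As a rough illustration of this learn-the-baseline, flag-the-anomaly, respond pattern, the Python sketch below builds a per-user baseline of login countries and picks a tiered response when a login falls outside it. The event fields, the country-based baseline, and the response tiers are simplifying assumptions, not the design of any particular security product.

```python
# A minimal sketch of "learn normal, flag abnormal, respond" for login events.
# Event fields, the per-user country baseline, and the response tiers are
# illustrative assumptions.
from collections import defaultdict

history = [
    {"user": "j.smith", "country": "CA"},
    {"user": "j.smith", "country": "CA"},
    {"user": "j.smith", "country": "US"},
    {"user": "a.lee",   "country": "CA"},
]

# "Training": record the countries each user normally logs in from.
baseline = defaultdict(set)
for event in history:
    baseline[event["user"]].add(event["country"])

def handle_login(event):
    """Flag a login from an unfamiliar country and choose a tiered response."""
    usual = baseline[event["user"]]
    if not usual or event["country"] in usual:
        return "allow"  # no history yet, or nothing unusual
    # Adaptive response: challenge the user first, then escalate to a human
    # analyst who decides whether the credentials have been compromised.
    return "second_factor_challenge_and_escalate"

print(handle_login({"user": "j.smith", "country": "UA"}))  # unfamiliar -> challenge
print(handle_login({"user": "j.smith", "country": "CA"}))  # familiar   -> allow
```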

Data Management

Businesses are interested in the promise of freeing employees from mundane or repetitive tasks, but can a human agent with an autonomous assistant do more than either entity operating separately? Within the realm of database management, the answer is a resounding yes. Autonomous databases can automate routine maintenance, monitoring, and tuning tasks while also assisting with security, data encryption, backups, and many infrastructure-management tasks. Machine learning algorithms can monitor workload fluctuations and automatically adjust query execution plans and indexes to maximize performance. With these management chores eliminated, database administrators have more time to work with developers, collaborate with data scientists, and architect, model, and tune critical business applications. AI technology also makes these workers more productive by detecting patterns within large data sets and uncovering insights that may be difficult for humans to discern. In this sense, an autonomous database augments human skills rather than replacing them.
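
The toy sketch below illustrates the shape of such a tuning loop: scan query statistics for a recurring slow filter and propose an index. An autonomous database performs this kind of analysis internally and far more rigorously; the log format and thresholds here are illustrative assumptions.

```python
# A toy sketch of the tuning loop described above: scan query statistics for a
# recurring slow filter and propose an index. Log format and thresholds are
# illustrative assumptions.
from collections import defaultdict

query_log = [
    {"table": "orders", "filter_column": "customer_id", "elapsed_ms": 480},
    {"table": "orders", "filter_column": "customer_id", "elapsed_ms": 510},
    {"table": "orders", "filter_column": "status", "elapsed_ms": 12},
    {"table": "orders", "filter_column": "customer_id", "elapsed_ms": 495},
]

SLOW_MS = 200        # what counts as "slow" (illustrative)
MIN_OCCURRENCES = 3  # only act on a recurring pattern, not a one-off

slow_counts = defaultdict(int)
for q in query_log:
    if q["elapsed_ms"] >= SLOW_MS:
        slow_counts[(q["table"], q["filter_column"])] += 1

for (table, column), count in slow_counts.items():
    if count >= MIN_OCCURRENCES:
        # In a fully autonomous setup this would be applied automatically;
        # here we just print the suggested maintenance action.
        print(f"CREATE INDEX idx_{table}_{column} ON {table} ({column});")
```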

A Forward-Looking View

As these examples illustrate, autonomous systems can help people work smarter, more efficiently, and more securely while improving service levels and increasing application performance. Oracle predicts that by 2020, 90% of all applications and services will incorporate AI at some level and that more than half of all enterprise data will be managed autonomously. While some workers fear a future in which artificial intelligence and machine learning make the current workforce obsolete, history suggests otherwise. Spreadsheets didn’t put accountants out of work; they simply helped them do more work in less time and expanded the horizon of possibilities by handling routine calculations and solving equations far more quickly than humans could.

AI tools will not replace humans; today they are helping us invent new jobs, improve productivity, and ultimately improve business outcomes.

Of course, to fully leverage the rise of AI in the workplace, we must design different workflows and systems. Humans must learn to work with machines synergistically, which means business leaders must plan for the impact of autonomous technologies on staffing, processes, and management.

Clearly, working with intelligent information systems requires an adjustment. But while some workers may be leery about an incursion onto their traditional turf, forward-looking businesses see the potential of AI to liberate their employees from routine activities. As workers take on new responsibilities and focus on the valuable, strategic tasks that require human knowledge and discernment, AI gives the business a faster path to insights and innovation and, ultimately, a shorter time to market.


Scholar Perspective

Redesigning Work for Human-Machine Collaboration

A scholar perspective by Thomas W. Malone, Patrick J. McGovern Professor of Management, MIT Sloan School of Management

The shifting boundaries between what humans do and what machines do have important implications for managers. We will need to be organized in such a way that we can accommodate continuing changes in what people do versus what computers do.

Let me contrast that to what happened in the 1990s when there was a big emphasis on what was called business process reengineering. In a certain sense, BPR was one of the early ways of using computers to automate and redesign work. But when people did BPR, they would typically say something like, “We’re going to have this big reengineering project. We’re going to analyze everything. We’re going to come up with a way of making everything much better using new technology. We’ll do a whole lot of work to make it happen, and then, we’ll breathe a big sigh of relief and say, ‘Whew. Now we can relax and just keep going with what we’ve implemented.’”

But now what we will increasingly need to do is build systems that are robust enough to handle continuing change; you won’t have one big push, make all the changes, and then relax for a decade. Instead, some things will get better this week, some the following week, and some a month after that. We’ll have to be ready to continually adapt to the changing capabilities of computers, the changing roles of people, and the changes in what needs to be done in the first place.

The phrase I like to use to describe this is cyber-human learning loops. We need to design from the beginning for the idea that there will be continuing improvement in these loops. You might, for example, start out with humans doing most things. But the computer systems can keep track of all the inputs the people see and all the actions they take in response to those inputs. Then, looking at that data, people may see that some things they’ve been doing manually are simple enough that computers can do them. Or perhaps machine learning programs will see patterns in what the people are doing and start suggesting possible actions for the people to take.

Over time, computers will do more of the work, more data will be generated, and the people may realize that, in addition to the things they’ve already automated, there are even more things for which they can now figure out a way to write algorithms. Later, perhaps, they might realize that it would also be better to do things in a different order or make some other change. We should be designing so that all those changes are easily accommodated over time instead of having one big traumatic redesign every decade or two.
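
A minimal sketch of such a cyber-human learning loop might look like the following: log the inputs people see and the actions they take, and let the system begin suggesting an action only once a pattern is frequent and consistent enough. The event signatures, thresholds, and actions are illustrative assumptions, not a specific product's design.

```python
# A minimal sketch of a cyber-human learning loop: log what people see and do,
# and let the system start suggesting an action only once a pattern is both
# frequent and consistent. Signatures, thresholds, and actions are
# illustrative assumptions.
from collections import Counter, defaultdict

class LearningLoop:
    def __init__(self, min_examples=3, min_confidence=0.8):
        self.history = defaultdict(Counter)  # input signature -> action counts
        self.min_examples = min_examples
        self.min_confidence = min_confidence

    def record(self, signature, human_action):
        """Log the input a person saw and the action they took in response."""
        self.history[signature][human_action] += 1

    def suggest(self, signature):
        """Suggest an action only when the observed pattern is strong enough."""
        actions = self.history[signature]
        total = sum(actions.values())
        if total < self.min_examples:
            return None  # not enough data; the human keeps deciding
        action, count = actions.most_common(1)[0]
        return action if count / total >= self.min_confidence else None

loop = LearningLoop()
for _ in range(4):
    loop.record(("invoice", "duplicate_vendor"), "route_to_audit")

print(loop.suggest(("invoice", "duplicate_vendor")))   # "route_to_audit"
print(loop.suggest(("invoice", "missing_po_number")))  # None: still a human call
```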

Management in an increasingly automated workplace also needs to reconsider organizational structure.

Much of the routine work that used to be done by large numbers of humans in hierarchical organizations will increasingly get done by computers. The things that are left to do by people will be those tasks that are less routine and require more creativity and adaptability. To manage this kind of work, organizations usually need to be less hierarchical, with more flexibility and flatter structures. A word I like to use to describe this structure is adhocracy. It’s not a bureaucracy, it’s an adhocracy, because so many things change all the time.

One of the reasons for that is that when you’re depending on people to be creative and you’re depending on them to respond flexibly in different situations, you don’t want them to have to get approval for everything from a manager. You do want them to exercise their creativity. You want them to feel dedicated to the work. Those are all things that in general work better in decentralized organizations.

However, we’re stuck in this centralized, hierarchical mindset, and mostly we just do things the way we’ve seen them done before. Most people don’t even have the basic vocabulary for thinking about alternatives in any kind of systematic way. This is one thing I try to teach in my MBA course, Strategic Organizational Design. We have functional hierarchies in some places and matrix organizations in others, and you need that vocabulary even to design traditional hierarchies. But now, there’s a whole new space of organizational design possibilities opened up by these new technologies for hyperconnectivity and artificial intelligence.

In the past, there wasn’t that much reason why anybody needed to think about this. Now, there are lots of opportunities to do things far better using these new possibilities.

References

1. D. Griffin and M. Coleman, “How We Will Work in 2028,” Gartner, February 2018, https://www.gartner.com/doc/3861479/work.