Will Large Language Models Really Change How Work Is Done?
Even as organizations adopt increasingly powerful LLMs, they will find it difficult to shed their reliance on humans.
Large language models (LLMs) are a paradigm-changing innovation in data science. They extend the capabilities of machine learning models to generating relevant text and images in response to a wide array of qualitative prompts. While these tools are expensive and difficult to build, multitudes of users can apply them quickly and cheaply to perform some of the language-based tasks that previously only humans could do.
This raises the possibility that many human jobs — particularly knowledge-intensive jobs that primarily involve working with text or code — could be replaced or significantly undercut by widespread adoption of this technology. But in reality, LLMs are much more complicated to use effectively in an organizational context than is typically acknowledged, and they have yet to demonstrate that they can satisfactorily perform all of the tasks that knowledge workers execute in any given job.
LLMs in Organizations
Most of the potential areas of use for LLMs center on manipulating existing information, much of it specific to an individual organization. This includes summarizing content and producing reports (which represents 35% of use cases, according to one survey) and extracting information from documents, such as PDFs containing financial information, and creating tables from them (33% of use cases).1 Other popular and effective uses of LLMs include creating images with tools like DALL-E 2 and generating synthetic data for applications where real data is difficult to obtain, such as data to train voice recognition tools like Amazon's Alexa.2
Most organizations using LLMs are still in the exploration phase. Customer interactions, knowledge management, and software engineering are three areas where organizations are experimenting extensively with generative AI. For example, Audi recruited a vendor to build and deploy a customized LLM-based chatbot that would answer employees' questions about available documentation, customer details, and risk evaluations. The chatbot retrieves relevant information from a variety of proprietary databases in real time and is supposed to avoid answering questions if the available data is insufficient. The company used prompt engineering tools developed by Amazon Web Services for retrieval-augmented generation (RAG), a common customization procedure that incorporates organization-specific data without requiring changes to the underlying foundation model.
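The general pattern behind a RAG system like Audi's can be sketched in a few lines. The example below is a minimal, illustrative sketch only: the document store, the keyword-overlap retrieval, the abstention threshold, and the `call_llm` stub are all hypothetical stand-ins, not the actual Audi or AWS implementation, which would use a vector database and a real foundation-model API.

```python
# Minimal RAG sketch. All names and data here are illustrative assumptions.

def tokenize(text):
    return set(text.lower().split())

# Stand-in for an organization's proprietary document store.
DOCUMENTS = [
    "Warranty claims must include the vehicle identification number.",
    "Risk evaluations are updated quarterly by the compliance team.",
]

def retrieve(question, docs, min_overlap=2):
    """Return documents sharing at least `min_overlap` words with the question.

    A production system would use embedding similarity instead of word overlap.
    """
    q = tokenize(question)
    scored = [(len(q & tokenize(d)), d) for d in docs]
    return [d for score, d in scored if score >= min_overlap]

def call_llm(prompt):
    # Stub: a real system would send `prompt` to a foundation-model API here.
    return "[LLM answer grounded in: " + prompt.splitlines()[1] + "]"

def answer(question, docs=DOCUMENTS):
    context = retrieve(question, docs)
    if not context:
        # Abstain when retrieval finds nothing relevant, mirroring the
        # chatbot's instruction not to answer on insufficient data.
        return "I don't have enough information to answer that."
    prompt = ("Answer using only this context:\n"
              + "\n".join(context)
              + "\nQuestion: " + question)
    return call_llm(prompt)
```

The key design point is that all organization-specific knowledge lives in the retrieved context passed to the prompt, so the foundation model itself never needs retraining when the documents change.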
References
1. “Beyond the Buzz: A Look at Large Language Models in Production,” PDF (San Francisco: Predibase, 2023), https://go.predibase.com.
2. A. Rosenbaum, S. Soltan, and W. Hamza, “Using Large Language Models (LLMs) to Synthesize Training Data,” Amazon Science, Jan. 20, 2023, www.amazon.science.