Philosophy Eats AI

Generating sustainable business value with AI demands critical thinking about the disparate philosophies determining AI development, training, deployment, and use.

In 2011, coder-turned-venture-investor Marc Andreessen famously declared, “Software is eating the world” in the analog pages of The Wall Street Journal. His manifesto described a technology voraciously transforming every global industry it consumed. He wasn’t wrong; software remains globally ravenous.

Not six years later, Nvidia cofounder and CEO Jensen Huang boldly updated Andreessen, asserting, “Software is eating the world … but AI is eating software.” The accelerating algorithmic shift from human coding to machine learning led Huang to also remark, “Deep learning is a strategic imperative for every major tech company. It increasingly permeates every aspect of work, from infrastructure to tools, to how products are made.” Nvidia’s multitrillion-dollar market capitalization affirms Huang’s prescient 2017 prediction.

But even as software eats the world and AI gobbles up software, what disrupter appears ready to make a meal of AI? The answer is hiding in plain sight. It challenges business and technology leaders alike to rethink their investment in and relationship with artificial intelligence. There is no escaping this disrupter; it infiltrates the training sets and neural nets of every large language model (LLM) worldwide.

Philosophy is eating AI: As a discipline, data set, and sensibility, philosophy increasingly determines how digital technologies reason, predict, create, generate, and innovate. The critical enterprise challenge is whether leaders will possess the self-awareness and rigor to use philosophy as a resource for creating value with AI or default to tacit, unarticulated philosophical principles for their AI deployments. Either way — for better and worse — philosophy eats AI. For strategy-conscious executives, that metaphor needs to be top of mind.

While ethics and responsible AI currently dominate philosophy’s perceived role in developing and deploying AI solutions, those themes represent a small part of the philosophical perspectives informing and guiding AI’s production, utility, and use. Privileging ethical guidelines and guardrails undervalues philosophy’s true impact and influence. Philosophical perspectives on what AI models should achieve (teleology), what counts as knowledge (epistemology), and how AI represents reality (ontology) also shape value creation. Without thoughtful and rigorous cultivation of philosophical insight, organizations will fail to reap superior returns and competitive advantage from their generative and predictive AI investments.

This argument increasingly enjoys both empirical and technical support. There’s good reason investors, innovators, and entrepreneurs such as PayPal cofounder Peter Thiel, Palantir Technologies’ Alex Karp, Stanford professor Fei-Fei Li, and Wolfram Research’s Stephen Wolfram openly emphasize both philosophy and philosophical rigor as drivers for their work.1 Explicitly drawing on philosophical perspectives is hardly new for AI. Breakthroughs in computer science and AI have consistently emerged from deep philosophical thinking about the nature of computation, intelligence, language, and mind. Computer scientist Alan Turing’s fundamental insights about computers, for example, came from philosophical questions about computability and intelligence — the Turing test itself is a philosophical thought experiment. Philosopher Ludwig Wittgenstein’s analysis of language games and rule following directly influenced the development of computer science, while philosopher Gottlob Frege’s investigations into logic provided the philosophical foundation for several programming languages.2

More recently, Geoffrey Hinton’s 2024 Nobel Prize-winning work on neural networks emerged from philosophical questions about how minds represent and process knowledge. When MIT’s own Claude Shannon developed information theory, he was simultaneously solving an engineering problem and addressing philosophical questions about the nature and essence of information. Indeed, Sam Altman’s ambitious pursuit of artificial general intelligence at OpenAI purportedly stems from philosophical considerations about intelligence, consciousness, and human potential. These pioneers didn’t see philosophy as separate or distinct from practical engineering; to the contrary, philosophical clarity enabled technical breakthroughs.

Today, regulation, litigation, and emerging public policies represent exogenous forces mandating that AI models embed purpose, accuracy, and alignment with human values. But companies have their own values and value-driven reasons to embrace and embed philosophical perspectives in their AI systems. Giants in philosophy, from Confucius to Kant to Anscombe, remain underutilized and underappreciated resources in training, tuning, prompting, and generating valuable AI-infused outputs and outcomes. As we argue, deliberately imbuing LLMs with philosophical perspectives can radically increase their effectiveness.

This doesn’t mean companies should hire chief philosophy officers … yet. But treating philosophy and philosophical insights as incidental or incremental to enterprise AI minimizes their potential technological and economic impact. Effective AI strategies and execution increasingly demand critical thinking — by humans and machines — about the disparate philosophies determining and driving AI use. In other words, organizations need an AI strategy for and with philosophy. Leaders and developers alike need to align on the philosophies guiding AI development and use. Executives intent on maximizing their return on AI must invest in their own critical thinking skills to ensure philosophy makes their machines smarter and more valuable.

Philosophy, Not Just Ethics, Eats AI

Google’s revealing and embarrassing Gemini AI fiasco illustrates the risks of misaligning philosophical perspectives in training generative AI. Afraid of falling further behind LLM competitors, Google upgraded the Bard conversational platform by integrating it with the tech giant’s powerful Imagen 2 model to enable textual prompts to yield high-quality, image-based responses. But when Gemini users prompted the LLM to generate images of historically significant figures and events — America’s Founding Fathers, Norsemen, World War II, and so on — the outputs consistently included diverse but historically inaccurate racial and gender-based representations. For example, Gemini depicted the Founding Fathers as racially diverse and Vikings as Asian females.

These ahistorical results sparked widespread criticism and ridicule. The images reflected contemporary diversity ideals imposed onto contexts and circumstances where they did not belong. Given Google’s great talent, resources, and technical sophistication, what root cause best explains these unacceptable outcomes? Google allowed teleological chaos to reign among rival objectives: historical accuracy on the one hand, and diversity, equity, and inclusion imperatives on the other.3 Data quality and access were not the issue; Gemini’s proactively affirmative algorithms for avoiding perceived bias toward specific ethnic groups or gender identities led to misleading, inaccurate, and undesirable historical outputs. What initially appeared to be an ethical AI or responsible AI bug was, in fact, not a technical failure but a teleological one. Google’s trainers, fine-tuners, and testers made a bad bet — not on the wrong AI or bad models but on philosophical imperatives unfit for the purpose at hand.

Philosophy Eats Customer Loyalty

These misfires play out wherever organizations fail to rethink their philosophical fundamentals. For example, companies say they want to create, cultivate, and serve loyal customers. Rather than rigorously define what loyalty means, however, they default to measuring loyalty with metrics that serve as quantitative proxies and surrogates. Does using AI to optimize RFM (recency, frequency, and monetary value), churn management, and NPS (net promoter score) KPIs computationally equate to optimizing customer loyalty? For too many marketers and customer success executives, that’s taken as a serious question. Without more considered views of loyalty, such measures and metrics become definitions by executive fiat. Better calculation becomes more substitute than spur for better thinking. That’s a significant limitation.
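To make the proxy problem concrete, here is a minimal sketch of how such a composite “loyalty score” is often computed in practice. The field names, normalizations, and weights below are hypothetical illustrations rather than any company’s actual formula; the point is that whoever chooses them is defining loyalty by executive fiat.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    days_since_last_purchase: int   # recency
    purchases_last_year: int        # frequency
    spend_last_year: float          # monetary value
    nps_response: int               # 0-10 net promoter survey answer
    churn_probability: float        # output of a churn model, 0-1

def loyalty_score(c: Customer) -> float:
    """'Loyalty' defined by fiat as a weighted blend of proxy metrics.
    The caps and weights below are arbitrary illustrations -- which is
    exactly the philosophical problem described in the text."""
    recency = max(0.0, 1 - c.days_since_last_purchase / 365)
    frequency = min(1.0, c.purchases_last_year / 52)
    monetary = min(1.0, c.spend_last_year / 5_000)
    nps = c.nps_response / 10
    retention = 1 - c.churn_probability
    return (0.25 * recency + 0.2 * frequency + 0.2 * monetary
            + 0.15 * nps + 0.2 * retention)
```

Optimizing this number is not the same as optimizing loyalty; it is optimizing one contestable definition of loyalty.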

As Wittgenstein once observed, “The limits of my language mean the limits of my world.” Similarly, metrics limitations and constraints need not and should not define the limits of what customer loyalty could mean. Strategically, economically, and empathically defined, “loyalty” can have many measurable dimensions. That is the teleological, ontological, and epistemological option that AI’s growing capabilities invite and encourage.

In our research, teaching, and consulting, we see companies combine enhanced quantitative capabilities with philosophically framed analyses about what “loyalty” can and should mean. These analytics embrace ethical as well as epistemological, ontological, and teleological considerations.

Starbucks and Amazon, for instance, developed novel philosophical perspectives on customer loyalty that guided their development and deployment of AI models. They did not simply deploy AI to improve performance on a given set of metrics. In 2019, under then-CEO Kevin Johnson’s guidance, the senior team at Starbucks developed the Deep Brew AI platform to promote what they considered to be the ontological essence of the Starbucks experience: fostering connection among customers and store employees, both in store and online.

Digitally facilitating “connected experiences” became central to how Starbucks enacted and cultivated customer loyalty. Deep Brew also supports the company’s extensive rewards program, whose members account for more than half of Starbucks’ revenues. Given the company’s current challenges and new leadership, these concerns assume even greater urgency and priority: What philosophical sensibilities should guide upgrades and revisions to the Starbucks app? Will “legacy loyalty” and its measures be fundamentally rethought?

While Amazon Prime launched as a super-saving shipping service in 2005, founder Jeff Bezos quickly reimagined it as an interactive platform for identifying and preserving Amazon’s best and most loyal customers. An early Amazon Prime executive recalls Bezos declaring, “I want to draw a moat around our best customers. We’re not going to take our best customers for granted.” Bezos wanted Prime to become customers’ default place to buy goods, not just a cost-saving tool.4

Amazon used its vast analytical resources to comb through behavioral, transactional, and social data to better understand, and personalize offerings for, its Prime customers. Importantly, the Prime team didn’t just seek greater loyalty from customers. The organization sought to demonstrate greater loyalty to customers: Reciprocity was central to Prime’s philosophical stance.

Again, Amazon didn’t deploy AI to (merely) improve performance on existing customer metrics; it learned how to identify, create, and reward its best customers. Leaders thought deeply about how to identify and know (i.e., epistemologically) their best customers and determine each one’s role in the organization’s evolving business model. To be clear, “best” and “most profitable” overlapped but did not mean the same thing.

For Starbucks and Amazon, philosophical considerations facilitated metrics excellence. Using ontology (to identify the Starbucks experience), epistemology (knowing the customer at Amazon), and teleology (defining the purpose of customer engagement) led to more meaningful metrics and measures. The values of loyalty learned to enrich the value of loyalty — and vice versa.

Unfortunately, too many legacy enterprises using AI to enhance “customer centricity” defer to KPIs philosophically decoupled from thoughtful connection to customer loyalty, customer loyalty behaviors, and customer loyalty propensities. Confusing loyalty metrics with loyalty itself dangerously misleads; it privileges measurement over rigorous rethinking of customer fundamentals. As the philosopher/engineer Alfred Korzybski observed almost a century ago, “The map is not the territory.”

Philosophy Shapes Agentic AI: From Parametric Potential to Autonomous Excellence

As intelligent technologies transition from language models to agentic AI systems, the ancient Greek warrior/poet Archilochus’s wisdom — “We don’t rise to the level of our expectations; we fall to the level of our training” — becomes a strategic warning. When paired with statistician George Box’s cynical aphorism — “All models are wrong, but some are useful” — the challenge becomes even clearer: When developing AI that independently pursues organizational objectives, mere “utility” doesn’t go far enough. Organizations need more. Creating reliably effective autonomous or semiautonomous agents depends less on technical stacks and/or algorithmic innovation than philosophical training that intentionally embeds meaning, purpose, and genuine agency into their cognitive frameworks. Performance excellence depends on training excellence. High-performance AI is contingent upon high-performance training.

While large and small language models excel at pattern recognition and generation to produce sophisticated outputs based on their training, organizations need AI that goes beyond superior prompt-response performance. Agentic AI systems don’t just process and generate language; they contextually understand goals, formulate plans, and take autonomous actions that should align with enterprise values. This demands philosophical training well beyond the knowledge embeddings instilled for decision-making capabilities and autonomous or quasi-autonomous reasoning. (See “Appendix: An Imaginative Dialogue Between Daniel Kahneman, Richard Thaler, and Robin Hogarth.”)

Recent research like DeepMind’s “Boundless Socratic Learning With Language Games” and studies on ideological reflection in AI systems highlight a crucial insight: Agency emerges not from larger models or more parameters (i.e., scaling laws) but from deliberately selected philosophical frameworks that facilitate autonomous reasoning and action.5 Ultimately, AI agents must develop and deploy their own decisions across philosophical domains while maintaining alignment with human values. In other words, they need to be trained to learn and “learn to learn.”

Consider, for example, how this sensibility might manifest in global business practice:

Scenario: AI System Managing Supply Chain Disruptions

Pattern-matching response:
“Historical data suggests implementing backup suppliers and increasing safety stock.”

Philosophically trained response:
“I’ve analyzed this disruption’s unique characteristics and broader systemic implications. Rather than default to inventory increases, I propose:

1. Targeted relationship development with key suppliers in stable regions
2. Process redesign to increase supply chain visibility
3. Strategic buffer placement based on component criticality

I’ve simulated these interventions across multiple scenarios and can guide implementation while adapting to stakeholder feedback. Shall we examine the detailed analysis?”

As this supply chain scenario illustrates, agentic AI should both draw upon and learn from teleological, epistemological, and ontological contexts to suggest proposals that advance desired enterprise outcomes. These proposals would seek to balance and blend rational strategic objectives with empirical data and analytics. Together, these may be seen as philosophical frameworks for training AI agents that learn to get better at solving problems and exploring/exploiting opportunities.

Philosophical Frameworks for Agentic AI

1. Epistemological Agency: Beyond Information Processing

AI systems achieve epistemological agency when they move beyond passive information processing to actively construct and validate knowledge. This requires training in philosophical frameworks enabling:

  • Self-directed learning: The agents autonomously identify knowledge gaps and pursue new understanding, rather than waiting for queries or prompts. For example, when analyzing market trends, they proactively explore adjacent markets and emerging factors rather than limiting analysis to requested data points.
  • Dynamic hypothesis testing: The agents generate and test possibilities rather than just evaluate given options. When faced with supply chain disruptions, for example, they don’t just assess known alternatives but propose and simulate novel solutions based on deeper causal understanding.
  • Meta-cognitive awareness: Agents maintain active awareness of what they know, what they don’t know, and the reliability of their knowledge. Rather than simply providing answers, they communicate confidence levels and potential knowledge gaps that could affect decisions.

This epistemological foundation transforms how AI systems engage with knowledge — from pattern matching against training data to actively constructing understanding through systematic inquiry and validation. A supply chain AI with strong epistemological training doesn’t just predict disruptions based on historical patterns; it proactively builds and refines causal models of supplier relationships, market dynamics, and systemic risks to generate more nuanced and actionable insights.
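One way to picture what meta-cognitive awareness might look like in practice is sketched below: a hypothetical Assessment structure in which an agent’s answer always travels with its own confidence estimate and known gaps, and low confidence triggers further inquiry rather than a bare prediction. The structure, names, and threshold are illustrative assumptions, not an established design.

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    claim: str
    confidence: float                               # 0-1, the agent's own estimate
    evidence: list[str] = field(default_factory=list)
    knowledge_gaps: list[str] = field(default_factory=list)

def report(a: Assessment, threshold: float = 0.7) -> str:
    """Communicate the answer together with its epistemic status.
    Below the confidence threshold, the agent flags what it would
    need to investigate next instead of just returning a prediction."""
    lines = [f"Claim: {a.claim}", f"Confidence: {a.confidence:.0%}"]
    if a.knowledge_gaps:
        lines.append("Known gaps: " + "; ".join(a.knowledge_gaps))
    if a.confidence < threshold:
        lines.append("Recommended next step: gather evidence on the gaps above before acting.")
    return "\n".join(lines)

print(report(Assessment(
    claim="Supplier X shipment will slip by two weeks",
    confidence=0.55,
    evidence=["port congestion data", "supplier email sentiment"],
    knowledge_gaps=["no visibility into tier-2 component stock"],
)))
```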

2. Ontological Understanding: From Pattern Recognition to Systemic Insights

AI systems require sophisticated ontological frameworks to grasp both their own nature and the complex reality they operate within. This means:

  • Self-understanding: Maintaining dynamic awareness of their capabilities and limitations within human-AI collaborations.
  • Causal architecture: Building rich models of how elements in their environment influence each other — from direct impacts to subtle ripple effects.
  • Systems thinking: Recognizing that business challenges exist within nested systems of increasing complexity, where changes in one area inevitably affect others.

For example, an AI managing retail operations shouldn’t default to optimizing inventory based on sales patterns alone; it should understand how inventory decisions affect supplier relationships, cash flow, customer satisfaction, and brand perception. This ontological foundation transforms pattern matching into contextual intelligence, enabling solutions that address both immediate needs and systemic implications.
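To make “causal architecture” slightly more tangible, the toy sketch below represents the ripple effects described above as an explicit causal map the agent can traverse before acting. The node names and links are invented for illustration; a real system would learn and revise such a map rather than hard-code it.

```python
from collections import deque

# Hypothetical causal map: what an inventory decision touches,
# directly and through ripple effects.
CAUSAL_LINKS = {
    "inventory_level": ["supplier_orders", "cash_flow", "stockout_risk"],
    "supplier_orders": ["supplier_relationship"],
    "stockout_risk": ["customer_satisfaction"],
    "customer_satisfaction": ["brand_perception"],
}

def downstream_effects(decision: str) -> list[str]:
    """Breadth-first walk over the causal map: everything a change to
    `decision` can plausibly influence, ordered by causal distance."""
    seen, queue, order = {decision}, deque([decision]), []
    while queue:
        node = queue.popleft()
        for nxt in CAUSAL_LINKS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order

print(downstream_effects("inventory_level"))
# ['supplier_orders', 'cash_flow', 'stockout_risk', 'supplier_relationship',
#  'customer_satisfaction', 'brand_perception']
```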

 

3. Teleological Architecture: From Task Execution to Purposeful Action

Agentic systems need sophisticated frameworks for understanding and pursuing purpose at multiple levels. This teleological foundation enables them to:

  • Form and refine goals: Move beyond executing predefined tasks to autonomously developing and adjusting objectives based on changing contexts.
  • Navigate purpose hierarchies: Understand how immediate actions serve broader organizational missions, balancing short-term efficiency with long-term value creation.
  • Resolve competing priorities: Actively recognize and reconcile tensions between different organizational objectives, making principled trade-offs that align with strategic intent.

Consider a marketing AI: Rather than optimize click-through rates, it pursues engagement strategies balancing immediate metrics with brand equity, customer lifetime value, and market positioning. This reflects the customer loyalty discussion above. Every action flows from clear understandings of not just what tasks to perform but why they matter in larger organizational contexts.
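As a toy illustration of how such a purpose hierarchy might be operationalized, the sketch below scores candidate campaigns against a weighted blend of click-through rate, projected customer lifetime value, and brand-equity shift. The metric names, caps, and weights are hypothetical assumptions; the point is that choosing the weighting is itself a teleological decision leaders must own.

```python
def campaign_objective(click_through_rate, customer_lifetime_value,
                       brand_equity_delta, weights=None):
    """Score a candidate campaign against a hierarchy of purposes,
    not a single proxy metric. click_through_rate is 0-1, lifetime
    value is in dollars, brand_equity_delta is a survey shift from
    -1 to 1. The weights encode a (hypothetical) teleological stance:
    long-term value outranks short-term clicks."""
    w = weights or {"ctr": 0.2, "clv": 0.5, "brand": 0.3}
    clv_norm = min(1.0, customer_lifetime_value / 1_000)  # illustrative cap
    return (w["ctr"] * click_through_rate
            + w["clv"] * clv_norm
            + w["brand"] * (brand_equity_delta + 1) / 2)

# A campaign with a lower click-through rate can still win if its
# projected outcomes score higher on the whole purpose hierarchy.
print(campaign_objective(0.04, 820.0, 0.15))   # ~0.59
print(campaign_objective(0.09, 310.0, -0.05))  # ~0.32
```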

These learned and entrained sensibilities transform AI from “tools that achieve given goals” into “systems that intentionally pursue specified organizational purposes and outcomes.” Indeed, this shift from “outputs” to “outcomes” highlights the purpose of purpose.

4. Ethical Agency

Beyond rule-following, agentic AI needs autonomous moral reasoning capabilities and a capacity for ethical deliberation in novel situations. Embedding such reasoning frameworks enables models to navigate those situations while maintaining stakeholder trust and alignment with enterprise values. This goes beyond simple rules or constraints — it’s about installing sophisticated frameworks for evaluating implications and making principled decisions in unprecedented situations. As with all responsible AI models, agentic AI needs its ethical choices to be transparent, interpretable, and explainable.

Practical Implementation: From Theory to Agency

Now consider an AI agent managing a complex organizational system. Unlike a traditional chatbot or language model, this agent offers enhanced responses drawing from its training in a specific philosophical stance — “libertarian paternalism,” the political-economic framework underpinning Richard Thaler and Cass Sunstein’s Nudge theory:

Legacy LLM response: “Based on current metrics, department productivity could be improved by implementing overtime policies. The standard rate is 1.5x pay for up to 10 hours weekly.”

Agentic AI response: “I’ve autonomously analyzed the productivity challenges and identified three potential intervention points. Before recommending overtime, I’ve simulated the long-term effects of various solutions. My analysis suggests that the root cause lies in process inefficiencies rather than labor hours. I propose initiating a collaborative workflow redesign, which I can facilitate. I’ve already prepared detailed simulations showing expected outcomes and can adapt the plan based on stakeholder input. Would you like me to present these alternatives and their projected impacts?”

Whether this agentic AI’s response reflects human-like agency misses the point and purpose of philosophical disquisition: The proposed strategy effectively reflects the model’s philosophical training, seeking both to align employee behavior with organizational objectives and to preserve individual choice. This philosophical framework constraint doesn’t undermine model efficacy; it amplifies effectiveness in driving desired outcomes. In the Sunstein/Thaler “libertarian paternalism” construct, the agentic AI becomes a “choice architect” for its human users.
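How might such a stance actually reach the model? One low-tech possibility, sketched below, is to make the framework explicit in the agent’s instructions so that leadership can inspect and debate it rather than leaving it implicit in training data. The stance wording, fields, and prompt format are assumptions for illustration, not an established method or any vendor’s API.

```python
# Hypothetical encoding of a philosophical stance as explicit agent instructions.
LIBERTARIAN_PATERNALISM = {
    "name": "libertarian paternalism (Thaler & Sunstein)",
    "teleology": "Steer outcomes toward organizational objectives while preserving individual choice.",
    "epistemology": "Prefer evidence from simulations and observed behavior over stated preferences alone.",
    "constraints": [
        "Never remove options; redesign defaults and framing instead.",
        "Surface trade-offs and projected impacts before recommending action.",
        "Invite stakeholder feedback and adapt the plan accordingly.",
    ],
}

def build_system_prompt(stance: dict) -> str:
    """Compose agent instructions that make the embedded stance explicit
    and auditable, rather than leaving it implicit in training data."""
    lines = [
        f"You are an enterprise agent operating as a choice architect under {stance['name']}.",
        f"Purpose: {stance['teleology']}",
        f"Evidence policy: {stance['epistemology']}",
        "Operating constraints:",
    ]
    lines += [f"- {c}" for c in stance["constraints"]]
    return "\n".join(lines)

print(build_system_prompt(LIBERTARIAN_PATERNALISM))
```

Whatever the mechanism (instructions, fine-tuning, or reward design), the leadership task is the same: choose and articulate the stance deliberately.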

Of course, the range of available philosophical frameworks extends far beyond libertarian paternalism. Western and Eastern philosophies offer rich resources for addressing tensions between individual and collective interests. Analytic and Continental traditions provide different approaches to logic, language, and value creation. (See “Appendix: Eastern Versus Western Approaches to LLM Engagement” for an analysis of how Eastern and Western philosophical training approaches would influence agentic AI outputs and interactions.) The key is selecting and combining frameworks that align with organizational objectives and stakeholder needs. New genres of philosophical frameworks may be necessary to fully exploit the potential of generative AI.

As Google’s Gemini failure starkly demonstrated, managing conflicts between embedded philosophical stances represents an inherently difficult development challenge. This can’t be delegated or defaulted to technical teams or compliance officers armed with checklists. Leadership teams must actively engage in selecting and shaping the philosophical frameworks and priorities that determine how their AI systems think and perform.

The Strategic Imperative: From Technical to Philosophical Training

We argue that AI systems rise or fall to the level of their philosophical training, not their technical capabilities. When organizations embed sophisticated philosophical frameworks into AI training, they restructure and realign computational architectures into systems that:

  • Generate strategic insights rather than tactical responses.
  • Engage meaningfully with decision makers instead of simply answering queries.
  • Create measurable value by understanding and pursuing organizational purpose.

These should rightly be seen as strategic imperatives, not academic exercises or thought experiments. Those who ignore this philosophical verity will create powerful but ultimately limited tools; those embracing it will cultivate AI partners capable of advancing their strategic mission. Ignoring philosophy or treating it as an afterthought risks creating misaligned systems — pattern matchers without purpose, computers that generate the wrong answers faster.

These shifts from LLMs to agentic AI aren’t incremental or another layer on the stack — they require fundamentally reimagining AI training. These “imaginings” transcend better training data and/or more parameters — they demand embeddings for self-directed learning and autonomous moral reasoning. The provocative implication: Current approaches to AI development, focused primarily on improving language understanding and generation, may be insufficient for creating truly effective AI agents. Instead of training models simply to process information better, we need systems that engage in genuine philosophical inquiry and self-directed cognitive development.

Consequently, these insights suggest we’re not just facing technical challenges in AI development — we’re approaching a transformation in how to understand and develop artificial intelligence. The move to agency requires us to grapple with deep philosophical questions about the nature of autonomy, consciousness, and moral reasoning that we’ve largely been able to sidestep in the development of language models.

(See “Appendix: Claude Reflects on Its Philosophical Training” for a dialogue on how Claude views its own philosophical foundations.)


AI’s enterprise future belongs to executives who grasp that AI’s ultimate capability is not computational but philosophical. Meaningful advances in AI capability — from better reasoning to more reliable outputs to deeper insights — come from embedding better philosophical frameworks into how these systems think, learn, evaluate, and create. AI’s true value isn’t its growing computational power but its ability to learn to embed and execute strategic thinking at scale.

Every prompt, parameter, and deployment encodes philosophical assumptions about knowledge, truth, purpose, and value. The more powerful, capable, rational, innovative, and creative an artificial intelligence learns to become, the more its abilities to philosophically question and ethically engage with its human colleagues and collaborators matter. Ignoring the impact and influence of philosophical perspectives on AI model performance creates ever-greater strategic risk, especially as AI takes on a more strategic role in the enterprise. Imposing thoughtfully rigorous philosophical frameworks on AI doesn’t merely mitigate risk — it empowers algorithms to proactively pursue enterprise purpose and relentlessly learn to improve in ways that both energize and inspire human leaders.

References

1. L. Burgis, “The Philosophy of Peter Thiel’s ‘Zero to One,’” Medium, May 9, 2022, https://luke.medium.com; P. Westberg, “Alex Karp: The Unconventional Tech Visionary,” Quartr, May 8, 2024, https://quartr.com; F.-F. Li, “The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI” (New York: Flatiron Books, 2023); and S. Wolfram, “How to Think Computationally About AI, the Universe, and Everything,” Stephen Wolfram Writings, Oct. 27, 2023, https://writings.stephenwolfram.com.

2. M. Awwad, “Influences of Frege’s Predicate Logic on Some Computational Models,” Future Human Image Journal 9 (April 14, 2018): 5-19.

3. C. McGinn, “Intelligibility,” Colin McGinn, Dec. 14, 2019, www.colinmcginn.net.

4. J. Del Rey, “The Making of Amazon Prime, the Internet’s Most Successful and Devastating Membership Program,” Vox, May 3, 2019, www.vox.com.

5. T. Schaul, “Boundless Socratic Learning With Language Games,” arXiv, Nov. 25, 2024, https://arxiv.org; and The Physics arXiv Blog, “AI Systems Reflect the Ideology of Their Creators, Say Scientists,” Discover Magazine, Oct. 31, 2024, www.discovermagazine.com.

Reprint #:

66311
