Published 14 Nov 2024

A Closer Look at AI FOMO in Corporate America


Over 40% of S&P 500 companies cited “AI” on earnings calls last quarter—but how much of it is real?
Asking the right questions and decoupling long-term impact from short-term ROI can help cut through the noise around AI investments.

In recent months, there’s been no shortage of alarm bells on Wall Street over the lack of tangible ROI from AI infrastructure spending. Sequoia calls it the $600 billion question, Goldman thinks the $1 trillion AI price tag might never pay off, and Barclays estimates that we need 12,000 ChatGPT-sized products to justify current levels of CAPEX. 

To say that market volatility is high is an understatement. Knee-jerk reactions to news wiped over $400 billion from Nvidia’s market cap before a closely watched speech by CEO Jensen Huang triggered a $200 billion rebound. Today, Nvidia is back near all-time highs.

Even in Silicon Valley, where pockets run deep, AI FOMO is raising eyebrows. OpenAI just raised funds at a $157 billion valuation, up from $80 billion eight months ago, and the company expects to lose $5 billion in 2024.

[Figure: OpenAI's projected costs]

Source: The Information

Anthropic is seeking a $40 billion valuation, also doubling from earlier this year. Mark Zuckerberg acknowledged that many companies are “overbuilding now,” and Sundar Pichai recently faced scrutiny over Google’s $12 billion per quarter AI spend.

Meanwhile, Larry Ellison admitted his FOMO when he begged Jensen Huang for GPUs. Demand is so high that he thinks it’ll soon cost $100 billion just to join the AI race.

But perhaps the most alarming sign of market froth is the recent acquisition of Tabular. Despite the startup generating only $1 million in ARR, Databricks paid a nosebleed valuation of nearly $2 billion to acquire it.

Are We in Bubble Territory?

There’s no doubt that AI computing is expensive. At issue is the so-called AI stack, which includes advanced semiconductors, hyper-scale cloud infrastructure, vast amounts of data for training LLMs, the software that connects these elements, customer-facing apps like ChatGPT and Claude, and the huge amounts of energy required to power it all.

Sequoia’s David Cahn called out the FOMO by breaking down the $600 billion in revenue needed to justify the AI infrastructure buildout:

“All you have to do is to take Nvidia’s run-rate revenue forecast and multiply it by 2x to reflect the total cost of AI data centers (GPUs are half of the total cost of ownership—the other half includes energy, buildings, backup generators, etc). Then you multiply by 2x again, to reflect a 50% gross margin for the end-user of the GPU, (e.g., the startup or business buying AI compute from Azure or AWS or GCP, who needs to make money as well).”

[Figure: AI's revenue shortfall]

Source: AI’s $600B Question
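
To make that arithmetic concrete, here is the back-of-envelope version of the estimate in the quote, written out as a formula. The $150 billion run-rate figure is an illustrative assumption for Nvidia's forecast data-center revenue, not a number taken from this article; the point is the multiplier logic:

% R = assumed Nvidia data-center run-rate revenue (illustrative);
% 2x for the full cost of the data center; 2x for a 50% end-user gross margin
\[
\underbrace{\$150\text{B}}_{R}\ \times\ \underbrace{2}_{\text{total data-center cost}}\ \times\ \underbrace{2}_{\text{50\% gross margin}}\ =\ \$600\text{B}
\]

The $600 billion is the end-user AI revenue that would be needed to pay for the buildout; the shortfall is whatever remains after subtracting the AI revenue that actually exists today.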

What Wall Street is missing 

Never mind the AI startup boom, the drastic reductions in computing and generative AI costs, or rapidly falling token prices—most people want proof of AI’s ability to invent products and services that open up new markets. Top-line revenue is king to them; without it, even the most promising business models can’t sustain long-term growth. 

Large consulting shops understand this point better than most, with some reporting billions in new AI bookings to appease the markets. Over the short term, consulting houses are positioned to capture growth—especially compared with startups that will probably get “steamrolled” by OpenAI before they can even get off the ground.

However, AI is fundamentally altering how organizations operate, not just what they produce. It’s difficult to pinpoint the exact ROI because AI is either already embedded in familiar products (think Apple’s Siri and Netflix’s recommendations) or becoming deeply integrated into nearly every routine function in business.

Furthermore, companies may be reluctant to disclose AI-driven gains in order to protect a competitive advantage. For example, Meta’s “core AI” investments have dramatically improved its ad targeting and recommendations, but exactly how much of that improvement is attributable to incremental AI spending is unclear.

Separating AI Hype From Reality 

Blockbuster AI business models are likely just around the corner, but how do you know what’s real? It starts by asking questions about capital allocation and the durability of the underlying business:

1. What is the destination?

Destination analysis is a long-term investment approach that focuses on understanding the “destination” of a company’s business model—and the probability of reaching that destination. 

In AI, everyone expects a smooth and pleasant journey, but “bumpy rides” are essentially baked in. What matters is your confidence in reaching the destination; you shouldn’t have to sweat the ups and downs along the way.

At GoogleX, we tackled massive problems affecting millions of people by proposing bold solutions and assessing whether technology could make them a reality. Failures were expected. Makani, Loon, and others didn’t pan out, but projects like Wing and Waymo have since graduated to independent companies. Time will tell whether these bets pay off, but Alphabet’s massive ad and cloud businesses afford it the luxury of patience.

[Figure: GoogleX’s blueprint for moonshots.]

2. Is there differentiated value creation?

As Jensen Huang put it: “The only question is, is this important work? And if we didn’t do it, would it happen without us? […] If somebody else can do it, let them do it. We should go select the things that if we didn’t do it, the world would fall apart.”

What matters in AI is differentiated value creation backed by a company’s unique capabilities. Accelerated computing can now tackle complex problems that previously would’ve taken decades to solve. That fundamentally alters the calculus of how to allocate capital. Instead of building generic offerings that fail to deliver lasting value, companies are free to look far out across the horizon to tackle bigger, more complex problems.

Jensen famously pursued projects based on advancing important fields of science rather than short-term profit potential. Today, Nvidia is tech’s apex predator. Its CUDA software platform is an impenetrable moat, and its GPUs hold roughly 70% of the global market. Amazon and Microsoft are even ceding some control of their cloud computing dominance to keep Nvidia happy.

3. Are there early indicators of future success (EIOFS)?

Corporate America has an ongoing obsession with TAM, or total addressable market. The bigger the figure, the better the opportunity. But another great lesson from Nvidia was how Jensen had the patience and conviction to target “zero-billion-dollar” markets.

The willingness to spend decades taking calculated risks on unproven industries was made possible by what he calls EIOFS: near-term positive reinforcement and small wins that validate the direction of the business. These signals may come from customers, R&D, testing, and more.

4. Is there a perpetual flywheel for growth?

Back at Amazon, we had a long-term lens, and every decision we made was about fueling the company’s flywheel. A great customer experience increases traffic, which attracts sellers and broadens selection, which in turn improves the customer experience. Meanwhile, growth lowers costs, and those savings are passed on to customers as lower prices, closing the loop.

The same applies to AI. Data collection trains models and enhances AI capabilities, which improves the user experience; better experiences increase adoption, driving greater revenue and more investment in infrastructure. With greater resources, the cycle restarts with more data collection to fuel the flywheel.

Where to draw the line

Eventually, a market needs to exist to sustain the business. But where do you draw the line?

Unfortunately, there is no clear answer. It took three decades before Nvidia finally caught fire. Tesla took years and nearly went bankrupt multiple times before its fortunes turned around.  

Amazon Web Services (AWS) began as internal infrastructure before launching Amazon Elastic Compute Cloud in 2006. It took years for the markets (and competitors) to appreciate AWS’ potential. But Amazon was prepared, guided by its leadership principle: “As we do new things, we accept that we may be misunderstood for long periods of time.”

Focus on Long-Term Value Creation

[Figure: 1,000x AI compute]

Source: NVIDIA Company Overview Q2 2024

Marginal utility measures the added value from consuming one more unit of a good or service. With the new Blackwell architecture, the marginal utility of Nvidia GPUs will remain high, as each additional chip delivers significant improvements in performance and capability for AI workloads.
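
In textbook terms, marginal utility is the change in total utility from consuming one additional unit:

% MU = marginal utility, TU = total utility, Q = quantity consumed
\[
MU = \frac{\Delta TU}{\Delta Q}
\]

The argument here is that, for Nvidia’s chips, this ratio is not yet diminishing: each additional GPU (and each new generation) still unlocks meaningful new AI capability.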

Former Google CEO Eric Schmidt believes that large context windows, AI agents, and text-to-action capabilities will soon change the world. Additionally, technologies like computational drug design and discovery, autonomous driving, weather simulation, materials design, robotics, digital twins, and agentic generative AI reasoning applications could open up vast new markets.

These use cases require large amounts of computing power, and there’s no telling when these investments will generate tangible returns. Destination analysis, differentiated value creation, EIOFS, and flywheels can help gauge whether long-term value creation justifies the investments.

The Next Obvious Thing: AI in the Workforce

Frontier models are important, but so are the “picks and shovels” of AI. Besides sleep, people spend most of their time working. Shouldn’t optimizing the work environment be a top priority for companies that want happier and more productive employees?

[Figure: How people spend their time]

Source: Our World in Data

But as I wrote previously, productivity is fractured as digital friction and IT challenges continue to grow. Most of corporate America is woefully unprepared for the demands of the modern workplace, whether employees work in the office, at home, or on the go.

Soaring complexity coupled with limited resources means work environments still face a barrage of troubleshooting, device management, and security challenges. What’s more, companies are increasingly adding AI copilots to their workflows but aren’t seeing the expected productivity gains.

A better employee experience

The digital workspace is the virtual environment where employees spend the majority of their work hours. It offers secure access to the tools, applications, and data they need to work, collaborate, and access information from any device, anywhere.

However, today’s digital workspace is often fragmented and chaotic, with each function handled by separate, siloed solutions that address only specific needs. This presents a unique opportunity in devices, which serve as the key unifying factor across all aspects of the digital workspace.

Here at HP, we’re fortunate to help lead the creation of a truly unified workspace, combining our expertise in hardware, software, and services to deliver a great digital employee experience. Powered by telemetry data and workforce AI, we’re integrating tools and solutions into a single platform that can drive the next generation of workspace solutions, enabling seamless collaboration across functions and verticals. 

New possibilities emerge for AI-native solutions that understand every aspect of the digital workspace, enabling them to predict and resolve IT issues while autonomously managing complex workflows across departments and organizational layers.

The foundation we build today in AI and workforce technology will help shape the future of work and human-machine collaboration. 

As pioneers, our responsibility is not just to solve current workforce challenges, but to envision and construct the framework for tomorrow’s innovations.

 

HP Workforce Experience Platform (WXP)¹ is a comprehensive and modular digital employee experience solution that enables organizations to optimize IT for every employee’s needs.

Subscribe to the HP Workforce Experience Blog or schedule a consultation with our team to begin optimizing your IT capabilities today.

¹At launch, some advanced features require a subscription.

Faisal Masud
President of WW Digital Services, previously @ Amazon, Staples, and GoogleX