‘This is a platform shift’: Jensen Huang says the traditional computing stack will never look the same because of AI – ChatGPT and Claude will forge a new generation of applications
The Nvidia chief says new applications will be built “on top of ChatGPT” as the technology redefines software
Generative AI is transforming the traditional computing stack, according to Nvidia CEO Jensen Huang, who says it will eventually form the core of a new era of software development.
Speaking with BlackRock CEO Larry Fink at the World Economic Forum (WEF) in Davos, Huang compared the impact of AI on modern computing to the advent of the web, the personal computing boom, and cloud computing.
“It’s helpful to reason back to the first principles of fundamentally what is happening to the computing stack. This is a platform shift,” he said.
Summary
Jensen Huang, CEO of Nvidia, is arguing that artificial intelligence changes the foundation of computing itself. In the same way that the internet rewired how software worked in the 1990s, AI is now rewiring how software will be built, used, and experienced.
His core claim is that future applications won’t feel like rigid tools you have to learn. Instead, they’ll feel like systems you talk to — and many of them will be built on top of AI systems like ChatGPT and Claude, rather than being programmed entirely from scratch.
That’s what he means by a “platform shift.”
A platform shift happens when the underlying base of computing changes, forcing everything above it to be rebuilt. This has happened several times already, and each time it reshaped society, business, and labor.
- Personal computers enabled desktop software.
- The internet enabled websites, email, and online commerce.
- Cloud computing enabled streaming, shared documents, and subscription software.
In each case, software didn’t just get better — it became different. Huang’s point is that AI belongs in this same category. It doesn’t merely improve existing tools; it changes what software is.
Why ChatGPT and Claude matter so much
When Huang says that new applications will be built “on top of ChatGPT” or Claude, he means these systems become foundational layers, much as the web and mobile operating systems once did.
Instead of every company building intelligence from scratch, developers can rely on AI systems as a shared “brain.” The application then focuses on what it does — banking, healthcare, design, logistics — while the AI handles understanding language, context, and intent.
In this model, ChatGPT and Claude function less like individual products and more like universal interfaces between humans and computers.
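This layering can be sketched in code. The example below is a hypothetical illustration, not anything Huang or the model vendors describe: the function names `ask_model` and `handle_request` are inventions, and a trivial keyword matcher stands in for the foundation-model call so the sketch runs offline without an API key. The point it shows is the division of labor Huang describes: the application owns the domain logic, while understanding free-form language is delegated to the model layer.

```python
# Hypothetical sketch of an app built "on top of" a foundation model.
# In production, ask_model() would send the message to a hosted model API
# (such as OpenAI's or Anthropic's) and parse a structured reply; here a
# simple keyword matcher stands in so the example runs offline.

def ask_model(user_message: str) -> dict:
    """Stand-in for the model layer: turn free-form text into a
    structured intent the application can act on."""
    text = user_message.lower()
    if "balance" in text:
        return {"intent": "check_balance"}
    if "transfer" in text or "send" in text:
        return {"intent": "transfer_funds"}
    return {"intent": "unknown"}

def handle_request(user_message: str) -> str:
    """The application layer: domain logic keyed off the model's intent."""
    intent = ask_model(user_message)["intent"]
    if intent == "check_balance":
        # A real app would fetch this from its core banking systems.
        return "Your balance is $1,240.56."
    if intent == "transfer_funds":
        return "Sure, who would you like to pay?"
    return "Sorry, I didn't catch that. Could you rephrase?"

print(handle_request("hey, what's my balance right now?"))
# → Your balance is $1,240.56.
```

Note what the application never does: it never parses the user's phrasing itself. Swap the stub for a real model API call and the same domain code handles any wording a customer might use, which is exactly the "shared brain" dynamic described above.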
Why AI software feels fundamentally different
Traditional software required humans to adapt to the machine. Users had to learn menus, forms, commands, and workflows designed around structured data. If the information wasn’t entered exactly right, the software failed.
Huang describes this older software as “pre-recorded.” Humans wrote the instructions in advance, and the computer simply followed them.
AI flips that relationship. Instead of the human learning the machine’s language, the machine learns the human’s. AI can interpret messy, unstructured inputs — normal speech, text, images, or sound — and respond in real time. That’s why interacting with AI feels more like a conversation than an instruction manual. This changes who can use software and how quickly tasks can be done.
The “five-layer cake” behind AI
Huang describes the technology stack as a five-layer cake, where each layer depends on the one below it:
- Energy – AI consumes enormous amounts of electricity because it operates in real time.
- Chips and hardware – Specialized processors (like Nvidia’s) make AI fast enough to be useful.
- Cloud infrastructure – Data centers that run and distribute AI services.
- AI models – Systems like ChatGPT and Claude.
- Applications – The tools businesses and people actually use.
The key insight Huang stresses is that economic value lives at the top, in applications — not in the models themselves.
Many organizations assume that if the AI model is powerful enough, value automatically follows. Huang says that’s wrong. Models have advanced much faster than the applications built on top of them, creating a lag.
This explains why AI feels impressive in demos but is harder to translate into daily business results.
Consulting firms like Bain & Company note that companies are still learning how to redesign workflows around AI, rather than simply bolting it onto old systems.
Where the real economic impact will show up
The biggest gains from AI won’t come from chatbots themselves, but from embedding AI deeply into real-world operations. Huang emphasizes that this application layer is where productivity, profits, and competitive advantage will actually be created.
This includes areas like:
- Financial services (risk, compliance, research)
- Healthcare (diagnostics, administration, coordination)
- Manufacturing (automation, maintenance, design)
- Logistics and operations (planning, forecasting, scheduling)
Once AI is woven into these systems, it stops being a novelty and starts becoming infrastructure.
Why this conversation matters beyond tech
Huang made these remarks while speaking with Larry Fink at the World Economic Forum, which signals that this isn’t just a Silicon Valley discussion.
Global finance, governments, and major institutions are treating AI as a structural economic shift, not a gadget cycle. That’s why questions about energy, infrastructure, labor, and regulation are now inseparable from AI itself.
The takeaway
AI is replacing the idea of software as a rigid tool with something conversational, adaptive, and embedded everywhere — a change on the scale of the internet itself.


