7 minute read · Published July 26, 2024

How software built Nvidia's $2.97T hardware empire

Latest Update September 2, 2024

Ask someone what makes Nvidia worth almost $3 trillion, and they’ll tell you it’s because the company manufactures GPUs for AI, aka hardware. But if it were about making chips, why do fellow chipmakers’ stocks look like this:

Intel & AMD stock charts

While Nvidia’s looks like this: 

Nvidia stock chart

Over the past 5 years, Intel is down 38%, AMD is up 325% and Nvidia is up a staggering 2500%.

If it were all about chipmaking, you’d expect these stocks to rise and fall together. Chips may be a hardware market, but the advantage that made Nvidia the world’s most valuable company is not hardware; it’s software.

If you’re wondering why you’ve never used Nvidia software, it’s because their software layer isn’t your usual SaaS product. For Nvidia, software accelerates their core business and creates deep moats for it – across manufacturing, distribution and expansion into new markets. 

In a world where software companies are wrapped in APIs, Nvidia is the opposite - it’s a software company wrapped in hardware. 

In this essay, you’ll discover how Nvidia started out, why it built custom software, and how that software unlocked distribution channels that powered its expansion into new markets.

If you’re at a startup trying to enter new markets, you’ll find out how to make your product so adaptable you can sell it anywhere — which is exactly what Nvidia did.

Let’s enter the rabbit hole, yeah? 

Back when Nvidia was just another hardware company…

Nvidia didn’t always have software. It started as a hardware company building NV1 gaming chips. 

And while it did not have the software advantage it has today, it had another massive advantage - distribution. 

Whether you’re in growth or product or elsewhere, you know distribution is often harder than building a product. It was the same with Nvidia, which is why it used a classic ’90s distribution strategy:

Every tech giant followed the same distribution formula. They would create a product, sell it to computer manufacturers and get revenue via annual licenses. 

In 1996, around 50 million PCs were sold worldwide. Roughly 80% of them shipped with a licensed Microsoft OS, generating $1.8 billion in licensing revenue.

Likewise, Intel partnered with PC manufacturers so its CPUs were included in almost all personal computers. Adobe licensed its PDF technology to hardware & software vendors to boost adoption of the PDF format.

Nvidia did the same thing in the gaming market. The NV1 chip shipped on PC graphics cards, and several Sega Saturn games were ported to PC to run on it.

Nvidia also sold its NV1 chip to third-party graphics card manufacturers, who integrated it into their own branded graphics cards. Those cards were then sold to end consumers and to OEMs (original equipment manufacturers) for inclusion in pre-built PCs.

There’s a great lesson here: You don’t need to reinvent the wheel. Nvidia copied Adobe’s and Intel’s strategy in a different market. You can often copy what worked for someone else. Chances are it works for you as well. 

But despite its distribution, the NV1 chip never really took off. It had real hardware limitations. So Jensen knew exactly how to fix it - with proprietary software. 

How Nvidia moved into software

Jensen’s first foray into software came in 1999, when the RIVA TNT2 and GeForce 256 chips launched. At the time, producing hardware relied on costly physical prototyping. Instead, Nvidia used software emulation to develop its GPUs.

Emulation means they built software that behaves like their hardware, allowing engineers to test and refine their designs in a virtual environment.

This helped Nvidia test and modify designs without waiting for manufacturing. And when you produce a better product than before, faster than before, your company grows!
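To make the idea concrete, here’s a toy sketch of the principle — not Nvidia’s actual tooling; every function, value, and operation below is invented for illustration. A “golden model” describes how the chip is supposed to behave, and a candidate design is checked against it in software before anything is committed to silicon.

```cuda
#include <cstdio>
#include <cmath>

// Toy "golden model": the behavior the chip is supposed to have.
// (Hypothetical example; not Nvidia's emulation stack.)
float golden_blend(float src, float dst, float alpha) {
    return alpha * src + (1.0f - alpha) * dst;
}

// Toy "design under test": a candidate implementation that computes
// the same blend in the rearranged form the hardware might actually use.
float candidate_blend(float src, float dst, float alpha) {
    return dst + alpha * (src - dst);
}

int main() {
    int mismatches = 0;
    for (int i = 0; i < 1000; ++i) {
        float src = i * 0.001f, dst = 1.0f - src, alpha = 0.25f;
        float expected = golden_blend(src, dst, alpha);
        float got = candidate_blend(src, dst, alpha);
        if (std::fabs(expected - got) > 1e-5f) ++mismatches;
    }
    // Iterate on the candidate design until this prints 0 -- no fab run needed.
    printf("mismatches: %d\n", mismatches);
    return 0;
}
```

The win is the feedback loop: a failed check costs a recompile, not a months-long trip to the fab.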

This software innovation coupled with their distribution partnerships was the perfect recipe for Nvidia’s success. 

In its first year as a public company, Nvidia reported revenues of $158 million. With the software-powered success of the GeForce 256 and subsequent models, Nvidia's revenues more than doubled, reaching $374.5 million by the end of 1999.

But Jensen didn’t stop there. Seeing the impact software made, Jensen decided Nvidia needed to up its software game.

Nvidia entered the AI space with the introduction of the Tesla series in 2007, which was used in scientific computing, machine learning, and AI applications. This chip series is where Nvidia first deployed its famous CUDA software layer on top of its hardware architecture.

Nvidia CUDA infographic

CUDA makes GPUs programmable for general purpose computing tasks via its libraries, debuggers & APIs. This might sound a bit abstract, but it’s kind of like making an engine adaptable for sports cars, trucks, tractors and lawnmowers.

This was especially important given the wide range of use cases the Tesla chips needed to be ready for.
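To see what that programmability looks like in practice, here’s a minimal CUDA C++ example (a generic illustration — the array sizes and values are arbitrary, not drawn from the article): a tiny kernel that adds two arrays on the GPU. The same model of writing a small function and launching it across thousands of threads is what lets one chip serve games, simulations, and neural networks alike.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one element of the arrays.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                      // ~1M elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);               // unified memory keeps the plumbing minimal
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n); // launch the kernel on the GPU
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```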

Nvidia also built application-specific CUDA frameworks on top of its GPUs to simplify deploying AI within specific industries.

AI engineers could now focus on coding instead of setting up the software stack. Nvidia gained valuable switching costs with CUDA: Developers became familiar with CUDA (not a more general software stack), which meant they’d prefer to work with Nvidia in the future.
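Here’s a hedged sketch of what “focusing on coding instead of the stack” looks like (again, a generic illustration with arbitrary sizes, not an Nvidia-supplied example): a single call into cuBLAS, Nvidia’s CUDA linear algebra library, runs a tuned matrix multiply on the GPU with no hand-written kernel.

```cuda
// Compile with: nvcc gemm_example.cu -lcublas
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 512;                          // multiply two n x n matrices
    size_t bytes = n * n * sizeof(float);

    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // One library call: C = alpha * A * B + beta * C, tuned by Nvidia per GPU generation.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    printf("C[0] = %f\n", C[0]);                // every entry should be 2 * n = 1024
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

This is the switching cost in miniature: once a codebase is full of calls like this, moving off Nvidia hardware means rewriting and re-tuning them.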

The smartest part about this: CUDA is free, which oddly makes it more valuable to Nvidia. The easier CUDA is to access, the more it spreads, and the more likely people are to buy Nvidia chips.

For the Tesla series, Nvidia sold directly to large enterprises, research institutions, and data centers. It used its old distribution tricks and partnered with system integrators and original design manufacturers (ODMs) who build and deploy high-performance computing (HPC) systems and data center solutions.

But Nvidia wasn’t done: The company collaborated with cloud service providers (CSPs) like AWS, Google Cloud, and Microsoft Azure, which offered Tesla GPUs as part of their cloud computing services. This is where Nvidia’s software layer really comes into play: Because engineers are familiar with Nvidia’s abstraction (CUDA), they can use those skills in the cloud platforms they use anyway.

Armed with application-specific, programmable GPUs and solid distribution channels, Jensen decided to expand into other markets. This is how Nvidia moved beyond gaming and became the dominant AI chipmaker.

It was time to unbundle Nvidia. 

Expanding into industry verticals via horizontal unbundling 

Nvidia, a hardware company with a software advantage, is essentially a horizontal solution unbundled for industry-specific verticals. If that sounds abstract, it means it’s a generalist product customized for specific applications.

A great example of this is Salesforce: it’s a horizontal solution (a CRM) which is unbundled for various industry verticals (healthcare, infrastructure, etc.).

They offer a wide but customizable solution. For a product to be applicable to a wide market, it needs to be abstracted at the right levels yet remain compact and stable at the others.

For Salesforce, this means there are certain core ingredients - contacts and other data - that are set in stone. But end users only interact with customized applications like a CRM.

This enables Salesforce to offer varying levels of CRMs, marketing automation, data cloud, etc.

For Nvidia, the unbundling works something like this: 

  1. Core Technology Development:
    • Nvidia develops a unified foundational architecture for its GPUs, such as the Ampere or Hopper architectures (the A100 & H100 chips). 
    • This core technology includes the basic design, features, and capabilities of the GPUs, such as CUDA cores, tensor cores, etc. 
  2. Integration into Products:
    • The core GPU technology is integrated into various product lines – like GeForce for gaming, A100 for data centers & AI, Clara for healthcare. 
    • All Nvidia products share a common set of advanced features and capabilities derived from the core technology.
  3. Customization for Specialized Verticals:
    • Nvidia tailors these products to meet the specific needs of different industries.
    • For example, GPUs for data centers (such as the A100 and H100) are optimized for parallel processing and AI workloads, while GeForce GPUs are optimized for gaming performance and visual quality.
    • Nvidia also provides specialized software, tools, and SDKs (like CUDA, TensorRT, and Nvidia Clara) to enhance the functionality of these GPUs in specific applications, such as AI, healthcare, and automotive.

This unbundling allows Nvidia to ship quickly across multiple product lines & markets without creating separate development projects for each vertical. The software advantage let Nvidia ship rapidly: 

For example, it launched the Drive series in 2015, focused on autonomous driving and advanced driver-assistance systems. In 2018, it launched Clara, a platform for AI-powered healthcare applications, including medical imaging and genomics. The H100 GPUs launched in 2022, designed for AI, HPC, and large-scale data center workloads. Those are the same H100s powering many of the data centers behind the most popular LLMs.

This is what makes horizontal products so valuable. If you can hit the sweet spot between abstraction & structure, you can enter different industry verticals with no cap to your upside. 

And Jensen is doubling down on his unbundling strategy moving forward. 

2024 is already a big year for Nvidia & it’s still going strong. It’s ramping up manufacturing, with new Blackwell & Rubin GPU lines aimed at industry-specific verticals.

It collaborated with server manufacturers like Dell, HPE, and Lenovo to include Nvidia GPUs in their data center solutions. In 2024, its data center business generated $22.6 billion in revenue.

So coupled with its software layer & its distribution game, what does the future hold for Nvidia? 

Some might say that Nvidia is riding the AI wave & its valuations are way off base. There’s certainly some hype, but I’d argue that Nvidia's business fundamentals are rock solid: Their GPUs are great and their software cements their advantage.

Nvidia is in the shovel-selling business. It always has been. Today, AI is the gold. Tomorrow, it will be something else. 

With bio-computing and quantum computing on the horizon, the need for high-performance GPUs will only increase. The more SaaS businesses get created, the more CSPs like AWS & Azure will rely on companies like Nvidia. The more AI develops, the more existing companies will integrate AI into their products (for example, Meta purchased 60,000 GPUs for its recommendation algorithm). And the closer AI moves to AGI, the more AI companies will rely on companies like Nvidia.

And I bet Nvidia will be right there to sell its programmable GPU shovels to the next generation of gold diggers. 
