Engineers at CTA.ai, an imaging-technology startup in Poland, are trying to popularize a more comfortable alternative to the colonoscopy. To do so, they are using computer chips that are best known to video game fans.
The chips are made by Nvidia. The Santa Clara company’s technology can help sift speedily through images taken by pill-size sensors that patients swallow, allowing doctors to detect intestinal disorders 70 percent faster than if they pored over videos. As a result, procedures cost less and diagnoses are more accurate, said Mateusz Marmolowski, CTA’s chief executive.
Health care applications like the one CTA is pioneering are among Nvidia’s many new targets. The company’s chips — known as graphics processing units, or GPUs — are finding homes in drones, robots, self-driving cars, servers, supercomputers and virtual-reality gear. A key reason for their spread is how rapidly the chips can handle complex artificial-intelligence tasks like image, facial and speech recognition.
Excitement about AI applications has turned 24-year-old Nvidia into one of the technology sector’s hottest companies. Its market capitalization has swelled more than sevenfold over the past two years to nearly $100 billion, and its revenue jumped 56 percent in the most recent quarter.
Nvidia’s success makes it stand out in a chip industry that has experienced a steady decline in sales of personal computers and a slowing in demand for smartphones. Intel, the world’s largest chip producer and a maker of the semiconductors that have long been the brains of machines like PCs, had revenue growth of just 9 percent in the most recent quarter.
“They are just cruising,” Hans Mosesmann, an analyst at Rosenblatt Securities, said of Nvidia, which he has tracked since it went public in 1999.
Driving the surge is Jen-Hsun Huang, an Nvidia founder and the company’s chief executive, whose strategic instincts, demanding personality and dark clothes prompt comparisons to Steve Jobs.
Huang — who, like Jobs at Apple, pushed for a striking headquarters building, which Nvidia will soon occupy — made a pivotal gamble more than 10 years ago on a series of modifications and software developments so that GPUs could handle chores beyond drawing images on a computer screen.
“The cost to the company was incredible,” said Huang, 54, who estimated that Nvidia had spent $500 million a year on the effort, known broadly as CUDA (for compute unified device architecture), when the company’s total revenue was around $3 billion. Nvidia puts its total spending on turning GPUs into more general-purpose computing tools at nearly $10 billion since CUDA was introduced.
Huang bet on CUDA as the computing landscape was undergoing broad changes. Intel rose to dominance in large part because of improvements in computing speed that accompanied what is known as Moore’s Law: the observation that, through most of the industry’s history, manufacturers packed twice as many transistors onto chips roughly every two years. Those improvements in speed have slowed.
The slowdown led designers to start dreaming up more specialized chips that could work alongside Intel processors and wring more benefits from the miniaturization of chip circuitry. Nvidia, which repurposed existing chips instead of starting from scratch, had a big head start. Using its chips and the software it developed as part of the CUDA effort, the company gradually created a technology platform that became popular with many programmers and companies.
“They really were well led,” said John Hennessy, a computer scientist who stepped down as Stanford University’s president last year.
Now Nvidia chips are pushing into new corporate applications. German business software giant SAP, for example, is promoting an artificial-intelligence technique called deep learning and using Nvidia GPUs for tasks like accelerating accounts-payable processes and matching resumes to job openings.
SAP has also demonstrated Nvidia-powered software to spot company logos in broadcasts of sports like basketball or soccer, so advertisers can learn about their brands’ exposure during games and take steps to try to improve it.
“That could not be done before,” said Juergen Mueller, SAP’s chief innovation officer.
Such applications go far beyond the original ambitions of Huang, who was born in Taiwan and studied electrical engineering at Oregon State University and Stanford before taking jobs at Silicon Valley chipmakers. He started Nvidia with Chris Malachowsky and Curtis Priem in 1993, setting out initially to help PCs offer visual effects to rival those of dedicated video game consoles.
The company’s original product was a dud, Malachowsky said, and the graphics market attracted a mob of rivals.
But Nvidia retooled its products and strategy and gradually separated itself from the competition to become the clear leader in the GPU-accelerator cards used in gaming PCs.
GPUs render images by assembling triangles into mesh-like structures that simulate objects, then applying colors to the pixels on a display screen. Doing that requires executing many simple instructions in parallel, which is why graphics chips evolved to contain many tiny processors. A new GPU announced by Nvidia in May, called Volta, has more than 5,000 such processors; a new, high-end Intel server chip, by contrast, has just 28 larger, general-purpose processor cores.
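The idea behind that design can be sketched in a few lines: the same small operation is applied independently to every pixel, so thousands of tiny processors can each take a slice of the work at once. This is a minimal pure-Python illustration of the pattern, not Nvidia hardware behavior; the `shade` function and the pixel values are hypothetical.

```python
def shade(pixel):
    # One simple, identical instruction sequence per pixel --
    # here, a hypothetical brightening operation.
    r, g, b = pixel
    return (min(r + 10, 255), min(g + 10, 255), min(b + 10, 255))

# Stand-in for a small framebuffer of 5,000 pixels.
framebuffer = [(100, 150, 200)] * 5000

# On a chip like Volta, each of its 5,000-plus processors could run
# `shade` on its own pixel simultaneously; serially, it is just a map.
result = list(map(shade, framebuffer))
```

Because no pixel’s result depends on any other pixel’s, the work divides cleanly across as many processors as the chip provides.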
Nvidia began its CUDA push in 2004 after hiring Ian Buck, a Stanford doctoral student and company intern who had worked on a programming challenge that involved making it easier to harness a GPU’s many calculating engines. Nvidia soon made changes to its chips and developed software aids, including support for a standard programming language rather than the arcane tools used to issue commands to graphics chips.
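CUDA’s core idea is that a programmer writes an ordinary function, called a kernel, in a C-like language, and the hardware runs one copy of it per thread, each thread identified by an index. The following Python stand-in mimics that thread-indexing pattern under stated assumptions: `vector_add` and the launch loop are illustrative, not Nvidia’s actual API.

```python
def vector_add(thread_idx, a, b, out):
    # Each "thread" handles only the element matching its own index --
    # the shape a CUDA kernel takes, sketched in plain Python.
    out[thread_idx] = a[thread_idx] + b[thread_idx]

n = 8
a = list(range(n))   # [0, 1, ..., 7]
b = [10] * n
out = [0] * n

# A GPU would launch all n threads at once; here we emulate the
# launch serially, one thread index at a time.
for tid in range(n):
    vector_add(tid, a, b, out)
```

Expressing work this way, rather than as graphics commands, is what let programmers without graphics backgrounds put GPUs to general-purpose use.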
The company built CUDA into consumer GPUs as well as high-end products. That decision was critical, Buck said, because it meant researchers and students who owned laptops or desktop PCs for gaming could tinker on software in campus labs and dorm rooms. Nvidia also persuaded many universities to offer courses in its new programming techniques.
Programmers gradually adopted GPUs for applications used in, among other things, climate modeling and oil and gas discovery. A new phase began in 2012 after Canadian researchers began to apply CUDA and GPUs to unusually large neural networks, the many-layered software required for deep learning.
Those systems are trained to perform tricks like spotting a face by exposure to millions of images instead of through definitions established by programmers. Before the emergence of GPUs, Buck said, training such a system might take an entire semester.
Aided by the new technology, researchers can now complete the process in weeks, days or even hours.
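The reason training maps so well to GPUs is that each training step is dominated by large matrix multiplications, and every cell of a matrix product is an independent dot product that can be computed at the same time as the others. This is a minimal sketch with tiny, illustrative sizes; real networks multiply matrices with millions of entries.

```python
def matmul(A, B):
    # Every output cell is an independent dot product -- on a GPU,
    # thousands of these cells are computed simultaneously.
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

# Hypothetical values: a 2x2 batch of inputs and one layer's weights.
activations = [[1.0, 2.0], [3.0, 4.0]]
weights = [[0.5, 0.0], [0.0, 0.5]]

output = matmul(activations, weights)
```

Repeated across many layers and millions of training examples, that same independence is what shrinks a semester-long training run to days or hours.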
“I can’t imagine how we’d do it without using GPUs,” said Silvio Savarese, an associate professor at Stanford who directs the SAIL-Toyota Center for AI Research at the university.
Competitors argue that the AI battle among chipmakers has barely begun.
Intel, whose standard chips are widely used for AI tasks, has also spent heavily to buy Altera, a maker of programmable chips; startups specializing in deep learning and machine vision; and the Israeli car technology supplier Mobileye.
Google recently unveiled the second version of an internally developed AI chip that helped beat the world’s best player of the game Go. The search giant claims the chip has significant advantages over GPUs in some applications. Startups like Wave Computing make similar claims.
But Nvidia will not be easy to dislodge. For one thing, the company can afford to spend more than most of its AI rivals on chips — Huang estimated Nvidia had plowed an industry-record $3 billion into Volta — because of the steady flow of revenue from the still-growing gaming market.
Nvidia said more than 500,000 developers are using its GPUs. And the company expects other chipmakers to help expand its fan base once it freely distributes an open-source chip design they can use for low-end deep learning applications — lightbulbs or cameras, for instance — that it does not plan to target itself.
AI, Huang said, “will affect every company in the world. We won’t address all of it.”
Don Clark is a New York Times writer.