URL: https://www.newyorker.com/magazine/2023/12/04/how-jensen-huangs-nvidia-is-powering-the-ai-revolution

Brave New World Dept.


How Jensen Huang’s Nvidia Is Powering the A.I. Revolution

The company’s C.E.O. bet it all on a new kind of chip. Now that Nvidia is one of
the biggest companies in the world, what will he do next?

By Stephen Witt

November 27, 2023

“There’s a war going on out there in A.I., and Nvidia is the only arms dealer,”
a Wall Street analyst said. Illustration by Javier Jaén

The revelation that ChatGPT, the astonishing artificial-intelligence chatbot,
had been trained on an Nvidia supercomputer spurred one of the largest
single-day gains in stock-market history. When the Nasdaq opened on May 25,
2023, Nvidia’s value increased by about two hundred billion dollars. A few
months earlier, Jensen Huang, Nvidia’s C.E.O., had informed investors that
Nvidia had sold similar supercomputers to fifty of America’s hundred largest
companies. By the close of trading, Nvidia was the sixth most valuable
corporation on earth, worth more than Walmart and ExxonMobil combined. Huang’s
business position can be compared to that of Samuel Brannan, the celebrated
vender of prospecting supplies in San Francisco in the late eighteen-forties.
“There’s a war going on out there in A.I., and Nvidia is the only arms dealer,”
one Wall Street analyst said.

Huang is a patient monopolist. He drafted the paperwork for Nvidia with two
other people at a Denny’s restaurant in San Jose, California, in 1993, and has
run it ever since. At sixty, he is sarcastic and self-deprecating, with a
Teddy-bear face and wispy gray hair. Nvidia’s main product is its
graphics-processing unit, a circuit board with a powerful microchip at its core.
In the beginning, Nvidia sold these G.P.U.s to video gamers, but in 2006 Huang
began marketing them to the supercomputing community as well. Then, in 2013, on
the basis of promising research from the academic computer-science community,
Huang bet Nvidia’s future on artificial intelligence. A.I. had disappointed
investors for decades, and Bryan Catanzaro, Nvidia’s lead deep-learning
researcher at the time, had doubts. “I didn’t want him to fall into the same
trap that the A.I. industry has had in the past,” Catanzaro told me. “But, ten
years plus down the road, he was right.”



In the near future, A.I. is projected to generate movies on demand, provide
tutelage to children, and teach cars to drive themselves. All of these advances
will occur on Nvidia G.P.U.s, and Huang’s stake in the company is now worth more
than forty billion dollars.

In September, I met Huang for breakfast at the Denny’s where Nvidia was started.
(The C.E.O. of Denny’s was giving him a plaque, and a TV crew was in
attendance.) Huang keeps up a semi-comic deadpan patter at all times. Chatting
with our waitress, he ordered seven items, including a Super Bird sandwich and a
chicken-fried steak. “You know, I used to be a dishwasher here,” he told her.
“But I worked hard! Like, really hard. So I got to be a busboy.”

Huang has a practical mind-set, dislikes speculation, and has never read a
science-fiction novel. He reasons from first principles about what microchips
can do today, then gambles with great conviction on what they will do tomorrow.
“I do everything I can not to go out of business,” he said at breakfast. “I do
everything I can not to fail.” Huang believes that the basic architecture of
digital computing, little changed since it was introduced by I.B.M. in the early
nineteen-sixties, is now being reconceptualized. “Deep learning is not an
algorithm,” he said recently. “Deep learning is a method. It’s a new way of
developing software.” The evening before our breakfast, I’d watched a video in
which a robot, running this new kind of software, stared at its hands in seeming
recognition, then sorted a collection of colored blocks. The video had given me
chills; the obsolescence of my species seemed near. Huang, rolling a pancake
around a sausage with his fingers, dismissed my concerns. “I know how it works,
so there’s nothing there,” he said. “It’s no different than how microwaves
work.” I pressed Huang—an autonomous robot surely presents risks that a
microwave oven does not. He responded that he has never worried about the
technology, not once. “All it’s doing is processing data,” he said. “There are
so many other things to worry about.”

In May, hundreds of industry leaders endorsed a statement that equated the risk
of runaway A.I. with that of nuclear war. Huang didn’t sign it. Some economists
have observed that the Industrial Revolution led to a relative decline in the
global population of horses, and have wondered if A.I. might do the same to
humans. “Horses have limited career options,” Huang said. “For example, horses
can’t type.” As he finished eating, I expressed my concerns that, someday soon,
I would feed my notes from our conversation into an intelligence engine, then
watch as it produced structured, superior prose. Huang didn’t dismiss this
possibility, but he assured me that I had a few years before my John Henry
moment. “It will come for the fiction writers first,” he said. Then he tipped
the waitress a thousand dollars, and stood up to accept his award.

Huang was born in Taiwan in 1963, but when he was nine he and his older brother
were sent as unaccompanied minors to the U.S. They landed in Tacoma, Washington,
to live with an uncle, before being sent to the Oneida Baptist Institute, in
Kentucky, which Huang’s uncle believed was a prestigious boarding school. In
fact, it was a religious reform academy. Huang was placed with a
seventeen-year-old roommate. On their first night together, the older boy lifted
his shirt to show Huang the numerous places where he’d been stabbed in fights.
“Every student smoked, and I think I was the only boy at the school without a
pocketknife,” Huang told me. His roommate was illiterate; in exchange for
teaching him to read, Huang said, “he taught me how to bench-press. I ended up
doing a hundred pushups every night before bed.”

Although Huang lived at the academy, he was too young to attend its classes, so
he went to a nearby public school. There, he befriended Ben Bays, who lived with
his five siblings in an old house with no running water. “Most of the kids at
the school were children of tobacco farmers,” Bays said, “or just poor kids
living in the mouth of the holler.” Huang arrived with the school year already
in session, and Bays remembers the principal introducing an undersized Asian
immigrant with long hair and heavily accented English. “He was a perfect
target,” Bays said.

“But I use all of them!”
Cartoon by Ali Solomon

Huang was relentlessly bullied. “The way you described Chinese people back then
was ‘Chinks,’ ” Huang told me, with no apparent emotion. “We were called that
every day.” To get to school, Huang had to cross a rickety footbridge
over a river. “These swinging bridges, they were very high,” Bays said. “It was
old planks, and most of them were missing.” Sometimes, when Huang was crossing
the bridge, the local boys would grab the ropes and try to dislodge him.
“Somehow it never seemed to affect him,” Bays said. “He just shook it off.” By
the end of the school year, Bays told me, Huang was leading those same kids on
adventures into the woods. Bays recalled how carefully Huang stepped around the
missing planks. “Actually, it looked like he was having fun,” he said.



Huang credits his time at Oneida with building resiliency. “Back then, there
wasn’t a counsellor to talk to,” he told me. “Back then, you just had to toughen
up and move on.” In 2019, he donated a building to the school, and talked fondly
of the (now gone) footbridge, neglecting to mention the bullies who had tried to
toss him off it.




After a couple of years, Huang’s parents secured entry to the United States,
settling in Oregon, and the brothers reunited with them. Huang excelled in high
school, and was a nationally ranked table-tennis player. He belonged to the
school’s math, computer, and science clubs, skipped two grades, and graduated
when he was sixteen. “I did not have a girlfriend,” he said.



Huang attended Oregon State University, where he majored in electrical
engineering. His lab partner in his introductory classes was Lori Mills, an
earnest, nerdy undergraduate with curly brown hair. “There were, like, two
hundred and fifty kids in electrical engineering, and maybe three girls,” Huang
told me. Competition broke out among the male undergraduates for Mills’s
attention, and Huang felt that he was at a disadvantage. “I was the youngest kid
in the class,” he said. “I looked like I was about twelve.”

Every weekend, Huang would call Mills and pester her to do homework with him. “I
tried to impress her—not with my looks, of course, but with my strong capability
to complete homework,” he said. Mills accepted, and, after six months of
homework, Huang worked up the courage to ask her out on a date. She accepted
that offer, too.

Following graduation, Huang and Mills found work in Silicon Valley as microchip
designers. (“She actually made more than me,” Huang said.) The two got married,
and within a few years Mills had left the workforce to bring up their children.
By then, Huang was running his own division, and attending graduate school at
Stanford by night. He founded Nvidia in 1993, with Chris Malachowsky and Curtis
Priem, two veteran microchip designers. Although Huang, then thirty, was younger
than Malachowsky and Priem, both felt that he was ready to be C.E.O. “He was a
fast learner,” Malachowsky said.

Malachowsky and Priem were looking to design a graphics chip, which they hoped
would make competitors, in Priem’s words, “green with envy.” They called their
company NVision, until they learned that the name was taken by a manufacturer of
toilet paper. Huang suggested Nvidia, riffing on the Latin word invidia, meaning
“envy.” He selected the Denny’s as a venue to organize the business because it
was quieter than home and had cheap coffee—and also because of his experience
working for the restaurant chain in Oregon in the nineteen-eighties. “I find
that I think best when I’m under adversity,” Huang said. “My heart rate actually
goes down. Anyone who’s dealt with rush hour in a restaurant knows what I’m
talking about.”

Huang liked video games and thought that there was a market for better graphics
chips. Instead of drawing pixels by hand, artists were starting to assemble
three-dimensional polygons out of shapes known as “primitives,” saving time and
effort but requiring new chips. Nvidia’s competitors’ primitives used triangles,
but Huang and his co-founders decided to use quadrilaterals instead. This was a
mistake, and it nearly sank the company: soon after the release of Nvidia’s
first product, Microsoft announced that its graphics software would support only
triangles.



Short on money, Huang decided that his only hope was to use the conventional
triangle approach and try to beat the competition to market. In 1996, he laid
off more than half the hundred people working at Nvidia, then bet the company’s
remaining funds on a production run of untested microchips that he wasn’t sure
would work. “It was fifty-fifty,” Huang told me, “but we were going out of
business anyway.”

When the product, known as RIVA 128, hit stores, Nvidia had enough money to meet
only one month of payroll. But the gamble paid off, and Nvidia sold a million
RIVAs in four months. Huang encouraged his employees to continue shipping
products with a sense of desperation, and for years to come he opened staff
presentations with the words “Our company is thirty days from going out of
business.” The phrase remains the unofficial corporate motto.

At the center of Nvidia’s headquarters, in Santa Clara, are two enormous
buildings, each in the shape of a triangle with its corners trimmed. This shape
is replicated in miniature throughout the building interiors, from the couches
and the carpets to the splash guards in the urinals. Nvidia’s “spaceships,” as
employees call the two buildings, are cavernous and filled with light, but
eerie, and mostly empty; post-Covid, only about a third of the workforce shows
up on any given day. Employee demographics are “diverse,” sort of—I would guess,
based on a visual survey of the cafeteria at lunchtime, that about a third of
the staff is South Asian, a third is East Asian, and a third is white. The
workers are overwhelmingly male.

Even before the run-up in the stock price, employee surveys ranked Nvidia as one
of America’s best places to work. Each building has a bar at the top, with
regular happy hours, and workers are encouraged to treat their offices as
flexible spaces in which to eat, code, and socialize. Nevertheless, the
buildings’ interiors are immaculate—Nvidia tracks employees throughout the day
with video cameras and A.I. If an employee eats a meal at a conference table,
the A.I. can dispatch a janitor within an hour to clean up. At Denny’s, Huang
told me to expect a world in which robots would fade into the background, like
household appliances. “In the future, everything that moves will be autonomous,”
he said.



The only people I saw at Nvidia who didn’t look happy were the quality-control
technicians. In windowless laboratories underneath the north-campus bar, pallid
young men wearing earplugs and T-shirts pushed Nvidia’s microchips to the brink
of failure. The racket was unbearable, a constant whine of high-pitched fans
trying to cool overheating silicon circuits. It is these chips that have made
the A.I. revolution possible.

In standard computer architecture, a microchip known as a “central processing
unit” does most of the work. Coders create programs, and those programs bring
mathematical problems to the C.P.U., which produces one solution at a time. For
decades, the major manufacturer of C.P.U.s was Intel, and Intel has tried to
force Nvidia out of existence several times. “I don’t go anywhere near Intel,”
Huang told me, describing their Tom and Jerry relationship. “Whenever they come
near us, I pick up my chips and run.”

Nvidia has embraced an alternative approach. In 1999, the company, shortly after
going public, introduced a graphics card called GeForce, which Dan Vivoli, the
company’s head of marketing, called a “graphics-processing unit.” (“We invented
the category so we could be the leader in it,” Vivoli said.) Unlike
general-purpose C.P.U.s, the G.P.U. breaks complex mathematical tasks apart into
small calculations, then processes them all at once, in a method known as
parallel computing. A C.P.U. functions like a delivery truck, dropping off one
package at a time; a G.P.U. is more like a fleet of motorcycles spreading across
a city.
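The truck-versus-motorcycles distinction can be sketched in a few lines of Python. This is a toy illustration with made-up function names, not Nvidia's actual programming model (CUDA programs are typically written in an extension of C++); it only shows how the same work decomposes into one serial loop versus many independent, identical operations:

```python
# Toy sketch of serial vs. data-parallel computation (hypothetical names,
# not a real GPU API). The task: brighten every pixel in an image.

def cpu_style_brighten(pixels, amount):
    """Delivery truck: handle one package (pixel) at a time."""
    result = []
    for p in pixels:
        result.append(min(p + amount, 255))  # clamp to the 8-bit maximum
    return result

def gpu_style_brighten(pixels, amount):
    """Fleet of motorcycles: define one tiny 'kernel' of work per pixel.
    Because each application is independent, a GPU can run thousands of
    them simultaneously; here map() merely stands in for that hardware."""
    def kernel(p):
        return min(p + amount, 255)
    return list(map(kernel, pixels))

image = [10, 200, 250, 90]
assert cpu_style_brighten(image, 20) == gpu_style_brighten(image, 20)
```

The two functions compute identical results; the point is that the second form exposes the per-element independence that parallel hardware exploits.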

The GeForce line was a success. Its popularity was driven by the Quake
video-game series, which used parallel computing to render monsters that players
could shoot with a grenade launcher. (Quake II was released when I was a
freshman in college, and cost me years of my life.) The Quake series also
featured a “deathmatch” mode for multiplayer combat, and PC gamers, looking to
gain an edge, bought new GeForce cards every time they were upgraded. In 2000,
Ian Buck, a graduate student studying computer graphics at Stanford, chained
thirty-two GeForce cards together to play Quake using eight projectors. “It was
the first gaming rig in 8K resolution, and it took up an entire wall,” Buck told
me. “It was beautiful.”

“Don’t do it. They try to fill you up on breadsticks so that by the time you go
into the therapist’s office you feel horrible.”
Cartoon by Drew Dernavich

Buck wondered if the GeForce cards might be useful for tasks other than
launching grenades at his friends. The cards came with a primitive programming
tool called a shader. With a grant from DARPA, the Department of Defense’s
research arm, Buck hacked the shaders to access the parallel-computing circuits
below, repurposing the GeForce into a low-budget supercomputer. Soon, Buck was
working for Huang.

Buck is intense and balding, and he radiates intelligence. He is a
computer-science hot-rodder who has spent the past twenty years testing the
limits of Nvidia chips. Human beings “think linearly. You give instructions to
someone on how to get from here to Starbucks, and you give them individual
steps,” he said. “You don’t give them instructions on how to get to any
Starbucks location from anywhere. It’s just hard to think that way, in
parallel.”



Since 2004, Buck has overseen the development of Nvidia’s supercomputing
software package, known as CUDA. Huang’s vision was to enable CUDA to work on
every GeForce card. “We were democratizing supercomputing,” Huang said.

As Buck developed the software, Nvidia’s hardware team began allocating space on
the microchips for supercomputing operations. The chips contained billions of
electronic transistors, which routed electricity through labyrinthine circuits
to complete calculations at extraordinary speed. Arjun Prabhu, Nvidia’s lead
chip engineer, compared microchip design to urban planning, with different zones
of the chip dedicated to different tasks. As Tetris players do with falling
blocks, Prabhu will sometimes see transistors in his sleep. “I’ve often had it
where the best ideas happen on a Friday night, when I’m literally dreaming about
it,” Prabhu said.



When CUDA was released, in late 2006, Wall Street reacted with dismay. Huang was
bringing supercomputing to the masses, but the masses had shown no indication
that they wanted such a thing. “They were spending a fortune on this new chip
architecture,” Ben Gilbert, the co-host of “Acquired,” a popular Silicon Valley
podcast, said. “They were spending many billions targeting an obscure corner of
academic and scientific computing, which was not a large market at the
time—certainly less than the billions they were pouring in.” Huang argued that
the simple existence of CUDA would enlarge the supercomputing sector. This view
was not widely held, and by the end of 2008 Nvidia’s stock price had declined by
seventy per cent.

In speeches, Huang has cited a visit to the office of Ting-Wai Chiu, a professor
of physics at National Taiwan University, as giving him confidence during this
time. Chiu, seeking to simulate the evolution of matter following the Big Bang,
had constructed a homemade supercomputer in a laboratory adjacent to his office.
Huang arrived to find the lab littered with GeForce boxes and the computer
cooled by oscillating desk fans. “Jensen is a visionary,” Chiu told me. “He made
my life’s work possible.”

Chiu was the model customer, but there weren’t many like him. Downloads of CUDA
hit a peak in 2009, then declined for three years. Board members worried that
Nvidia’s depressed stock price would make it a target for corporate raiders. “We
did everything we could to protect the company against an activist shareholder
who might come in and try to break it up,” Jim Gaither, a longtime board member,
told me. Dawn Hudson, a former N.F.L. marketing executive, joined the board in
2013. “It was a distinctly flat, stagnant company,” she said.

In marketing CUDA, Nvidia had sought a range of customers, including stock
traders, oil prospectors, and molecular biologists. At one point, the company
signed a deal with General Mills to simulate the thermal physics of cooking
frozen pizza. One application that Nvidia spent little time thinking about was
artificial intelligence. There didn’t seem to be much of a market.

At the beginning of the twenty-tens, A.I. was a neglected discipline. Basic
tasks such as image recognition and speech recognition had seen only halting
progress. Within this unpopular academic field, an even less popular
subfield solved problems using “neural networks”—computing structures inspired
by the human brain. Many computer scientists considered neural networks to be
discredited. “I was discouraged by my advisers from working on neural nets,”
Catanzaro, the deep-learning researcher, told me, “because, at the time, they
were considered to be outdated, and they didn’t work.”
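The idea behind these "computing structures inspired by the human brain" fits in a few lines of Python. The sketch below is hypothetical and vastly smaller than anything the researchers in this story trained: a single artificial neuron that learns the logical AND function by gradient descent, the same trial-and-error weight adjustment that, scaled up by many orders of magnitude, trains modern networks:

```python
import math
import random

# One artificial neuron: a weighted sum of inputs squashed by a sigmoid.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()

# Training data for logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(10000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (out - target) * out * (1 - out)  # chain rule through sigmoid
        w1 -= 0.5 * grad * x1                    # nudge each weight downhill
        w2 -= 0.5 * grad * x2
        b  -= 0.5 * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
assert predictions == [0, 0, 0, 1]  # the neuron has learned AND
```

Nothing here is programmed to know what AND means; the behavior emerges from repeatedly comparing the neuron's output with the desired answer and adjusting the weights, which is why the approach demands the enormous arithmetic throughput that G.P.U.s supply.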

Catanzaro described the researchers who continued to work on neural nets as
“prophets in the wilderness.” One of those prophets was Geoffrey Hinton, a
professor at the University of Toronto. In 2009, Hinton’s research group used
Nvidia’s CUDA platform to train a neural network to recognize human speech. He
was surprised by the quality of the results, which he presented at a conference
later that year. He then reached out to Nvidia. “I sent an e-mail saying, ‘Look,
I just told a thousand machine-learning researchers they should go and buy
Nvidia cards. Can you send me a free one?’ ” Hinton told me. “They said no.”



Despite the snub, Hinton encouraged his students to use CUDA, including a
Ukrainian-born protégé of his named Alex Krizhevsky, who Hinton thought was
perhaps the finest programmer he’d ever met. In 2012, Krizhevsky and his
research partner, Ilya Sutskever, working on a tight budget, bought two GeForce
cards from Amazon. Krizhevsky then began training a visual-recognition neural
network on Nvidia’s parallel-computing platform, feeding it millions of images
in a single week. “He had the two G.P.U. boards whirring in his bedroom,” Hinton
said. “Actually, it was his parents who paid for the quite considerable
electricity costs.”

Sutskever and Krizhevsky were astonished by the cards’ capabilities. Earlier
that year, researchers at Google had trained a neural net that identified videos
of cats, an effort that required some sixteen thousand C.P.U.s. Sutskever and
Krizhevsky had produced world-class results with just two Nvidia circuit boards.
“G.P.U.s showed up and it felt like a miracle,” Sutskever told me.

AlexNet, the neural network that Krizhevsky trained in his parents’ house, can
now be mentioned alongside the Wright Flyer and the Edison bulb. In 2012,
Krizhevsky entered AlexNet into the annual ImageNet visual-recognition contest;
neural networks were unpopular enough at the time that he was the only
contestant to use this technique. AlexNet scored so well in the competition that
the organizers initially wondered if Krizhevsky had somehow cheated. “That was a
kind of Big Bang moment,” Hinton said. “That was the paradigm shift.”



In the decade since Krizhevsky’s nine-page description of AlexNet’s architecture
was published, it has been cited more than a hundred thousand times, making it
one of the most important papers in the history of computer science. (AlexNet
correctly identified photographs of a scooter, a leopard, and a container ship,
among other things.) Krizhevsky pioneered a number of important programming
techniques, but his key finding was that a specialized G.P.U. could train neural
networks up to a hundred times faster than a general-purpose C.P.U. “To do
machine learning without CUDA would have just been too much trouble,” Hinton
said.

Within a couple of years, every entrant in the ImageNet competition was using a
neural network. By the mid-twenty-tens, neural networks trained on G.P.U.s were
identifying images with ninety-six-per-cent accuracy, surpassing humans. Huang’s
ten-year crusade to democratize supercomputing had succeeded. “The fact that
they can solve computer vision, which is completely unstructured, leads to the
question ‘What else can you teach it?’ ” Huang said to me.

The answer seemed to be: everything. Huang concluded that neural networks would
revolutionize society, and that he could use CUDA to corner the market on the
necessary hardware. He announced that he was once again betting the company. “He
sent out an e-mail on Friday evening saying everything is going to deep
learning, and that we were no longer a graphics company,” Greg Estes, a
vice-president at Nvidia, told me. “By Monday morning, we were an A.I. company.
Literally, it was that fast.”

“Hey! No bootleg recordings of the show!”
Cartoon by Pia Guerra and Ian Boothby

Around the time Huang sent the e-mail, he approached Catanzaro, Nvidia’s leading
A.I. researcher, with a thought experiment. “He told me to imagine he’d marched
all eight thousand of Nvidia’s employees into the parking lot,” Catanzaro said.
“Then he told me I was free to select anyone from the parking lot to join my
team.”

Huang rarely gives interviews, and tends to deflect attention from himself. “I
don’t really think I’ve done anything special here,” he told me. “It’s mostly my
team.” (“He’s irreplaceable,” the board member Jim Gaither told me.) “I’m not
sure why I was selected to be the C.E.O.,” Huang said. “I didn’t have any
particular drive.” (“He was determined to run a business by the time he was
thirty,” his co-founder Chris Malachowsky told me.) “I’m not a great speaker,
really, because I’m quite introverted,” Huang said. (“He’s a great entertainer,”
his friend Ben Bays told me.) “I only have one superpower—homework,” Huang said.
(“He can master any subject over a weekend,” Dwight Diercks, Nvidia’s head of
software, said.)

Huang prefers an agile corporate structure, with no fixed divisions or
hierarchy. Instead, employees submit a weekly list of the five most important
things they are working on. Brevity is encouraged, as Huang surveys these
e-mails late into the night. Wandering through Nvidia’s giant campus, he often
stops by the desks of junior employees and quizzes them on their work. A visit
from Huang can turn a cubicle into an interrogation chamber. “Typically, in
Silicon Valley, you can get away with fudging it,” the industry analyst Hans
Mosesmann told me. “You can’t do that with Jensen. He will kind of lose his
temper.”

Huang communicates to his staff by writing hundreds of e-mails per day, often
only a few words long. One executive compared the e-mails to haiku, another to
ransom notes. Huang has also developed a set of management aphorisms that he
refers to regularly. When scheduling, Huang asks employees to consider “the
speed of light.” This does not simply mean to move quickly; rather, employees
are to consider the absolute fastest a task could conceivably be accomplished,
then work backward toward an achievable goal. They are also encouraged to pursue
the “zero-billion-dollar market.” This refers to exploratory products, such as
CUDA, which not only do not have competitors but don’t even have obvious
customers. (Huang sometimes reminded me of Kevin Costner’s character in “Field
of Dreams,” who builds a baseball diamond in the middle of an Iowa cornfield,
then waits for players and fans to arrive.)



Perhaps Huang’s most radical belief is that “failure must be shared.” In the
early two-thousands, Nvidia shipped a faulty graphics card with a loud,
overactive fan. Instead of firing the card’s product managers, Huang arranged a
meeting in which the managers presented, to a few hundred people, every decision
they had made that led to the fiasco. (Nvidia also distributed to the press a
satirical video, starring the product managers, in which the card was repurposed
as a leaf blower.) Presenting one’s failures to an audience has become a beloved
ritual at Nvidia, but such corporate struggle sessions are not for everyone.
“You can kind of see right away who is going to last here and who is not,”
Diercks said. “If someone starts getting defensive, I know they’re not going to
make it.”

Huang’s employees sometimes complain of his mercurial personality. “It’s really
about what’s going on in my brain versus what’s coming out of my mouth,” Huang
told me. “When the mismatch is great, then it comes out as anger.” Even when
he’s calm, Huang’s intensity can be overwhelming. “Interacting with him is kind
of like sticking your finger in the electric socket,” one employee said. Still,
Nvidia has high employee retention. Jeff Fisher, who runs the company’s consumer
division, was one of the first employees. He’s now extremely wealthy, but he
continues to work. “Many of us are financial volunteers at this point,” Fisher
said, “but we believe in the mission.” Both of Huang’s children pursued jobs in
the hospitality industry when they were in their twenties; following years of
paternal browbeating, they now have careers at Nvidia. Catanzaro at one point
left for another company. A few years later, he returned. “Jensen is not an easy
person to get along with all of the time,” Catanzaro said. “I’ve been afraid of
Jensen sometimes, but I also know that he loves me.”

After the success of AlexNet, venture capitalists began shovelling money at A.I.
“We’ve been investing in a lot of startups applying deep learning to many areas,
and every single one effectively comes in building on Nvidia’s platform,” Marc
Andreessen, of the firm Andreessen Horowitz, said in 2016. Around that time,
Nvidia delivered its first dedicated A.I. supercomputer, the DGX-1, to a
research group at OpenAI. Huang himself took the computer to OpenAI’s offices;
Elon Musk, then the chairman, opened the package with a box cutter.

In 2017, researchers at Google introduced a new architecture for neural-net
training called a transformer. The following year, researchers at OpenAI used
Google’s framework to build the first “generative pre-trained transformer,” or
G.P.T. The G.P.T. models were trained on Nvidia supercomputers, absorbing an
enormous corpus of text and learning how to make humanlike connections. In late
2022, after several versions, ChatGPT was released to the public.



Since then, Nvidia has been overwhelmed with customer requests. The company’s
latest A.I.-training module, known as the DGX H100, is a
three-hundred-and-seventy-pound metal box that can cost up to five hundred
thousand dollars. It is currently on back order for months. The DGX H100 runs
five times as fast as the hardware that trained ChatGPT, and could have trained
AlexNet in less than a minute. Nvidia is projected to sell half a million of the
devices by the end of the year.

The more processing power one applies to a neural net, the more sophisticated
its output becomes. For the most advanced A.I. models, Nvidia sells a rack of
dozens of DGX H100s. If that isn’t enough, Nvidia will arrange these computers
like library stacks, filling a data center with tens of millions of dollars’
worth of supercomputing equipment. There is no obvious limit to the A.I.’s
capabilities. “If you allow yourself to believe that an artificial neuron is
like a biological neuron, then it’s like you’re training brains,” Sutskever told
me. “They should do everything we can do.” I was initially skeptical of
Sutskever’s claim—I hadn’t learned to identify cats by looking at ten million
reference images, and I hadn’t learned to write by scanning the complete works
of humanity. But the fossil record shows that the nervous system first developed
several hundred million years ago, and has been growing more sophisticated ever
since. “There have been a lot of living creatures on this earth for a long time
that have learned a lot of things,” Catanzaro said, “and a lot of that is
written down in physical structures in your brain.”

The latest A.I.s have powers that surprise even their creators, and no one quite
knows what they are capable of. (GPT-4, ChatGPT’s successor, can transform a
sketch on a napkin into a functioning Web site, and has scored in the
eighty-eighth percentile on the LSAT.) In the next few years, Nvidia’s hardware,
by accelerating evolution to the speed of a computer-clock cycle, will train all
manner of similar A.I. models. Some will manage investment portfolios; some will
fly drones. Some will steal your likeness and reproduce it; some will mimic the
voices of the dead. Some will act as brains for autonomous robots; some will
create genetically tailored drugs. Some will write music; some will write
poetry. If we aren’t careful, someday soon, one will outsmart us.

The gross profit margin on Nvidia’s equipment approaches seventy per cent. This
ratio attracts competition in the manner that chum attracts sharks. Google and
Tesla are developing A.I.-training hardware, as are numerous startups. One of
those startups is Cerebras, which makes a “mega-chip” the size of a dinner
plate. “They’re just extorting their customers, and nobody will say it out
loud,” Cerebras’s C.E.O., Andrew Feldman, said of Nvidia. (Huang countered that
a well-trained A.I. model can reduce customers’ overhead in other business
lines. “The more you buy, the more you save,” he said.)

Nvidia’s fiercest rival is Advanced Micro Devices. Since 2014, A.M.D. has been
run by Lisa Su, another gifted engineer who immigrated to the United States from
Taiwan at a young age. In the years since Su became the head of the company,
A.M.D.’s stock price has risen thirtyfold, making her second only to Huang as
the most successful semiconductor C.E.O. of this era. Su is also Huang’s first
cousin once removed.

Huang told me that he didn’t know Su growing up; he met her only after she was
named C.E.O. “She’s terrific,” he said. “We’re not very competitive.” (Nvidia
employees can recite the relative market share of Nvidia’s and A.M.D.’s graphics
cards from memory.) Their personalities are different: Su is reserved and stoic;
Huang is temperamental and expressive. “She has a great poker face,” Mosesmann,
the industry analyst, said. “Jensen does not, although he’d still find a way to
beat you.”



Su likes to tail the incumbent, and wait for it to falter. Unlike Huang, she is
not afraid to compete with Intel, and, in the past decade, A.M.D. has captured a
large portion of Intel’s C.P.U. business, a feat that analysts once regarded as
impossible. Recently, Su has turned her attention to the A.I. market. “Jensen
does not want to lose. He’s a driven guy,” Forrest Norrod, the executive
overseeing A.M.D.’s effort, said. “But we think we can compete with Nvidia.”

On a gloomy Friday afternoon in September, I drove to an upscale resort
overlooking the Pacific to watch Huang be publicly interviewed by Hao Ko, the
lead architect of Nvidia’s headquarters. I arrived early to find the two men
facing the ocean, engaged in quiet conversation. They were dressed nearly
identically, in black leather jackets, black jeans, and black shoes, although Ko
was much taller. I was hoping to catch some candid statements about the future
of computing; instead, I got a six-minute roast of Ko’s wardrobe. “Look at this
guy!” Huang said. “He’s dressed just like me. He’s copying me—which is
smart—only his pants have too many pockets.” Ko gave a nervous chuckle, and
looked down at his designer jeans, which did have a few more zippered pockets
than function would strictly demand. “Simplify, man!” Huang said, before turning
to me. “That’s why he’s dressed like me. I taught this guy everything he knows.”
(Huang’s wardrobe is widely imitated, and earlier this year he was featured in
the Style section of the Times.)

“I’m so happy to host you! Here’s a spare towel and a guest room with no
intuitive place to hang it.”
Cartoon by Asher Perlman

The interview was sponsored by Gensler, one of the world’s leading
corporate-design firms, and there were several hundred architects in attendance.
As the event approached, Huang increased the intensity of his shtick, cracking a
series of weak jokes and rocking back and forth on his feet. Huang does dozens
of speaking gigs a year, and had given a talk to a different audience earlier
that day, but I realized that he was nervous. “I hate public speaking,” he said.

Onstage, though, he seemed relaxed and confident. He explained that the
skylights on the undulating roof of his headquarters were positioned to
illuminate the building while blocking direct sunlight. To calculate the design,
Huang had strapped Ko into a virtual-reality headset and then attached the
headset to a rack of Nvidia G.P.U.s, so that Ko could track the flow of light.
“This is the world’s first building that needed a supercomputer to be possible,”
Huang said.



Following the interview, Huang took questions from the audience, including one
about the potential risks of A.I. “There’s the doomsday A.I.s—the A.I. that
somehow jumped out of the computer and consumes tons and tons of information and
learns all by itself, reshaping its attitude and sensibility, and starts making
decisions on its own, including pressing buttons of all kinds,” Huang said,
pantomiming pressing the buttons in the air. The room grew very quiet. “No A.I.
should be able to learn without a human in the loop,” he said. One architect
asked when A.I. might start to figure things out on its own. “Reasoning
capability is two to three years out,” Huang said. A low murmur went through the
crowd.

Afterward, I caught up with Ko. Like a lot of Huang’s jokes, the crack about
teaching Ko “everything he knows” contained a pointed truth. Ko hadn’t yet made
partner at Gensler when Huang chose him for the Nvidia headquarters, bypassing
Ko’s boss. I asked Ko why Huang had done so. “You probably have heard stories,”
Ko said. “He can be very tough. He will undress you.” Huang had no architecture
experience, but he would often tell Ko that he was wrong about the building’s
design. “I would say ninety per cent of architects would battle back,” Ko said.
“I’m more of a listener.”

Ko recalled Huang challenging Nvidia’s engineering staff on the speed of the
V.R. headset. The headset originally took five hours to render design changes;
at Huang’s urging, the engineers got the speed down to ten seconds. “He was
tough on them, but there was a logic to it,” Ko said. “If the headset took five
hours, I’d probably settle on whatever shade of green looked adequate. If it
took ten seconds, I’d take the time to pick the best shade of green there was.”

The buildings’ design won several awards and made Ko’s career. Still, Ko
recalled his time on the project with mixed emotions. “The place was finished,
it looks amazing, we’re doing the tour, and he’s questioning me about the
placement of the water fountains,” Ko said. “He was upset because they were next
to the bathrooms! That’s required by code, and this is a billion-dollar
building! But he just couldn’t let it go.”

“I’m never satisfied,” Huang told me. “No matter what it is, I only see
imperfections.”

I asked Huang if he was taking any gambles today that resemble the one he took
twenty years ago. He responded immediately with a single word: “Omniverse.”
Inspired by the V.R.-architecture gambit, the Omniverse is Nvidia’s attempt to
simulate the real world at an extraordinary level of fine-grained detail. Huang
has described it as an “industrial metaverse.”

Since 2018, Nvidia’s graphics cards have featured “ray-tracing,” which simulates
the way that light bounces off objects to create photorealistic effects. Inside
a triangle of frosted glass in Nvidia’s executive meeting center, a product-demo
specialist showed me a three-dimensional rendering of a gleaming Japanese ramen
shop. As the demo cycled through different points of view, light reflected off
the metal counter and steam rose from a bubbling pot of broth. There was nothing
to indicate that it wasn’t real.

The specialist then showed me “Diane,” a hyper-realistic digital avatar that
speaks five languages. A powerful generative A.I. had studied millions of videos
of people to create a composite entity. It was the imperfections that were most
affecting—Diane had blackheads on her nose and trace hairs on her upper lip. The
only clue that Diane wasn’t truly human was an uncanny shimmer in the whites of
her eyes. “We’re working on that,” the specialist said.

Huang’s vision is to unify Nvidia’s computer-graphics research with its
generative-A.I. research. As he sees it, image-generation A.I.s will soon be so
sophisticated that they will be able to render three-dimensional, inhabitable
worlds and populate them with realistic-seeming people. At the same time,
language-processing A.I.s will be able to interpret voice commands immediately.
(“The programming language of the future will be ‘human,’ ” Huang has said.)
Once the technologies are united with ray-tracing, users will be able to speak
whole universes into existence. Huang hopes to use such “digital twins” of our
own world to safely train robots and self-driving cars. Combined with V.R.
technology, the Omniverse could also allow users to inhabit bespoke realities.



I felt dizzy leaving the product demo. I thought of science fiction; I thought
of the Book of Genesis. I sat on a triangular couch with the corners trimmed,
and struggled to imagine the future that my daughter will inhabit. Nvidia
executives were building the Manhattan Project of computer science, but when I
questioned them about the wisdom of creating superhuman intelligence they looked
at me as if I were questioning the utility of the washing machine. I had
wondered aloud if an A.I. might someday kill someone. “Eh, electricity kills
people every year,” Catanzaro said. I wondered if it might eliminate art. “It
will make art better!” Diercks said. “It will make you much better at your job.”
I wondered if someday soon an A.I. might become self-aware. “In order for you to
be a creature, you have to be conscious. You have to have some knowledge of
self, right?” Huang said. “I don’t know where that could happen.” ♦



Published in the print edition of the December 4, 2023, issue, with the headline
“The Chosen Chip.”



Stephen Witt published “How Music Got Free” in 2015.



