The pursuit of problem solving: Inside Nvidia’s plans to democratise chips and AI in India and the rest of the world

When he joined the company over a decade ago, Vishal Dhupar would struggle to explain to people what Nvidia does. Over time he has realised that the easiest way to sum up its mission is that Nvidia solves problems, the ones that traditional computing cannot.

“The problems that we’ve tried to solve are all the difficult problems,” Dhupar tells TheSpuzz Online.

Right out of the gate, when it was founded in 1993, Nvidia figured out that personal computers could do more. They were great at office productivity tasks, but the return on investment (ROI) wasn’t sufficient: you bought an expensive, state-of-the-art machine and then used it for only eight hours a day. It was becoming increasingly clear that the PC had to transition seamlessly into an entertainment device after work.

Nvidia wanted to pursue the science and art of computer graphics. It saw an opportunity and decided to work on it, and gaming was where it would start. Once in, there was another problem.

Back in the day, people generally gamed on consoles or at large arcades because they couldn’t stay in a digital environment for too long. The need of the hour was to build realism into the virtual world by bringing physics into it.

Nvidia solved that problem, too, and became the “de facto standard on the gaming part,” Dhupar says.

It created a market. The rest, as they say, is history.

Vishal Dhupar, Asia-South MD, Nvidia

Need for computing

Developers have been an intrinsic part of the journey. The world Nvidia is trying to build needs their ‘endorsement.’ If the company can crack the three-part formula of platform size, longevity, and differentiation, and bring the technology to them, developers will do some of their best work; progress will follow as Nvidia moves on to the next level of problems.

One of the ways it has tried to attract their attention over the last several years is the GPU Technology Conference (GTC). Before the pandemic hit, GTC was held in person near the company’s headquarters in California. CEO Jensen Huang would usually deliver a keynote to kick off the proceedings. He would also travel to other regions, Europe or China for instance, and deliver the same sessions in a different format so that people could really understand the underlying need.

“If the problem hasn’t been solved, then very few people understand what is the method in which it can be solved. So, you need to create awareness,” Dhupar says.

Many Indians would obviously struggle to travel overseas to attend, and even though Nvidia has been trying to bridge the gap, developers here remained at arm’s length from the real action. But when the pandemic hit, virtual life became the way forward. The conference, too, turned virtual, meaning there are no longer any walls or ceilings on attendance, and the results speak for themselves.

“Because of that, against a few hundred that could go from India to attend the GTC, over 50,000 people attended it last year,” Dhupar adds.


Clearly, there is strong recognition of the need for the kind of computing Nvidia does.

The objective is to target developers, researchers, enterprise business leaders, IT decision makers, students, data scientists, creators, and people in the venture capital business. Nvidia also looks at the multitude of industries that can benefit. Automotive is a big beneficiary. Healthcare, which has benefited enormously from deep learning, is another. Higher education, manufacturing, and game development are some of the other target industries.

Registration is free, and after it you get access to thousands of sessions. You can choose and curate the topics that interest you, ranging from autonomous systems to data science, from collaboration to newer technologies like IoT, 5G, and edge, or a combination of these, all overlaid with AI. And because the sessions happen in real time, you can also engage in live Q&A.

“People in the world of intelligence are looking at the world becoming more autonomous. Our belief is, anything that moves will finally be autonomous, and when people learn the recipe of autonomous, they get first-hand experience through people who are doing it, those who are dealing with some of the challenges and how they are overcoming it. We talk about things like that,” Dhupar explains.

Once you have that ‘awareness,’ you can reach out to Nvidia and the company will try to solve your problem the way you defined it. If you believe you need a lot more education, even that is available in different formats, he adds.

“One of the purposes of Nvidia is to democratise AI.”

The company runs a community program called Birds-of-a-Feather (BOF) to connect people so they can come together, brainstorm, and build things. As and when things take a ‘commercial’ turn for the better, Nvidia will also “deal with it in their due course of journey.”

At the graduate level, it runs a ‘new campus graduate program’ where it ties up with certain institutes to hire people “at the highest end of the engineering value chain and allow them to grow in an organic way.” Nvidia does not believe too much in lateral hiring.

“When the new campus graduates come, they first come and benefit with us for the first six months in their final year of graduation. If they like it and we like them, then we go in for a long-term relationship with them, what we call as full-time employees and many of them get recruited,” Dhupar says, adding “but that’s not to say we only go to predefined engineering colleges.”

And inclusivity is ‘extremely important.’ Nvidia believes that if “we have representations from all over, it just grows the culture of the company and the innovations happen in a different way.”

Need for change

The GPU is the fundamental building block of the compute architecture; Dhupar calls this a fact of life. But the pursuit is not the invention called the GPU. It is the problems Nvidia can solve, and the way it solves them is through a mechanism called accelerated computing.

Nvidia GeForce RTX 3000 series GPUs

Accelerated computing takes all the goodness of the CPU and then solves its weaknesses by way of the GPU. But just because multiple semiconductors are involved doesn’t mean the computing need is solved. You have to really solve it at the application layer.

Below this application layer, all the way down to the semiconductor, a lot of abstractions are in play: middleware, libraries and SDKs, and then frameworks, before you get to the application itself. You basically have to look at the stack in its entirety.

For years, the industry benefited from Moore’s law, the popular reading of which is that computing performance doubles every 18 months. Write software that is predominantly logic, run it on a latency-optimised processor like the CPU, and the cost of your compute halves every 18 months. That’s no longer the case, which is to say performance is not going to improve the way we were all used to.
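
To put rough numbers on that old curve (a back-of-the-envelope illustration, not a figure from the interview): if a unit of compute costs c0 today, Moore’s-law scaling implied cost(t) ≈ c0 × 2^(−t/18), with t in months, so the same workload would cost about c0/2 after 18 months and c0/4 after three years. It is the flattening of this curve that forces the architectural rethink described next.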

Even if the industry continued to write software ‘traditionally,’ it would be affected. But this has also come at a time when software is being written ‘differently,’ in what’s called software 2.0, where the work of software engineers is augmented and the machine writes the software in conjunction with them. Needless to say, all this called for a shift, a reimagining if you will, in the way computing is done.

“When the two come together, it tells us why X said Nvidia is fascinating. Writing software that no human could have written earlier is the reason why Nvidia exists,” Dhupar says.

Initially, when you had a problem and someone had to write code for you, they’d write it to the best of their understanding. It may not have been perfect, but it would do just fine. You’d press a button, and hundreds of copies would be published, shrink-wrapped, and sold.

But what happens when the software becomes larger than life, with real-time intelligence, and you can’t shrink-wrap it anymore? You now need to serve it as a continuum.

Dhupar gives the analogy of a smart car. The car of tomorrow is going to say, “why do you need a key to open me? Why can’t your biometric be good enough for me to open the door for you?” So rather than you opening the door, the car asks whether it can open the door for you, because it recognises you. That kind of continuously updated software is very different from the shrink-wrapped kind, and it cannot fit onto one server.

Earlier, one server was your compute node. Now the sum total of all your servers, the data centre, is your compute node, because the software is really big and has a lot of intelligence inside. But with several computers inside your data centre, you cannot run the setup on a single kind of processor. The CPU is good at some things and not so much at others, so it offloads the heavy compute to the GPU.

Moreover, the volume of data has increased so much that you really need to protect its integrity at the source where it is generated, which requires another kind of programmable processor: the data processing unit, or DPU.

Suddenly, an architecture that used to be predominantly CPU-only has three processors underlying it.

“The larger question is that if you have three pilots which are going to fly your plane, how do you do it with a single operating methodology?”

That’s where Nvidia’s Compute Unified Device Architecture, or CUDA, comes into play.

Whether you’re buying a gaming card or eyeing the world’s most advanced data centre, CUDA becomes the common stream, an abstraction layer over these three processors. On top of that CUDA layer, you develop the middleware to accelerate your applications, and the frameworks follow.
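
For a flavour of what programming against that layer looks like, here is a minimal CUDA C sketch in which the CPU hands a data-parallel computation off to the GPU. It is an illustrative example under the simplest possible setup, not code from Nvidia or the interview.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative sketch: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // one million elements
    const size_t bytes = n * sizeof(float);

    // Unified memory is visible to both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The CPU orchestrates and offloads the parallel arithmetic to the GPU.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);      // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Even in this toy example, the division of labour Dhupar describes is visible: the CPU sets up the problem and orchestrates, while thousands of GPU threads do the repetitive arithmetic in parallel, and CUDA hides the details of the hardware underneath.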

Dhupar gives another analogy. Microsoft Word is a pretty popular platform. You would have noticed of late that it is predicting your text better and correcting your grammar better, and in fact it is doing all this almost at the speed at which your mind works. That’s good, because you are able to save time. But then you’re probably also asking yourself: at what cost is this really coming to me?

“If the same problem was solved just using the CPU, it would have been approximately 20 times slower and about five times costlier. But due to computing architecture changes, it now comes at 1/5 the cost and you are able to do it in real time,” he says.

That is what Nvidia’s big focus is.

Need for choice

As a computing platform company, Nvidia comes in different formats. You can just buy the chips and the cards from it, if that is what you want. If you want the entire engine, you can do that too. A classic example is the 2020 agreement between Mercedes-Benz and Nvidia under which the companies will split the economics from every autonomous vehicle that comes out in 2024.


“We want to give you a choice. We believe that when customers have a choice, they can break the status quo,” Dhupar says, adding “Nvidia is available in whichever format you want and we’re happy to work with you in all of them, and our idea is to be a platform rather than being a pure computer systems company.”

Nvidia works in the domains of graphics, high-performance computing, and AI, and in each of these verticals it has a dedicated community of fans, enthusiasts, and critics. One challenge gamers in particular are facing is the unavailability of graphics cards, as the mining community snaps up whatever it can source.

“If the miners want to take advantage of a general-purpose gaming card, but they have a specific need, then we transform graphic cards which are specific for their need for the hash compute and allow the gamers to get what those cards were designed for,” he reiterates, adding “we continue to make sure that we make them available to the community.”

That gamer community has also been patiently waiting for Nvidia to get into consoles in a big way. The Tegra chip that powers the hugely popular Nintendo Switch will reportedly cease production this year, and there is no word on if, or when, a successor is coming. Generally speaking, Nvidia has barely scratched the surface on this front, even as AMD continues to dominate the console space with barely any competition. Why Nvidia is sitting out the console wars is the million-dollar question. Dhupar clears the air somewhat, and it’s not good news for those waiting.

“The mission of the company is to solve problems which are not solved and one of the larger problems that needed to be solved in the area of system-on-chips was in a new device that is going to get very quickly smart—aka the automotive industry. And we have been working towards solving that problem largely, and that’s where our focus is predominantly from a start of production (SOP) perspective.”

For what it’s worth, the company is doubling down on GPUs, which is to say the PC is where it expects more traction and innovation.

It is simultaneously working to keep the cost of acquisition relatively low through its GeForce Now cloud service, which lets you virtually rent a GPU for any device, including a Chromebook, a MacBook “which has no graphic card from our side,” or even your phone. It is not available in India yet, though.

“It is now a well-established fact that the more you buy from us, the more you save. So, our total cost of ownership is by far the lowest in the industry. There is no equivalent of that. We do it faster and at a lower cost than any other computing model does.”

Nvidia expects GPU supply issues to continue well into 2022 even as demand spikes to an all-time high.

“For sure there is a shortage and we want to make sure that we bring it to you as quickly as possible for your specific needs, by innovating on the solutions that you need,” Dhupar adds.

Need for focus

Nvidia has always been a fabless company because it saw the value of its objectives in getting its chips manufactured by those who are competent to do it. It is business as usual, and the constraints make you decide to go a certain way. Anybody can make chips, but it should be done for the right reasons, Dhupar says. That’s true for a country like India as well.

“India needs to decide for itself what its goal is and what are the problems that it is trying to solve. If the country believes it has the right resources, the right objectives, the right business reasons, it should absolutely get into chip manufacturing. Deploy it and make sure that it is always state of the art, solving the problem it set out to solve,” Dhupar says.

India is a very important market for Nvidia, and that’s why CEO Jensen Huang came here in 2004, “long before it was fashionable to come and open a Development Centre here.” The company sees, and encourages, Indians coming to work at the highest end of its engineering value chain. It has three design centres here: in Bangalore, Pune, and Hyderabad.

Even as it gears up for another GTC, which kicks off on 8 November 2021, Nvidia is once again trying to attract the most brilliant minds, “those who can come and solve the problems that humanity is wanting to solve.”

“Hopefully they’ll understand our approach and together we can build that whole story up.”


Originally appeared on: TheSpuzz
