Tech pioneer explains the evolution of digital twins

In order to understand the recent excitement about Nvidia’s Omniverse for engineers, it may be helpful to understand how digital twins evolved as a way to bring digital transformation to physical things. VentureBeat caught up with Michael Grieves, who is credited with originating the concept and coining the term “digital twin” in 2002 while elaborating on best practices for bringing digital transformation to the design, manufacture, and support of physical products. Technically speaking, however, that credit is only half-true: while Grieves helped frame the key concepts, his friend, NASA researcher John Vickers, actually suggested the term for the emerging field in 2010.

After a long career in industry, Grieves earned a doctorate from Case Western and then “failed miserably” at retirement. He began working with the auto industry to explore how it could use digital transformation to change the way it designs, tests, builds, and supports products. One day, an old colleague from EDS invited him to help in a related field called product lifecycle management (PLM).

Over the ensuing years, PLM became the cornerstone for the development of digital twin-related tools and technologies. The three major PLM vendors, Siemens, PTC, and Dassault Systèmes, now all manage portfolios of digital twin offerings for various industries.

Michael Grieves was there at the beginning, as the industry was just starting to shift from moving around 2D blueprints to 3D models. Here, he elaborates on how the industry has evolved, where he sees it today, and how companies can prepare as new digital twins infrastructure, like the Omniverse for engineers, goes into overdrive.

Here is the conversation, edited for clarity and brevity:

A conversation with Michael Grieves

VentureBeat: How did the whole idea of digital twins come together as an important concept to improve on what people were already doing?

Grieves: I realized that the best use of information in building products was to find ways to replace wasted physical resources. The idea was to bring attention to how information could make us more effective and efficient at the tasks involved in creating products: designing a product, being able to manufacture it, and, when a product fails, being able to figure out what’s wrong and support it.

The digital twin is the ability to have that information about my physical things without being in close proximity to them, and to use that information to be more effective and efficient.

VentureBeat: What about other kinds of things like software, supply chains, or cloud resources that are not so physical?

Grieves: I initially started with tangible things, because people could figure out that I have a rocket ship and I need the information about that rocket ship, mainly because once you launch it, you can’t get into physical proximity with it. But, quite frankly, a digital twin can be anything we can visualize. This includes processes, logistics, economic systems, and supply chains. I intentionally stayed away from non-tangible things to begin with, because I thought that would confuse the issue. But if you look at how this has started to explode, it now includes all kinds of non-tangible things as well.

VentureBeat: How has the field of PLM evolved since you first started formalizing it in this way?

Grieves: I was way ahead of the curve when I started talking about these ideas. The ability of computers to do these sorts of things was relatively limited. We were just moving into the ability to have 3D models, and the ability to model and simulate behavior was pretty much in its infancy. But, thanks to Moore’s Law, we have dramatically increased the amount of computing capability to make these sorts of things feasible.

VentureBeat: With the advent of new domains like deep learning, tools for automatically generating models, and the metaverse for engineers, how would you frame the way companies can think about these things practically?

Grieves: That’s sort of why I divided digital twins into different types depending on where they are in the product lifecycle. My perspective is that the digital twin precedes the physical one. If you intend to have a product, you start with a digital twin first to identify all the potential problems.

Building the digital twin at the beginning allows you to make all your mistakes, virtually, if you will, where there are no ramifications if I cause a catastrophic failure. It’s easy to just rerun the simulation again, compared to immediately trying to build something after we get a new product idea.

When we get into having an actual physical product, we have what I call the Digital Twin Instance, which is now the twin of a specific thing, not just a general-purpose one. I want to track that throughout its entire life. If I collect all the information from all my digital twin instances, I have a Digital Twin Aggregate, which then allows me to use techniques such as machine learning or AI to do things like predict failures before they happen, so I can do something about them.

This is where I sort of differ from Industry 4.0. They have adopted the perspective of “how do I take a failure and drop the amount of time to remediate it?” But from my perspective, I never want to have a failure. I want to predict that something is going to fail and fix it first. So, if I have a glitch on a factory floor, I’d like to know that in four hours there’s going to be a bottleneck in this particular area and go fix it before the bottleneck ever occurs.

Image: the three types of digital twins. A Digital Twin Prototype covers products that can be made, a Digital Twin Instance covers a specific product that has been made, and a Digital Twin Aggregate covers all products that have been made.
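
To make that taxonomy concrete, here is a minimal sketch in Python of how the three types might be represented. The class names, fields, and the drift check are illustrative assumptions, not any vendor’s API: the prototype holds the design definition, each instance records readings from one physical unit over its life, and the aggregate pools instances so fleet-wide patterns, such as early signs of failure, can be analyzed.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List


@dataclass
class DigitalTwinPrototype:
    """Design-side twin: the product that can be made."""
    model_name: str
    design_parameters: Dict[str, float]


@dataclass
class DigitalTwinInstance:
    """Twin of one specific physical unit, tracked over its life."""
    serial_number: str
    prototype: DigitalTwinPrototype
    sensor_history: List[Dict[str, float]] = field(default_factory=list)

    def record(self, reading: Dict[str, float]) -> None:
        self.sensor_history.append(reading)


@dataclass
class DigitalTwinAggregate:
    """Pool of all instances of a prototype, used for fleet-wide analysis."""
    instances: List[DigitalTwinInstance]

    def fleet_average(self, metric: str) -> float:
        values = [r[metric] for inst in self.instances
                  for r in inst.sensor_history if metric in r]
        return mean(values) if values else float("nan")

    def flag_drift(self, metric: str, tolerance: float) -> List[str]:
        """Crude stand-in for predictive analytics: flag units whose
        latest reading drifts far from the fleet average."""
        baseline = self.fleet_average(metric)
        flagged = []
        for inst in self.instances:
            if not inst.sensor_history:
                continue
            latest = inst.sensor_history[-1].get(metric, baseline)
            if abs(latest - baseline) > tolerance:
                flagged.append(inst.serial_number)
        return flagged


# Illustrative usage with made-up readings:
proto = DigitalTwinPrototype("pump-x", {"max_rpm": 3000.0})
unit_a = DigitalTwinInstance("SN-001", proto)
unit_b = DigitalTwinInstance("SN-002", proto)
unit_a.record({"vibration": 0.8})
unit_a.record({"vibration": 0.9})
unit_b.record({"vibration": 0.8})
unit_b.record({"vibration": 2.9})

fleet = DigitalTwinAggregate([unit_a, unit_b])
print(fleet.flag_drift("vibration", tolerance=1.0))  # -> ['SN-002']
```

In this toy version, the aggregate’s role is simply to give each unit’s readings a fleet-wide baseline to be compared against, which is the essence of the “predict and fix first” posture Grieves describes.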

VentureBeat: How do you think the field of digital twins can benefit from lessons of DevOps, where teams are doing more planning and testing upfront and incrementing updates in smaller units of value?

Grieves: As always, the unusual suspects tend to bite you. And the ability to do these things digitally allows me to cast a much wider net in terms of the parameters that I can test for, assuming I get my physics right. For example, in crash testing, we may only be able to afford crash tests at certain orientations because of the cost of physical crash testing. With virtual crash testing, I can crash test every compass point on the car and determine whether, for example, I have a crumple-zone problem that shows up at an oblique angle but not in a head-on collision.

So it gives me a much richer testing environment than I think I can have from doing physical testing. I have proposed that the only real way to know whether a manufactured product is going to meet its requirements is to test it to destruction. Unfortunately, you can’t sell a whole lot of those things once you do that, but if I could test the digital twins to destruction, and if I had confidence that my digital twin reflected its physical counterpart, then I have great confidence that that product is going to perform the way it needs to perform.
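
The wider net Grieves describes is, in effect, a parameter sweep that would be prohibitively expensive to run physically. The sketch below illustrates the idea in Python; `simulate_crash` is a stand-in for a real physics solver, and every number is invented for illustration.

```python
import math
from typing import Dict, List, Tuple


def simulate_crash(impact_angle_deg: float) -> Dict[str, float]:
    """Placeholder for a real crash solver (e.g., a finite-element run).
    Returns a made-up intrusion metric that happens to peak at oblique angles."""
    oblique_factor = abs(math.sin(math.radians(2 * impact_angle_deg)))
    return {"cabin_intrusion_cm": 10.0 + 8.0 * oblique_factor}


INTRUSION_LIMIT_CM = 15.0  # illustrative pass/fail threshold

# Sweep every orientation instead of the handful a physical test budget allows.
failures: List[Tuple[int, float]] = []
for angle in range(0, 360):  # 1-degree steps around the vehicle
    result = simulate_crash(angle)
    if result["cabin_intrusion_cm"] > INTRUSION_LIMIT_CM:
        failures.append((angle, result["cabin_intrusion_cm"]))

print(f"{len(failures)} orientations exceed the intrusion limit")
print("worst cases:", sorted(failures, key=lambda f: -f[1])[:3])
```

In this toy model the worst intrusion shows up near 45 degrees, the kind of oblique-angle weakness a small set of physical head-on and side tests could easily miss.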

VentureBeat: What do you see as some of the big challenges in getting the data, organizing the systems that feed into digital twins, and making all of that more fluid and intelligent?

Grieves: I think it’s a matter of taking disparate data sources and being able to pull them together so that we really have actualized digital twins. One of the issues is that it takes a lot of computing power. But this is starting to come. We passed 54 billion transistors on a chip this year, and by 2030 I’m predicting 6-7 trillion transistors on a chip. So the amount of computing power we have is going up exponentially. From a technology perspective, I’m pretty bullish that we will have enough computing power to do the sorts of things that I’m talking about doing.
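
As a quick back-of-envelope check of the growth rate those figures imply (the 2021 baseline and 2030 target are taken from the interview; using the midpoint of the 6-7 trillion range is an assumption):

```python
import math

# Figures as stated in the interview; end count is the midpoint of the
# "6-7 trillion" prediction. Purely a back-of-envelope sketch.
start_year, start_count = 2021, 54e9
end_year, end_count = 2030, 6.5e12

doublings = math.log2(end_count / start_count)
months_per_doubling = (end_year - start_year) * 12 / doublings
print(f"{doublings:.1f} doublings, roughly one every {months_per_doubling:.0f} months")
# ~6.9 doublings, about one every 16 months, which is a faster cadence than
# the classic two-year Moore's Law doubling.
```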

I think a bigger issue is a cultural problem. For example, there are still organizations that think you need to physically test something to be able to draw a conclusion. One cultural issue is that if you hype a technology before it’s ready, people get discouraged. A second issue is that we now have digital natives coming into organizations who are absolutely comfortable with this, but senior management believes that unless they can pick something up and move it with their hands, it is not real. And then you have what I call “retirement-itis,” where people are worried about this big change. They are just trying to get to retirement before having to take a risk on something they don’t quite believe or understand.

VentureBeat: What are you working on now?

Grieves: I am still looking at what kind of underlying structures we are going to need. I am interested in the platform piece, how we create platforms for this, and what that is going to look like. I am also following how this stuff is generally exploding, where it’s going, talking about the opportunities, but warning about the pitfalls.


