The Power of Digital Twins

February 25, 2026

Read the Transcript


Dr. Michael Gregg:

I’m Mike Gregg. I’m the director of Aerospace Systems at AFRL, and I’m really honored to be here today and honored to be on the panel with my esteemed colleagues here. I think it’s going to be a really good discussion.

Digital twins. You ask five people what it is, you probably get eight different definitions. Just really depends on where you sit, I think. So, we’ve got some interesting things to talk about, and I’m going to turn it over to do a lightning round of introductions. Austen, we’ll start with you.

Austen Bruhn:

Yeah. I’m Austen Bruhn. I’m a solution engineer at Coder. And we primarily focus on cloud development environments, really keeping anything secure, especially when it comes to AI and development. But really, my background comes from my time at Lockheed Martin, working in their space systems company, working with digital simulators as a product owner for multiple vehicles. And I’m very passionate about this. I think that it’s really, really important as we go forward into better war gaming as well as better resilience in the future.

Karr W. “Win” Farrell, Jr:

Thank you so much. Win Farrell here. Good morning. I’m with Amentum. And I am the digital transformation officer at our Space Force Range Contract, where we have the operational responsibility of launching spacecraft from Cape Canaveral and Vandenberg as well as others. We take this responsibility very seriously and we want to move these spaceports into the spaceport of the future. We do not believe it’s possible to really do this without digital twins and some other aspects of that as well. So, we are looking forward to chatting with you about some of the opportunities and challenges of using digital twins on the spaceports. Thank you.

Michael Foust:

Thanks, Win. Good morning. Michael Foust. I’m with the Boeing Company. Currently work in the T-7 program for training and sustainment. Like the other panelists have said, this is really important, and it’s all about readiness. And I think from a platform level, really excited to talk about how we leverage this to keep airplanes up, keep flying, keep training pilots, and keep operational.

Peter Sommerkorn:

Good morning. I’m Peter Sommerkorn with Pratt & Whitney, and I have responsibility for all of our military development programs at Pratt & Whitney. I’m excited to be here today as a lot of the digital transformation that we’re doing starts in our advanced programs, where I’ve been working for the last few years. Prior to this, I’d been working on our 6th-gen programs and then had some roles in strategy before that. So, happy to talk, again, today about how this is really changing the way we’re doing things.

Dr. Michael Gregg:

All right. We’re going to dive right in. Gentlemen, digital twins hold a promise. One of the main promises is that they can tie research and development to acquisition, to test, to operations. Can you all speak about how we can implement a seamless transition across all those epochs? And are digital twins going to allow us the ability to bypass physical tests at some point? Peter, we’ll start with you.

Peter Sommerkorn:

Yeah, thanks. I have a few thoughts on this. So, I mentioned in my intro that we’re in the middle of a digital transformation at Pratt & Whitney. And I’ll say that the way we operate now is changing to speed up our development processes so that capabilities can be developed faster. We often talk about a valley of death when you go from development to trying to field a capability. And I’ll say that, in my mind, every program going forward, we’re starting with a full digital approach. That’s a digital thread that starts in development, runs through design, and then goes to testing and manufacturing and eventually out into sustainment. This digital approach is really what enables us to deliver capability at the speed of relevance. And that’s critical, because delivering at the speed the need demands is what gets more of the capability out into the field.

I’ll say there are two big advantages in my mind as we’ve been working through this that I see. The first is that this digital approach, when we’re developing and getting ready to test engines to bring new capabilities to bear, it allows us to bring learning and troubleshooting to the left and learn those things earlier. And that’s important because that reduces surprises, it reduces turnbacks, it might eliminate test blocks, it shortens the whole development timeline.

The second thing that it does is it lets us make real-time decisions. So, I’ll talk for just a minute about how this has worked and now how we’re doing it going forward. In the past, you’ll have a design for a new engine or an engine that we’re working on, and there are many different teams that work on different pieces of this engine and different subsystems. And they’ve got their own sets of data, and they’ve got their own requirements, and they all live in their own almost silos and sets of applications that they use. You’ve got to bring that all together, and a lot of it is a manual process. And then when they get ready to go test something new, they’ll validate and set the assumptions, and then we’ll go build an engine and go run it. And that build process and that testing process takes months. It is not a short process. And then they’ll get back together and look at what we’ve learned. And then we’ll incorporate, update our assumptions, and go run an analysis, and then go rebuild and retest. And that process is a very time-consuming process.

As we’re moving to this digital approach, what we’re doing now is, of course, you have all these teams, but they’re starting with a single source of truth or a system that ties all of their assumptions and all of their data together. And then when we get ready to update the assumptions before our test, now what we can do is go run scenarios and simulate them with our digital model and our digital thread that’s running through this entire process. And so, we’ll run through that and we’ll learn some things. We’ll see if maybe there are interactions with some subsystems that we wouldn’t have known until you’ve gone and run an engine in the old way of doing things. Or you may find other issues that come up. And then we’ll go get back together, and then we can go update our assumptions and go through that cycle a few times. That takes weeks instead of months. And that allows us that when you go to the first engine build and test, you have significantly decreased the risk on the program and shrunk that timeline to be able to finish the testing successfully.
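The scenario-screening loop Peter describes, where teams share one source of truth and run what-if simulations before the first engine build, can be sketched in code. This is purely illustrative; the parameter names, formulas, and limits below are invented placeholders, not Pratt & Whitney's models:

```python
from dataclasses import dataclass

# Hypothetical shared parameter store: one set of assumptions all
# subsystem teams read from, instead of per-team data silos.
@dataclass
class EngineAssumptions:
    turbine_inlet_temp_k: float
    fan_pressure_ratio: float
    bypass_ratio: float

def simulate(assumptions: EngineAssumptions) -> dict:
    """Toy surrogate model: derives quantities that different subsystem
    teams care about (both formulas are made-up placeholders)."""
    thrust = assumptions.fan_pressure_ratio * assumptions.bypass_ratio * 10.0
    blade_stress = assumptions.turbine_inlet_temp_k / 2.0
    return {"thrust_kn": thrust, "blade_stress_mpa": blade_stress}

def screen_scenarios(base: EngineAssumptions, temps: list[float]) -> list[float]:
    """Run what-if scenarios against the shared model and flag any
    temperature that pushes blade stress past an (invented) limit."""
    flagged = []
    for t in temps:
        result = simulate(EngineAssumptions(t, base.fan_pressure_ratio,
                                            base.bypass_ratio))
        if result["blade_stress_mpa"] > 900.0:  # placeholder limit
            flagged.append(t)
    return flagged

base = EngineAssumptions(turbine_inlet_temp_k=1700.0,
                         fan_pressure_ratio=4.0, bypass_ratio=0.6)
print(screen_scenarios(base, [1600.0, 1750.0, 1850.0]))  # [1850.0]
```

The point of the sketch is the workflow, not the physics: because every team queries the same assumptions object, a scenario sweep surfaces a subsystem interaction in seconds that previously required a physical build and test.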

Sorry, I know my mic keeps cutting out.

So, that’s incredibly important. I’ll give you a real-world example that we’re seeing here. We recently had an advanced propulsion system that we wanted to evaluate in a new platform. And as we’re doing that, again, you have all these disparate teams we normally bring together, and you’ve got to look at how all the subsystems interact, evaluate performance. There’s just the physical fit question. And that process can normally take six months or something like that. When we did this recently, it took six weeks, because we have this digital thread with a digital twin that runs through the entire process. And I think that’s important because the timelines on which new capabilities need to be brought to bear are shortening with all of the threats that are emerging. And so, in order to get from development to actually fielding capability, if we can shorten that time frame, it gives the, I’ll say, weapons systems designers and the warfighter real options that they can use for problems they’re seeing now instead of having to reach for something that’s old.

Dr. Michael Gregg:

Thanks, Peter. Austen, you might have a different perspective.

Austen Bruhn:

Yeah. So, when I think about kind of what digital twins offer in that regard, it’s really around the digital lifecycle, right? And it’s interesting, as Peter was talking, he was talking about physical systems and how it takes six months to go back and change something if something’s wrong. And I come from a software background, I come from an infrastructure background where we have a DevSecOps process. We have an MVP, and then we iterate and reiterate and reiterate. And what digital twins offer in that regard is an amazing way where we can kind of start with an MVP and continue to iterate through different scenarios and different abilities, especially as requirements are going to change over time, versus having to build the system and rebuild the system and rebuild the system, or having to go through the painstaking waterfall process of making requirements so specific and so mission-critical and over-engineered that we spend billions of dollars on something that could cost 100 million.

Dr. Michael Gregg:

Mike, this question is for you. As I’ve gone around and visited many industry partners, it seems like I’m starting to see the impact of digital twins on the technical side bleed over into positive impact on business operations. Do you have any insight into what you’re seeing? Or does anyone on the panel?

Karr W. “Win” Farrell, Jr:

I’ve got sort of a case study here, Dr. Gregg. So, sometimes when you’re in a particular position, you’re not really ready for what’s going to happen. And we were put in a position as a contractor where there was a strategic imperative. There was something that needed to be built as rapidly as possible. There was no question about it. There was a gap, and we needed to fill that requirement as rapidly as possible. And we did look for digital twins to help us out with that. It was not an easy situation, but it was a strategic enabler.

We chose the best that we could find. We chose to work with IBM as a partner, and that was a great choice for that time. What we concentrated on early in that design cycle was this: we could not fix every part of the engineering V. We had to concentrate, working with our government partners, on which part of the engineering V we were going to attack with the digital twin. And the answer was that we chose requirements traceability. We worked with IBM on that because one of the areas that was fundamental was to really examine these tens of thousands of requirements and how they interacted with each other. In those pre-LLM days of 2018 and 2019, before LLMs and AI really took off, that meant reasoning engines and understanding how we’d be able to find these relationships amongst the requirements.

What was fascinating when we built this is that the digital twin showed aspects of these requirements and the intersections of these requirements that had not been anticipated by the human engineers who had designed the requirements in the first place. That was the value of the digital twin that we presented. And although that’s now subject to history in terms of what was built, and we’re very proud of what was built with the contractors that worked with those requirements, that was a point in time at which a digital twin did make a difference.

Dr. Michael Gregg:

Digital twins are often built in proprietary walled gardens. As we think about how we want to streamline our acquisition and pick up speed, we might consider how do we integrate a digital twin from an airframe to an engine, to the mission systems, to the autonomy, all the way down to the circuit boards. How do we do that and get through the potential pitfalls of being held hostage to proprietary IP?

Austen Bruhn:

I’ll jump on this one. So, from my time at Lockheed and just in my time working in defense in general, I’ve been the victim, especially as a simulator engineer, of this problem significantly. Every company, and rightly so, based on, unfortunately, the way the government writes these contracts, is wanting to own their domain. And so, even from the vehicle perspective, I have components that come from different places, but they all want to make it their own black box and only have to work with this one company to do an interconnect, right?

And so, I mean, even things like SpaceWire, I’ve noticed in certain companies where they’re like, “Yep, SpaceWire is an open standard and I want to use that on my satellite. However, if I flip these three bits here, here, and here, nobody else can use that anymore,” right? So, I think if we continue to allow that to happen, even in the system components, that’s going to hurt us in the digital twin side. Because what is a digital twin, right? It’s a mirror of something physical or some component that we’re trying to build. So, I think the first step that we really need to go to is ensuring that we continue to build with open standards for these vehicles and for the weapon systems that we’re working on.

And then I think it goes a step further, right? Because we’re looking at issues where… I look at F-35, for example. And once again, no fault to Lockheed, but it was designed for the government to reduce all of the risk. And so Lockheed kept everything to do with the data and managing it in O&M. And what that created was a data hostage situation, right? And so, if we continue to let those kinds of contracts exist, I think we’re going to have some serious problems. And when we look at going from just getting components to communicate with each other on the vehicles themselves, and then out into the twins, I think the next step is to really understand how we’re going to actually communicate with and actually control these twins themselves, right?

And so, I think that’s going to be a common API, common data types. That’s the only way that this is going to be possible. And I think I look back at some different GICDs that I’ve worked on in the past, and one organization needed this set of requirements, a different organization needed this set of requirements, this organization got involved, a company got involved, and then they put a big group over the top of it and said, “All right, let’s try to put all of this together into one,” and now we have a 900-page document that nobody wants to implement, nobody wants to adopt. So, I look back to ESA. I look back to something like SpaceWire, which is an open standard. It’s an international standard. It’s not export controlled. We don’t have to figure out, “Can I use this or can’t I use this?”

I’ve seen it a lot this week. We talked a lot yesterday in the panel with Lightfoot and the director of SDA, and it’s all about commercial, right? So, if the military is only going to use some type of proprietary or controlled interface to be able to communicate with these twins as well as get the data out, I think we’re going to be causing ourselves a lot of heartache when we go to try to integrate those with commercial digital twins. Because if we don’t have commercial and military working together, I think it’s just not going to work, especially in this wargaming scenario.

So, I think ultimately that’s what I see. And then we’re leaning forward, right? We’re looking at the AI era. And I think that’s a really important thing to think about because what is a digital twin except a set of processes that can be controlled by an API? And if you can put maybe an AI agent in front of that in a nice isolated environment, then I think you say, “Hey, every single company that’s going to build us a digital twin, we want a common data output, we want common interfaces to be able to control it with, and we want an MCP in front of every single one of these.” And I think we’re going to put ourselves in a place where we have an interactive and multi-tiered and multi-scalable cross-domain solution.
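The common-interface idea Austen describes, one contract that every vendor's twin implements so an agent can drive any of them the same way, can be sketched like this. The interface and the toy thermal twin below are hypothetical illustrations, not an existing standard:

```python
from abc import ABC, abstractmethod

class DigitalTwin(ABC):
    """Hypothetical vendor-neutral twin contract. If every supplier
    implements this small surface, an orchestrator or AI agent can
    drive any twin without vendor-specific glue code."""

    @abstractmethod
    def set_inputs(self, inputs: dict) -> None: ...

    @abstractmethod
    def step(self, dt_s: float) -> None: ...

    @abstractmethod
    def read_state(self) -> dict: ...

class ThermalTwin(DigitalTwin):
    """Toy implementation: first-order lag toward a commanded setpoint."""
    def __init__(self, temp_c: float = 20.0):
        self.temp_c = temp_c
        self.setpoint_c = temp_c

    def set_inputs(self, inputs: dict) -> None:
        self.setpoint_c = inputs["setpoint_c"]

    def step(self, dt_s: float) -> None:
        # Move 10% of the remaining gap per simulated second.
        self.temp_c += 0.1 * dt_s * (self.setpoint_c - self.temp_c)

    def read_state(self) -> dict:
        return {"temp_c": round(self.temp_c, 2)}

# Any twin behind the common contract can be driven the same way:
def run(twin: DigitalTwin, setpoint: float, steps: int) -> dict:
    twin.set_inputs({"setpoint_c": setpoint})
    for _ in range(steps):
        twin.step(1.0)
    return twin.read_state()

print(run(ThermalTwin(), setpoint=100.0, steps=10))  # {'temp_c': 72.11}
```

Following Austen's MCP suggestion, a server put in front of this contract would expose `set_inputs`, `step`, and `read_state` as tools an agent can call; the agent sees only the common contract, never the vendor internals.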

Dr. Michael Gregg:

Anyone else?

Michael Foust:

Yeah. Great points, Austen. I think that standardization is an important part, and it has to be a factor in how we move forward. There’s also caution with that, right? So we don’t get into analysis paralysis and stop ourselves from making progress. So, how you move quickly but move in the same way, in the same boat, all rowing in the same direction, is going to be a really critical component. And like you said, it’s not only at the subsystem level, up to the systems, and then as we communicate back to that home digital twin, wherever that’s hosted, and then how that flows back into the sustainment and other legs is going to be really critical.

So, there’s a lot of initiatives, a lot of things happening. I think there is a place there where we talk standardization and those APIs and opening that up so we’re all ready to talk to each other when the time comes.

Dr. Michael Gregg:

Win, I know you’re in the digital twin space for facilities and infrastructure. How do you see the interoperability playing out between digital twins for infrastructure and how we might go test in the future?

Karr W. “Win” Farrell, Jr:

Dr. Gregg, thank you so much. And I do cherish the time I spent at Boeing. Digital twins and digital engineering was a fundamental part of our lives. We didn’t get to the MQ-25 without it, and I know the T-7 and others are fundamental as well. And that involves multi-contractor capabilities, being able to share amongst various digital twins. This gets to the aspects of interface standards and interface control documents. This may be an old-fashioned approach in terms of some of the things that we’ve been working on from before they were called digital twins, just simulations and so forth. But we’ve all seen digital twins that are anything from a glorified Excel spreadsheet to a view of Cameo to game-based, Unity and Unreal platforms. And we’re looking at being able to integrate between all of those in the ranges that we have here.

This is the challenge we have. This is the challenge that must be resolved, because we know that even though our job, our mantra, is to preserve and enhance these spaceports, with mere humans this is not possible with the architecture that we have in place today. So, there are multiple digital twins out there already to handle various aspects of radars, weather systems, timing, the various infrastructure, the roads and so forth. But we have to get to very accurate and appropriate metrics. Metrics such as: what is a realistic estimate of the time it takes to move a particular space vehicle from the gate to the pad, given everything that’s going on at that particular range? That involves an exquisite understanding of the timing and the physical aspects of these digital twins.

How do we do that? We have a better understanding of the collaboration metadata that goes on between and shared between these digital twins. That’s where, even if we’re dealing with black boxes or gray boxes, we can share information amongst them. This is the sort of thing, whether or not we’re building an engine, an airframe, a facility, an infrastructure, or a spaceport or a planetary method of being able to deliver spacecraft into space, that we have to be able to conquer.

Dr. Michael Gregg:

All right. Let’s take this in a different direction. Both General Saltzman and General Wilsbach talked on Monday afternoon about sustainment and readiness. How do we think about using digital twins in the sustainment world? How do we get out of the break/fix mentality? Condition-based maintenance has been a promise for quite a long time. Are we getting close to finding a value there?

Karr W. “Win” Farrell, Jr:

I know my colleagues here are waiting to jump on, but I’ll just put the oar in the water first. We are building a range sustainment system right now. We believe that, as opposed to maintenance being sort of an afterthought, it’s the first thing that we think of in terms of the lifeblood of the infrastructure that we’re building. We need to know, from a cybersecurity perspective as well as an operational perspective, the readiness of operations and the econometrics of these ranges, and how we’re going to support, from a service perspective, the commercial launches, the national asset launches, and the space exploration launches that are coming right down the pike, whether it’s 2026 or 2036.

So, these are the aspects that we need in a sustainment system. And to that point, we have an existing sustainment system which is hardware-based. And the very first thing that you mentioned, Dr. Gregg, is how do we move from a hardware-based system to a software- or computer-based system? I believe it’s going to be a hybrid. I think there’s going to be a number of hardware-based and software-based systems that are collaborating and working together.

Peter Sommerkorn:

I’ll jump on this one as well. There are a couple of ways that I think we’re seeing the full adoption of digital and digital twins start to benefit how we’re thinking through sustainment and how we’re seeing it play out. One example that I’ll talk about for a minute is on our F119 engine, which is the engine for the F-22. So, you mentioned condition-based maintenance, Dr. Gregg. And I’ll say over the last, I don’t know, 10-plus years on the F119, we have been working on what we call UBL, Usage-Based Lifing. It’s the same concept as condition-based maintenance. We’ve been working on putting a program together. And for the last 10 or 12 years or whatever it’s been, this has consisted of putting together a large database, all kinds of data about how all the parts in an engine react and interact during usage, and then many, I’ll say, algorithms that help predict the actual wear, and we’ve been calibrating and working on that.

Now, this is great, and we’re actually seeing real benefits in the fleet, where now instead of just having a rule that says, “Hey, at 2,500 hours, you’ve got to bring in an engine, tear it down, and replace every part because we’re not really sure what’s going to fail when…” And then the alternative is, when things fail before 2,500 hours, you have to take them out of service. It’s expensive. And I think more importantly, it’s disruptive to operations, especially if you’re in a wartime scenario. We developed this system, I’ll say, in a digital world, but it’s partially digital. And what we’re really excited about now is seeing how this full digital approach, having a digital thread from cradle to grave and having a digital model, helps us to further capitalize on what we’ve done so far.

So, specifically in this case… I think there are two ways that I think about benefits that we’re excited to see. The first is that as we now have a digital model on new propulsion systems and develop those, instead of having rules based on the algorithms, what we can do is, as a plane is out flying and as an engine is out operating, run that usage on the digital twin in real time, which is where we’re headed. So, let’s say that instead of your normal program of missions that you’re flying, you’ve got to fly a bunch of combat missions, which is much harder on an engine and takes a lot more engine life.

The digital twin updates in real time, and then we’re able to anticipate when the maintenance actions need to be taken on various parts of the engine. It’s pretty powerful because it does two things. It decreases the cost because you’re not replacing things that don’t need to be replaced. But more importantly, you’re making sure that you know when you need to bring things in instead of having a break situation that causes problems.
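The usage-based-lifing concept Peter describes, consuming part life at a mission-dependent rate instead of a flat hour limit, can be sketched as follows. The severity factors and life limit here are made-up placeholders for illustration, not actual F119 values:

```python
# Hypothetical usage-based-lifing sketch: rather than a fixed
# teardown rule, part life is consumed at a rate that depends on
# mission severity (all numbers below are invented).
SEVERITY = {"training": 1.0, "transit": 0.6, "combat": 2.5}
LIFE_LIMIT = 2500.0  # equivalent hours before a maintenance action

def life_consumed(missions: list[tuple[str, float]]) -> float:
    """missions: (mission_type, flight_hours) pairs from the flight log."""
    return sum(SEVERITY[kind] * hours for kind, hours in missions)

def remaining_hours(missions, next_mission: str = "training") -> float:
    """Hours of the given mission type left before hitting the limit."""
    left = LIFE_LIMIT - life_consumed(missions)
    return max(left, 0.0) / SEVERITY[next_mission]

log = [("training", 800.0), ("transit", 500.0), ("combat", 200.0)]
print(life_consumed(log))              # 1600.0 equivalent hours used
print(remaining_hours(log, "combat"))  # 360.0 combat hours remain
```

This captures the two benefits Peter names: a part flown gently stays on wing past the old flat limit (lower cost), while a part flown hard in combat gets flagged for maintenance early (no surprise failures).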

The other thing that is really promising here is that as we’re developing new propulsion systems, we’ve got all this information in this approach. And I talked earlier about how, in our development cycle now, instead of just going and testing and seeing what you learn and rebuilding, we can go and run scenarios with our digital models. Well, we can bring in this maintenance approach, with condition-based maintenance, and we can actually simulate it in our development programs. And you can start to optimize your design to make sure that you’re maximizing the life of all the parts of the engine.

Now, all of those, I think, are huge benefits. And then seeing what we can do on that is kind of the cusp that we’re on. So, one thing that we’re looking at now is using AI to get even better predictions for what your usage looks like. We have one example where we’re using AI to better predict coating losses on parts so that, again, we can get ahead of the maintenance and make sure that you’re bringing things in when it’s needed, but only when it’s needed, and you’re not having to react to things failing on you.

Michael Foust:

Yeah. That’s exciting to hear, I think, at the component level. I think that’s the next step for us, right? At the system level, I think the successes we’ve seen at the platform level have really been a top-down approach. The subsystems and subcomponents may not have that inherent capability yet. So we’re listening. We’re putting recorders on the buses. We’re looking for anomalies. We’re looking at historical data. And I think we’ve seen some really good successes where we’ve put a part on an aircraft, and as it went and did its flight, when it landed, that part had failed and we had it. So that’s great. But it’s post-processing. It’s working on a lot of data. It’s not real time yet. So I think as more of the subcomponents and systems of systems start having that inherent capability to self-report, self-monitor, and report up, that will make us much more powerful and allow us to put agents and AI and other things on top of that and see success more in real time instead of reactively.

Dr. Michael Gregg:

All right, let’s go another direction here.

We’re asking commanders to make life and death decisions based on what a computer model tells them. It gets back to trust. How do we validate these twins so that a pilot or a satellite operator trusts the simulation as much as they trust the physical flight check?

Karr W. “Win” Farrell, Jr:

It may start with some simple things first. I get kidded sometimes in terms of using 25-cent words when a nickel word will do, but the idea of data provenance is crucial here. We need to know for a particular piece of data where it came from, how it’s been processed, who processed it, the software chain that it went through and so forth, so that that trust, whether or not it’s from an authoritative source or a derived source, can be counted upon at the time that it’s needed.
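One common way to implement the data provenance Win describes, knowing where a piece of data came from, who processed it, and through what software chain, is a hash-linked chain of processing records, so that any alteration of an earlier step becomes detectable. This is a generic sketch of the technique, not a description of Amentum's system:

```python
import hashlib
import json

def add_step(chain: list, actor: str, operation: str, payload: str) -> list:
    """Append a provenance record: what was done, by whom, a digest of
    the data, and a hash linking back to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {
        "actor": actor,
        "operation": operation,
        "payload_digest": hashlib.sha256(payload.encode()).hexdigest(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return chain + [record]

def verify(chain: list) -> bool:
    """Recompute every link; tampering with any record breaks a hash."""
    prev = "genesis"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = add_step([], "radar-01", "ingest", "raw telemetry frame")
chain = add_step(chain, "etl-svc", "calibrate", "calibrated frame")
print(verify(chain))                 # True
chain[0]["actor"] = "spoofed"        # alter the history...
print(verify(chain))                 # False: tampering detected
```

A consumer of the data can then decide how much to trust it by walking the chain back to an authoritative source, exactly the "counted upon at the time that it's needed" property Win calls for.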

A number of people look at things and they say, “Oh my gosh, I can’t believe how much data this particular sensor or system is producing.” We look at a different perspective. We look at what it takes to retrieve that data. We look at what it means to compress that data in a way that it can be retrieved in a manner that can be technically appropriate for the use case, whether or not it’s freezing that data when it’s generated or otherwise having that available.

There’s another aspect of this which gets into AI, where I think people are saying, “Okay, what can we do with AI?” Love what we’re doing with text, and that’s just wonderful. But the idea of taking data and bringing it into schemata within AI, and then having AI hold a highly compressed view of what that data means and how it could be useful for particular situations, is a new use of AI that we’re very intrigued by. So, there are some really interesting things happening as we intersect these new technologies with some of these old, standard ways of keeping things appropriate.

Dr. Michael Gregg:

Austen?

Austen Bruhn:

Yeah. So, when I think about this problem, I think about somebody being in the field. I need to know that that thing is trained on information that’s real, right? And I think, just like what Win was saying here about data provenance, understanding where the data comes from is super important.

But I think this comes back to a data management problem. I think that in order for this to be a reality, and for them to be able to make, well, life-and-death decisions based on this information, we have to ensure that we’re getting every piece of data that we can possibly think of getting. And hearing from Susan yesterday on the AI panel, it seems like we have a whole lot of disparate systems out there that are the sources of truth.

And I think getting everything to a true single source of truth, and ensuring that we have a data pipeline and data system that can actually train these models and these twins to do analysis of real-world scenarios, is going to be critical to making this something that I would trust if I were in the field.

Dr. Michael Gregg:

All right. Let’s have a little fun. Let’s think going forward here. Lightning round, so we’ll ask all of you. As we see a convergence of AI and agents and digital twins and managing our data, what’s the future like? What’s going to change in this space?

Peter Sommerkorn:

Yep. Happy to start. Actually, I’m going to come at this from a little bit of a different angle, when I think about what needs to change in this space or where we need to go. We’ve talked a lot about the benefits that we’ll see and where we’re pushing technology and where we’re going, and I think we’ve had several examples of how you can improve the performance, you can improve the availability.

I’ll say where we sit at Pratt & Whitney, we spend a lot of time on this. It’s how we’re doing everything going forward. And this is on designing the engine as an engine OEM. I think where we really want things to go to enable this future is we’ve got to push it all the way out into the supply base. So, if we want to see all of these benefits where a plane’s able to operate longer, we’re able to even get to a world someday where you’re using AI to make decisions about how to deploy things or where to put parts in place, you’ve got to have all of those parts ready.

And so I’ll say in our transformation journey that we’re working on, we’re working with our suppliers to help them start to recognize the benefits of this. And we’re starting to see really beneficial things that get them excited. I mean, one example I’ll give you is CMM machines when you inspect a part before you send it out. That’s always a bottleneck for us. We’re seeing using these digital tools and this digital thread with our suppliers, it cuts the CMM inspection time down by 50%. And they get really excited about that.

I’ll say that we’re just at the beginning of this journey, though. We’ve got some suppliers we’re working with. We need to spread that throughout the entire ecosystem to really get the full benefit. So, that’s where I think we should go.

Michael Foust:

Yeah. I think that sustainment leg is what I would touch on as well, right? I think we’re not far from a day where in-flight something could be identified and recommended, maybe a decision made. But I think until it becomes truly operational and we’re seeing fleet availability come up, and that’s going to take that whole thread, not only in-flight recognizing it, but can you let the ground crews know that that’s happening? Can you have the part available? Can you create a shipper from a warehouse to get that part to the base you need it at? Can you have the tech pubs, the support equipment, the consumables that they’re ready to turn that jet back up and get it flying? So, I think we’ve got to continue to bring that up to the forefront. The aircraft and the platforms are really nifty, but that sustainment leg is going to be really critical.

And we’re just going to have to take some leaps of faith. When you think about a part being predicted to fail: it hasn’t failed yet, we pull the part off, and what do you do with it? You can’t duplicate a failure, because it hasn’t failed. So, we’ve got to take some leaps together on how we handle those things so we don’t get stuck on the back end. And that’s exciting to think about. I look forward to those days and how we react to those decisions through the whole process.

Karr W. “Win” Farrell, Jr:

Lightning round. No TED Talk here. Okay. So, let’s go back to where this started. I believe it was David Gelernter of Yale, in Mirror Worlds in 1991, laying out the design and the concepts of what we’re actually implementing now. For those of you who don’t remember David, you may remember him as the scientist who was injured in a Unabomber attack. So, that’s the legacy that we have right now.

And I do believe there’s another aspect we’re going to be facing in the near future, and that is the explicit understanding of tacit knowledge. Here we speak about folks like Nonaka-san and the work done at Toyota in Japan. This tacit knowledge isn’t in any of the documentation, books, or procedures; it’s the heuristics, the knowledge about how to actually get things done, the mastery. For those of you who follow martial arts, think Shuhari. That’s the ability we’ll need: humans and machines working together on what we might call wicked problems, very, very difficult problems we have to solve together. That’s what I think is going to be happening as we move to the spaceport of the future and the other exciting problems we’re going to be solving.

Austen Bruhn:

So, I’m going to take this from a different approach, just from my background in infrastructure as well as space. When I think about what we need to do in the future, I’m not thinking about what’s actually happening today. I think about where the problems lie and how we get to, in my opinion, where we need to go.

Where I think we need to be is significantly more autonomous. I think we want something where we can have an agent running digital twins, running what-if scenarios, and reaching into different supply chain components to reorder parts or whatever. In order to get to that point, though, I think it comes back to the data problem, and to an infrastructure problem, as well as an overall integration and coordination problem.

So, if I could solve any problem, I would think about this: how do I get data off of my existing assets and my existing test systems fast enough, in real time, with the ability to store it and maintain it well, and to pull all the existing metadata I need in order to make the right decisions and inform these twins? I think that would be number one.

I think the second thing is understanding how we orchestrate this, right? We’re talking about components of components of components, and systems of systems. A digital twin can be an entire world; it can be a spaceport. Or it could be one small component inside of a spacecraft. So, how we orchestrate all of these to work together, I think, is going to be a bit challenging. And I know a lot of companies have been working on orchestration for digital twins. But then again, I look at the commercial world and I say we need to learn from them better, right? I did some research before this, and I saw that SpaceX did somewhere around 1,400 maneuvers to avoid collisions. There’s no way they did that without autonomy on board, obviously, but also without being able to model all of that in a digital-twin type of scenario.

So, I think those are critical components. And I think the way we’re going to get there is by trying to cut this cost curve down. Coming from a data engineering background, I know that in the past it was very, very expensive to get an entire data science team together, and all these algorithm teams, to say, “Okay, I’ve got 50-year-old data,” or, “I have brand-new data, but how am I actually going to process this thing? How am I going to get real intelligence out of it?” That was the question from a business sense back then. Now, looking at it from the digital twin side, the question is, “How am I going to get this thing to actually inform how it’s going to operate in this mirror?”

And so, talking with AWS just yesterday, I had a really good conversation where they have services costing $5,000 to $10,000 a month. And they’re like, “Ultimately, we can shove that same thing at Bedrock and solve the problem for a couple hundred dollars a month,” right? So, rather than looking at $30 to $40 million programs to process this data and put it into a single source of truth, I think we’re somewhere in the range of $5, $10, $15 million, which puts us in a much better spot to do this sustainably.

Dr. Michael Gregg:

All right. So, we’ve got time for one more question. Industry is outpacing the government in this space; it’s more agile. The question is, what could or should the government do to facilitate more rapid adoption and build out this ecosystem?

Karr W. “Win” Farrell, Jr:

I like the possibility of contests. I think that’s been brought up in terms of having folks get recognition for making contributions like this. And as Austen said, a digital twin that allows for rapid orchestration might be something we look for, and that’d be something the government might be able to sponsor.

Peter Sommerkorn:

One thought I have is that we’ve talked a little bit about standards and protocols and things like that. And I’ll say, at least from where we’ve sat in our experience, there’s a fine line between being too prescriptive and not prescriptive enough. What we’ve seen be really successful is when we align with the government customer on a framework, but then allow industry to go out and flesh that out, because we’re learning quickly in real time and starting to come up with standards, or standard approaches and standard tools. So it’s really a balance between the two, which means we’ve got to continue to work in partnership with the government in order to get there.

Austen Bruhn:

Yeah, I’m going to echo that, actually. I see that same idea when I talk about creating data types and interfaces that are going to be common. Ultimately, I think it’s fine for the twin or whatever else to be a black box; let them orchestrate it. Let them figure out the hard problem. All you need to own is the data and what you want to do with it.

Michael Foust:

Yeah. I think on the acquisition side, we’re probably seeing a lot of these things come to fruition. But we have existing platforms that are going to be around for decades, so we’ve got to find a way to flow that back into those systems and invest in that.

Dr. Michael Gregg:

All right, let’s thank our panel. Appreciate it.