Artificial Intelligence in Command and Control
September 24, 2025
This transcript was generated with the assistance of AI. Please report inconsistencies to comms@afa.org.
Col. Ryan “Ape” Hayde:
Well, good afternoon, ladies and gentlemen. My name is Ryan Hayde. I’m the commander of the 505th Command and Control Wing out of Hurlburt Field. Thank you for the clap, sir. Thank you, thank you. I’ll be here all day. And it’s my honor to host the panel on AI in the command and control world. The 505th is the only operational command and control wing that the Air Force has, so obviously we focus a lot on how to manage the chaos that’s gonna occur in any sort of fight that we expect to see. I’m joined by my panelists, a group of highly esteemed leaders in industry who are actually on the leading edge of doing some of the things that we’re gonna talk about today. So we’re gonna hand it off real quick for introductions, and then the first question should be an easy one. If you can, give a little introduction of yourself. Now, AI has blown up with the American public over about the last three years. ChatGPT became a common household name, but it’s obviously been around for a lot longer. So where do you think the biggest change in AI is gonna help the command and control enterprise in the next five to 10 years? And we’ll start with Trey.
Trey Coleman:
Okay, thank you. I’m Trey Coleman, I’m the chief product officer at Raft and a retired Air Force guy; I was a battle manager. And let me just start by saying I’m really excited and happy to be here. Thank you for being here, and not just at AFA. AFA is great because, you know, not only are we sharing ideas, but it’s the homecoming, right? This is kind of like your reunion. And so it’s great to see the family, whether you’re in that uniform or this uniform. We all have a very important mission and we all have to be leaning forward, and if we’re not leaning forward and breaking barriers, then we’re gonna lose. So we’re all in this together, and thanks for being here. Thanks also for being in this panel, because there’s a lot of other panels out there and we’re glad you chose ours. Okay, so Raft is a woman-owned small business, a defense tech firm, kind of not so small anymore at almost 400 people. Data platforms and data are our jam. We’re very good at it, with a pretty well proliferated data platform. Data platforms, by the way, are a $15 billion business today, expected to become a $35 billion business. Why? Because people wanna use AI, and to use AI, you have to have curated data. Thus, you must have a data platform. That’s why everybody’s building data platforms. So it’s important, and we wanna build AI too. So we look at artificial intelligence and ask, well, what’s the best application of artificial intelligence? I see it pretty well proliferated today at the tactical level, whether you’re talking about autonomous vehicles, computer vision, or natural language processing: all capabilities that are fielded today, all in development, none of them refined. I think of this as if the Wright brothers flew the first airplane a year or two ago, right? That’s where we are in the development of AI. And so we’re still figuring it out.
But I think the biggest game changer, and we’re all working towards this, is gonna be the operational level, right? That’s where the decisions are made. I think that’s where the war is won or lost. The battles are won or lost at the tactical level, and the war is won or lost at the operational level. And AI at the operational level is very, very, very hard, because that’s decision-making, and that’s probably the hardest feature of artificial intelligence: making those complex decisions. I’d say most of how we’re using AI today is autonomy, if-then statements, and that’s very helpful. We need more autonomy. But making complex decisions at the operational level is where we have to get to, and we have to get there faster.
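Trey’s distinction between today’s “autonomy as if-then statements” and true operational decision-making can be made concrete with a toy sketch. Every name, action, and threshold below is hypothetical, purely for illustration: the point is that each branch was written by a human in advance, which is what makes this autonomy rather than decision-making.

```python
# Toy illustration of rule-based ("if-then") autonomy: a fixed decision table
# where every branch was authored by a human ahead of time. All names and
# thresholds here are hypothetical.

def rule_based_response(fuel_pct: float, threat_detected: bool) -> str:
    """Return a pre-scripted action for a simple tactical situation."""
    if threat_detected and fuel_pct < 20:
        return "disengage_and_refuel"
    if threat_detected:
        return "evade"
    if fuel_pct < 20:
        return "return_to_base"
    return "continue_mission"

print(rule_based_response(15, True))   # -> disengage_and_refuel
```

The limitation is visible in the structure itself: the table only covers situations its authors anticipated, whereas operational-level decisions involve novel combinations of inputs that no fixed rule set enumerates.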
Col. Ryan “Ape” Hayde:
Thanks, Ben?
Ben Van Roo:
I’ll just agree with everything that Trey said, and we can move on. So Ben Van Roo, CEO, co-founder of a company called Legion. We are a software company, about three years old, 60-ish people, fully focused on artificial intelligence and agentic AI, and I’ll touch on what that means. A little bit about myself. Unlike Trey, I’m not in the Air Force. My father flew 102s and A-10s, and my brother flew 16s and 35s. And the only way they will allow me to go to Thanksgiving dinner is if I’ve basically dedicated my career to the Air Force. So I worked in the public and private sector. I was a researcher at RAND for a long time, then went back to the private sector, and have solely focused on the precursors of what we know as ChatGPT, natural language processing, for about 12 years. So I totally agree that for the last three years AI has been a thing; for 12 years it’s kind of been a thing, and I’ve been saying, this is really getting good. So a few years ago, I started a company focused not on models, but on the fact that, as we observed, the models all kept getting better, but people were trying to do work, which is a workflow. They’re touching multiple data systems. They’re trying to take specific types of steps. And the plumbing around how you take a model, with the right permissions, and plug it into multiple applications, that was just software. And so we started out on that journey. Right now, United States Special Operations Command is our largest customer. We have deployments at IL2 through IL6. We are enterprise-wide for SOCOM at IL5 and IL6. And we are coming to the Air Force’s NIPRGPT platform sometime this fall, when they resolve a few things; we’ll be one of the first agentic platforms there. So, all that precursor: why do I think this is important? Where do I think AI is going?
I think over the next couple of years, what you’re gonna see is this. We all know ChatGPT, and we’ve all used it for personal use or for professional use, sometimes for professional use when we shouldn’t be going on to ChatGPT. But we use that today to ask questions and, in some cases, do workflows, like a guided series of things. And we’re still kind of in the very infant stages of it. Even with the NIPRGPTs, everyone’s kind of used this chat paradigm. What’s coming really, really soon are going to be more directed co-pilots: I need to accomplish this discrete series of tasks over a period of days. That’s gonna come, and then people are gonna talk about agents. The next couple of years, it’s gonna look like that. Beyond that, which is really interesting, I think we’re gonna get much more into allowing agents to talk to agents. So think of your NCOs as now digitized, and they are having communications and maintaining chain of custody, potentially making decisions with rules around them, in the next five years. Ten years from now, I have no idea. I mean, nobody actually knows in the venture community or in the AI community what’s gonna happen in three years. But 10 years from now, I think we really need to think very much forward and say, in degraded environments, what policies are we gonna give to agents to make decisions? Doctrine will have to change, probably in the next three to five years. It’s gonna move slowly. It feels in some ways like it’s moving slowly, even though it’s moving quickly. But very soon we’re gonna be augmented by hundreds, if not thousands, of agents working on our behalf, not just on the physical battlefield, but also in the digital world.
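Ben’s picture of agents acting “with rules around them,” maintaining chain of custody, can be sketched in a few lines. The agent names, actions, and policy table below are entirely hypothetical; the sketch only shows the shape of the idea: an agent may act only where policy permits, everything is logged, and anything outside policy escalates to a human.

```python
# Hedged sketch of agents operating under explicit policy with an audit trail.
# All agent roles, actions, and the policy table are hypothetical.
POLICY = {
    "logistics_agent": {"reorder_parts", "schedule_transport"},
    "intel_agent": {"summarize_reports"},
}

audit_log = []  # chain of custody: every request is recorded, allowed or not

def handle(agent: str, action: str) -> str:
    """Allow the action only if policy permits; otherwise escalate to a human."""
    allowed = action in POLICY.get(agent, set())
    audit_log.append((agent, action, "allowed" if allowed else "escalated"))
    return "done" if allowed else "escalate_to_human"

print(handle("logistics_agent", "reorder_parts"))  # -> done
print(handle("logistics_agent", "launch_strike"))  # -> escalate_to_human
```

The design choice matches the panel’s point about doctrine: the hard work is not the code but deciding what goes into the policy table, and that decision stays with humans.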
Col. Ryan “Ape” Hayde:
All right, thanks. Peter?
Peter Guerra:
Yeah, hi everyone, sorry. I had Skittles for lunch, so they got caught in my throat there for a second. So I’m Peter Guerra. I lead AI for Oracle, a small data company you probably have heard of. We’ve been working with the federal government since 1977, so it’s just been an amazing privilege to work with the government for so many years and to be the trusted partner there for a lot of the data. Just as a quick fact, we process a lot of data for the federal government. We do like a billion dollars of transactions with the Treasury every single hour. We have medical records for veterans. We have a lot of trusted data with customers. And the reason why it’s important is, I agree, data first, AI follows. So what we’ve been focused on is really helping our government customers think through: how do you use that data? How do you make workflows go faster? How do you take the parts of bureaucracy that are needed and potentially move them faster? And then how do we help you do that using AI? How do we help you get new AI insights and so forth? So it’s been great. To the C2 question, this is where I’m seeing the evolution, and I get the opportunity to see both, ’cause I have a global role covering both the commercial side and the government and public sector side. Where commercial is taking AI is they’re moving it from knowledge to action, and that is happening today. So you can think of the global supply chain or distribution systems like FedEx and others. They are already embracing AI to help them move from knowledge to action, meaning when there’s a workflow that needs to be done, typically by humans with a human in the loop, they are moving to a more automated way to do it. Case in point, we use this ourselves: of all the POs we process at Oracle all over the world, 85% are done completely autonomously with no human in the loop. And that is something that is happening every single day.
And so where I see the future for what we need, in particular in government, particularly when there’s potentially chaos, as you said, that is coming with any kind of conflict, and especially in C2, is that we need to move from knowledge to action. Knowing is one part of it, and that’s really important, so ISR and other things are really important. But where we’re headed, and we’ve seen this in current conflicts, is that there’s a lot more automation happening on the battlefield. And so we need the ability for C2 to react autonomously, and to be able to do that in a way that can be directed, potentially, and potentially not directed. So I think that’s where we’re headed.
Col. Ryan “Ape” Hayde:
Great, thank you very much. Lou?
Col. Louis Ruscetta, USAF (Ret.):
Yeah, thanks. My name is Lou Ruscetta. I’m the Director of Strategic Growth at Virtualitics, a small AI company, about 90 folks, based out of Pasadena, California, doing AI for mission readiness across the DoD right now. Retired Colonel, Air Force acquisition officer; I spent most of my time on aircraft, ISR, and command and control type programs, 26 years before I retired a couple of years ago. So when I look at where AI is going in the next five years, I think there’s two parts of it. The first part is moving from the stagnant to the dynamic, right? We are using AI now. AI is in use, but it is in sort of a stagnant environment that’s almost predictable. You have time to look at it. You have time to look at the data, at the information that’s coming out, right? In the next five years, I see this moving into the dynamic environment, where, when you look at decision superiority, it is a factor of speed and quality. So it’s about increasing the speed and quality of the information, the data, the outputs you’re seeing from the AI and ML aspects, to inform those decisions and get there quicker for all of what we are trying to do operationally in that environment.
Col. Ryan “Ape” Hayde:
All right, thanks. I’d like to talk a little bit about trust now. We talk a lot about trust generally in terms of trusting the data, ensuring that you understand where it’s coming from. I’d like to take a different route: trust in the AI’s output. So if we’re using command and control, for example, life and death decisions are being made and provided to the forward edge. Currently, those are manned platforms, and those are human beings that are gonna be told to do something by a system, whether it’s autonomous or human in the loop. And that guidance or direction may not be intuitive to those human beings. They might think that they should go left, and you’re gonna tell them to go right, because you know better, and you know better because you have AI analyzing mass amounts of data. So in addition to trusting the data, how do you ensure that the force trusts what you’re gonna produce at the edge? Because if they don’t trust what’s being produced in the C2 world, we’re no better off than without it. So I’ll start with Ben; take it off first.
Ben Van Roo:
This is a hard question. So there’s a couple of components. I think when we started the company and when we started working with SOCOM, trust was really overcome only when we went out in the field. We’d go to a JRTC and we’d say, “Hey, can we do this for this use case?” ’Cause with AI, there’s great demo-ware everywhere, and anyone can stand up something really easy and show a cool demo. But as soon as you start getting real reps in some type of environment or an exercise, then you can say, okay, this is where we can trust it; this is where things fall over. So I think a huge part of it is just gonna be reps. I would really encourage leaders here to be thinking about how we use this, to encourage it, and to constantly learn and learn and learn. Then, beyond that, you need artificial intelligence to be repeatable. Right now it’s still a little bit stochastic, and that’s kind of the nature of it. So you need to be able to have repeatable outcomes, plus other things like audit logs. In the longer term, when I think about trust, it’s gonna be the policies. When we’re looking at 10 drones in the air, you can process it in your mind. When it’s 100, you’re gonna have to decide when you hand over certain types of trust to AI, ’cause the cognitive load is gonna be too much for humans. So I think in the near term, a lot of it’s education and practice; then it’s making it repeatable; and eventually it’s gonna be pushing the trust up into the policies.
Col. Ryan “Ape” Hayde:
All right, great. I think the highlight there is the practice. We always try to practice like we fight, and so this can’t just happen in the fight. It can’t be the thing we keep and only unleash when it’s time to actually go. You’ve gotta get reps with it so that the people in the field actually trust it. So that’s a good point. Peter?
Peter Guerra:
Yeah, I just wanna say, I think one thing we’ve seen in real-life scenarios where AI is being deployed is that test and validation is key. It’s not always done well or thought of end to end, especially when deploying AI systems. Think of it like this: a large language model knows about the data it was trained on. It doesn’t know about your data unless you let it see your data. And so with the answers that come back, there’s always the expert issue: how do you know if it’s actually giving you the right answer if you don’t know yourself, right? That’s a much more difficult problem when you start moving from deterministic-type knowledge to more probabilistic knowledge in the field. So one thing that I really wanna stress is: be thinking about that entire test and validation pipeline from beginning to end, and understand that these are mathematical, statistical systems that always have a degree of error. Ten, 15, 20 percent; you can look at the numbers from the test and validation. So it requires a lot of thought and a lot of testing to make sure that you’re okay with those margins of error that you’re gonna inevitably get from any AI system.
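The validation pipeline Peter describes can be sketched minimally: measure a model’s error rate on a held-out labeled set and compare it to an acceptable margin before fielding. The “model” below is a deliberately trivial stand-in, and all names and thresholds are illustrative only.

```python
# Minimal sketch of a validation gate: compute the error rate of a model on a
# labeled hold-out set and check it against an acceptable margin of error.
# The model, data, and 15% threshold are all hypothetical stand-ins.

def validate(model, labeled_examples, max_error=0.15):
    """Return (error_rate, passed) for a model over labeled examples."""
    errors = sum(1 for x, expected in labeled_examples if model(x) != expected)
    error_rate = errors / len(labeled_examples)
    return error_rate, error_rate <= max_error

# Stand-in "model" that labels numbers pos/neg; it mislabels zero.
model = lambda x: "pos" if x >= 0 else "neg"
examples = [(-2, "neg"), (-1, "neg"), (0, "neg"), (1, "pos"), (2, "pos")]

rate, ok = validate(model, examples, max_error=0.15)
print(rate, ok)  # one miss out of five -> 0.2 False: fails a 15% gate
```

The point of the sketch is the shape of the decision, not the numbers: someone has to pick `max_error` deliberately, which is exactly the “are you okay with those margins of error” question.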
Col. Ryan “Ape” Hayde:
Sure, and just because a margin of error comes with AI, that doesn’t mean it should be thrown out; humans make errors all the time and we trust them. Good point though. Lou?
Col. Louis Ruscetta, USAF (Ret.):
Yeah, so trust is the foundation of AI adoption. By far, that’s the critical point, right? In the end, especially when you’re looking at it from a decision standpoint, decision superiority, we hold our decision makers accountable. We hold our Airmen and our commanders and Guardians accountable. We don’t hold AI algorithms accountable, right? And we never will; we shouldn’t. So when you look at the need for trust: at Virtualitics, we have what’s called explainable AI, and we are using that now with Air Force Global Strike Command to synchronize maintenance activities across the B-1, B-2, B-52, and ICBM platforms. And what we have seen is that as they have started using it, for their planning, for their execution, for their understanding of what they need to do, and for integration across the entire logistics element, the adoption has increased, right? So that’s sort of how we’re measuring trust, and how I’m measuring trust within the team: going, hey, are those Airmen and Guardians still using it? Are they using it on their own? And they are, and that use is increasing. That, to me, is the measure of trust. But it takes that trust, that ability to go in and see how the data is being used and what’s being produced, and even giving the user the ability to question it, right? That’s how you gain that trust.
Col. Ryan “Ape” Hayde:
And I think as this becomes more ubiquitous in our lives and it just becomes things that are around us day to day, we will interact with it more and therefore it won’t be as unique in the fight. It’ll be just another portion of our lives, but it happens to happen in the fight. Trey?
Trey Coleman:
Yeah, I definitely agree; you guys are hitting all the same points that come to mind. The last thing I’d add is: have you read the book, or have you read it recently, “The Speed of Trust” by Stephen M. R. Covey? It’s a great book; if you haven’t read it, or haven’t read it recently, pick it up. It’s a great read, and a quick read. The idea there is that if you build trust, you can have faster transactions. You can make better decisions, you can move faster, you can make better business decisions, because you have trust. And if you don’t have trust, everything slows down. And speed is what we all want here, right? That’s what we’re trying to get to. So if you have the greatest AI in the world but you don’t have trust, you don’t have anything. And thinking about that book, it’s apt to apply human principles to AI, ’cause that is what we’re trying to artificially recreate here. He talks a lot about transparency and communication. So as we’re building these AI products, I think it’s very important for us, one, to be very clear with our expectations and not oversell, and it’s probably incumbent on the government side to not have sky-high expectations and expect perfection the first time we deliver an AI product. We’ve got to meet in the middle, be very transparent, and work together to build these products jointly and build that trust together.
Col. Ryan “Ape” Hayde:
All right, thanks. We talked a little bit about chaos, and I described the operational level of war as attempting to take chaos and make it understandable. I’ll use an example. Let’s say you plan 100 aircraft to launch, 70 actually launch to get to the push, 50 come back, and in the time that that happened, three bases’ runways are destroyed, there’s POL here that’s no longer there, there’s no missiles over here, and now all these aircraft are coming back and they have to go somewhere. Right now, understanding and figuring out where that is falls to the operational-level command and control that has to figure out that chaos and manage it, and it is rapidly exceeding humans’ capability. So I see a huge opportunity in this realm to organize the chaos. We’ll go over to Peter: what do you think is the best way, or how can AI help us do some of that organization of the operational level of war, with the many, many inputs it’ll have to take into account?
Peter Guerra:
It’s appropriate you gave me the chaos question first; I appreciate that. So, we have this analogy today. One of the airlines that I work with, I’m not gonna say which, actually has to deal with this on a daily basis across their entire fleet, even as it’s flying. The President comes in, they have to delay, they have to go; it’s very complex. And by the way, they don’t really use AI; they actually use paper to do it. So the next time you get on a flight, think about that. They should be using AI; that’s why we’re talking to them. Anyway, I think the whole point of AI is this: if a human can do it, then why do we even need AI? For a human right now, when you’re overwhelmed with facts, it’s very difficult to pick out signal from noise; we know that. And so that’s where, like I said earlier, in these situations where we’re planning for future conflicts and challenges in very specific spaces like airspace, we really need to be thinking about how we move AI from knowledge to action. How do we use the AI that is available today to help us with that planning? ’Cause what we’re gonna need is to have the data integrated together in order for us to turn that knowledge into action much faster. And again, I’ll say it like I said earlier: in order for us to get to that point, we have to focus on the data. And the big challenge right now across the Air Force, the Department of War, like every customer I work with, is that the data is not together. It’s not in a form where you can actually go from knowledge to action like we talked about. So I think start with that, and with testing and validation, and then we can move to the AI agents for any kind of mass chaos scenario.
Col. Ryan “Ape” Hayde:
Okay, all right, Lou.
Col. Louis Ruscetta, USAF (Ret.):
Yeah, I mean, like you said, we are managing that chaos now, but it is very much done at the human level. Bringing that to artificial intelligence and machine learning really gets to: how are we connecting all of that data? How is that data being connected and integrated together? Right now, like you talked about, I can have a spreadsheet that has four tabs along the bottom, and I’m not sure what the heck’s going on in each one as they’re trying to integrate it all. But that’s where artificial intelligence can come in. So we talk about decision superiority and decision intelligence, right? But there’s also information and data superiority, right? How much data can I bring in, and can I bring it all together so it all makes sense? And this is where, I’m a meathead rugby player, how do I get to understand all of this at one time, at a very quick rate? It’s going to be that integration of data that manages that chaos. So how are we going to connect, for example, the aircraft status with the runway status, with the right radars, with the right weapon systems, what’s operational, what the targets are, looking at red air and how they’re operating? We’re going to have to put that all together, and not in one database, but with something that can reach out to all those databases for the information.
Col. Ryan “Ape” Hayde:
All right, thanks. Trey?
Trey Coleman:
I think it is all about the data, and there’s a lot of data out there, and it is chaotic. But one of the habits we fall into is trying to bring all that data to one person. And yeah, I want to present it in a neat, clean, orderly way, but I would argue that all that information doesn’t necessarily need to get to a human. The way we can think about AI is: what workload can I offload to a machine to do for me, so I only have to think about the hard things? So I’m not wasting my brain power doing simple administrative functions when I have really complex, hard decisions to make. Use AI to do that.
Col. Ryan “Ape” Hayde:
All right, thanks.
Ben Van Roo:
Yeah, I think that’s 100 percent right. We’ve kind of laid out the foundations of it. First, to the extent possible, you will want to know where all the runways are and what their status is. And the unique portion of where AI is going now, which is a little bit different from the past, is this: I would argue you could solve those problems with the optimization algorithms of the past, like my old world; it was all numerical. Now there’s more context that can come in around it, with language and the status of things, so you can do more with the models. Artificial intelligence can also help us orchestrate. So I think, to your point, Trey, it’s not necessarily that one person has to have all the information. It’s: given the information that we have, and some of it might be degraded, we might not have any understanding, how do we orchestrate whatever we can to make the highest-level decisions, have people get involved there, and then offload some of that to large language models, to optimization models, to simulations? What is intriguing, and where artificial intelligence is getting more and more proficient, is in orchestrating all these different systems together and then saying: this is the best picture that I have, given the information that is available and not.
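Ben’s orchestration idea, routing each piece of work to the right kind of engine and escalating to a person when nothing fits, can be sketched as a simple dispatcher. The task kinds and stand-in handlers below are hypothetical; in a real system the handlers would be a language model, an optimizer, or a simulation rather than these one-line stand-ins.

```python
# Hedged sketch of an orchestrator: route each task to the engine suited to it,
# and escalate to a human when no engine fits. Handlers here are trivial
# stand-ins for an LLM, an optimizer, etc.; all names are hypothetical.

def summarize(task):
    # stand-in for a language model summarizing narrative context
    return f"summary({task['text']})"

def optimize(task):
    # stand-in for an optimization model picking the best option
    return sorted(task["options"])[0]

HANDLERS = {"narrative": summarize, "allocation": optimize}

def orchestrate(task):
    """Dispatch a task to its handler, or escalate if none applies."""
    handler = HANDLERS.get(task["kind"])
    if handler is None:
        return "escalate_to_human"  # no engine fits: a person decides
    return handler(task)

print(orchestrate({"kind": "allocation", "options": [3, 1, 2]}))  # -> 1
```

The design choice mirrors the panel’s point: the human stays at the top of the dispatch, and the orchestrator’s value is knowing which specialized system to hand each piece of the picture to.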
Col. Ryan “Ape” Hayde:
All right, thanks. In a fight with a peer competitor, I hypothesize that there are probably three major categories that will define whether we win or lose. One, I think, is contested logistics: the ability to get stuff into the fight. Second is command and control: the ability to manage the chaos, which I talked about. And third is the long-range kill chain: who has the farther arrow traditionally wins. So of those three, whichever one you wanna go to, where do you think AI will give us the biggest ability to conquer that specific task in the next five years? Lou, over to you.
Col. Louis Ruscetta, USAF (Ret.):
Which one of the big three? To be honest, I think it’s going to be managing that fight, managing the orchestration of the actual operations at the time. We’re doing it now: right now at Virtualitics, we are managing that logistics sustainment, building readiness spares kits for agile combat employment based on the tail numbers and the parts that are on each aircraft. That can be done now, and that sets the stage; that’s the foundation. But if we can’t operate within that red air’s, that opponent’s, OODA loop, right, and start making sure that we can make decisions better and faster than them in the moment, we’re going to fail, and it’s going to be an ugly type of failure for our forces, right? Especially against a peer competitor that most likely will outnumber us. We rely on the superiority of our weapon systems, not necessarily the quantity of them, right? That has been the history of our Air Force. So the ability to utilize them in a much more effective and efficient manner to create that mission set is going to be key.
Col. Ryan “Ape” Hayde:
All right, thanks. Trey?
Trey Coleman:
I’m not going to answer your question.
Col. Ryan “Ape” Hayde:
Okay, next, Ben.
Trey Coleman:
Here’s the answer, is I’m not going to pick one, because it’s just impossible to separate them, right? It’s like the joint functions, you can’t just, I’m going to go to war, I’m only going to do three of the five joint functions. No, you got to do them all. And so, I would argue that what AI is going to help us with is going to link the three, right? So, when I’m executing a long-range kill chain, I know what logistics I need to move after that kill chain has been executed, so I can rearm, and I’m making the best decisions possible about how to move those logistics. And so, I think it’s actually the connective tissue between those three functions that AI can help us the most.
Col. Ryan “Ape” Hayde:
Awesome, thanks.
Ben Van Roo:
Great, great punt. Yeah, but maybe to also punt: what I kind of think about, and Peter and I have known each other for a few years, is what AI looks like at the edge. All these different functions, when they are contested, when they’re degraded: I’m always worrying most about the fact that we might really start trusting all of this stuff, and then if we don’t have that connection, what does our decision cycle look like? So part of it’s going to be our ability to have AI available, especially in times when stuff hits the fan and we’re in really degraded environments, so that these five guys over here still have access to some form of AI that they’ve come to expect to be part of their decision process, with some level of data. So again, the preparation for the mass chaos is, I think, going to be an interesting linchpin for all three of those.
Col. Ryan “Ape” Hayde:
Yeah, that’s interesting. So we were talking about trust in AI before, and we might get to the point where that’s not even a consideration anymore; it’s so much a part of our lives that when we don’t have it, we no longer know how to go back to the way it was before.
Ben Van Roo:
Yeah, no teenagers can write any term papers anymore.
Col. Ryan “Ape” Hayde:
That’s right.
Ben Van Roo:
It’s funny, but not funny.
Col. Ryan “Ape” Hayde:
Peter?
Peter Guerra:
Yeah, I mean, personally, I think it’s supply chain and logistics. We know that wins wars; you can’t fire a bullet you don’t have, or a missile that isn’t there, or that’s still being built in some factory somewhere. For any conflicts we’re expecting in the next 10, 15, 20 years, the most important place we can apply AI right now is across supply chain and logistics, because we know we need materials, we need to support the soldiers and Airmen and others; if it’s not there, then we can’t execute. So I think everything follows from there, personally. And what we’re seeing around the global supply chain and logistics is that they are rapidly adopting AI to automate all parts of the supply chain and logistics community right now, today. And I actually don’t see, in the US at least, as much adoption around that yet. So I think that’s an area where we can really focus and improve, and I think it is the most important of the three. Sorry, I know you’re a C2 person, sorry about that, sir. But I really do think that is the key thing: if we’re gonna use AI for anything today, that’s an area that we really need to partner together on.
Col. Ryan “Ape” Hayde:
Yeah, I agree. It’s all a command and control problem. You’ve gotta do it all, otherwise it doesn’t get there.
Peter Guerra:
Fair.
Col. Ryan “Ape” Hayde:
All right. We’ve got a little less than 10 minutes, so we’ll finish it off with this. We’ve talked a lot about the benefits, what we’re seeing in the future, and where we think AI is going. What are some misconceptions about what we think AI is gonna be able to do for us that are probably not reality, specifically when we’re talking about what we’re trying to do in command and control? I’ll roll the bones and go with Peter first, the chaos guy.
Peter Guerra:
Thank you, yeah, I appreciate that. It’s gonna be total chaos. I get questioned all the time ’cause I actually studied AI at the technical level, the mathematical level, I always get asked the Terminator question or whatever. So I think what’s really where we’re headed is actually a combination of AI intelligence and action, actually with robotics. I mean, let’s just put it out there, it’s not Terminator, so nobody quote me on that. Hopefully there’s no media here. But really, but thinking about things that work together, it’s not just the AI for knowledge, it’s action, action then into whatever we start thinking about in 10, 15 years around robotics. You’re already starting to see that actually in some factories now, like with the humanoid robots and like what NVIDIA’s announced and all that kind of stuff. But if you think about that in our context, that is, you go downstairs and there’s, every single booth has some sort of autonomous flying thing in their booth, like every single one except Oracle, but every one, ’cause we don’t do that. So I think that’s where we’re headed and so I think that’s, again, I’ll just stress again, the testing and validation portion ’cause that’s the part that, as a mathematician, I start worrying about probabilistic models being applied to things that fly and can do damage, right? Like that gets to be real challenging very quickly. So I think that is where we’re headed and I think that is where we’re gonna see most of the development happening over the next 10 years.
Col. Ryan “Ape” Hayde:
Right, I think you’ve hammered testing in every answer. So I think that’s near and dear to what you believe.
Peter Guerra:
At least I’m consistent.
Col. Ryan “Ape” Hayde:
Yeah, that’s right. Lou, wanna?
Col. Louis Ruscetta, USAF (Ret.):
Yeah, I think one of the biggest, I would say, misconceptions when I, when we’re talking about artificial intelligence is the, it’s the ability to solve problems, right? And thinking that AI is going to be the solution. AI is an enabler. It is gonna help us make the decisions. It is gonna help us bring together information that we haven’t seen before, right? Because either it’s been hidden in other data, right? ‘Cause we’re just overcome with the amount of data that’s there. Or it’s in an area that we just haven’t looked. So to go and, again, right now, we’re using artificial intelligence to support the B-52. I’ll use the B-52 as an example since I was the SPO director there, right? We’re using it to support B-52 maintenance activities. AI is not gonna tell us, “Hey, the reason why we’re having a flutter issue on aircraft 18 is because of this part.” It’s not gonna get down there yet, but it is gonna identify that, “Hey, that flutter is always happening in this environment, at this case, at this part, maybe even at this location on the aircraft.” So now our engineers can look at that and really hone in on that problem much quicker. ‘Cause when I was a SPO director, it took us 22 months to find out what was going on in an aircraft. Now we can look at that and with the access that we have, and even being able to pull in not just the actual maintenance data, like the times we had to remove and replace a part, but what were the maintenance logs? What did the maintainers write about this? What is going on? What did the pilots write about this, and what was going on? So we get that information better.
Col. Ryan “Ape” Hayde:
Thanks. All right, Trey.
Trey Coleman:
I think this is more of an ethical question than a technical question. The human race is brilliant. The people in this room are brilliant. We could build amazing things, and we could certainly build a machine, a tool to make really complex decisions. We can build anything, but the question is, should we? And I think, I personally draw the line at decisions about taking human life. I think that we should never cross that line. We are capable today of building artificial intelligence that can make that decision, but we should not today or ever, in my opinion.
Col. Ryan “Ape” Hayde:
All right, thanks. Ben.
Ben Van Roo:
Yeah, I think the, maybe go kind of a different way with it and almost go to something you were talking about earlier on. I think there’s a misconception right now where we think, we’re starting to think about AI as like almost exactly what it is at this moment in time. This moment in time is kind of ChatGPT, plus or minus, and we have different variants of it. But we’ve talked about where does it impact C2, where does it impact everything? For me, I think it’s gonna be much deeper than that. We do need to spend some time in the DoD thinking about how is this going to impact mental health? How are we doing training of these soldiers? How are we helping them do defense travel? Like you guys are gonna all have to deal with vouchers and go home and be like, God, I wish I had some AI to fix that. So I think there’s a little bit of it that I would encourage us to not let it, even like chat, manifestations of chat, in a few years will be there, but in other realms, it’ll feel antiquated. And so one thing that I’d say is there’s a misconception that we now know what it is and how we are going to use it. And the ethical questions are front and center, but also it’s just a moment in time where we’ve barely scratched the surface. For all the good and the bad and the weird in between.
Col. Ryan “Ape” Hayde:
All right. Lightning round, one minute or less. What do you want this audience to take away about AI in C2? Trey.
Trey Coleman:
Let’s work together and build it.
Col. Ryan “Ape” Hayde:
Perfect, love that.
Ben Van Roo:
It’s gonna move as fast as we want it to.
Col. Ryan “Ape” Hayde:
Okay, Peter.
Peter Guerra:
Testing and validation. You knew I was gonna say that, right?
Col. Ryan “Ape” Hayde:
I knew you wouldn’t let me down.
Peter Guerra:
I can’t say it enough.
Col. Ryan “Ape” Hayde:
I knew you wouldn’t let me down. All right, and then Lou.
Col. Louis Ruscetta, USAF (Ret.):
We need to start on the road to adoption, like true adoption. So we talk about it quite a bit. We talk about how can we incorporate it into our weapon systems, right, from an acquisition standpoint, from the engineering standpoint. But how do we start adopting it now, right, at the basic level from an engineering standpoint to put it into our systems, so when we field the system, we’re not trying to add an AI capability into it later that gets, you know, shortcut. But how do we bring it in now to where it’s part of the design, part of the thinking from the very beginning, so we can start moving forward.
Col. Ryan “Ape” Hayde:
All right, thanks. So ladies and gentlemen, again, it truly was an honor to be able to sit on stage, or stand on stage while you sat, with leaders like you that are taking this to the next step and actually trying to get what the fielded forces and what our men and women in the fight need. So thank you all very much, and I appreciate your attention and time today.