AI Autonomy in the Fight: How Agentic AI Is Transforming the Tempo of Air, Space, and Cyber Operations

February 25, 2026

Maj. Gen. Kimberly Crider, USAF (Ret.):

All right. Good morning. Welcome, everybody. Thanks so much for being here today for what I think is going to be a really interesting and insightful, hopefully no-holds-barred conversation on this topic of autonomy. We're going to talk about autonomy and agentic AI, and how they promise to transform the fight. But let me set the stage for us a little bit. Clearly, for decades we've been optimizing on platforms, but now we see a shift to optimizing on decision speed for competitive advantage. That's been driven, of course, by the rapid rise we see in autonomous systems and agentic AI that enable that kind of outcome. And just to clarify: when we're talking about autonomy, we're talking about executing tasks without continuous human control. When we're talking about agentic AI, we're going a step further. We're talking about interpreting intent, generating options, and coordinating actions at machine tempo, typically across complex environments.

And in our complex air, space, and cyber operations, the machine tempo at which these capabilities operate is really driving decision cycles. So our panel today is going to help us explore what it's going to take to leverage these capabilities, autonomy and AI, to really take advantage of them and to optimize those decision cycles to our advantage.

But it's not going to come without some hard-fought change, right? In a fight where decision speed wins, autonomy and agentic AI can certainly create huge advantages. But at the scale within which we have to operate, it's going to take some hard-fought changes across policy, across bureaucratic processes, across architectures, across acquisition approaches, and across our mission operations, concepts, and employment. And that's where the rubber meets the road. If we want to transform the fight, we literally are going to have to fight to transform. So please join me in welcoming this panel, who's going to help us unpack this conundrum. Joining me here today, of course, we have Lieutenant General Cropsey, who is the military deputy to the assistant secretary of the Air Force for acquisition, technology and logistics. We have Colonel Tim Helfrich, who is the portfolio acquisition executive for Fighters and Advanced Aircraft.

We have Ms. Shannon Pallone, who is the PEO for Space Battle Management Command, Control, Communications and Intelligence, United States Space Force. And we have John Hudgins, who's here with us from Google Public Sector. He's a strategic growth leader, leading a lot of AI initiatives for Google and its customers. So thank you, panel, for joining us here today. To get us going, let's start with the vision. I'm going to pose this question to each of our panelists, starting with General Cropsey and moving on down the line. What do you see, sir, as the vision for how autonomy and agentic AI will transform the force over the next five years?

Lt. Gen. Luke C.G. Cropsey:

I think if I had to put it in a single sentence, I’d say creating high trust human machine teams that put lethality, speed and adaptability onto an exponential growth curve when it comes to the capability that we’re delivering.

Col. Timothy Helfrich:

Yeah. So you're probably going to get more myopic answers from me on this panel, because I am laser-focused on putting lethal mass into the hands of the operators, and we're doing that through autonomy. What I see in the next five years is that vision coming to fruition. We are going to see manned-unmanned teaming in the air at scale, in large force exercises, and operationally.

Shannon Pallone:

So I'd say in five years, we're not having this conversation, because, and I like to be a little aspirational here, it's already fully integrated into our ops floors. You guys touched on trust and teaming; that is happening at scale, and we're already moving on to whatever the next cool technology is that I'm not even thinking about yet that we need to be working into our operations centers.

Jonathan Hudgins:

These three folks are solving some really hard problems, really, really hard stuff, hard tech. I'm going to have a little more of a boring answer. I just want an agent that can file a DTS voucher automatically, right? So this team, absolutely keep going. They're going to nail it in the Pacific, but I'm just thinking about the weeks that are going to be wasted when this team goes home from their TDY and spends hours trying to file their DTS vouchers. So jokes aside, that's not really a joke actually. We really need to do that.

Getting more tactical for a minute, though: a place where I see real opportunity is the staff planning processes on the A-staffs and in the air operations centers. Think about that 72-hour ATO cycle. Anyone here who's worked in an AOC, how much of that cycle is manual data entry, PowerPoint slides, version 12 of some Word doc on SharePoint? Translation, if you're in Korea, right? NotebookLM, Gemini, these types of tools can do that stuff at machine speed now. We just have to get them onto those networks. Imagine what we could do if Gemini was plugged into that 72-hour cycle. That's the kind of stuff, in addition to the hard warfare stuff: the staff planning, the ATO cycle. I think that's a big, juicy target for AI, and in particular agentic AI.

Maj. Gen. Kimberly Crider, USAF (Ret.):

Yeah. It’s all great. I mean, this is all part of the vision, right? This is all part of where we want to go. Certainly-

Col. Timothy Helfrich:

Hey, Kim, can I jump in real quick?

Maj. Gen. Kimberly Crider, USAF (Ret.):

Yeah, please.

Col. Timothy Helfrich:

One other thing. As an acquisition warfighter, I spend a lot of time using the acquirer's weapon system, right? Microsoft Office. And if you've got a really highly advanced program office, maybe you're using the Atlassian tool suite. One of the things I was given the honor to do is lead a tiger team for acquisition transformation in the Air Force. Part of that is looking at what we can do with acquisition documentation, what we can do with acquisition reporting, and what we're actually going to use to execute our programs. When you fundamentally get down to it, you just have a data and data-access problem, and leveraging agents to push the right data to the right people, and pull it when you need it, would be a fantastic thing to see in five years as well.

Maj. Gen. Kimberly Crider, USAF (Ret.):

Yeah. Well, thanks for that, because with this grand vision, from human-machine teaming that drives the tempo of warfare to maximum benefit all the way down to filing the DTS voucher for us automatically, we know it's going to take a lot of work to get there. And one of the biggest challenges is going to be the data. So let's hang on that topic for a moment, because I think it all comes back to that in these conversations. I really want to get this panel's thoughts on what we need to do. And General Cropsey, let's go back to you. How is the underlying data environment, as you see it today, preparing us or enabling us to get to this vision? And if it's not, what do we need to do differently? Then let's hear from some of the others, because you've each got perspectives on this.

Lt. Gen. Luke C.G. Cropsey:

Yeah. So I think there are two fundamental problems we've got to get after, and you mentioned the first one on the data front. We actually have to get everybody to understand that data isn't just one of those conversations we have, an oh-by-the-way part of the conversation. We don't strap it on, we don't bolt it on as an afterthought. We're actually designing the systems that we're building to create that data, share that data, and exploit that data as a fundamental aspect of the system itself. That is a significant mindset shift in the way we think about how the systems we're building and fielding are connected into a broader data strategy and data model. At the end of the day, if we're moving down this construct, the way I look at it is: speed kills, and what fuels that speed is the data that's available to make those decisions.

So first and foremost, we have to think about what our data strategy looks like on these platforms. Whether it's a legacy platform, where I have to figure out how to get data off a system that may not have been designed to give me that data and get it into the enterprise, or the stuff Tim's working on right now with cutting-edge modern software technology: what does that look like, and how do I do it? That's point number one. And all of us have to own that, by the way. It isn't something you drop on a random engineer somewhere down in your program office. It has to start all the way at the top, and we have to drive a culture that thinks that way. The second piece, and I'm sure we're going to get into this some more, is that underneath all of that there's an underlying infrastructure that has to exist in order for that data to get where it needs to go.

And the technical term right now for the state of that infrastructure, from my perspective, is: it sucks. Given how well the team here responded to the DTS comment, we could make the same comment about the overall infrastructure we have on NIPRNet or SIPRNet. You pick it, right? Pick the network that you're currently having to hog-wrestle today in order to do the things you need to do. And while we certainly have software application challenges that go hand in hand with this, there is fundamentally an underlying connectivity issue here that we have to get after. I'm super interested to hear John's thoughts on that, as somebody who does hyperscale connectivity, what that looks like and how they get after it, because we ain't.

Jonathan Hudgins:

Yeah, absolutely. So at Google, we've broken down the data silos. We did this decades ago. We have an internal search system at Google called Moma; think of it like Google Search for the company. Anything I need to know about what's going on in the company, I can ask it questions and it's able to go reference that information and pull it up internally. That's because of an API management system we have internal to the company, all riding on Google's hyperscale infrastructure. That API management system is called Apigee. There are a lot of other things, culture, et cetera, that allow us to move very fast at Google. But Apigee, for example, just to help the folks in the room understand, is in use right now by the Air Force in two projects we're working on, both with Air Force LCMC. And what we're doing is taking that legacy data source you just talked about.

There are maintenance information systems, for example, that were developed and procured before APIs even existed, right? This is mission-critical stuff. We do the same thing in the financial services industry for banks: JP Morgan, Goldman Sachs. We modernize their systems the same way, and that's mission-critical stuff too. If Wall Street goes down, that's a pretty big deal. They had the same thing: things were written on mainframes, in COBOL, et cetera. Our teams come in and build the Apigee API gateway that connects, API to API, the legacy system and 21st-century user applications. We're doing this right now with the Air Force on a maintenance project. The use case is maintenance, but the use case can really be anything. It can go in air-gapped environments, et cetera. So the bottom line is, everyone in this room is going to use this technology today, or at least sometime this week.

If you track your UPS package online, you're using Apigee. If you order Starbucks online, you're using Apigee. It's under the hood of everything we do every day to transact in real time with data sources. So these are the tools that Google builds, but it's an API-first mentality that you have to have between the data sources in order to have that gateway that allows data to interact in real time with AI.
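The gateway pattern Hudgins describes can be sketched in a few lines: a thin translation layer sits in front of a legacy data source and exposes it through a modern, JSON-style interface. Everything here, the fixed-width record layout, the field names, and the helper functions, is hypothetical and purely illustrative; a product like Apigee layers routing, authentication, quotas, and monitoring on top of this basic idea.

```python
def parse_legacy_record(record: str) -> dict:
    """Convert a fixed-width legacy maintenance record into a structured dict."""
    # Hypothetical layout: tail number (6 chars), status code (2), flight hours (5).
    return {
        "tail_number": record[0:6].strip(),
        "status": {"OK": "mission-capable", "NM": "non-mission-capable"}.get(
            record[6:8], "unknown"),
        "flight_hours": int(record[8:13]),
    }

def gateway_lookup(legacy_store: dict, tail_number: str) -> dict:
    """The 'gateway' endpoint: modern callers query by key and get structured data,
    never touching the legacy format directly."""
    raw = legacy_store.get(tail_number)
    if raw is None:
        return {"error": "not found", "tail_number": tail_number}
    return parse_legacy_record(raw)

# Simulated legacy system, keyed the old way with raw fixed-width records.
legacy_store = {"AF1234": "AF1234OK00512"}
print(gateway_lookup(legacy_store, "AF1234"))
# → {'tail_number': 'AF1234', 'status': 'mission-capable', 'flight_hours': 512}
```

The point of the pattern is that the legacy system never changes; only the gateway knows its record format, so 21st-century applications (or AI agents) consume clean structured data.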

Maj. Gen. Kimberly Crider, USAF (Ret.):

So how are we thinking about that in the space C2 environment, Shannon? Because I know you’ve been thinking about similar things to what John is talking about here. Maybe you’re taking advantage of some of what he’s just mentioned, but…

Shannon Pallone:

I feel really validated that we're building API gateways right now, so that was great. But I think one of the things you're hitting on is deliberate investment in the infrastructure. And sir, I think that's what you were getting at too, right? So here's the problem in space. It's really easy to say, "I want to buy these satellites," because satellites are exciting. They used to let me buy satellites. They don't anymore. But there are these exciting objects on orbit.

It's probably as exciting as buying fighters, right? None of us are as exciting as buying rockets; they don't let me do that either. That's where it's really easy to say, "Invest in this technology."

But then if you look at how you actually want to use those satellites, how you want to use that data... We've worked together for the last couple of years on building out the DAF Battle Network. How am I providing some of that data back to the Air Force, to the greater joint community? It's all infrastructure. And the hardest thing to fight for is, could I have a little bit of money to build an API gateway? Because, by the way, that's going to unlock a lot of legacy data. Can I have a little bit of money to build better networks? Because, by the way, data is still subject to physics. I think we've all been trained, sorry, by Google, to expect it instantaneously at our fingertips, because it feels that way, because none of us are on AOL dial-up anymore. Probably most of us in this room actually remember what that sounds like. Sorry, not everybody; there are definitely some people who don't. But think about it: how fast could you get that data when you were on dial-up?

And then, if I'm on an edge platform, I hope you don't have dial-up speeds, but in a contested, degraded environment, you might. So when you go back to that question, and it's the same problem in the air that it is in space: how am I deliberately designing for that? How am I thinking about what goes to the edge? How do I have the right data there when I need it? Because I can't have... The number of people who say, "I just want all the data, everywhere, all the time, and then I'm just going to be able to do whatever I want with it."

And I’m like, “You don’t have enough server storage capacity, you don’t have enough bandwidth in any of your comm links to be able to actually have all the data everywhere all the time.”

It's got to be that deliberate design. We've got to make those investments in infrastructure. And I'm watching the Space Force really lean into that in recent years, but it's got to continue to be a focus, or all the other things we're building are not going to deliver. If you go back to where we are five years from now: if we don't make those investments, everything else we're investing in is going to be underutilized.
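Pallone's "data is still subject to physics" point can be made concrete with a back-of-envelope calculation of how long it takes to move a data set over links of different quality. The data volume and link rates below are illustrative assumptions, not figures from the panel.

```python
def transfer_time_hours(data_gb: float, link_kbps: float) -> float:
    """Time to move data_gb gigabytes over a link running at link_kbps kilobits/s."""
    bits = data_gb * 8e9              # gigabytes -> bits
    seconds = bits / (link_kbps * 1e3)  # kilobits/s -> bits/s
    return seconds / 3600.0

sensor_take_gb = 10.0  # hypothetical daily sensor take at an edge node
for label, kbps in [("dial-up (56 kbps)", 56),
                    ("degraded link (512 kbps)", 512),
                    ("healthy link (50 Mbps)", 50_000)]:
    print(f"{label}: {transfer_time_hours(sensor_take_gb, kbps):.1f} hours")
```

At dial-up rates the same 10 GB that moves in under half an hour on a healthy link takes over two weeks, which is why "all the data, everywhere, all the time" fails and edge data has to be deliberately curated.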

Maj. Gen. Kimberly Crider, USAF (Ret.):

Yeah. We've got to make the investments. And we hear from Google and companies like Google that they're making the investments. Clearly they see that the investment has to be made to deliver these things, to the point where they've almost made it seem too easy for us. So we've got to articulate that while it seems easy, it doesn't come without that investment cost, and that strategy and design focus General Cropsey was talking about. And Tim, let's get to your thoughts on this, because when you're talking about being at the edge, you've got warfighting at the edge. You can't lose connectivity; you can't afford not to be able to operate these systems that are going to depend on these autonomous capabilities. How do you think about data? How do you think about the investment, and telling the story that says we need to keep making these investments?

Col. Timothy Helfrich:

Yeah. Many people, when they think about a weapon system like an F-22 or an F-47 or a CCA, think about the air vehicle, and that's actually just one segment. Over the years we figured out that you need a training segment and a ground segment, but with F-47 and CCA, we have a digital infrastructure segment, because there is no way we'll be able to execute those programs without those investments, which, by the way, are built in, up front, into our cost estimates of what it's going to take to field the weapon system. So we know it up front. And a practical application of that: I hope we start talking about mission autonomy soon, like aircraft flying. I want to talk about that, but let me seed it a little bit.

I know I need to get to a place where I can update my mission autonomy overnight, or between sorties, and there's no way I'm going to be able to do that until I can get the mission debrief data to the software developer and back. And that's got to be real time. We're probably not going to have teams of software developers everywhere we're going to put CCAs, so that digital infrastructure is important. I need to be able to pull out the right data to give to the software engineer to make the updates that matter.

Shannon Pallone:

But if your AI agents are helping you write the code, you might not even have to have the man on the floor taking that information back and making the updates.

Col. Timothy Helfrich:

So what we haven't done, and this is great, we're starting to do this. Because we will have aircraft flying that will be carrying lethal weapons, we keep humans in the loop. This is a red line for us: the decision to use lethal force will be made by a human. So we just need to make sure that if we're applying agents to update the code, that doesn't get us to a point where it takes the human out of making that lethal decision. But we should be taking advantage of the modern software development benefits of using agents.
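The red line Colonel Helfrich describes is, in software terms, an authorization gate: the autonomy may nominate an action, but execution is structurally impossible without an explicit human sign-off. This is a minimal sketch of that pattern; all names and fields are illustrative, not drawn from any real system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Engagement:
    """An action the autonomy has nominated but cannot execute on its own."""
    target_id: str
    authorized_by: Optional[str] = None  # set only by a human operator

def authorize(engagement: Engagement, operator: str) -> Engagement:
    """The human step: record who approved the action."""
    engagement.authorized_by = operator
    return engagement

def execute(engagement: Engagement) -> str:
    """The gate: no human authorization, no lethal action."""
    if engagement.authorized_by is None:
        return "HOLD: awaiting human authorization"
    return f"EXECUTE: {engagement.target_id} (authorized by {engagement.authorized_by})"

e = Engagement("T-001")
print(execute(e))                        # held: the autonomy alone cannot fire
print(execute(authorize(e, "pilot-1")))  # proceeds only after human sign-off
```

The design point is that the check lives in the execution path itself, so an agent rewriting the mission code upstream cannot silently remove the human from the decision.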

Maj. Gen. Kimberly Crider, USAF (Ret.):

Yeah. So we're going to get to those topics, because this is a really important piece of the puzzle we're trying to unpack here. The data is one important aspect, and we've got some good thoughts there. The next important piece I want to jump to, and we'll get to the other things you guys are building on too, is trust. Because in the quest to leverage AI, even with the underlying data environment we so desperately need and seek, trust is a real factor. How do we trust these autonomous systems when human life is on the line or when we're dealing with lethal force? How does the trust equation play out? So General Cropsey, you were thinking about this, I'm sure, in your prior role as the PEO for C3BM, when you were working on ABMS and thinking about the cognitive load challenge. How did you factor in that cognitive load issue, and how are we going to begin to trust the AI we need to work with?

Lt. Gen. Luke C.G. Cropsey:

So I think there are maybe two things worth pointing out on this one. The first one may not actually be that obvious: in order for us to make progress in this particular area, we need language to describe what we're doing in a way that allows us to articulate, in a non-abstract way, exactly what it is we're talking about. I don't think you can trust something you can't actually describe. So a lot of this is how we build a taxonomy, a language, around what we mean when we talk about AI and autonomy together and the way we're actually applying them. I'll give you another AFA, Mitchell Institute plug: they did an airpower forum last month where they talked about AI and autonomy on a panel much like this. You should go check out that link.

It's an excellent conversation about how to think about this space, and about the language that's developing around it. So I think the language is an important piece of this, because we tend to talk about it as AI or as autonomy, and we have to get much more nuanced in order to turn the incremental approach into a deliberate, disciplined kind of path. The second thing I would say is that we actually have to deploy this in a way that allows our human brain to stay wrapped around the decision space, so that I can intuitively assess the inputs and the outputs. If we go too complex too fast, I no longer have the human intuition to know whether the thing I gave you as an input makes sense for what you're telling me as an output.

And it's really hard to build trust in that environment, which means we actually need to thin-slice this problem in a way that gives me a much more narrowly defined decision around a particular application or instance or function, so that I can deploy the thing, look at what it's doing, look at the results, and go, "Yeah, that actually makes sense, and I would've made the same kind of decision." When you first started using GPS apps on your phone and it was telling you to go this way and not that way, that was all great until it told you to go a way that you knew wasn't going to be faster, except you didn't know about the wreck.

So you second-guessed the AI, or second-guessed the algorithm in this case, and went the other way, until eventually, over time, you're like, "Actually, this didn't end well for me the last 10 or 15 times I did it. I'm just going to do what it says."

Well, now it’s telling you, “Look, you could go multiple different paths, but it’s going to take you this much longer.”

Why did they do that? Because of trust. As humans, at the end of the day, it's, "I don't trust this pane of glass; tell me what I already know." Well, you don't know a lot of things. So how you gate that, and the way you actually put it into practice, has a huge implication on whether or not you fundamentally end up trusting a thing.

Maj. Gen. Kimberly Crider, USAF (Ret.):

Yeah. In your world with CCAs coming online, trust is a big factor.

Col. Timothy Helfrich:

It's a huge factor. And I agree with many of the things you said, but we can't be so paralyzed trying to get everything perfect that we never get going. So the way we've taken this on CCA is: let's take the small step, let's deliver on our commitments, and let's show it in the virtual environment thousands of times. Then when we go fly it, like when we flew the RTX mission autonomy on the General Atomics jet just recently, it did what it said it was going to do. It did what we expected it to do. And yesterday, if you didn't hear, when Shield AI's mission autonomy flew on Anduril's jet, it did what it said it was going to do. What we actually found going through those reps and sets is that you can build trust in the virtual environment.

And so, one good story is on the PVI, the pilot-vehicle interface. Think of a tablet a pilot might have in the cockpit. When we first started showing these to our operator teammates, because everything we do is hand in hand with our operators, it's just how we're wired, they would give us feedback like, "Oh no, I want to see this, and I want to see this, and I want to see this." They wanted to be the pilot for the CCA. But these are not remotely piloted aircraft; they're autonomous. And once they saw it and got more used to it, they were like, "No, I don't want to see that anymore. I want to see less. Give me the full battlespace picture; I just need this little bit of information."

And then when somebody new comes in, the old hat is like, “You’re not going to need all that. You can ask for it, but you’re not going to need all that stuff.”

And then when we flew that in the F-22 in live flight, and it did exactly what we said it would do in the virtual environment, we gained that trust. So we're gaining trust piece by piece. We'll gain more trust when we get the aircraft to the EOU, the Experimental Operations Unit, so they can put it through its paces in a few months. And we'll get even more trust when we put it into large force exercises here in a couple of years. So it's that incremental fulfilling of commitments and showing that it's going to do what you expect it to.

Maj. Gen. Kimberly Crider, USAF (Ret.):

How’s it work for space? Is it similar?

Shannon Pallone:

Well, I was going to say: similar, but a slightly different experience on the space side, and it's really getting at the same thing. I loved what you said earlier about where the human is in the loop in making the decisions, because it's all the stuff you just talked about that lets you then decide, at that point in time, how much you're going to trust versus how much you're going to say, "No, no, I want human eyes on it."

So I actually had a system where we started with automation and then we ended up having to take the automation out of the system, because the battle managers who were trying to make decisions said, “But I don’t understand why it came up with that answer.”

And I don't think we had any regrets about starting with automation, because pulling the automation back out when we'd started with it was much simpler than trying to build the automation in after the fact. But it went back to: how do I get the reps and sets? How do I get the experience and build the trust? Okay, now that I understand as the human how the machine is making decisions, I can explain that to my boss when my boss says, "Well, why would you recommend that? Are you sure the machine is right? Why would it come up with that COA?"

They could come back and say, “Hey, these are all the factors that went into it. Here’s all the decisions that it’s making. Here’s how it’s stopping along the way.”

And then you add the automation back in. So whether you start there or end there, I don't think it actually matters, but it is that time spent getting the experience in to understand it. And it's fair, right? I've been trying to teach myself prompt engineering at home, and I call the AI my intern, because it requires a lot of feedback. "Hey, great job. Hey, we talked about this and you've screwed it up three times in a row. Why do you keep doing that?"

I’m looking forward to the day when it’s not my intern, when I’m like, “Hey, boss, help me with this problem.”

But it's getting that comfort level and understanding how to interact with the technology that is just so important for actually adopting this, because in the line of work we do, blind trust that the machine is definitely right is just never going to be the right answer.

Col. Timothy Helfrich:

Yeah. One thing I failed to mention is that one of the ways the operators knew what was happening is that we built a joint simulation environment system, we call it "game," but it's essentially like an orange-wired JSE. So we know what commands the autonomy is giving and the inputs that made it give them. Now, it doesn't know everything, but it gives you so much insight into why it's doing something. So essentially we're doing virtual debriefs with the pilots, understanding why it's doing that, and deciding whether we're comfortable with that or need to make changes.

Maj. Gen. Kimberly Crider, USAF (Ret.):

How are you thinking about it in the commercial world? Because I’m sure you… I mean, you guys are building the AI, you’re deploying it, you know users have to trust it. How do you think about trust?

Jonathan Hudgins:

The answer to that question is a summary of what I've just heard from these three. It's two things: you build products that deliver extraordinary value to end users, and you have a robust test and evaluation process behind them. General Cropsey gave the example of, I'll just go ahead and say it, Google Maps. Think about the products you all use daily: Google Search, Gmail, Google Maps. There's a lot of trust you put into these products. I would argue that with Google Search, you're trusting that machine with a lot of very intimate life questions about health, relationships, finances, those kinds of things. I'll speak for myself: if my personal Gmail was hacked, people would know a heck of a lot about me, more than we would probably want, but I trust Gmail with that information. And then Maps to get my family to and from. I think a useful example, analogous to Colonel Helfrich's world, is Waymo.

Waymo is Google, if people didn't know that; Waymo is owned by Google. It's live today in 10 cities, with two more coming. The process we took to roll out Waymo is the same process, at least from what I'm hearing, that Colonel Helfrich is following: one step at a time. One street, one highway, one city, et cetera. It never goes live in a city without a driver in the seat first, and then eventually it gets to a point where there's no driver in the seat. I actually took my first Waymo last week in Austin, Texas, and it was completely normal. The only thing that was awkward was that I found myself wanting to say hello as I got in and goodbye as I got out, which was a little bit weird. But I ordered it on Uber, it showed up, and the Bluetooth from my phone unlocked the door.

I got in, and it took me to the place. It drove just like a human would. And you've got to think, it's the same problem he has: navigating a highly adversarial environment with imperfect information across a number of multimodal sensor suites it has to process, making life-and-death decisions in fractions of a second. So how did I trust that? Well, I trusted it because I'm part of Google. I understand literally what's happening under the hood, and maybe I'm a little forward-leaning. And I said, "Yeah, I'll get in this car. I believe in what they're doing."

But it started one road at a time. Waymo didn't just pop up across the United States. It started in San Francisco, then moved to Phoenix and Austin. And so it's going to come to every city.

Maj. Gen. Kimberly Crider, USAF (Ret.):

So as you go through that process of, little by little, trusting that the system is going to do what it's been designed to do, what comes out of that is delegating authority. This is a whole other aspect we have to think about if we really want to take advantage of these technologies: how do we start to decide what authority we're going to delegate to the machine? And I want to go back to Colonel Helfrich on this, because I think this is the very tangible example we can point to from a military perspective. With CCAs, as we're operationalizing them, it's the ultimate human-machine teaming experience. How is that delegation of authority working out?

Lt. Gen. Luke C.G. Cropsey:

Hang on. Hang on just a second.

Maj. Gen. Kimberly Crider, USAF (Ret.):

Yeah, sure.

Lt. Gen. Luke C.G. Cropsey:

We had this conversation off-stage and it needs to get on stage. And some of you out there have got to be thinking, and I’m going to go back to your, we can’t wait for it all to be perfect. I agree. We’ve been flying Predators for decades. We’ve been flying Global Hawks for decades. So in connection to Kim’s question, what took us so long? Why is this so much different than what we’ve been doing literally for the last 20, 30, 40 years?

Col. Timothy Helfrich:

Yeah, that’s great. I thought you said you weren’t going to ask that question, sir. But you’re my boss, so great. It’s actually a good question, sir. So first, Predators, Reapers, you just take those off the table. Those are remotely piloted aircraft; there is somebody on stick and throttle flying those all the time. Global Hawk is a little different, right? With Global Hawk, you set up waypoints and it does what really falls into the category of flight autonomy. And what you’ve actually done is force me to talk about the autonomy GRA, which I want to do, and I will before I’m done. But flight autonomy is those basic things that just say, “How are we going to ensure it doesn’t crash? How are we going to do the basic things like take off and follow waypoints?”

Mission autonomy, on the other hand, is when you’re employing TTPs, tactics, techniques, and procedures. When you’re actually inside a very highly adversarial area or an engagement area, it’s going through those different tactics and the playbook that needs to happen there. The interesting thing about that is that flight autonomy is inextricably linked to the flight- and safety-critical aspects of the aircraft, so it falls in the airworthiness world. My mission autonomy, on the other hand, because it’s being limited by what the flight autonomy will do, goes and does those things that autonomy does really well, like processing data coming off a sensor and executing those tactics, techniques, and procedures. And because it is not inextricably linked to the flight- and safety-critical side, I can update it like this, right? So I had to separate those two things out. Hopefully that answers the question.

Lt. Gen. Luke C.G. Cropsey:

Yeah. So in the context Kim was asking about, the distinction that I think is important here is that in the Predator case, or maybe better the Global Hawk case, what I delegated was literally just flight controls. Fly the plane, fly the route, land the plane. That’s all I delegated. In your context, when we’re talking about CCAs, we’re delegating a crap ton more stuff, technical term.

Col. Timothy Helfrich:

Yeah. Oftentimes we talk about the manned fighter, the pilot in this case, as the quarterback. And sometimes it might actually be better to be the coach, because the coach is calling the plays, right? You need to be at the point where you can call the plays for a four-ship of CCAs. And the one other thing I might add is that we want to make sure we’re thinking about the use case. The use case for a CCA is much more dynamic than the use case for a Global Hawk, right?

Maj. Gen. Kimberly Crider, USAF (Ret.):

Yeah. So we’re delegating a lot more, but I also take your point that we’re separating out the different types of autonomy and how we’re delegating each. Flight autonomy is airworthiness, it’s safety, it’s pretty set. And although we’re going to do updates, we’re not going to do them as continuously as we might with mission autonomy, where we’re going to allow for more changes because we have to adapt to the environment as well. So in the space C2 world, how does this play out? And how does this question of what’s delegable or not delegable, as it relates to integrating autonomy into our space C2 capabilities, come together?

Shannon Pallone:

I would say I think we’re actually a little bit behind you in the thinking, and I loved listening to all that. So thank you for that; I feel like you just saved me a meeting. I think we’re a little more cautious. We are very cautious in space, because I send something up into space and I can’t get it back. And if I crash something in space, I may create debris that becomes a problem for everything else for a very long time. Depending on what orbit it’s in, it could stay there forever or it could degrade very quickly. So it’s about getting that practice in. Saying I’m going to do health and status of a satellite, and doing autonomy that way, similar to the example you gave, is going to be a much easier problem than the spacecraft itself deciding it wants to make a maneuver because it thinks there’s going to be a conjunction and it’s going to hit another object.

I don’t think we’re quite at that comfort level, but I think that’s because we haven’t employed enough of the technology to work through it yet. We’re still trying to understand what those risk trades are and really trying to characterize them. I think General Saltzman said it so well in his keynote on Monday: the risk trade of the things I know how to do and how I’m doing them today, versus the risk I’m taking because I’m not doing those things. We’re still experimenting with how we find the right balance between those trades, and when it’s worth giving up something I do today for the thing I can’t do, but should be able to do, tomorrow.

It goes back to trust. It goes back to how much we’re building that out. I loved your Waymo example. As someone who has family in San Francisco, I get a lot of articles sent to me on all sorts of Waymo things, and I’ve ridden in one and they’re fantastic. I think the thing you didn’t highlight, and the greatest thing, was, “I don’t have to talk to a driver. This is amazing.”

But you have consequences for that too, right? If a Waymo hits a human being, there are still legal consequences. It’s actually the same exact problem. And I think we tend to forget that our problems and commercial problems aren’t actually that different, so making those leaps into commercial technology shouldn’t be quite that hard, but it takes us a little while to wrap our heads around that model. So I’m seeing that change. As the Space Force has stood up, I’m seeing the excitement of the guardians who are what I would call native guardians; they aren’t the ISTs, the inter-service transfers, that everyone else who founded the Space Force was. They just think of technology in a totally different way. And I think we’re at that inflection point, where we’re going to start to see it really rapidly flip, because the guardians of today are already saying, “Why do you guys do things in such an antiquated way? This is silly. Why am I spending my time on this?”

So I think we’re at a good point, but I don’t think we’re fully there.

Maj. Gen. Kimberly Crider, USAF (Ret.):

Well, unfortunately on that note, we’re out of time, but this has been a really rich conversation. Thank you all so much for helping us think through these things. The fact of the matter is, we can’t wait any longer. We have guardians, we have industry, we have so many folks that can and should be helping to push on this. Because to General Cropsey’s point, we are way out of time; it’s taken way too long. We’ve got to get after the data. We’ve got to deal with the trust issues. We’ve got to do the iterative experimentation. We’ve got to work through this risk calculation so we can get the delegation and the autonomy right, and we’ve got to just move faster. So join me in thanking this panel. They have been phenomenal here today.