Accelerating Decision Superiority: Harnessing Data and AI
September 23, 2025
This transcript was generated with the assistance of AI. Please report inconsistencies to comms@afa.org.
David Ware:
All right. Thanks. Thank you all for joining us for what will, I’m sure, be a really exciting session on data and AI, and how those can power the future of the warfighter. Quick introductions. Major General Michele Edmondson is the Air Force Deputy Chief of Staff for Warfighter Communications and Cyber Systems, Headquarters Air Force. She serves as the Air Force’s senior leader responsible for guidance, oversight, and advocacy of Air Force warfighter communications and cyber systems implementation. She integrates comprehensive planning, strategic resourcing, and force development, readiness, and management activities to deliver critical freedom of action in cyberspace, advancing joint warfighting missions by ensuring the service maintains technological superiority and operational resilience across all domains of conflict. General Edmondson, welcome. Ryan Tseng is the President, Co-Founder, and Chief Strategy Officer of Shield AI. He is responsible for driving long-term vision, shaping corporate priorities, and guiding the company’s positioning at the intersection of autonomy, national security, and emerging technology. He previously served as CEO, guiding the company from inception to over 1,000 teammates and a $5 billion valuation, and was earlier the founder, CEO, and CTO of WiPower, a wireless charging company acquired by Qualcomm. Ryan’s a graduate of MIT and the University of Florida. Ryan, welcome. And Major General Luke Cropsey is the PEO for Command, Control, Communications, and Battle Management. He was commissioned through the Air Force Academy in 1995. He’s held key engineering, acquisition, and sustainment positions in aircraft, conventional and nuclear weapons systems, and National Reconnaissance Office space systems; served on the Office of the Secretary of War staff for systems engineering; and led at the squadron, group, and wing levels as a materiel leader and senior materiel leader. As PEO C3BM, he has what many have said is the hardest acquisition job in the Air Force, integrating across the air and space forces, as well as with joint and international partners, to design and deliver the DAF Battle Network. Welcome, General Cropsey. Maybe I’ll just start with this: the department has asserted that data, AI, and autonomy are going to play a critical role in the future of decision making and achieving victory. What are the most exciting advancements each of you has seen in the last year or so? And then, what are a couple of things that you’d like to be able to do, but can’t yet?
Maj. Gen. Michele C. Edmondson:
Me first?
David Ware:
Yes, please.
Maj. Gen. Michele C. Edmondson:
Awesome. Really, let me start with a huge thank you. David, thanks for doing this. But thanks to AFA, and thanks for the invitation to be here. It’s great to sit up here and share the stage, and the highlight of AFA is finding the connections that you otherwise wouldn’t find. Just talking to Ryan backstage about some of the things that we’re both really focused on and prioritizing has made this a huge win already. So thanks for letting me join this group. Picking the one biggest thing is kind of hard, so maybe parochially I’ll start with, from where I sit today, wearing my cyber badge, the stand-up of the A6 and the S6 over the last year. And why that’s really important when it comes to data, AI, and autonomy is that you now have two voices sitting at the table to advocate for the service requirements, and to get the momentum, the resourcing, and the support that we need from the department to really invest in the right things. So that is huge. But from a programmatic perspective, I’d really start with CCA and look at the success that we have had with the CCA program. That’s really exciting. But there are lots of things out there. I think my AETC friends would really resonate with what the Air Staff is doing using deep learning models, I think 620 of them in total, to really get after the pilot retention crisis in the Air Force: the training piece, the absorption piece, and the retention piece, and how do we look at that differently? So that’s exciting too. As for what I wish we could do, honestly, that list grows every single day. From a Six perspective, it’s really getting after ensuring we have the trained Airmen and resources to sense, assess, protect, and defend everything associated with the warfighting environment. General Hensley said it best yesterday: we have to think of that as so much bigger than networks. My mind first goes there, to the Six. But then it goes to Ms. Char-Lachlan, and I put my space hat on: everything that we could do in the space domain, and really the mission end of the data that we get from our spacecraft, and how much data gets left on the cutting room floor. I think about the technical exploitation we could do with more AI in that environment, and the same would go for the intel community. I was talking to Ryan about using drones and AI for base defense. The list is really, really long. But now, I think, with an A6 and an S6, we can do better to advocate for solutions in the future.
David Ware:
General Cropsey, you want to pile on that one?
Maj. Gen. Luke Cropsey:
So from my perspective in a C2 environment, the piece that we’re continuing to push on is the DAF Battle Network. It’s a system of systems: we’re looking to integrate sensors, effectors, and logistics systems, and to apply all of that to the singular point of driving decision advantage inside of our own capabilities as compared to those the adversary is bringing. So from a temporal standpoint, I think the conversation around data and AI is fundamentally about, hey, how do I accelerate the speed at which I can make those decisions, and how do I accelerate the quality of the decisions that are made? Then, collectively, I end up in an OODA loop that, quite frankly, nobody can catch up to. I think to get there, we’re going to have to figure out how we get the AI components of what we’re doing out to the edge. We’re going to have to move past a scenario where we’ve got these kind of pristine training grounds inside of these data environments, and get AI to the point where it’s directly interacting with the physical environment in which it’s operating, and actually learning directly out of that environment, as opposed to having to rely on training data that we provide it directly as a function of how the model is instantiated. So as the field continues to progress, and we start thinking more and more about what data and AI look like at those edge conditions, and about applying AI to the physical environment it’s in, I think we’re going to open up some real potential to transform the models around how we think about the application of AI, the way that it learns, and the way that we fundamentally employ it inside of our weapons systems.
Ryan Tseng:
For me, the last year has been about seeing, for the first time, I think, large-scale impacts on the battlefield by AI. Shield AI was started 10 years ago by my brother and me, along with one other co-founder. My brother was a Navy SEAL, and he wanted to bring the best of what was going on in the autonomous driving sector to the mission of protecting service members and civilians. We have a billboard that occasionally runs that says, “The greatest victory requires no war”: bringing asymmetric capability to dominate on the battlefield to provide that deterrence. And through the first nine years of our journey, I was proud of the impacts that we had in different places around the world. But last year, we had the opportunity to start deploying our capabilities at meaningful scale in Ukraine. And for the first time, you could see how AI was enabling unreachable parts of the battlefield to become reachable. Undiscoverable targets, discoverable. Unhittable targets, strikeable. To see AI making a difference in probability of kill in these targeting loops, to see it accelerating the decision cycles and providing strategic effect on the battlefield, was really a moment I thought was amazing, and something that, if the US and allies continue to lean into it, will become a source of asymmetric capability for decades into the future.
David Ware:
Maybe, Ryan, could you expand a little on your view of General Cropsey’s point, which was, as I heard it: we have to get out of the centralized model, where we have a data repository, we train all the AI on that data, and we sit in the middle, and we have to be able to push AI to the edge. You all have had some success there. What was required to enable that pushing of AI to the edge?
Ryan Tseng:
So just quickly, on some of the success that we’ve had operating in challenging environments: we’ve used AI applied to vehicles to conduct over 200 sorties in Ukraine, none of them with GPS. And I think the critical capability that probably needs to be built is the government-industry team that’s able to learn, adapt the software, and deploy it at scale. One of the biggest learnings has been that the software that worked yesterday isn’t the same software that’s gonna work the following day. So when we talk about physical AI, or systems that can learn at the edge, the pipelines that provide the high-end performance that enables mission effectiveness, while also earning trust and being certifiable and trustworthy in the eyes of the operators, are, I think, the critical missing component.
David Ware:
Interesting. Large language models and agentic AI are all the rage these days. Do you all see a place for those in, I guess, the DAF Battle Network, for example, or in the places that are a little more command and control and autonomy related? Or are you mostly focused on more traditional autonomy use cases, neural networks, those kinds of things?
Maj. Gen. Luke Cropsey:
Okay, I’ll bite. No, I think they’re absolutely relevant. And as we continue to move forward, to the point Ryan was making, I think there’s a sweet spot among the complexity of the space that you’re operating in, the human mind’s ability to correlate the inputs to the outputs, and the trust that, for better or worse, we either do or don’t have in the model that’s generating those outputs. As we get better, call it intuition, at judging whether the models are actually giving us the kinds of answers that we would expect, I think what we have to do is start with a narrower field of inquiry, where we have a more intuitive ability to understand the output, from the standpoint of what we can keep our heads wrapped around. So at least from a battle network standpoint, I think primarily where we’re starting is a narrower set of things, where we can train models specifically for a particular aspect of the problem and still have fairly good intuition at an operational level for what the results should look like. And we would grow it out from that standpoint. In those contexts, I think these kinds of large language models and agentic AIs really come to bear much quicker than they would otherwise. But I certainly welcome any other thoughts on that one.
David Ware:
Yeah, that might actually be a good segue into a conversation around trust. The first time I used a large language model, a few years ago, I really screwed it up because I didn’t understand how it worked. I typed into it, give me a list of the most authentic Indian restaurants in the Northern Virginia area, and it produced a pristine, amazing list. I was so excited to eat at all these restaurants, and five out of six of them didn’t exist. And that was my error, right? I didn’t know that it wasn’t searching the web on the backend, and all of those kinds of things. So, my fault. But my trust was broken, and it took me a month or two to try it again, ’cause I was like, oh, it’s garbage, right? And it turns out the error was between the seat and the computer. But I guess the question for you all is: as you think about what is required to enable that trust at the end-user level, for people who are gonna be new to these technologies, experimenting with them, it’s so easy to lose that trust. In addition to the starting-small point, General Cropsey, that you mentioned, maybe, Ryan, you could start: what are a couple of the things that you all have found are important to creating that trust in the AI solutions that you’re building?
Ryan Tseng:
So I think finding the intersection of performance and trust is the key to unlocking the capability of AI and autonomy on the battlefield. Number one, from a technical architecture perspective, it’s super important to have the system segmented and compartmentalized in the right ways. If you just take an aircraft as an example: make sure that your flight-critical systems, the things that don’t need the creativity and very wide-scope thinking of an LLM, are segmented off, so they can be flight certified and everything else, and then keep the LLMs, with very complex autonomy, in another part of the system. So there’s technical architecture work that, if implemented, helps keep the AI in a box, so to speak, that bounds what it can do and bounds the downsides of what can happen. I think the second component is making sure that people get exposure to the way the AI thinks and behaves. Your example about asking what Indian restaurants are in the area, and then finding out they don’t exist, was an example of testing an AI in a safe space, right? In our case, we do a lot of work getting our autonomy embedded in the simulators, embedded in the war games, so that people can see and develop intuition for when it’s gonna do well and when it’s gonna misbehave, and be able to identify the TTPs, the tactics, techniques, and procedures, so to speak, for how this AI can be used effectively. So I think integrating it into the training programs, the TTP development, the war games, is gonna be critical to developing trust in the system. And then, very fundamentally, I think first principles matter so much. You know, we both went to the University of Florida, and the thing I took away from my engineering program at Florida was an appreciation for first principles, the math and the physics; you can build up to basically anything when you have a strong grasp of those. As AI becomes more creative and more influential, it’s the people who are able to see when it’s hallucinating or when it’s making bad decisions, who can be smarter than the AI because they’re grounded in first principles, that I just think are incredibly important for anybody working with the tool.
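To make the segmentation Tseng describes concrete, here is a minimal, hypothetical sketch of the “AI in a box” pattern. All names, envelope limits, and interfaces below are the editor’s illustrative assumptions, not Shield AI’s implementation: a certifiable flight-critical partition range-checks every command, and proposals from the LLM-style mission partition pass through a guardrail before they can reach it.

```python
# Minimal sketch of the compartmentalization principle described above.
# Everything here is hypothetical; it only illustrates that flight-critical
# code never executes LLM output directly.
from dataclasses import dataclass

MAX_BANK_DEG = 45.0        # assumed flight-envelope limits, for illustration
MAX_AIRSPEED_MPS = 250.0

@dataclass(frozen=True)
class SteeringCommand:
    bank_deg: float
    airspeed_mps: float

class FlightCriticalController:
    """Certifiable partition: no LLM code runs here; inputs are range-checked."""
    def apply(self, cmd: SteeringCommand) -> None:
        if abs(cmd.bank_deg) > MAX_BANK_DEG:
            raise ValueError("bank command outside certified envelope")
        if not 0.0 < cmd.airspeed_mps <= MAX_AIRSPEED_MPS:
            raise ValueError("airspeed command outside certified envelope")
        # Hand off to the certified autopilot here.

class MissionAdvisor:
    """Non-critical partition where LLM/creative reasoning would live."""
    def propose(self, situation: str) -> SteeringCommand:
        # A real system might query an LLM; this stub returns a fixed proposal.
        return SteeringCommand(bank_deg=60.0, airspeed_mps=180.0)

def guardrail(cmd: SteeringCommand) -> SteeringCommand:
    """Clamp any proposal into the envelope so the downside stays bounded."""
    bank = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, cmd.bank_deg))
    speed = max(1.0, min(MAX_AIRSPEED_MPS, cmd.airspeed_mps))
    return SteeringCommand(bank, speed)

if __name__ == "__main__":
    proposal = MissionAdvisor().propose("ingress leg")      # 60 degrees: too hot
    FlightCriticalController().apply(guardrail(proposal))   # clamped to 45
```

The design choice the sketch captures is that the guardrail, not the advisor, is the trusted component: even a misbehaving or hallucinating mission layer can only emit commands already inside the certified envelope.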
Maj. Gen. Michele C. Edmondson:
And from a force perspective, I agree with everything Ryan said. We bring people into the Air Force every single day who bring incredible experience in this space. We need to capitalize on that, for one, but then we need to ensure that the force is on a level playing field in really understanding the principles of explainable AI and what that really means. And nothing we’re proposing about AI is about relinquishing control. It’s about providing the force reps and sets so that they get comfortable with the strengths and weaknesses of AI, right? It’s called human-machine teaming for a reason, and the human always has the upper hand in that relationship, to intervene when the human needs to intervene. But there really is, I think, an opportunity for us to do better by the force, to continue to give them the tools to make them more comfortable in the trust space.
David Ware:
I wonder, General Edmondson, if you could also talk a little about the funding picture and advocacy for AI and for data. One of the challenges we’ve often seen is that it’s treated as a trade-off against other missions I should be accomplishing; we don’t always POM for the data and AI things that we need to do. And then there’s the question, I’ve heard some people ask, of which mission am I supposed to be doing less of so that I can go pay for the AI thing that I wanna do? So at the headquarters level, how are you all thinking about advocacy for that type of funding? And what would you say to programs or organizations who would really love to do more data and AI work but can’t find the budget for it?
Maj. Gen. Michele C. Edmondson:
Yeah, and that kind of goes back to the trust piece. It may be a bit of a culture change for the force to really see that an investment in AI is an investment in innovation for the future, and we have to be more comfortable with experimentation and accepting risk so we can realize those investments for the future. And then I go back to where I started: having the advocacy now at the table among the S6, the A6, and SAF/CN. That trifecta of a relationship, I think, will really help move the needle on getting funding for the things where we need to invest to support the force. But really, it’s an opportunity to move us past the acceptance of the status quo and to look for new ways to do things, data-based solutions to difficult mission problems, that in the end allow us to move the human in the loop to higher-level tasks because we’re doing other things more efficiently, which then, in and of itself, is a resource investment.
David Ware:
General Cropsey, you’ve been fairly inventive in terms of how you’ve done some of those things within your own PEO. How do you think about sort of flexibly allocating resourcing to address some of those requirements?
Maj. Gen. Luke Cropsey:
Yeah, this might be somewhat pedantic for the broader group, but at one level, you’ve just got to get it written into your R-docs and your P-docs so you have the flexibility inside of your existing funding stream to move that money and do things as targets pop up. It’s not necessarily about going and getting other or new money; it’s about how you take the money that you have and leverage it for the problem in front of you. And given the speed at which things are developing in this field, we’re way inside of the POM cycle. If you’re thinking you’re gonna put something in place for the ’28 POM, I mean, we haven’t even gotten to the end of ’25 yet. It’s right around the corner, next week, but think about it: you’re two, probably three years away from actually being able to do anything with that money. That’s an eternity in this field, given how fast things move. If we aren’t setting ourselves up under the existing authorities and tools that we have, with the budgets that are already laid in, we’re just gonna completely miss the game. So you need the kind of mindset where you’re looking at your existing programmatic plans and staying flexible around what those plans are. In other words, I’m gonna lay in those plans, right? Every program manager is gonna do their due diligence. But I’m gonna hold them loosely, because at the end of the day, I know that whatever I think I’m going to do in the next year is likely not what I’ll need to do in the next year. I’m gonna have to be able to pivot. And as an acquisition enterprise, we don’t generally breed that level of flexibility into the way we think about how we do programs. But in this environment, especially if you’re in a software-defined, hardware-enabled environment, if you’re not actively thinking about software kinds of speeds and AI kinds of speeds, we’re gonna miss the boat. We’re not gonna get there as quickly as we need to, or we’re not gonna stay ahead of where the adversary’s going. So you’re probably gonna have to have some top cover around what that looks like and how we do it, which requires a conversation not just within the program that’s executing it; it requires you to engage with your MAJCOM, and it requires pretty tight coupling with the operational user on the other end to really understand what that problem set looks like. And then you just have to get after it. I think in the process of generating that value on the back end, we’ll start to shift the culture. But up front, you’ve gotta be willing to take some risk early on and really get after some of those high opportunities that minimize the downside cost later on, to Ryan’s earlier point. Understanding where you can take those smart risks, where even if it doesn’t play out, you learn something along the way and you haven’t really crashed anything, I think is gonna be really important. So how do we put robust prototyping and experimentation into the underlying fabric of how we think about what we’re funding and what’s in our budget? Because those are the environments where, if we’re enabling them on a continuing, ongoing, annual basis, I may not know exactly what falls into them year over year, but I know I’ve got the money and the opportunity to do it.
David Ware:
Ryan, just from the industry perspective, what’s your perception of how the funding landscape for these types of things is evolving? And when programs and organizations have done really well, how have they been engaging you on that, or maybe what are some innovative things that they’ve done?
Ryan Tseng:
I think the general nailed it: it’s about flexibility, being able to adapt as you learn what’s going on. Some of the biggest friction points for industry come when we identify innovations, or work together with the end user to find a solution to a problem, and then are simply unable to execute because the money is locked into tight grids. So I think anything that can be done to improve the flexibility and agility of funding is absolutely game changing. And of course, there have been initiatives across the Air Force, across OSD, across the Department of War to make funding more agile. But it’s an area that will continue to pay dividends if program managers and the acquisition community keep finding ways to lean into agility, because the technology landscape changes so quickly, and the right solutions to the problems, not just the technology that works but what is effective on the battlefield, change so quickly, that the funding has to be as agile as the technology and the evolving operations on the battlefield.
David Ware:
For sure. You’ve talked a little about getting in bed, or not in bed, but in with the users to develop these solutions and to test them. General Cropsey, I wonder if you’d talk a little about this: you all have done a lot of experimentation, ShOC-N and other things, in terms of getting capabilities quickly tested and piloted. What have you seen be successful for the government-industry team in those types of testing scenarios?
Maj. Gen. Luke Cropsey:
Yeah, a little bit to Ryan’s earlier point, I think in some of these cases you’re just going back to good first principles. In a lot of cases, you take a system that’s kind of designed around, hey, I’ve gotta do a requirement, and I throw it over to the acquisition team, and they throw it over to the resource team, and there’s this three-year process that has to happen in order for us to close that loop, and you just get back to good user-centered design, where you’re going straight to the end point. You’re saying, okay, I’ve got some smart guys over here on the nerd-herd side of this business who have the ability to really think deeply about where and how the tech can be applied, but there’s really sticky information over on the operational side. I don’t fly a jet, so I don’t necessarily understand all of the things that are going on in a given mission set; you’ve got that piece of it. How do I create a collaborative environment that lets me actually get out there, put it in a real scenario, a real environment, and muck around with it, right, technical term? The space is so complex, in both an operational sense and a technology sense, that it’s really hard for people to wrap their heads around these things in the abstract. Until they actually see it in an environment that they understand and know, it’s problematic to design it on the virtual side and assume you’re going to land it, if you’re not putting it out into that environment. So I think large-scale exercises have become increasingly important, and we’ve obviously upped the game on that front from an Air Force perspective, a department perspective. Finding the pieces that you can adequately segment into something that lets you test a significant component or capability, without necessarily having to test the whole end-to-end thing all at once, I think is also important. That requires a little thinking up front, in coordination with your user, to figure out, okay, what matters to you, and what can I viably test in that environment? Then you’ve gotta get it into a plan, do all the things that go into the prep, and actually have a team that’s thinking about it from that perspective and getting in front of it.
David Ware:
I’d like to zoom back out and talk a little about other barriers to at-scale AI adoption and what can be done about them. There are certainly pockets of success and pockets of being at scale, but there are a lot of experiments out there that have not quite made their way into the realm where they’re impacting the everyday Airman or civilian. So I just wondered, General Edmondson, maybe you could start. We talked a little about funding, we’ve talked a little about trust. What are the other kinds of barriers, in your mind, that are preventing more of these experiments from getting to scale?
Maj. Gen. Michele C. Edmondson:
Yeah, I think I would really start with data. Data, data, data, data, data. We’ve gotta get the data right. We really struggle, I think, with data integrity and with being able to integrate our data. We’ve gotta be able to share and aggregate data. We go into things with good intentions, looking for COTS systems and solutions to use, but then we personalize things to the degree that we just can’t get there on integration, right? We create these individual hand-carved wooden shoes out of things, and that creates barriers that certainly weren’t intentional, but it creates a problem where we then have to start over to find new solutions. So we’ve got to do better by the data. And from a Six perspective, if we can’t get the data right, there are so many things we just won’t be able to do to support General Cropsey in his endeavor. So we’ve got to focus more on the data piece. The other piece, I would say, is that we need more people with the right skill sets, right? We’re the United States Air Force, and we need your expertise. So we are working hard; we’ve stood up a new data analytics career field, which I think is great, but that is a very small core subset of human beings. We need more people who can help us with the data piece. And then I would say it’s bigger, certainly, than that core group of humans. We recruit a digitally literate force, right? The Airmen who show up at basic training, the 18-year-olds, have been digitally literate from the time they were born. We have got to capitalize on that, and we have got to upskill them throughout their careers so that we continue to build on the skills they bring with them. And then we’ve got to, I think, go fix the rest of us who maybe weren’t a digitally literate force, right, to make sure we can keep pace with the generation entering the Air Force. And then, really, when it comes to actual execution, it’s the scalability problem, the infrastructure problems, the computing problems, and looking for software-defined solutions instead of hardware that has to be physically replaced in equipment. And then maybe the last piece is the culture piece again, and the force understanding the value, right? So that everybody is bought in to these things that we think are gonna help move us forward in the future.
David Ware:
Thank you. Ryan, on your side, what are the hardest barriers, as a builder of these technologies, that you hope the department can help you overcome?
Ryan Tseng:
I think the biggest thing, and this is gonna be such a boring answer, is just the agility of the acquisition system; that’s probably the biggest challenge. But maybe to give an original thought here, rather than just repeat what people have said before: I think that finding ways to get into the digital environments, into the simulations, into the war games, with AI having a seat at the table, can be an accelerator both for industry and for the government, to fast-forward to the best thinking on how to use this on the battlefield. So if there are ways that the Air Force and the program offices can figure out how to fast-forward the injection of autonomy and AI into the digital environment, I do think that’s something that will pay dividends, and I know some people are thinking about it.
David Ware:
Anything to add, General Cropsey?
Maj. Gen. Luke Cropsey:
Yeah, I mean, to that point, you actually have to have a digital environment to inject it into, which is maybe obvious, but I think it’s worth stating. From a battle network standpoint, one of my biggest challenges is just the underlying infrastructure that actually makes it all work. If you look at the number of different stovepipes we have scattered around different aspects of how we do even just the C2 part of this, I have literally independently owned and operated stacks all over the place. Because of that, and because of the lack of data ubiquity across all of those things, I can’t enable that to happen; and if I can’t enable that to happen, I can’t enable the AI application to happen. So a lot of the focus of what we’re trying to do on the C3BM side of the house, quite frankly, is not glamorous at all. To your previous point, Ryan, it’s just the hard grunt work of figuring out how you get the right infrastructure where you need it, so that you can actually enable the kinds of things we’re talking about right now. Because short of that, it’s entirely aspirational: hey, yeah, I’d love to go do that, but I can’t even get a stack that will run an AI model in any kind of an edge condition right now. So that’s what I would add.
Ryan Tseng:
And maybe, if I could, just to call out a good-news development: the autonomy government reference architecture that the Air Force has been pushing, in my opinion, has been a really amazing step, one that frankly exceeded my own expectations in terms of simplifying the development-to-deployment path for autonomy across vehicles. We’ve now had the opportunity to work that architecture on a diverse set of vehicles, from helicopters to target drones to CCA surrogates, and it’s an architecture that I think has delivered tremendous value. So where government and industry have identified friction points, I would claim, and I’m interested to know other people’s views, that’s an example of a success story: seeing where autonomy and AI were going and taking proactive steps to lay track that would enable the industry-government teams, and frankly industry-industry teams, to fast-forward to useful capability.
David Ware:
Yeah, I think we’ve heard a lot in the last year or two about what it takes to have standards for an open architecture, and I realize that you all have done a lot of experimentation and development around that; it’s been super exciting. Maybe just one looking-toward-the-future question: assuming you’ve gotten all the experiments right, you’ve built the infrastructure, all of those kinds of things, I would love to hear you all talk a little about your level of aspiration for autonomy. I don’t wanna get into a doctrinal discussion or anything like that, but would love to know: what are the types of things where you see, in the future, that this should actually be fully autonomous, and what are the ones where you think, well, maybe we would actually like to have a little more control, or a human in the loop? Maybe, General Cropsey, we can start with you on that one.
Maj. Gen. Luke Cropsey:
Yeah, in some ways that’s a really hard question to answer in some instances, because so much of what we currently think is off limits, versus what’s in play, is a function of where we sit and the time and the place we’re at. The best example I could think of as I was contemplating the question: if you go to the shooting range, your M16 has a single-shot setting and full automatic on it. So you have a selector switch, based on where and how you happen to find yourself in the battle space, and whether you’re trying to meter out your individual ammo and target individual things, or whether you’re being overrun and you put everything you’ve got on it. And that’s an interesting comparison to the .50 cal machine gunner sitting right next to you, which is a whole other level. So in a sense, what we’re doing is asking the folks who are implementing the AI to implement it through that entire continuum: from, hey, I wanna have oversight or input into everything that happens before it happens, to what I’ll call full automatic on the .50 cal. Trying to sort your way through where and how, from a decision-advantage standpoint, you’re gonna put in those stops or those markers, and where you have to put a dial, in a sense, right into the tech to be able to do that: that’s a challenge from where I’m sitting. And back to the trust conversation we had early on, I think we’re gonna have to work our way into that. But I think when the balloon goes up, there’s a very real conversation around how much that policy influences the way you implement the tech at any given point in time. So we have to be very thoughtful about those kinds of questions, around how we put the technology into play so that we’re capable of addressing that full range of options.
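As a concrete illustration of the “selector switch” continuum General Cropsey describes, here is a minimal, hypothetical sketch. The mode names and the authorize function are the editor’s assumptions, not any fielded system; the point is that one pipeline can carry a dial from per-action human approval out to pre-delegated full automatic.

```python
# Hypothetical sketch of a human-oversight "dial" built into an autonomy pipeline.
# Mode names and logic are illustrative only.
from enum import Enum
from typing import Optional

class AutonomyMode(Enum):
    HUMAN_APPROVES_EACH = 1   # "single shot": operator must confirm every action
    HUMAN_ON_THE_LOOP = 2     # proceeds unless the operator vetoes in time
    FULL_AUTO = 3             # "full automatic": authority pre-delegated by policy

def authorize(mode: AutonomyMode, operator_decision: Optional[bool]) -> bool:
    """Decide whether an action may proceed under the selected oversight mode.

    operator_decision: True = approved, False = vetoed, None = no input yet.
    """
    if mode is AutonomyMode.HUMAN_APPROVES_EACH:
        return operator_decision is True        # silence means no
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        return operator_decision is not False   # silence means go
    return True                                 # FULL_AUTO: bounded by prior policy

# The same action, dialed differently as the situation changes:
assert authorize(AutonomyMode.HUMAN_APPROVES_EACH, None) is False
assert authorize(AutonomyMode.HUMAN_ON_THE_LOOP, None) is True
assert authorize(AutonomyMode.FULL_AUTO, None) is True
```

Note how the two human-involved modes differ only in what silence means; that single default is where much of the policy conversation about where to set the stops would live.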
David Ware:
General Edmondson, most of these things are gonna be developed in the programs, and applied and refined at the edge. How do you think about setting policy for autonomy and for AI in a context where so much of that end-level innovation is gonna be happening very far out at the edge?
Maj. Gen. Michele C. Edmondson:
Yeah, as I’ve been in the seat here for a couple of months, I’ve realized that, in this world of compliance versus warfighting, we tend to tune our systems more for compliance than for warfighting. I think we have to shift that paradigm, tune our systems for warfighting over compliance, and have the discussion about where there is tension between the two, where there’s associated risk to force and risk to mission, and what the commanders in the field need to be able to do their jobs. So from a policy perspective, I think we need to shift how we’re having that conversation. And then, thinking about it holistically, I go to following the doctrine on mission command: what our expectations are for mission command in the first or second island chain, in an austere environment, in a DDIL environment, and what the requirements for decentralization are going to be for those teams to succeed in that environment. And then, what do we have to give them from a policy and guidance framework to be able to use data, automation, and AI in that environment to make decisions at echelon, which is a lot different from senior-leader decisions being made sitting back away from that tactical edge?
David Ware:
Ryan, you’ve talked a lot about autonomous kill webs, and I know that’s one of the things you all are sort of getting after. What else needs to be done sort of in practice for you guys to be able to achieve that vision?
Ryan Tseng:
Yeah, if you had asked me a couple of years ago, I would have said going from conceptualization to in-flight capability on weapons or aircraft. But I think a lot of those development pipelines and that infrastructure have now been established, and it’s actually possible to go pretty rapidly from, I want to do this mission or this thing, to producing the capability that can go do that thing in an exercise or somewhere else. The big gap I see in the autonomy lifecycle, to enable these kill webs, is the factory that produces the TTPs. How do you inject this autonomy into the warfare centers so we figure out how to actually use it most effectively? Is the thing that we conceptualized at the front end as being useful actually useful? Does it survive contact with some of the brightest minds in warfighting? So I think enabling those warfare centers to pressure-test the TTPs and provide those feedback loops is key, because the good news is the development cycle time has gotten a lot faster. And then the second component is, how do you train that at scale, and how do you deploy it with the force while continuing to connect those loops back? So to get away from spot deployments of autonomy to large-scale deployments of autonomy, we’ve got to find a way, at scale and pace, to take care of those next steps in the autonomy and AI lifecycle.
David Ware:
Awesome. Well, in our last couple minutes here, I’m sure that there are a ton of vendors in the audience who have amazing solutions that they would like to bring you all that are relevant to your missions. I wondered maybe, General Cropsey and General Edmondson, if you could just start with sort of what types of solutions are your teams most interested in hearing about, and how should people go about bringing those to you?
Maj. Gen. Luke Cropsey:
Well, in terms of our touchpoint, the easy answer on my end is: send an email to our front-door team. That way you’re not stuck with a single point of failure in the system, right? If you send it to me, I’ll make my best effort at getting you to the right spot; if you send it to the front-door team, they will actually make sure you get to the right spot. In terms of the problem space we’re interested in, you heard a lot of it here today, so hopefully you took some good notes as we went through this. But the other piece of this, which maybe we haven’t touched on quite as much, is the fact that we probably need a way to exercise how this would look in an operational sense without actually having to do the operation. The scale, the speed, the numbers, the mental loading that I think is gonna happen in the system is gonna be like nothing we’ve seen to date, and I’m not sure we actually know how to create that environment ahead of time. So I’m very interested in thoughts around what that would look like and how we would do that.
Maj. Gen. Michele C. Edmondson:
Yeah, I agree with everything Luke said about the exercising piece. What I’m struggling with is that we’ve got to be able to train like we fight. Comm degradation has got to be a top priority, for understanding what the warfighter requirements are and what we can provide, and we have to be able to recreate that in an exercise environment. First, though, I would say: anything we can do in the Six world to enable success for Luke, we want to be in those conversations and be partners with the community to do that. Selfishly, from a Six perspective, I would say it starts with the data: we’ve got to get the data to the edge, and we’ve got to ensure that we have the comm capability in the austere environment. Once we ensure that, it goes to protecting and defending our networks and really automating cyber defense. And I shouldn’t say networks; I should say the entirety of the warfighting environment. We’ve got to stop thinking about the network as a separate thing from the warfighting environment. It’s all a warfighting environment; the network is the warfighting environment. I think Char said it great yesterday when she said, “We’ve got to start thinking about combat power being digital at its core.” That is a different paradigm from the one we think about today. And then it goes back to the topic of this conversation, right? We need more teams, more AI, more human-machine teaming, so that we can better enable decisions at echelon, not just senior-leader decisions. And then it really goes to the development and readiness of the force, and anything we can do to partner to better ready the force for the future conflict.
David Ware:
All right, a good note to end on. General Edmondson, Ryan, General Cropsey, thank you for your time. Really appreciate it.