Accelerating AI

February 24, 2026



Susan Davenport:

Yes, thank you. Awesome, all right. Welcome, everybody. I am Susan Davenport, the Department of the Air Force’s chief data and artificial intelligence officer, and I’m absolutely thrilled to be here at the AFA Warfighter Symposium with leaders from industry talking about the acceleration of artificial intelligence. As you know, especially this group here in this room, AI is no longer theoretical; it’s already influencing real-time decisions, shaping operational capabilities and changing how we secure our nation.

Across the Department of the Air Force, AI adoption is accelerating rapidly and deliberately. We’re moving beyond pilots and proofs of concept to AI embedded across multi-domain operations, readiness and sustainment, workforce augmentation, development, training and education. But we’re very clear about one thing: we do not do this alone. Many of the most impactful advances in AI are coming from industry, where capabilities are fielded faster, scaled earlier and proven in real-world environments. Granted, most environments aren’t as complex as the Department of the Air Force’s, but they’re equally complex in different ways, and so much of what we can learn is what industry has already put into practice.

That’s why, today, I want to hear directly from industry partners about where AI is working: real successes, real lessons learned and what it actually takes to move rapidly from prototype to mission outcome. I also want to hear industry’s reaction to the Department of War’s newly released AI strategy; I’m interested in what resonates and where you all believe the greatest opportunities exist for government and industry to move faster together. I have one shameless plug: the Department of the Air Force is getting ready in the next several days to release our own AI strategy. That strategy is focused on speed, operational relevance and breaking down barriers to success, and it’s literally on Secretary Meink’s desk right now.

So, today’s conversation is also not theoretical. It’s about execution, it’s about alignment and, ultimately, it’s about decision advantage, ensuring data and AI translate into real, measurable impacts for the war fighter. All right. So, with that, I’m going to introduce our distinguished panel. I’m excited to be joined today by leaders who are working at the intersection of AI and real-world outcomes. To my left, we have Chris Brown from Virtualitics; he is the public sector CTO. To his left is Nicky Pike, the field CTO at Coder, and, hopefully, Nicky, when he introduces himself, is going to tell us what a field CTO is. Next to him is Jess Salzbrun, who is the CIO for Defense Systems at GE Aerospace. And then at the end we have Jay Theodore, who’s the CTO for ESRI.

So, thank you all for joining us, it’s going to be a great discussion. What I want to do now is turn it over to you to give about a one-minute introduction of yourselves, what you’re doing today and how AI shows up in your work. Go ahead, Chris, we’ll start with you.

Chris Brown:

Hey, Chris Brown. So, I like to joke that I started my career last century saving the world from Y2K, so you can all thank me a little bit later. But really, I have joined Virtualitics, where we bring AI to the readiness and sustainment world to focus in on things like predictive maintenance and munition storage, so we don’t go over net explosive weight limits, and how we can really support the war fighter on the front line to get our ships, planes and vehicles to the fight.

Nicky Pike:

So, my name is Nicky Pike and I’m the field CTO with Coder. A field CTO is actually the bridge between customers, our product, R&D and our sales teams; we try to bring in all the requirements that everybody wants and make sure that everybody’s talking the same language. Now, I started my career over 30 years ago working in companies like Microsoft and, recently, right before I came to Coder, I was with VMware, where I worked on Cloud Foundry. And my career has always been about helping developers be more productive and increase what their output can be, and I came to Coder because it satisfied a requirement that we’ve had in the industry since we started software development, which is having those consistent environments and getting rid of a lot of the variance that we have in development.

And when we start looking at AI, the things that we need for humans are even more so when it comes to AI because now, instead of having that one environment, we can have the ability to spin up 10, 50, 100 different agents to run so being able to provide that consistency and realize that output that’s coming from agents is why I wanted to come here.

Jess Salzbrun:

Hi, Jess Salzbrun, I serve as the chief information officer for GE Aerospace’s defense division. I love my job; I sit at this really special intersection of technology and delivering mission outcomes for the war fighter, and it’s just a really, really exciting job that I get to show up to every day, so thank you for allowing me to be here. For those that may not have full context for GE Aerospace, we are primarily a propulsion provider and we’ve been partnering with the US military for over 100 years. We power two out of every three helicopters and fighters in the US fleet and half of the US Air Force’s bombers. We aspire to lead the industry in AI and we are well on our way to doing that. We’ll talk a little bit today about some of the ways that we’re applying AI to deliver sustainment outcomes for the US Air Force, so thanks for allowing me to be here.

Jay Theodore:

Hi, I’m Jay Theodore, I play the role of CTO. I focus quite a bit on enterprise and AI technologies, bringing them into our products, I’m in product development. And in terms of what we do at ESRI, we are a 57-year-old software company and I’ve been there for about 33 years now and I started as a programmer but my real role is to bring relevant technology in a useful and integrated manner. So, if you take our technology, it’s basically capturing the digital twin of the world whether it’s above ground, on the ground or below ground and, as part of that, bringing the system of observation, the system of record. And I would also say the system of decision making where, whatever is relevant and the context is often spatial, there’s a location that’s tied to pretty much everything that you do. So, that’s what we do and we provide that as SaaS and PaaS and, in the tactical edge also, that’s what we do.

Susan Davenport:

Great. Thank you all. All right, now we’re going to get started into the discussion, so this is where it’s going to get exciting. All right. So, I’m going to start with you, Jess, and then, if you could, when you’re done, turn it over to Jay. So, most of us in this room know that the Secretary of War released his AI strategy on the 9th of January. And for all the CDAOs and the MILDEPs and the COCOMs, we’ve been very, very busy. It’s an extremely exciting time for CDAOs, for data and AI in the department, as a result of that strategy, so, hopefully, all of you have read it.

That strategy lays out what the MILDEPs and COCOMs have to do in 30, 60 and 90 days and so, again, we’ve been busy working on those things. For the Department of the Air Force, we accomplished the 30-day activities: one was cataloging some 460 systems of record into Advana, and the other was identifying at least three fast-follow projects, and we identified six because we’re overachievers, aligned with the pace-setting projects that the Secretary of War has laid out. And so, tons of fun things are happening in this realm and I know, from my seat, my vantage point, I think this was the exact right strategy at the exact right time.

From industry’s perspective, what are your thoughts about this strategy? How are you all reacting to this and where do you see the greatest alignment? Jess, if you can kick this off?

Jess Salzbrun:

Yeah, thank you. So, for me, this was really an acknowledgement of the fact that, clearly, we need to supercharge our industrial base but, also, we know that we don’t have the capacity in our industrial base today to support the deterrence and readiness goals that we aspire to. AI is a necessary tool to achieve that; we will not achieve those goals without the application and aggressive adoption of a technology that gives us competitive advantage over our adversaries, and a technology, by the way, that the US does better than anyone else in the entire world. And so, you know that has many of our enemies a bit scared, but we’re at an inflection point now where we can’t just rely on the AI labs to do all of the heavy lifting, or all look to Palantir to do it for us; this involves the participation of all of us in the industrial base.

And so, GE Aerospace, in partnership with Palantir, by the way, we enjoy working with them, really is leading from the front in that, and just one data point to put some meat behind that claim: in 2025, we delivered 30% more engines to our defense customers than we did in 2024, and a lot of that was driven by the application of technology and artificial intelligence to help us identify and break constraints in our supply chain. So, for me, it’s a really exciting time. We are very supportive at GE Aerospace of this strategy and are excited to be part of industry partnering with the department in making these goals a reality.

Susan Davenport:

Jess, thank you. Jay, you want to take that?

Jay Theodore:

Yeah. As I read through the AI strategy document, I saw a lot of alignment with the industry itself. Being in the industry for a long time, there’s something about system design, system development and system deployment and the AI strategy actually captures all of it. It’s not just what you do but how you do it and how do you gauge whether you achieve the results or not. And even the timeline that is specified there, I would say, is well aligned. In product development, we have sprints, we have iterations and we do the checks and balances at the end of each of these and then we have product releases every six months. Even from a timeline perspective, it aligns to that and then it’s the full stack starting from data. When we talk about AI, the most relevant thing that we need to discuss is data, the cataloging of that, the metadata that’s associated with that because, if you don’t have a good starting point, anything that’s the end result is just made up.

So, that’s what I would say in terms of how the strategy aligns with what we do in industry and how we can bring about success through this. So, it’s really a great alignment, I would say.

Susan Davenport:

Yup. Thank you for that, Jay, appreciate that. All right, Chris and Nicky, this next question is going to be for you. We’re increasingly mandating “any lawful use” language in AI contracts to better support our war fighter work. From your perspective, what contractual or bureaucratic barriers still hinder agile tech companies from partnering with the department? If you could change anything tomorrow, what would it be?

Chris Brown:

Yeah. So, this is a great question, especially with the lawful use. If anyone’s been reading the news about what’s going on with Anthropic, this topic could be a little bit of a minefield, I guess, these days. But one of the biggest challenges to implementing AI within the department actually centers on the CDAOs, I think. One of the passion projects I’ve had consulting for and selling to the Department of War over the last probably 10 years has really been around data governance and management, and especially the security of the data. Because it is very important to know what data is going into the models, especially if you’re running these RAG-based use cases: what data is actually going in there, what security classifications it has, what data’s coming out, and matching that to the actual people.

So, working with the CDAOs is going to be one of the key things for industry to make sure that, as we implement these going forward, that we can actually make sure that what’s coming back out and what’s going in is actually what someone can actually see because we don’t need any more data leakages. We have enough problems with China attacking our networks and insider threat going back to guys like Snowden and even before.

Susan Davenport:

Yeah. I will just add before Nicky answers, phase one of the AI strategy talks about cataloging the data and the data source, that’s only step one. So, the great news is, for all of you data and AI officers out there, the next phase is going to be actually logging the metadata, understanding what data that you’re collecting and then making that available across the department. So, cataloging of the information system is interesting and a required first step but we’re moving toward metadata.

Chris Brown:

Yeah. And that context, when people talk about doing text to SQL types of use cases, that metadata is so important so that the model can actually understand what each little element within the system is.
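To make Chris’s point concrete: a text-to-SQL pipeline typically renders the catalog metadata into the model’s prompt so the model can understand what each element in the system is. Here is a minimal sketch, where the table, columns and prompt template are all illustrative assumptions, not any real DAF schema:

```python
# Sketch: enrich a text-to-SQL prompt with catalog metadata so the model
# knows what each element in the system means. All names are illustrative.

CATALOG = {
    "engine_parts": {
        "description": "One row per piece part tracked against an engine.",
        "columns": {
            "nsn": "National Stock Number identifying the part",
            "on_hand_qty": "Units currently in inventory",
            "lead_time_days": "Supplier lead time in days",
        },
    },
}

def build_text_to_sql_prompt(question: str, catalog: dict) -> str:
    """Render the metadata catalog into the prompt context."""
    lines = []
    for table, meta in catalog.items():
        lines.append(f"Table {table}: {meta['description']}")
        for col, desc in meta["columns"].items():
            lines.append(f"  - {col}: {desc}")
    schema = "\n".join(lines)
    return f"Schema:\n{schema}\n\nQuestion: {question}\nSQL:"

prompt = build_text_to_sql_prompt(
    "Which parts have fewer than 10 units on hand?", CATALOG
)
```

Without the column descriptions, the model has to guess what a field like `nsn` means, which is exactly where generated SQL tends to go wrong.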

Susan Davenport:

That’s right.

Chris Brown:

As smart as LLMs are today, they’re still really bad at math.

Susan Davenport:

Yeah, absolutely. Awesome, thank you. Nicky?

Nicky Pike:

So, I do believe that the any lawful use mandate is absolutely the right direction and I do think this aligns with what we’re seeing from OMB and how it goes about federal procurement today. So, OMB M-25-22 already states that agencies have to … They’ve got to avoid vendor lock-in, they’ve got to look for product interoperability, they’ve got to look for data portability and they’ve got to have clear delineation between IP and data ownership. One of the things I think we’re seeing with the 180-day timeline to embed this in contracts is we’re sending a clear signal, but we’re still going to have what I feel are some barriers that are going to be more operational than philosophical, and the one that comes to mind for me most is going to be that system-by-system ATO.

So, right now, we come in and we help build a program or a platform that’s out there today, that platform gets authorized, then a project team comes in and they build an agent on top of that platform. Well, most organizations are going to require that just that agent go through a complete ATO cycle again. And to me, this is a lot like requiring a building permit for furniture that goes inside an accredited building. The platform itself is the security boundary, not the things that operate inside of it, and I think we’ve got to start treating it that way. So, the most significant change that I could see is that we go in and we codify the fact that AI development and runtime platforms are infrastructure; they are not bespoke systems.

So, this means having acquisition language where the department can go in and accredit a hardened environment one time, treat the workspace templates and the configuration as government property, and then allow the things that happen within that system, things like new agents or workflows or new models, to be treated as configuration changes. Those would be subject to more streamlined risk reviews rather than having to go through the whole ATO process again, and I think this is how we align the any lawful use concept with the speed that the strategy’s driving, and we do have precedents for this already.

When we look at the Department of the Air Force, some of their software factory programs like Kessel Run, I think these gave us great results and I think that’s something that really … That we can all agree was a good outcome for everybody.
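Nicky’s “platform as the security boundary” model can be sketched as a simple policy table: the hardened environment is accredited once, and changes inside it route to a streamlined review, while changes to the boundary itself still trigger the full cycle. The change categories and policy below are illustrative assumptions, not any actual accreditation process:

```python
# Sketch: treat the platform as the accredited security boundary and route
# changes inside it to a streamlined risk review instead of a full ATO.
# The change categories and the policy table are illustrative assumptions.

STREAMLINED = "streamlined risk review"
FULL_ATO = "full ATO cycle"

# Changes inside the accredited boundary are configuration changes;
# changes to the boundary itself still need full authorization.
REVIEW_POLICY = {
    "new_agent": STREAMLINED,
    "new_model": STREAMLINED,
    "workflow_update": STREAMLINED,
    "platform_modification": FULL_ATO,
    "new_external_connection": FULL_ATO,
}

def required_review(change_type: str) -> str:
    # Default to the heavier review for anything unrecognized.
    return REVIEW_POLICY.get(change_type, FULL_ATO)
```

The design point is that the default is the full ATO cycle: anything the policy does not explicitly recognize as an in-boundary configuration change falls back to the heavier process.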

Susan Davenport:

Yeah, I love that one. And luckily for you, the acting deputy CIO is sitting here in the room, Dr. Keith Hardiman, and so, hopefully, he heard that and we’ll make sure that that happens but, yeah, absolutely agree.

Nicky Pike:

We’ll talk later.

Susan Davenport:

There are a lot of precedents set for that and I think inheritable controls and all of that is exactly the right direction we should be headed, so thank you for that. All right. So, the next question is going to be for Jess and Chris. Sustainment and readiness depend on turning large data sets into timely decisions. Can you share an example of where AI is already delivering measurable operational impact for the department?

Jess Salzbrun:

The clearest example in our world is a partnership that we have with the DLA and the US Air Force in managing their fleet of the J85 engine, which powers the T-38 aircraft. So, it’s no secret that getting more pilots trained is a priority and we’re struggling to do that right now; we can’t train pilots on planes that aren’t operational. And you can think about the amount of data that goes into maintaining a fleet of engines, especially when that data is spread across multiple different acquisition agencies, where there’s a source of truth with the OEM, GE Aerospace, there’s a source of truth with the DLA, there’s a source of truth within the Air Force, sometimes multiple sources of truth. And so, the ability to manage 6,000 individual piece parts on a J85 engine across multiple different agencies is incredibly challenging.

And so, the work that we’re doing is creating a single, unified source of truth, it is a data platform that really serves as a decision engine where we are leveraging artificial intelligence to identify constraints in the supply chain before we may even know that they exist. If you can think about a human trying to sift through 6,000 parts and the availability of those parts and placing demand on the supply chain in order to get parts positioned where you need them, when you need them, the work scopes, when maintenance needs to be performed, when engines need to be coming off wing, it is a giant optimization problem that AI is really, really good at. And so, this tool that we have created is putting AI in the hands of those that are managing the fleet and managing sustainment of the fleet.

And just one really tactical example of how that shows up is, within the platform, we have what’s called the AI morning brief where you can pull up the platform and it’s going to dig through all of the data that we have pooled together and consolidated and it’s going to tell you your number one priority this morning is to go and tackle this specific part because it is going to hold up a repair that you need to do in three weeks that’s going to prevent this many engines from going back on wing and training more pilots. So, it’s been a really, really neat and very impactful application of AI in helping support sustainment and readiness goals for the US Air Force.
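The “AI morning brief” Jess describes is, at its core, a prioritization over the consolidated data: rank parts by how many engines their shortage would hold off wing, and surface the most urgent one first. A toy sketch with invented fields and numbers, not GE Aerospace’s actual model:

```python
# Toy sketch of a morning-brief prioritization: most engines blocked first,
# then soonest repair date. The part records are entirely invented.

def morning_brief(parts: list) -> list:
    return sorted(
        parts,
        key=lambda p: (-p["engines_blocked"], p["days_until_repair"]),
    )

parts = [
    {"part": "fuel nozzle", "engines_blocked": 2, "days_until_repair": 45},
    {"part": "compressor blade", "engines_blocked": 5, "days_until_repair": 21},
    {"part": "seal kit", "engines_blocked": 5, "days_until_repair": 60},
]

# Top priority: the part blocking the most engines, with the nearest repair.
top_priority = morning_brief(parts)[0]
```

The real system would pair a ranking like this with forecasts of demand and supplier lead times; the sort is just the “tell me my number one priority this morning” step.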

Chris Brown:

Yeah, you would’ve thought I wrote this question for the panel because this is what Virtualitics does, and AI is really going to be a game changer for the department around sustainment and readiness across supply, equipment, training and personnel. The sad fact is that most of this is maintained on Excel spreadsheets. It is very manually intensive, so that leads to errors; it leads to planes not getting in the air, ships in the water, tanks in the field, because, all of a sudden, someone pulled the last door for an aircraft out of the bin and they didn’t know they needed to order more.

So, what we’re doing is making sure that we can make those predictions based on historical data about when you’re going to need to make those maintenance calls, and show you your supply going down so you can get ahead of when you hit zero. Make sure you can call DLA six months, a year ahead of time, because we know lead times are long for almost any part, probably even screws, within the Department of War. And even for storing of munitions, like I mentioned that we do storage planning, we saved Global Strike Command over 50,000 man-hours because we are now able to start operationalizing and automating the re-storage of munitions, using machine learning models and allowing them to query large language models about where certain parts within the munition need to go, because some things can’t live next to one another.

In my very basic layman’s terms, I did not serve, I kind of equate it to putting baking soda and vinegar together: you get the kids’ volcano from third grade, so we don’t want that to happen. We also take into account constraints like net explosive weight, because we can’t go over that for safety reasons. So, it’s very important that we start getting this into the hands of all the service members out there, and the biggest challenge is the data. What we’re trying to do is not be another data lake, another system that you have to send data to, hold that data, re-ETL it, re-catalog it-

Susan Davenport:

Thank you for that. Appreciate you saying that. Yeah, thank you.

Chris Brown:

Yeah, because we can talk about Palantir, we can talk about Databricks, we can talk about 100 other different data lakes that are out there so let that data be managed and governed there as the true system of record and then we are providing that last mile of analytics on top of all that.
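The munitions-storage constraints Chris mentioned earlier, compatibility between items and a net explosive weight ceiling, reduce to a feasibility check per storage cell. A minimal sketch; the field names, groups and limits are illustrative, not real explosives-safety rules:

```python
# Sketch: can a candidate munition join a storage cell without violating
# the cell's net explosive weight (NEW) limit or compatibility rules?
# Groups, weights and the incompatibility table are illustrative only.

def can_store(cell_items, candidate, new_limit_lbs, incompatible_pairs):
    # Constraint 1: total net explosive weight stays at or under the limit.
    total = sum(i["new_lbs"] for i in cell_items) + candidate["new_lbs"]
    if total > new_limit_lbs:
        return False
    # Constraint 2: some items can't live next to one another.
    for item in cell_items:
        if frozenset({item["group"], candidate["group"]}) in incompatible_pairs:
            return False
    return True

cell = [{"new_lbs": 300.0, "group": "G1"}]
rules = {frozenset({"G1", "G4"})}  # hypothetical forbidden pairing
fits = can_store(cell, {"new_lbs": 100.0, "group": "G2"}, 500.0, rules)
```

A storage planner then searches across cells for an assignment where every such check passes, which is the optimization piece the machine learning models automate.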

Nicky Pike:

And I would like to add one thing. We’re talking a lot about how AI can change how we approach projects and how we analyze data, but there is the other aspect of this: what AI can do for the people that are actually developing these applications and these programs, and what we can do to make the developers within the Air Force more productive. And not only developers, but also those domain experts and those engineers who maybe don’t have that technical background but have a lot of domain knowledge and a lot of processes that they think will work; how do we get those out into software that they can actually use? And I think that’s one of the places where we’re also going to see a huge movement in AI, where we’re able to unlock those capabilities from people that may not otherwise have the technical ability to do so.

Susan Davenport:

Yeah. So, you said you may as well have written this question for this panel because it’s exactly what you do. You all just riffed off of something that I did not plan to talk about but will now really quickly: in addition to the Department of the Air Force’s AI strategy that is, hopefully, on the secretary’s desk right now, we also have a data strategy. That data strategy talks about decentralized data: leaving the data where it lives, having those authoritative data systems feed up to some centralized catalog where you can hit an API and get access to that data, but leaving the data where it is so that we’re not handling it multiple times and making multiple copies of it. So I’m really glad to hear that you all are thinking in that same vein.

So, I will jump to the data question now that we’re on that. So, Chris and Jay, let’s talk about data. Data discovery remains a persistent challenge. For those of you in the department itself, you know that we have evolved slowly toward decentralized data management. We started out with a lot of things, and then we ended up with a data fabric, and that data fabric was a stepping stone to where we’re going next, which is something called DAF Data on Demand. It’s exactly what we were just talking about: making data discoverable for AI, for humans and machines essentially.

So, Chris and then Jay, data discovery remains a persistent challenge, how does industry manage the full data life cycle? How do you enable discovery and governance and make it operationally useful in your companies? Or, if it’s not in your company, do you see other government customers that have been successful in this way?

Chris Brown:

Yeah, I have been handling this both as a consultant, going back about 10 years, and as someone selling into it, and it’s a difficult problem, but we’re starting to make small strides forward. What has been good is that the advent of the data lake has been a very good thing, because we’re now able to easily store large amounts of data, both structured and unstructured, in an economical way. Being able to use object storage and then separate that storage from compute makes it much easier to bring that data in and adapt, as opposed to our traditional ways of using relational data stores, where we had to create these specialized data marts all over the place; now we can actually start centralizing this data. And I’ve seen this come to fruition in programs I’ve worked with, like, I’m sure everyone’s heard of Advana out there, and I’ve actually worked with Air Force VAULT over a number of years too to help provide these capabilities.

What is also important is that we’re starting to get better tools for data observability and data lineage, which, I think, has been one of the key things missing from data governance. Cataloging is pretty simple, but, obviously, it’s difficult to keep those catalogs up to date. We’re getting a better handle on security; I’m starting to see people move to true attribute-based access control, where we can start doing true access control at the column, row and cell level, which is key to not duplicating data around. But again, going back to that lineage: understanding where that data came from and what transformations it has gone through is really the missing key, and that’s where I’m starting to see people go and become more successful in being able to manage this data.
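As a sketch of what that attribute-based access control looks like in practice: rows and columns each carry a classification attribute, and a user’s attributes determine what survives the query. The labels and sample data here are invented for illustration:

```python
# Sketch of attribute-based access control (ABAC) applied at row and
# column level. Classification labels and sample data are illustrative.

LEVELS = {"UNCLASS": 0, "CUI": 1, "SECRET": 2}

def abac_filter(rows, column_labels, user_level):
    # Columns the user's attribute permits.
    allowed = {
        col for col, label in column_labels.items()
        if LEVELS[label] <= LEVELS[user_level]
    }
    visible = []
    for row in rows:
        # Row-level check: drop rows labeled above the user's level.
        if LEVELS[row["_row_label"]] > LEVELS[user_level]:
            continue
        # Column/cell-level check: project only permitted columns.
        visible.append({col: row[col] for col in allowed if col in row})
    return visible

rows = [
    {"_row_label": "UNCLASS", "tail_number": "X1", "location": "Base A"},
    {"_row_label": "SECRET", "tail_number": "X2", "location": "Base B"},
]
labels = {"tail_number": "UNCLASS", "location": "CUI"}
unclass_view = abac_filter(rows, labels, "UNCLASS")
```

Because the filter is computed from attributes at query time, there is no need to keep duplicated, per-audience copies of the data, which is the point about not duplicating data around.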

Susan Davenport:

Thank you.

Jay Theodore:

Yeah. I would just add to what was discussed so far that a data strategy has to include metadata also, because what we’ve seen is that AI has driven way past human cognitive understanding in the sense that it’s very multimodal now and we receive massive volumes of data at all kinds of frequencies. So, the speed to relevance is very important and, for that, context is important. Context doesn’t often come from the data alone; it comes from metadata also. So, we are using AI to generate metadata so that it can help agents understand the data better and bubble up the contextual value of that data itself. Because if you take imagery along with what we call vector data, and observation data, and maybe weather data also that’s coming in, capturing all of those within a slice of time and space is very essential. And to do that, how can we advance AI itself in a meaningful way? I think it’s all tied together.

So, metadata along with data, I would say, and, along with what we discussed, keep data where it is, because that’s the best of breed, but get the compute alongside also; get the agents alongside to work with that data.
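A small sketch of the metadata-generation idea Jay describes: derive a catalog record, spatial extent, time slice and modalities, from a batch of observations so agents can discover what a dataset covers. The record fields are an illustrative assumption, not ESRI’s schema or any metadata standard:

```python
# Sketch: summarize a batch of observations into catalog metadata so
# agents can find the slice of time and space a dataset covers.
# Field names are illustrative, not a real metadata standard.

def generate_metadata(observations):
    lats = [o["lat"] for o in observations]
    lons = [o["lon"] for o in observations]
    times = [o["timestamp"] for o in observations]
    return {
        # (min lon, min lat, max lon, max lat) bounding box
        "bbox": (min(lons), min(lats), max(lons), max(lats)),
        "time_range": (min(times), max(times)),
        "modalities": sorted({o["modality"] for o in observations}),
        "count": len(observations),
    }

obs = [
    {"lat": 32.4, "lon": -86.3, "timestamp": "2026-02-20T10:00Z", "modality": "imagery"},
    {"lat": 33.1, "lon": -85.9, "timestamp": "2026-02-21T10:00Z", "modality": "weather"},
]
record = generate_metadata(obs)
```

In practice, this is where an AI model can add value beyond min/max summaries, for example captioning imagery or tagging features, but the output lands in the same kind of catalog record that agents then search.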

Susan Davenport:

Love that answer, thank you for that. All right, let’s completely pivot to something totally different and I want, let’s see, Nicky, Jess and Jay to answer this question. So, I’ve been talking a lot about the Department of War’s AI strategy. We had 30, 60 and 90-day deliverables, each one of the MILDEPs did, and the 60-day deliverable, which we are working on right now, revolves around recruiting, retaining and training our top AI talent in the department. And so, there’s a lot of people back home very busy writing this plan and, in 60 days, we’ve got to have the plan done; I want to say it’s the 8th or 9th of March when we have to hand that in to the Department of War.

Thirty days after that, we have to actually implement that plan and so workforce is near and dear to my heart and very top of mind right now so I’m interested in hearing how industry recruits, retains and trains top AI talent.

Nicky Pike:

So, in my 30 years, I have watched a lot of really talented engineers walk away from well-paid jobs to go work for someplace where they have purpose, they have meaning. They want to work on meaningful problems and they want to do so in a way that they’re allowed to use modern tools and they’re allowed to have very low friction in that. I don’t think that AI talent is any different in this, I do think that the stakes are a little bit higher because the competition for that talent right now is absolutely brutal. So, if we want to look at the industry and what we can do to … What you can take from that to bring in and retain this talent, I think, at the forefront, you’ve got to look at having a structured learning and career progression program.

So, this is things like formal AI fellowships, the ability to rotate between some of the pace-setting programs so that they can learn new concepts and get new ideas, but also having guaranteed time to research and upskill within authorized sandboxes. When we look at the speed at which this industry and this field is changing and the innovation that we’re seeing, these are absolute requirements to bring talent in, and we’re seeing these patterns being recognized by some of the industry giants like Microsoft and Google and a lot of the defense labs today.

The real secret here is that you’ve got to make sure that their learning environment is comparable and equal to what we’re seeing in the industry, and this includes having access to the same toolchains, the same IDEs and the same frameworks that a lot of top-tier customers are using today. And we need to make sure that they have the ability to spin up environments where they can access those models, try new experiments and spin up new stacks, but they need to be able to do this in minutes and hours rather than quarters.

And right now, when you have a ticketing queue that requires a six-month process for them to get access to GPUs or model endpoints or even data sets, then you’ve lost these guys before they even started. And I do think that the department has a very compelling mission, I think it’s something that people want to be attracted to, and, when we look at some of the pace-setting projects like Swarm Forge and Ender’s Foundry and Agent Network, these are exactly the type of high-visibility, high-impact projects that are going to attract and bring in serious, driven engineers. Giving them access to that, allowing them to have the visibility and the impact on mission that they’re going to see through those projects, these are the things that bring people in, and that’s stuff that we take from the industry all day.

You want to make sure that they have the ability to upskill and keep growing because of the innovation, and you want to make sure that they’re able to do so in a very fast and modern process where they’re not having to experience a lot of friction. If you want to see an engineer walk out the door, relegate them to using what they think are antiquated tools and create a friction barrier which makes them feel that it’s almost impossible to innovate and experiment with new technologies.

Susan Davenport:

Yeah, absolutely. And I think something that I would add to that is allowing them to move within the realm of their expertise. Don’t put a big chain of command on top of them and have them have to go explain to their boss who isn’t as technical or doesn’t have the experience or skillset that they do to try to get permission for things, give them their left and right boundaries and let them run within it. So, thank you for that. Okay, Jess, you want to go on this one?

Jess Salzbrun:

Yeah. Copy, paste, pretty much everything you said is spot on. It’s about a mission that matters. To tell somebody that you get to come and help build the arsenal of freedom, that you come to GE Aerospace and you get to invent the future of flight, that is a really cool mission. And at least at GE, you feel that energy among the team, which makes it a lot easier to recruit top talent, and I am actively recruiting and hiring and landing top AI talent already. And mission drives it, access to the tools and the … For example, GE Aerospace has partnered with the National Labs for over 20 years and we are one of the world’s top industrial consumers of supercomputing, and that is a really cool thing for somebody deeply steeped in AI: to come and know that they’re going to have the tooling and the access to the compute resources that they need in order to really break boundaries.

And the last thing I would add is that our CEO, Larry Culp, is all in on AI, it is very obvious, and I think the same thing applies within the department, Sec War is all in on AI. And so, to be recruiting top talent to come and work for your mission knowing that they’re going to have the resources, they’re going to have the ability to cut through the red tape and the bureaucracy to get things done, because the leadership at the top is committed and putting their money where their mouth is, really makes a difference.

Susan Davenport:

Yeah, thank you for that. I will add that the Department of the Air Force has stood up a barrier working group to cut through that red tape, as you just talked about. The barrier working group was outlined in the Department of War’s AI strategy, and you almost don’t want to end up going to the barrier working group, because that means several layers have failed below it. So, we have one, it is housed in the AI Center of Excellence in my office, and we’ve already canvassed a lot of you out in the department to find out what your barriers to success are when it comes to acceleration of AI. What we’ll be doing in the next couple of months is really understanding how we join with the other military department CDAOs to solve those challenges, partner with CIOs across the military departments to solve some of them, and then, when we can’t solve them, raise them up to the DOW level.

And some of the feedback, the informal feedback, was: if one of these issues, these barriers, this red tape as you call it, Jess, gets raised to the department level, you don’t want that, because it will be solved in about a week. It will be quick, and we’ll bring in the right people to do it. So, everybody is realizing that there are ways to solve these things, and we should just be highlighting them and solving them quickly, so thanks for that. Jay, you want to take that question again?

Jay Theodore:

Yeah, I’ll just copy, paste and summarize here, because a lot has been said and we are running out of time too. I would say it’s to do with skillset, we talked quite a bit about that. When you talk about skillset, it’s not just about hiring, it’s also growing the talent that you already have. Learning is an everyday thing, whether it’s AI or not. The next thing is mindset. As leaders, adopting AI actually means putting the right guardrails in place, accelerating and facilitating how you communicate with machines, and knowing where you bring the strength of a human as opposed to a machine. The cognitive skills are quite different, and we have to adapt to machines in order to make the best use of them. Finally, it’s how quickly you can get to doing relevant work, whether it’s AI or anything else. People are excited when their work is meaningful, so how quickly you can advance to that level is what’s desired.

Nicky Pike:

Yeah. I do want to say one more thing. I think one of the things we saw come out of the Department of War strategy was the proclamation of a wartime footing and the barrier removal process. I think this is going to resonate with a lot of people who want to come and work for the federal government, because it signals that we want to make things as comparable within the federal government as we see within industry. And those are the types of things, when you look at, like you said, the ability to support our warfighter, the ability to come in and be a part of the American experiment from a software perspective. This is one of those things that I think will resonate, but it remains to be seen whether it actually reduces the day-to-day friction that developers see. If it does, I think we’re going to see a lot more developers want to come into the federal space.

Susan Davenport:

Yeah, great, thank you for that. All right, our final question before the lightning round, and I’d like to hear first from Jay, then Jess, then Nicky. Many organizations are experimenting with agentic AI. What innovative implementations have you deployed, either for government customers or in your own organization, and what lessons learned would you share with the government?

Jay Theodore:

Sure. If you take agentic AI itself, there are agents that can run within a system, and then there are agents that can talk to other agents to produce something of greater, more meaningful value, so their interoperability and clarity are very important. I would definitely say that an agent working in a black box is not as effective as an agent that is transparent, not opaque, in how it does what it does, and that can explain it, whether to the decision maker, the commander or someone in the field. What we found is that some agents are extremely effective when they amplify the cognitive skill of a human.

For example, the amount of satellite imagery at high accuracy and resolution, with all the sensor data, remote sensing, LIDAR, all of those are extremely high. We call it the trillion-pixel problem, where imagery became a big data problem well before many of the other data sets. There, the resolution of computer vision with AI has greatly advanced beyond human recognition. So, similarity searches, searching for outliers and all of those, I would say are a no-brainer to quickly adopt in this PSP environment, where you can advance the pace of adopting AI technologies.

The other thing is that there are standards like A2A, agent-to-agent communication, or MCP, model context protocol, that exist for agents to communicate with other systems that may not be agentified yet but have very critical pieces of information. There, I would say, there’s quite a bit of advancement in trying to build these uber agents that are really assisting the decision maker and not necessarily taking action. So, making an agent interpretable at every level, I see as quite critical for adoption itself.
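To make the MCP standard Jay mentions concrete: MCP is built on JSON-RPC 2.0, and a tool invocation from an agent to a non-agentified system takes roughly the shape below. This is a minimal sketch only; the tool name and arguments are hypothetical, not taken from any real deployment.

```python
import json

# Rough shape of an MCP (Model Context Protocol) tool invocation.
# MCP rides on JSON-RPC 2.0: the "tools/call" method asks a server to
# run a named tool. The tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_imagery",  # hypothetical tool exposed by a server
        "arguments": {"query": "runway similarity search", "max_results": 5},
    },
}

# Serialize for transport, then decode as a server would.
payload = json.dumps(request)
decoded = json.loads(payload)
print(decoded["method"])  # tools/call
```

The point of the standard is exactly the interoperability Jay describes: any agent that can emit this envelope can reach any system that exposes its data as MCP tools, without the two sides being built together.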

Susan Davenport:

Thank you. Jess, you want to go on this one?

Jess Salzbrun:

Sure. So, anytime I’m talking about agentic AI, I always like to start with a grounding on what that is, because so much of mainstream AI started with access to ChatGPT, which you look at as a chatbot, effectively. Think of generative AI, which is most of what mainstream America interfaces with on a daily basis, as the GPS: it’s going to tell you where to go, but it’s going to do nothing to actually get you there. An agent, on the other hand, is like the pilot: in a lot of cases, it’s going to tell you where to go, but then it’s also going to fly you there. So, agents are AI tools that actually take action on behalf of a human or, in a lot of cases, supplement a human in their own cognitive work. And we are having a lot of success applying agents to break down the complexity of massive data sets.

So, we have millions and millions of pages of technical data within GE Aerospace, and a lot of times these are scanned PDFs from 30 years ago, pencil drawings from engineers, and technical publications on engines that have been flying for decades. Interpreting all of that data, and then using it in, say, a technical publication for a maintainer to be able to maintain one of our engines, is incredibly complex. So, we have agents that are not only interpreting those millions of pages of documents but also making the insights in those documents available to maintainers, the people supporting our fleets of engines, while interacting with other agents as well. They interpret those documents to help drive the work scope for an overhaul visit, for example, to maintain our engines in the aftermarket, and then interact with another agent that places demand in our supply chain for the spare parts we need to maintain those engines.

And so, when you have multiple agents that are working together being orchestrated by a human, it’s really, really powerful the impact that you can have.

Nicky Pike:

So, I know we’re running out of time, so I’m going to try to keep this pretty short. But I think the lessons we can learn come from nothing more than looking at some of the most painful things we’ve seen within the industry today. The most painful lessons we’re seeing in the industry are coming from agents that have too-broad permission scopes and unsandboxed access to tools. Simon Willison, who is one of the loudest voices when it comes to AI security, talks about the lethal trifecta: an agent has access to private data, it has exposure to untrusted content, and it has the ability to communicate outside its system. When these three things come together, it is the perfect exploitation path, and I think we’re starting to see some of the outcomes in the wild today, whether that be destructive production changes, unintended exfiltration or even prompt injections that come from code review comments.

So, one of the lessons we really have to learn is that we’ve got to put workspace-level constraints on these agents: give them scoped APIs, really restrict their access to private data, use network allowlists and isolation to prevent them from having exposure to that untrusted content, and practice egress controls to make sure they can’t communicate outside their system. And we do see a real threat timeline on this. Because of time, I’m not going to go into it, but if anybody wants to talk about that threat timeline, please find me after this, because I think the lessons we’re learning from the industry provide the blueprint for what we need to do going into the future.
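The lethal-trifecta rule Nicky describes can be sketched as a simple policy gate: deny any outbound call once a session has combined private-data access with untrusted content, and otherwise allow egress only to an allowlist. All names here are hypothetical, for illustration only, not any vendor’s actual API.

```python
# Minimal sketch of workspace-level egress control for an agent session.
# Tracks the three legs of the "lethal trifecta" (private data, untrusted
# content, outbound communication) and refuses to let all three combine.

from dataclasses import dataclass, field

@dataclass
class AgentContext:
    reads_private_data: bool = False     # has this session touched private data?
    saw_untrusted_content: bool = False  # e.g. web pages, code-review comments
    egress_allowlist: set = field(default_factory=set)  # permitted hosts

def allow_egress(ctx: AgentContext, host: str) -> bool:
    """Permit outbound traffic only to allowlisted hosts, and never once the
    session has combined private data with untrusted content."""
    if ctx.reads_private_data and ctx.saw_untrusted_content:
        return False  # all three trifecta legs would be present: hard deny
    return host in ctx.egress_allowlist

ctx = AgentContext(egress_allowlist={"internal.api.example"})
ctx.reads_private_data = True
print(allow_egress(ctx, "internal.api.example"))  # True: scoped egress is fine
ctx.saw_untrusted_content = True
print(allow_egress(ctx, "internal.api.example"))  # False: trifecta complete
```

A real deployment would enforce this in the network layer (proxies, firewall rules) rather than in the agent’s own process, but the decision logic is the same.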

Susan Davenport:

Excellent, thank you so much for that. So, speaking of agents, if the AFA gods are out there or can hear me, what I’m proposing for next year, if they invite us back, is a digital clone panel. Each one of us has our digital clone, and the clones speak to each other and conduct the panel fully digitally. So, I want to say thank you to everybody for coming out. Thank you, panelists, this has been a fantastic discussion. Thank you all for hanging in there, and I hope to continue the discussion during the symposium, so see you out there. Thank you.

Nicky Pike:

Thank you, everybody.