2024 Air, Space & Cyber: The AI Passport
September 16, 2024
The “AI Passport” panel at AFA’s 2024 Air, Space & Cyber Conference featured Alexis Bonnell, chief information officer and director of the Digital Capabilities Directorate at AFRL; Lt. Gen. Franco Vestito, director of the Italian Air Force Personnel Department; and Air Commodore Nikki Thomas, U.K. air and space attaché to the United States. The panel, held on September 16, was moderated by Gen. David Goldfein, USAF (Ret.), the 21st Chief of Staff of the U.S. Air Force. Watch the video below:
Panel Moderator: Gen. David Goldfein, USAF (Ret.) 21st Chief of Staff of the U.S. Air Force:
Let’s talk a little bit about our panel.
First is Alexis Bonnell, Chief Information Officer and Director of the Digital Capabilities Directorate of the Air Force Research Laboratory (AFRL).
It’s the primary scientific research and development center for the Department of the Air Force, and Alexis is responsible for IT strategy, the integration of warfighting technologies for airspace and cyberspace via the digital capabilities. Alexis, welcome.
Lieutenant General Francesco Tito, did I get that right? All right. As the director of the Italian Air Force Personnel Department, he is the distinguished Tornado pilot with over 3000 hours, and formerly served as the Chief of the Italian Joint Cyber Command, where he led the development of cyber joint concepts and joint doctrine. Welcome.
Air Commodore Nikki Thomas also has over 3000 hours in the Tornado as a navigator and commanded as the first woman to command a fighter squadron in the UK. As a father of two daughters and two granddaughters, thank you for leading the way.
Previously, you have experience with the headquarters I talked about because she was the UK’s Air and Space Component Commander in Central Command, and today she is the UK’s Air and Space Attaché to the United States. Welcome to this panel. Let’s get right into it.
We’ll start. Would each of you share what’s going on inside the US, inside Italy, and inside the UK with AI? How are you approaching it within your service? How are you approaching it within your country to modernize how you do business across that spectrum of military operations we discussed? Alexis, start with you.
Alexis Bonnell, CIO and Director, Digital Capabilities Directorate, AFRL:
Sure. It’s a big ask to describe all things in the US around AI, but I think a few things are very clear for us. We find ourselves in this exponential age, changing faster than ever, making more decisions with more information than ever, and so it really makes sense as to why now, why AI, meaning that we have to have that type of augmentation, that type of superpower. Now, whether that comes in the form of generative AI, you know, decision advantage, whether that comes in the form of CCA wargaming, digital twins, the reality is that ultimately for us, AI really represents a relationship with knowledge at speed and scale, and we need advantage at speed and scale in this day and age.
So, I think really, you know, we are leaning in, we are, we are all there. We were excited, right? Obviously even to see our, Secretary of the Air Force, you know, in a, you know, non-autonomous, thing. So, I just think its really important for people to understand that we know when it comes to human machine teaming, this is our reality.
Panel Moderator: Gen. David Goldfein:
Francesco?
Lt. Gen. Franco Vestito, Director of the Italian Air Force Personnel Department
Francesco, thank you for this great opportunity. The Italian Air Force has been given and bring the regards of the chief of staff Lt. General Luca Goretti is very busy. We just finished the North American tour with the aerobatic team and pushed all the way to Australia. We have F-35 jets and so on. I wonder if AI will help us build all those reports on the lessons, we learned during these.
Alexis Bonnell:
I’ve got something for you.
Lt. Gen. Franco Vestito:
100,000 miles around the world. Italy is now in the phase of producing AI mainly in the academic environment. Industries already changing their assets to take care of AI. We have Europe as well, being part of this big community of law, common behavior, and understanding. AI is also moving to the military environment. We have a strategy in our nation, already built on the AI act of the government which dares to the principles that the AI act in Europe has given. The main recipe is sharing information at the proper level, exchanging experiences, and trying to remove all the barriers and prejudices in exchanging it but being cautious from an ethical perspective which we’ll probably discuss later.
Panel Moderator: Gen. David Goldfein:
Nikki?
Air Cdre Nikki Thomas, UK Air & Space Attache to the U.S.:
Nikki, thank you so much for allowing me to be part of this panel. It’s great to get an international perspective when we talk about AI. And also, you’ve let two legacy Tornado people covering future technology, so good luck to you is all I should say. Distinguished legacy, clearly. From the UK perspective, it’s similar to the other panelists. We are thinking about government strategies. We have a defense strategy. We don’t have an Air Force strategy at this point; we don’t see the need across defense. It’s all about military capability, its speeding up decision-making, and also the background enterprise services, so if you imagine everything from intelligence fusion, imagery analysis, all the way to how our finances and HR talent management works, AI has a future in all these things. We’re working on the whole force concept and making sure the Air Force is looking to AI has a future in all those things. We are looking at how we use industry, we’re working sort of whole force concept and making sure that the Air Force is looking at AI for the future. The Chief of the Air Staff is very much a forward-leaner when it comes to these types of things. Our autonomous collaborative platform strategy that has AI front and center of it, so everything we’re looking at needs to incorporating AI where it adds value and speeds up human performance.
Panel Moderator: Gen. David Goldfein, USAF (Ret.):
Nikki, I’ll ask this, but we’ll work our way back. There’s always a tension between agility, sharing, and on the other side that pulling against it is security, trust, and data sovereignty, right? How are they looking at that in the UK? How are you handling that tension?
Air Cdre Nikki Thomas, UK Air & Space Attache to the U.S.:
I think that there’s always been tension every time we think about capabilities coming in, there’s always that tension of trust between sovereign nations, we all have our own technology that we are developing in our own countries so across nations, I think it’s really important that we gain that trust. But also, its across, military to industry. I think, as military, I’ll be very honest with you, I don’t think we are going to be forward leaners when it comes to developing the AI technology. So, we need to make sure that we are developing that trust and were balancing it carefully. My personal view is that we need to make sure we are taking the evidence from everybody, and we trust that evidence going forwards. If we don’t do that, we’ll be left behind. And as we are, we’ve sort of seen, we need to maintain our leading edge. And to do that we need to do that thinking faster, acting quicker, and working closer together
Panel Moderator: Gen. David Goldfein, USAF (Ret.):
Thanks, Franco?
Lt. Gen. Franco Vestito, Director of the Italian Air Force Personnel Department
There’s another type of agility and speed we need to consider in this environment, in this rapidly changing environment. We started this millennium by saying the VUCA acronym, and what I see now is that not only do we understand that agility is going to be improved with AI, but what I see with speed and agility now is a need for the nations, for the industry, and even for parliamentary assets to adapt the rules and mindset of society to this quickly changing environment. It’s a matter of accelerating the fusion of data from an operational perspective, accelerating the operational tempo, which is one of the principles of war, and quickly accelerating the OODA loop in decision making. So, everyone, I guess, is stressed by the idea of accelerating everything so that you are right in time to do things.
Alexis Bonnell:
Yeah, I think one of the things that’s been really interesting about going on our journey with artificial intelligence is that I think, for many of us, when we started, we assumed that things had to be entirely different, right, because it was AI, and it was exotic. And one of the things I think we’ve really learned on our journey is recognizing that, you know, to the points that were made, when we navigate a new technology or emerging technology, it isn’t a question of having to redo everything. It’s really asking ourselves what is additive, right? What is actually different and unique? And I often get the chance to remind people that we already have great data and privacy policies, so what small thing, if anything, needs to be different? We already have cyber standards and other elements. So, the question is really to move away from thinking that everything has to be new and just ask, what is actually different, and where do we need to adapt? I think we’re also at the point of being in a time where things will move faster, and collaboration and partnership matter. For example, for myself, as an authorizing official looking at that cyber paradigm, a lot of times we’re really looking, I like to think about it as, like, you know, I want to keep all the information safe, but I also have to remember that that’s at the expense of potentially sharing it where it’s needed most, right? And so, you know, the General mentioned the OODA loop, so I think one of the things we’re really trying to absorb is how do we – and to use a nerd term, right? I think I’m the biggest nerd, maybe, on the stage – we want to wormhole it, right? We want to be able to go from observation to action faster. You know, we really have to ask in this day and age, how do we ask and answer “what if” faster? And there are just more things in the “what if” game right now.
Panel Moderator: Gen. David Goldfein, USAF (Ret.):
You know, as we think about our strategic advantage, coalition operations are what it’s all about. But there’s also a cultural dynamic. You know, it’s easy to get into the technical elements of this, but what I found, you know, as Chief and as CAG, was that sometimes the biggest obstacle was not technology, it was culture. It was, how do we get our heads around this? I mean, I’ll give you one example. The youngest airman could, with a mouse click, determine the security level of any piece of information in my command, and it took the oldest person to reverse that. You know, often, often. So, one of the things we did was we went in and put “Secret NOFORN” on page two of the options, just to make sure we thought through what’s, you know, the most appropriate classification. So maybe, let’s talk a little bit about what are the cultural elements that you’re thinking about and you’re facing as we bring AI into the center of our business. Alexis?
Alexis Bonnell:
So, I think the first thing is that technology ultimately is a human journey. There’s not an issue of having the technology. Now, as was mentioned before, commercial is doing incredible things. And I think part of the onus on us is bringing that in quickly enough, right? We’re going to continue to do new and unique endeavors, especially in places like the Air Force Research Lab, but really knowing what we should develop and what we don’t need to, I think, is huge. But what’s probably more important, and you know, we often talk about when you’re looking at a technology journey, it’s a human journey, right? So, we really have to ask ourselves, this is one of the reasons why we leaned into putting a tool out there that airmen and guardians could try. Because, number one, we wanted people to move from maybe a fear or anxiety to doing, to testing, to understanding. And the way that I like to break it down in really simple language is there are kind of four stages of an emotional journey with technology. The first is “ta-da,” right? Like, oh my gosh, it does something amazing. And I think many of us who have used generative AI or other tools have had that “ta-da” moment. The second is “uh-oh,” and this is really important, right? That’s what does it mean for me? How do we use this responsibly? How do we use it together? And each one of these stages is a real intentional thing that, you know, part of the journey we have to go on. The third is “aha,” right? Now I see, now I see that value it’s brought to me, kind of at an intimate level. And the fourth is what we call “ho-hum” or “blah,” right? And what I mean by that is, if anyone here met someone who had never Googled something before, you’d be like, where have you been hiding? Right? It just becomes assumed. So, I think from a cultural standpoint, one of the great things that I really have to credit our leadership with is that they understood that they weren’t just introducing a technology or a capability. They understood that we are going on a journey together, and going on that journey with intentionality to take people through that process so that they come out of it not only confident instead of fearful, but with capability and, quite frankly, asking for more, right? The definition of success is when everyone here in this audience says, “Give me more,” instead of, “I’m not sure.” And so, I think our goal culturally is really to go from that “ta-da” to “uh-oh,” “aha,” and “ho-hum” as quickly as possible.
Panel Moderator: Gen. David Goldfein, USAF (Ret.):
Franco?
Lt. Gen. Franco Vestito:
I think it’s time to start with the tough comments on this because I see many people in flying suits as well. So, I guess they need an answer, probably as the one I’m trying to look for. I think there’s a cultural different level in discussing coalition and discussing national activities. And probably the joint environment is the less correlated entity to talk about.
So basically, what I’m saying is there’s a cultural level in discussing AI in an international environment because, as most of you have seen, you can modify the tone, temperature, and context when you use chat tools, whatever tools. I don’t want to make advertising here. In my nation, there’s probably a different tone in doing things, the cross-cultural approaches that other nations have.
Also, from a national perspective, just wearing a uniform, I think the big challenge we’re going to face now is not still scientific yet. It’s a matter of educating the leadership, my boss, to understand AI. Sometimes I enter the room, and because of my sad background, I can discuss with people the number of tokens, the features, the labels, the tone, and whatever and everything, and I get the dirty look from the boss, like, “What are we talking about?”
Sometimes you discuss obsolescence of the systems. We have flown Tornado. I had the navigator, the best AI that does stuff for you in the back. The leadership is asking why we are making things obsolete because of the AI. And just for information, most of you have probably read already that AI is using Pentium technology of the 1980s to draw information from the database, and the computing capability is something else, its putting together all this data and drawing lessons, decisions, and so on. so, this is the very cultural thing to cross.
We probably need about five to ten years, at most, before the Z generation matures more, and we have a leadership that understands this language. And I’ll add more in the next period that you’ll dedicate to me.
Panel Moderator: Gen. David Goldfein, USAF (Ret.):
Perfect.
Air Cdre Nikki Thomas:
Yeah, so I think internally versus the coalition side to start with. I think in our militaries, certainly in my military, we all want change. We want to see the technology advancing; we want to be at the front edge of it. But we also fear change. We struggle to make change happen, and we’re probably our own worst enemies when it comes to doing so, compared to most of our adversaries. So internally, we need to get after that challenge.
I think if we break the change down into small chunks, a bit like Alexis said, this isn’t giant leaps in technology; these are small chunk changes. That’s how we’re going to get after the cultural challenge. I think I’m very fortunate. I’ve got an engineer for my chief of the Air Staff at the moment, who fully understands this, and he is driving it from the top. So we are probably in a slightly different position from other air forces in some ways. But driving the change does start at the top, and not fearing the change starts in middle management down. And I think that’s something we all need to work on internally.
I see an Air Force where I got to the frontline, and we were dropping reversionary weapons, and we were fearful of GPS bombs and precision. Well, that didn’t take long to get over, so I just don’t see why we won’t be in the same position when it comes to technology such as AI.
Then secondly, from the coalition perspective, yes, having sat in IUD in IUD for a year and watched UK documents written suddenly become NOFOR and not allowed to see the document, I’ve probably written, we do have challenges with access and data sharing. So, the more we can think about ensuring that we can all see the same things, that we can, that would be appreciated. If we could start at the basics and stop the firewall so I can actually have a Teams conversation, that would be a good start, and then we can work our way up to the AI and top-secret stuff.
Alexis Bonnell:
One thing, if I can, I’d just love to build on that because, oftentimes, I think about the conversation around AI as almost like three overlapping Venn diagram circles. So if you imagine the central circle, that is really the conversation about the tool, right? Which AI—and again, not all AI is the same. Generative AI or translation AI, you know, there are some things where it’s ready—go buy it, take it off the shelf; we don’t need to research it anymore. And then there are other things that are much more at the cutting edge. So, I think being able to distill the difference.
But the problem is that, to the point that was just made, if the conversation is about the tool or about the technology, we’re missing one of the critical elements about AI. When I started, I talked about the fact that we think about AI as a relationship with knowledge at speed and scale. The tool is only going to be as good as the information we nourish it with.
So, if we think about that first circle on the left—and this is the reason I raise these is because these are cultural bookends—so that first circle, to the point that was made, is if we think about it, the adversary is laying out a Thanksgiving dinner buffet of information for its AI, right? For us, you’re lucky to find an MRE when the canteen is open because we have our information in kind of kingdoms and silos and places, and much less sharing it with each other, even sharing it internally.
So I think one of the things we have to understand is that this is not a technology problem; that is a culture problem about seeing information as the richness that will nourish our advantage and being willing to say, instead of “that’s your information” or “your information,” saying we are information stewards and we can, in essence, incentivize the appropriate sharing of that so that the tools we spend so much time talking about and so much money on can actually capitalize on that.
Now, the kicker is the third circle. If we think about the other bookend, one of the things that we do really well, and I’m going to say government or large organizations because it’s not even a DOD issue, is we’re very good at beating the curiosity out of people, right? We make it really hard to find things.
If we solve the knowledge sharing, if we solve the ability to put all of this incredible knowledge in for those tools to ingest, to use, to give us that augmentation, we also have to recognize that we actually have to build up muscle memory and like actual muscles in being curious. The most interesting thing that we find is people start to use these tools and actually realize how much they censor their own curiosity because they’ve been told things like, “That’s above your pay grade. You shouldn’t worry about that, that’s not your job.”
But yet when we get them in that era of being able to ask and answer “what if” faster, we actually need them to have that curiosity. So, oftentimes, I just end by saying as a CIO, I don’t want to talk to you about a tool, a platform, a vendor. What I want to know from my leadership is two things: One, what is the relationship with knowledge you want to have? The second is, technology is where we practice our culture. The second question is, how do you want your people to be different in light of these technologies?
And in my case, that pursuit of curiosity, that empowerment of that muscle—that bookend, we can get the other two right, but if we’re not asking the more, the most interesting questions about that information, we’ve already lost.
Panel Moderator: Gen. David Goldfein, USAF (Ret.):
That’s great. I’m going to talk a little bit about trust because it was somewhat central to an entire discussion about culture, right? There’s a couple of examples, right? There’s the, there’s and this is trust in terms of our approach to the use of AI and the application of AI, right? There’s the story about the mom, who gets on Gemini or ChatGPT how many Tylenol to give her children—something we can’t get wrong. Then there’s the other end, which is military operations where lives are affected, and we can’t get that wrong.
How are you thinking, both internally within, you know, I’ll start Franco with you, and within Italy, the UK, and the US, about the confidence that you have in the information you receive as AI brings in more and more autonomy and other areas, into, again, the spectrum of warfare, from humanitarian relief all the way to combat operations? And by the way, Franco, one of the things that I mastered as chief was, I answered that questions I wanted to answer not just the ones I was asked by some moderator. So don’t feel free, free to go wherever you want.
Lt. Gen. Franco Vestito:
I’ll add another piece of culture that you probably, very writing saying, we call it trust. I think another piece of culture is we have to educate our leadership in uniform to understand that AI doesn’t give us the right answer. We need to put the human element in it. It depends on what you ask it, the context, and what you want to understand.
An example is in a law environment now, prosecutors and judges can use the case, put all the rules and whatever, was in there and try to extract a decision already that needs to be complemented by their judgment and be a real judge. For instance, there’s an exercise that you can find online called “the needle in the haystack” that put together a piece of directives, rules, and laws and everything to do with science. About 200,000 tokens. Tokens are for people who don’t know the number of alphabets that you can use in AI, the number of neural networks and so on, 200,000 is square enough. And right in the middle of it, among all these papers, they put hot to, to make a good pizza. It was just one line and try to understand if the AI was able to find a very small details in 200,000 pieces of information. And the computer said to make a good pizza unit flour, wherever Americans put, pineapple in it, Italians, the Italians don’t cause pineapple is fruit. It goes at the end of it. Sorry, I would’ve still been digesting that if I then the computer said, by the way, why are you asking about this little piece of element, the pizza when we are discussing science and so on? So, we need to the area, the operational commander, be careful because sometime one of these computers, these guys is gonna give you that answer.
This is another piece of culture. So in decision making, we can shrink the OODA loop, but we still need to do our job, and we need to educate the new generations in which chance they’re gonna give to the boss.
Air Cdre Nikki Thomas:
So we trust people, so why can’t we trust technology? We trust each other when we’re on operations and we trust each other’s outputs. And we do that based on what, how people act, what they provide and the results they give. So I see no difference between that and the trust we have in technology. It’ll be based on the outputs and ensuring it’s accurate, honest, and exactly what we need.
Alexis Bonnell:
Yeah, I think just to complement that exactly is what was said before, you know, the way that we look at it is the AI is not about answers. It’s about options. And if you think about it as humans, what we do every day, all day is curate, right? We’re constantly curating knowledge. We’re deciding what we think is valid or invalid. You know, if I was at a room and I was at a table, you know, to the point that was made, and someone kind of at that table only ever talks when they’re truly, truly, you know, clear on the answer, and then someone else will kind of spout off an opinion. Well, I learned to curate that, right? And so I think, you know, what we try to tell folks, especially in using a tool like generative AI, is you can curate, you can be curious when you propagate, right? When you put that out, that has the weight of your reputation, your identity, your experience, your expertise, and that is really the value, right? The value added.
I think another important thing though, and this is one of the reasons I try to generally don’t like the word trust. I like the word confidence. One of the things I feel like has been a bit hijacked in the last, you know, many years of conversation around AI is number one, I think we have often said how complex it is, right? How mysterious it is. And what we don’t realize when we say that is we’re kind of telling people, you’re not smart enough to get this right. And so there’s a level of, of kind of intimidation or other things that come with that.
The other thing we say is we introduce things like, well, we need AI ethics, right? We need, and yes, we have to make good moral and accountable choices, but everyone in this room, you know, especially if you’re wearing a uniform, is already expected to show up as a moral, ethical, thoughtful leader. And so, you know, our take is really, you are already enough. We will navigate this technology like we navigated every technology before, and we have to stop saying that someone isn’t smart enough or that they’re not ethical enough. And we have to understand that we will show up, we will learn by doing, we will pivot, but we’ve got this.
A lot of the, yeah, we’ve got this.
Panel Moderator: Gen. David Goldfein, USAF (Ret.):
A lot of the audiences are industry teammates. And so we have represented here, no doubt, startups, we have the primes, we have the full gamut here.
Perhaps share with the audience, you know, what are you seeing from the defense industrial base that’s exciting, that’s concerning, right? As you look across the US, Italian, and UK, and perhaps even more broadly, you know, across NATO and Europe.
Alexis Bonnell:
Sure. I’ll go first. I think I’ll start with concerning.
Don’t just throw, uh, there’s AI in it on your stuff, please. We’re learning enough, we’re doing enough now that we really do, we’re going to start having more sophisticated, more fruitful conversations. And so, you know, please don’t just say there’s AI in there somewhere, or it’s AI, you know, really be able to truly help us understand what that fundamentally looks like.
I think the second is be aware of what else is out there. It is really frustrating when someone, you know, sits and says, no one’s ever done this before, and I’m aware of four technologies that at least do the exact same thing. So I think that competitive analysis and that awareness is really critical.
A third is think about integration, right? So speaking with my CIO hat, if people keep bringing me very, very special rocks and I have to spend more money to connect those into something of use, then that actually makes it really hard, you know, for me to use your thing. So thinking about that integration, thinking about, you know, how you can connect with data or address the cyber issues and not expect me to have to solve that exquisitely, those would be, I think, big asks.
Now, on the flip side, I think there is amazing technology out there, right? To me, we were talking before on the walking down, you know, what does it look like, you know, in success to move, for example, from a research or GOTS experiment, you know, to commercial. And the reality is it already does so much. I mean, this is the type of technology where really for the first time, my grandmother could use something before my colleague at the desk next to me, right? And that’s just the reality that we’re facing.
And so I think now what I’m really excited about is, number one, someone actually getting to that ho-hum stage and saying, you’re not giving me good enough. I know what else is out there. And I think number two is actually we’re learning what those capabilities and what those features are. And I will tell you all right now, this is Alexis’s 2 cents. But if you are not familiar with RAG or retrieval augmented generation and the way it is going to fundamentally change pretty much in my opinion, everything about our relationship with information. And if you are a, you know, again, if you’re a company and you’re bringing me something and you’re not thinking through what RAG will do, then this is a real wake-up call for me and for others.
Lt. Gen. Franco Vestito:
I think we have industry in the room as well in this beautiful exhibition of these two, three days. So we need to talk straight to the industry and say clearly what we need. So military uniform, understand what AI is. Instead of fantastically getting what the world is about AI, we need to give very clear details and requirements to what we need from industry before the academy decides for us.
I work with human resources. I move 30,000 people every year. And I’m using AI myself to reduce the time to select people for the proper position. And still it’s me and my boss to decide who is going to do what.
Also, at the same time, we can do imaging for the intelligence, find targets, accelerate the target process, try to filter the decisions, try to filter whatever is escaping from our database. And for whoever doesn’t understand anything about cybersecurity, cybersecurity up to now needed some cybersecurity expert to do penetration and hack someone else. Now we are at the stage where with your vocal voice, you can give instruction to a chat GPT to penetrate another chat GPT. And I’ve seen experiments where the bigger chat GPT answers, I’m sorry, I cannot give you the list of the name of the military guys deploying because it’s a sensitive target. And the small one says again, but I remind you that I’m an administration, so I have privilege to get that information. And the big one gives the list of people, so another risk for the AI.
And of course, it’s an opportunity for everyone of us. So this is what we need to understand, say clearly to the boss, what is machine learning? What is deep learning? We are dealing with how the brain of the human works. Thank you.
Air Cdre Nikki Thomas:
So I think if I add more, I’ll just repeat what the previous speaker said. I think it’s about working together with industry, from our side, being clear on what we want, and then working collaboratively to go further and note to self, must Google RAG to look more clever at the next panel.
Panel Moderator: Gen. David Goldfein, USAF (Ret.):
So Nikki, we’ll start with you and then work our way with the final few minutes we have. What final messages would you want to leave with this audience right about as you see the future of AI that we haven’t covered that you think are important?
Air Cdre Nikki Thomas:
So I think the three things are engage, share, and embrace. We should be engaging with each other. We should be engaging across industry partners, allies, etc. This is going to be a team effort to keep on the leading edge. Then that links into sharing again. The last thing we want to be doing is doing stovepipe technologies between like-minded and even valued individuals and countries and then embrace it. I kind of put it to everyone in this room. We need to be embracing this. We need to be taking it forward to make sure that we remain on the cutting edge and staying well ahead of our adversaries.
Lt. Gen. Franco Vestito:
And what I’m saying is, walking around the ENT this morning, I’ve been told, because of drones, as a swarm of drones flying with the mothership, we are gonna save humans on the earth. The military uniforms had to be very cautious in that cautious because, I think because of the Moore’s Law, we are not gonna buy an airplane every 20 years, but we are gonna change a computer every single year.
Also, there are networks, neural networks called CAN, for example, is a process called MoGraph, whatever, Arnold Networks that are able to educate continuously the neural activity of the AI instead of reteaching the past. And there’s a risk of forgetting the memory, which is called catastrophic memory. And so in reeducating, and I don’t think is gonna be cheaper than a paycheck or the person working in the human environment. So that’s the message I have.
Alexis Bonnell:
And I realize I’ll actually define RAG to be helpful so you don’t have to Google it.
So, in essence, RAG just simply means for most people, when we think about generative AI or we think about some of the tools that you’ll see or that have been talked about, it’s really this idea of I can ask questions to this kind of mass collection of human intelligence out there, right? And really get kind of table stakes knowledge.
What RAG allows is for that individual to say, this is the knowledge I want to have a relationship with. So if I’m an HR professional, I may curate my HR policy, write particular guidance documents. If I work on the F-35 program, that might be my program documents. But if you think about it, this is the first time in human history that all of a sudden we can have a relationship—an individual without having to go to IT or whatever—can have a relationship with knowledge about the things they care about, curated by them, and have a question and answer relationship, not have to wait for a dashboard, not have to wait for an ERP.
And we are seeing people do this. They are crafting the kingdoms of knowledge while giving themselves minutes on mission back, but that ability to curate.
And the last thing I’ll leave us with is that this is also really an amazing opportunity for the non-STEM folks, right? We often think about technology as the game field of the STEM graduate, the data scientists, and it is. This technology—the best queries I’m seeing are moms, right? Because moms know, like, let me ask that question in a different way, right? And so my point is, we actually have the ability to show up collectively in an entirely different way. And it’s not an “if.” We are seeing people doing it now, and it’s going to make a difference.
So RAG is simply being able to say, here’s what I know. Here’s my expertise. Here’s what I care about. Here’s what I think is valid. And to have that dynamic relationship with that information.
This transcript was auto-generated, and may not be 100 percent accurate. The source audio and video can be accessed above.