WATCH: In SASC Hearing, Kelly Presses on AI-Enabled Drone Strikes and Human Oversight 

Today, during a Senate Armed Services Committee hearing, Arizona Senator and Navy combat veteran Mark Kelly asked Pentagon officials whether AI is being used in targeting decisions in Iran and what level of human involvement exists before a strike is carried out.

His questioning comes as he raises concerns about the Trump administration’s move to cut off federal work with Anthropic after the company objected to the removal of safeguards preventing its models from being used for autonomous weapons and surveillance.

Sen. Kelly questions Major General Steven M. Marks at a SASC hearing. 

Click here to download a video of Kelly’s questioning. See the transcript below: 

Sen. Kelly: 

So, General Marks, I want to ask you about something that I think is going to define warfare now, for the rest of our lives, for generations, and that’s the role of artificial intelligence in what we’ve seen play out here over the last several days. But certainly, into the future, it’s going to be a new feature of combat operations in many different ways. But specifically, the LUCAS, the low-cost unmanned combat attack system: those drones deployed in Operation Epic Fury have documented autonomous anti-jamming and, I believe, also some swarming capability. So, my question is about what’s underneath all of that. Are AI systems being used to assist in targeting decisions during this operation?

Major General Steven M. Marks: 

So, Senator, thank you for the question. I am familiar with the LUCAS system. At this level, in an open hearing, I’m not able to go into great depth on what is inside the LUCAS system, but I would be willing to get on your calendar, on the Committee’s calendar, and provide you a classified briefing.

Kelly:  

Okay, so my next question is kind of irrelevant there, because I was going to ask about who validated the systems, who safeguarded them, and what human oversight exists at the moment a drone selects or confirms a target. So, let’s do that in a closed session as well.

But I also want to just state for the record here that companies like Anthropic and others in the AI industry have published their own safety frameworks for how advanced AI systems should be deployed. But Congress has not yet set any kind of clear statutory framework for how AI can be used in lethal military operations. There’s a DoD directive, Directive 3000.09, which requires what is called, and I’m quoting from the directive, “appropriate levels of human judgment over the use of force.” But that language doesn’t necessarily mean a human is involved at the moment a target is selected or engaged.

So, before we rapidly scale up production and field more of these systems that have AI incorporated into their capabilities, we need a clear answer on this: at the moment a drone identifies and confirms a target, does a human have to make the final decision to strike, or can the system execute the engagement autonomously once it’s been activated? These are questions we haven’t yet dealt with here in Congress, and we need to. So, General, I just want to get your thoughts on that, independent of what LUCAS or any other system can do.

Marks:  

Thank you, Senator, for the question. Any system, any capability that the department procures has to be compliant with the Law of Armed Conflict. And I would say that any commander that deploys these systems, just like any weapon system, has to comply with the Law of Armed Conflict.

Kelly:

I am not sure that the Law of Armed Conflict has dealt with this issue, so LOAC might not be exactly clear, and that’s why I think it’s up to us, Mister Chairman, to take this issue of humans in the loop seriously and create the framework that DoD will apply to these systems with regard to their autonomous nature and the ability of a system to make a decision on targeting the enemy. Thank you.
