AI Combat Decisions Raise Legal Risks as US Expands Use
US plans wider use of the Maven AI system in military strikes, raising concerns over legal responsibility and control when AI influences combat decisions.
The use of artificial intelligence systems to make combat decisions instead of humans creates a gap in determining responsibility for the use of force, said Ruslan Rashitkhanov, Deputy Director of the Institute of Information and Media Security at Kutafin Moscow State Law University (MSAL).
According to Reuters, the Pentagon plans to permanently integrate the Maven AI system into the US armed forces. The system is already being used in strikes against Iran.
Rashitkhanov noted that under the current legal framework, AI is considered a tool for information and analytical support, while the final, legally significant decision on the use of force must remain with an authorized official. He stated that when AI effectively replaces this human decision, a legal gap arises: there is no clearly defined decision-making subject, and responsibility becomes difficult to assign.
He stressed that this is why Russian law and military doctrine maintain that the final decision to use force must remain with a human, and no technical system can replace it.
According to him, once AI outputs directly influence combat decision parameters, the situation goes beyond the internal regulation of information systems and falls under international law governing the means and methods of warfare, including Additional Protocol I of 1977 to the Geneva Conventions.
He explained that legal assessment applies not to the algorithm itself but to the specific configuration in which it is used to apply force, which entails an obligation to take all feasible precautions and to comply with the principles of distinction and proportionality.
He also noted that neither Russian law nor current international law recognizes AI systems as independent entities authorized to make combat decisions.