How Artificial Intelligence Is Changing Modern Warfare
Experts warn that AI used in strikes on Iran and in an operation in Venezuela could transform warfare, as algorithms analyze intelligence and help select targets.
New details have emerged about the use of artificial intelligence in strikes on Iran and during a military operation in Venezuela. Experts say these cases may signal the beginning of a profound shift in the nature of warfare, with neural networks expected to play an increasingly significant role in battlefield decision-making. The issue is examined in a report by RIA Novosti.
The Pentagon has already acknowledged that AI technologies are being used in military operations, though officials have not disclosed specific details.
What is known is that intelligence platforms — including systems developed by Palantir Technologies — are capable of collecting vast amounts of information, ranging from satellite imagery to human intelligence reports. Algorithms then process this data, identify priority targets, select appropriate weapons and even assess the legal justification for potential strikes.
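The internals of these platforms are not public. Purely as an illustration of the kind of pipeline the paragraph describes — fusing intelligence into a score, gating candidates on a legal check, and ranking the rest — here is a toy sketch; every name, field, and weight in it is hypothetical and not drawn from any real system:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    threat: float        # 0..1, notional threat level from fused intelligence
    confidence: float    # 0..1, how certain the identification is
    legal_cleared: bool  # outcome of a (human-reviewed) legal assessment

def prioritize(candidates):
    """Rank candidates that pass the legal gate by threat * confidence."""
    cleared = [c for c in candidates if c.legal_cleared]
    return sorted(cleared, key=lambda c: c.threat * c.confidence, reverse=True)

targets = [
    Candidate("site-A", threat=0.9,  confidence=0.6, legal_cleared=True),
    Candidate("site-B", threat=0.7,  confidence=0.9, legal_cleared=True),
    Candidate("site-C", threat=0.95, confidence=0.8, legal_cleared=False),
]
ranked = prioritize(targets)
# site-B (0.63) outranks site-A (0.54); site-C is excluded by the legal gate
```

The point of the sketch is the experts' warning in miniature: the ranking looks authoritative, but it is only as good as the scores and the gate feeding it.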
According to Dmitry Stefanovich, a researcher at the Center for International Security of the Institute of World Economy and International Relations of the Russian Academy of Sciences, artificial intelligence makes it possible to plan the deployment of forces and equipment more efficiently. By analyzing multiple variables — from an adversary’s defensive capabilities to terrain features and weather conditions — such systems can calculate how to ensure a successful strike.
AI platforms can also generate operational scenarios almost instantly. For now, however, the final decision to use force remains in human hands, although military circles are already discussing granting neural networks greater autonomy in combat operations.
Specialists warn that the main danger does not lie in the technology itself, but in the growing tendency to rely uncritically on algorithmic conclusions without fully understanding how those decisions are reached.
Scientists have raised additional concerns. In a recent experiment at King’s College London, three advanced neural networks — GPT-5.2, Claude and Gemini — were tasked with simulating a nuclear crisis. In 95 percent of the modeled scenarios, the AI opted to use nuclear weapons. The outcome suggested that the strategic logic of machines can differ sharply from human reasoning and may prove far more aggressive.
Experts increasingly caution that if algorithms are granted broader authority in military decision-making, the consequences for global security could become extremely serious.