Will AI-Powered Weapons Protect Civilians or Increase the Risk to Them?
- AgileIntel Editorial

- Dec 2
- 9 min read

The rapid advancement of artificial intelligence (AI) is transforming numerous sectors, including healthcare, logistics, urban planning, and communications. However, when AI intersects with defence and warfare, the stakes become existential. Autonomous defence systems, that is, AI-enabled weapons or platforms with the capacity to make targeting or engagement decisions, raise serious moral, legal and strategic questions. Can machines ever be trusted with decisions over life and death? And if they are deployed, under what constraints and governance arrangements should they operate, so that human dignity, international law and humanitarian principles are not compromised?
Autonomous defence systems present a double-edged promise. On the one hand, they offer speed, precision, and reduced risk to human soldiers. On the other, they pose grave challenges to human judgment, accountability and the very norms that distinguish war from indiscriminate violence. This makes ethical deployment not only a technical requirement but a defining test of global security leadership and responsible innovation.
What Are Autonomous Defence Systems?
Autonomous defence systems, also known as autonomous weapons systems (AWS) or lethal autonomous weapons systems (LAWS), refer to weapons or military systems that, once activated, can perform certain functions such as target identification, selection, and engagement with minimal or no human intervention.
Different international organisations, states, and researchers define AWS differently, depending on the degree of autonomy, the part of the targeting lifecycle covered, or the extent of human control remaining. Due to this definitional ambiguity, establishing common regulatory frameworks or international norms becomes significantly more challenging.
Examples of AWS or systems with varying levels of autonomy include armed drones, loitering munitions, missile-defence systems with automatic response features (for instance, point-defence systems), and other AI-powered robotic platforms.
Why Defence Establishments Are Investing in AI Autonomy
From an operational standpoint, autonomous defence systems offer several perceived advantages:
Speed and Efficiency: AI systems can process sensor data, imagery, radar, heat signatures, and other battlefield intelligence far faster than humans. This enables rapid decision-making in high-tempo combat scenarios.
Persistent Operations and Risk Reduction: Autonomous platforms, such as drones, loitering munitions, or uncrewed ground/sea vehicles, can operate in environments too dangerous for humans, including contested zones, chemical or biological threat zones, and remote border areas. They can also remain on duty for long durations without fatigue or risk to human soldiers.
Proliferation and Cost Dynamics: The global AWS market is reportedly growing rapidly, with estimates suggesting a significant rise in market value that reflects heightened interest from multiple states and defence suppliers.
Military Advantage and Strategic Edge: Many countries, including major powers, are investing in autonomous weapons systems to maintain or gain a strategic advantage.
Given these potential benefits, many defence establishments view AWS as the next frontier in military capability and modernisation.
Real-World Risks, Incidents and Data
The operational advantages of AWS are real, but so are the recorded risks, misuses and humanitarian costs. Recent data and case studies highlight some of the dangers of deploying autonomous or semi-autonomous weapon systems without robust oversight and regulation.
Surge in Drone and Autonomous Weapon Use
Analysts tracking global conflicts report a dramatic increase in drone attacks over recent years, with some estimates suggesting a severalfold rise globally between 2020 and 2024. From an estimated few thousand drone-related strikes in 2023, the number reportedly jumped again in 2024.
These attacks are no longer limited to state actors. Non-state armed groups and militias are increasingly using inexpensive, commercially available drones, sometimes modified as loitering munitions or kamikaze drones, making lethal force more widely accessible.
This dramatic increase underscores the urgency of establishing ethical, legal, and regulatory guardrails before such use becomes pervasive.
Civilian Harm and Human Rights Violations
Reports have documented hundreds of civilian deaths in conflict zones attributed to drone strikes or autonomous weapons between 2021 and 2024, with nearly half of those deaths linked to strikes in a single country.
In one documented tragic outcome, a retaliatory drone strike after a bombing killed several civilians, including children and a humanitarian worker.
Observers have noted that some AI-powered targeting tools used by armed forces in recent conflicts, particularly in densely populated urban areas, appear to lack sufficient accuracy. This increases the risk of civilian casualties and raises serious questions about compliance with international humanitarian law (IHL) and human rights protections.
In short, while AWS may offer a veneer of precision and efficiency, real-world deployments have already resulted in civilian harm, unintended casualties, and significant moral, legal and humanitarian concerns.
Growing Proliferation and Risk of Misuse
Recent research highlights that the proliferation of AWS, among both state and non-state actors, is accelerating. Existing weapons systems, such as drones, loitering munitions or missile-defence platforms, can be retrofitted with autonomous modules, making upgrades easier and more widespread.
Given the affordability of commercial drones, increasing availability of AI software, and porous regulatory frameworks, there is concern that autonomous weapons might eventually become accessible to non-state actors, militias or extremist groups, significantly raising the risk to civilians and global security.
Given these trends, the global community’s challenge isn’t simply technical. It is ethical, legal and political.
Ethical, Legal and Human Rights Concerns
When decisions about targeting and lethal force are delegated, even partially, to machines, we must confront multiple fundamental issues:
Loss of Human Moral Agency, Accountability and Judgment
AI systems, no matter how sophisticated, lack human moral reasoning, conscience, empathy or the ability to make judgment calls in ambiguous, dynamic situations such as civilians fleeing, surrendering combatants, non-combatants mingled with combatants, or children in conflict zones.
When an autonomous system errs, misidentifies a target, misclassifies civilians or miscalculates collateral damage, who is responsible? The programmer, the operator, the manufacturer or the state that deployed it? This “accountability gap” undermines trust in AWS and raises serious ethical and legal concerns.
Risk of Dehumanisation and Objectification of Human Life
When humans are reduced to “heat signatures”, “object classes”, or sensor/algorithm-generated profiles, autonomous weapons risk stripping away individuality, context and human dignity. Targets become data points, not people with rights, histories and potential innocence.
This dehumanisation can lead to a higher tolerance for collateral damage, civilian harm or indiscriminate force, especially in chaotic warfare environments such as urban warfare or insurgency zones.
Technical Limitations, Bias and Unpredictability
AI models depend heavily on training data, assumptions and constraints. In many real-world scenarios, especially in conflict zones, data may be incomplete, biased, or insufficient to capture the fluid and chaotic nature of war. Civilians mingling with combatants, nontraditional combatants, rapidly changing situations, environmental obscurants, and low visibility can make AI decisions brittle, unpredictable, or dangerously unreliable.
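To make the brittleness point concrete, here is a minimal, purely illustrative Python sketch. It uses synthetic data and a hypothetical two-class labelling (nothing here reflects any real targeting system): a classifier that looks near-perfect on data resembling its training distribution degrades sharply once sensor conditions shift.

```python
# Illustrative only: a toy classifier trained on "clean" synthetic data degrades
# when the test distribution shifts (heavy noise, missing features). All names
# and numbers are hypothetical, not drawn from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Two well-separated synthetic classes stand in for "target" vs "non-target"
n = 2000
X_train = np.vstack([rng.normal(0.0, 1.0, (n, 5)), rng.normal(2.5, 1.0, (n, 5))])
y = np.array([0] * n + [1] * n)

clf = LogisticRegression(max_iter=1000).fit(X_train, y)

# In-distribution test data: performance looks excellent
X_test = np.vstack([rng.normal(0.0, 1.0, (n, 5)), rng.normal(2.5, 1.0, (n, 5))])
print("clean accuracy:  ", accuracy_score(y, clf.predict(X_test)))

# Degraded conditions: heavy sensor noise plus two features lost entirely
# (a crude stand-in for occlusion or low visibility). Accuracy drops sharply.
X_shifted = X_test + rng.normal(0.0, 3.0, X_test.shape)
X_shifted[:, :2] = 0.0
print("shifted accuracy:", accuracy_score(y, clf.predict(X_shifted)))
```

The gap between the two printed accuracies is the point: the model's behaviour on data unlike its training distribution is exactly where battlefield conditions tend to sit.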
Additionally, designers of weapons systems may not foresee all potential uses or misuse. AI is a dual-use technology: even tools developed for non-military civilian applications may be repurposed for conflict, raising moral responsibility concerns across the entire lifecycle of development and deployment.
Challenges to International Humanitarian Law (IHL) and Human Rights
Principles central to IHL, such as distinction (between combatants and non-combatants), proportionality (avoiding excessive collateral damage) and necessity, may be violated if autonomous systems cannot reliably adhere to them.
Human rights organisations warn that deployment of AWS without adequate oversight, regulation and accountability mechanisms threatens the right to life, dignity and due process, particularly for civilians and vulnerable populations.
Global Response: Regulation, Governance and Calls for Controls
Given the risks, many in the international community are calling for robust governance, regulation or outright bans on fully autonomous lethal weapons.
In 2023, the First Committee of the United Nations General Assembly (UNGA) adopted a resolution on autonomous weapons. The vote passed with a large majority, reflecting broad concern at the global level about AWS.
A joint appeal by United Nations officials and humanitarian organisations urged states to negotiate a legally binding instrument to set explicit prohibitions and restrictions on autonomous weapons systems.
Many countries have publicly declared that fully autonomous lethal weapons, that is, weapons without meaningful human control, should be banned or, at a minimum, strictly regulated.
The research community and international bodies, such as disarmament experts, have produced reports and frameworks assessing the human rights, legal, and security implications of AWS and emphasising the need for transparency, accountability, and global cooperation.
Yet, despite mounting calls, there is still no universally binding treaty or legal instrument that fully regulates AWS globally. The debate continues; many states remain cautious or undecided.
Toward Responsible Deployment: What Ethical AI in Defence Should Look Like
Given the dual-edged nature of autonomous systems, a pragmatic and ethically sound path forward would combine innovation with restraint, ensuring that human values such as dignity, accountability, and humanity remain central. Here are some guiding principles and measures:
Adopt a Clear, Shared Definition and Transparent Standards
Because definitions of “autonomous weapons” vary widely, states and international organisations should converge on a value-neutral, widely accepted definition of what constitutes an AWS or LAWS. This clarity is a prerequisite for any meaningful regulation, oversight or treaty framework.
Clear standards would also help close loopholes in which semi-autonomous systems, or systems described as human-in-the-loop but only minimally supervised, become de facto autonomous weapons.
Ensure Meaningful Human Control Throughout the Targeting Process
Any weapon system capable of lethal force should require meaningful human oversight over target selection and engagement decisions. This preserves human moral agency, accountability, and the ability to assess context and exercise compassion, mercy and judgment. Many civil-society actors and legal experts insist on this as an absolute.
In technical development, this may require hybrid systems and human-AI teams, rather than fully autonomous systems. Recent academic literature supports such human–machine collaboration, combining human judgment with machine speed and precision.
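One way such a human-AI team could be structured in software is sketched below. This is a hypothetical illustration, not any fielded system's interface: the class names, confidence threshold, and labels are invented. The essential property is that the model may only recommend, every path either stands down or escalates to a person, and the human decision is recorded.

```python
# A minimal sketch of a "human-in-the-loop" gate: the model recommends,
# a human decides, and the decision is logged. Hypothetical names throughout.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    NO_ENGAGE = "no_engage"                  # default to restraint
    ESCALATE_TO_HUMAN = "escalate_to_human"  # hand off to an operator


@dataclass
class Recommendation:
    track_id: str
    label: str          # e.g. "military_vehicle" (illustrative class name)
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    rationale: str      # human-readable explanation shown to the reviewer


def gate(rec: Recommendation, min_confidence: float = 0.95) -> Decision:
    """The system never engages on its own: low-confidence cases stand down,
    and even confident cases still require a human decision."""
    if rec.confidence < min_confidence:
        return Decision.NO_ENGAGE
    return Decision.ESCALATE_TO_HUMAN


def human_review(rec: Recommendation, operator_id: str, approved: bool) -> dict:
    """Record who decided, what the model claimed, and what was decided."""
    return {
        "track_id": rec.track_id,
        "operator": operator_id,
        "model_rationale": rec.rationale,
        "approved": approved,
    }
```

The design choice worth noting is that autonomy is bounded by construction: no code path issues an engagement on the model's output alone, which is one software-level reading of "meaningful human control".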
Establish Robust International Governance, Treaty Frameworks and Export Controls
Given the cross-border implications of AWS proliferation, from regional instability to non-state actors obtaining such weapons, global governance is essential. A treaty or a legally binding instrument under international law should regulate the development, deployment, export, transfer, and use of AWS.
Such governance should include transparency obligations, documentation, audit logs and compliance with IHL and human rights law.
Ethical Design, Testing, Transparency and Accountability Mechanisms
AI developers and defence contractors must embed ethics from the design stage. This includes thoroughly testing AI systems across diverse, real-world-like scenarios, ensuring explainability and accountability, mitigating bias, and incorporating fail-safes or shutdown mechanisms.
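Two of the design-stage mechanisms mentioned above, an audit trail and a shutdown mechanism, can be made concrete in a short sketch. Again this is an assumption-laden illustration, not a real system's API: the class names and file format are invented, and a production system would need far more (tamper-evident storage, authenticated operators, independent monitors).

```python
# Hedged sketch of two fail-safe building blocks: an append-only audit log of
# every model output, and a software "kill switch" that suppresses autonomous
# recommendations entirely. Names are illustrative, not a real library's API.
import json
import threading
import time


class AuditLog:
    """Append-only, timestamped record of every recommendation for later review."""
    def __init__(self, path: str = "audit.log"):
        self._path = path
        self._lock = threading.Lock()

    def record(self, event: dict) -> None:
        entry = {"timestamp": time.time(), **event}
        with self._lock, open(self._path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")


class KillSwitch:
    """Fail-safe: once tripped by an operator or monitor, the system refuses
    to produce further recommendations until explicitly re-enabled."""
    def __init__(self):
        self._disabled = threading.Event()

    def trip(self) -> None:
        self._disabled.set()

    def active(self) -> bool:
        return not self._disabled.is_set()


def recommend(features, model, audit: AuditLog, kill: KillSwitch):
    """Wrap any model callable so its outputs are always logged and can be cut off."""
    if not kill.active():
        audit.record({"event": "suppressed", "reason": "kill switch engaged"})
        return None
    label, confidence = model(features)   # 'model' is any callable returning both
    audit.record({"event": "recommendation", "label": label,
                  "confidence": confidence})
    return label, confidence
```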
Furthermore, third-party oversight from independent experts, civil society and legal scholars should be part of governance to avoid conflicts of interest and ensure legitimacy.
Stakeholder Engagement: Civil Society, International Bodies, Technologists, States
This is not a challenge that militaries or technology companies should decide alone. The stakes, which involve human life, dignity and global security, demand that policymakers, international organisations, human rights groups, technologists, ethicists and the public be part of the conversation.
Only through collective deliberation, balancing innovation, security, ethics and humanity, can we navigate the risks responsibly.
Why Ethical Deployment Matters More Than Just Strategy
The debate over autonomous defence systems is not only about battlefield effectiveness or strategic advantages. It is about what kind of world we want to live in, even in war. Key reasons why ethical deployment, a moratorium, or a ban matters deeply:
Human Dignity and Moral Responsibility: War involves human lives, suffering and moral weight. Delegating decisions about life and death to machines risks stripping away humanity from conflict. Autonomous systems that treat humans as data points may view life as calculable, programmable, and replaceable.
Protection of Civilians and Vulnerable Populations: Conflicts increasingly involve civilians, urban zones, and nontraditional combatants. Without human judgment, the risk to non-combatants multiplies. The recent spike in drone and AWS-related civilian casualties underscores this danger.
Prevention of Arms Race and Proliferation: As AWS become more common, cheaper, and more accessible to states and possibly non-state actors, the threshold for entering conflicts could be lowered, making war more frequent, more destructive, and less controllable.
Upholding International Law and Human Rights: The principles of distinction, proportionality, necessity and accountability, core to IHL and human rights law, must not be sacrificed in the name of efficiency or strategic gain.
Accountability and Justice: When machines make lethal decisions, holding humans or institutions accountable becomes a complex task. This undermines justice, impedes redress for wrongful harm, and erodes public trust.
This is not simply a technological or military issue. It poses a profound ethical, humanitarian, and legal challenge to humanity.
Conclusion and The Way Forward
As AI continues to advance rapidly and autonomous defence systems become more capable, the possibility of a future where machines independently conduct lethal functions is no longer science fiction. Many states are already investing in AWS, and the global market is expanding.
Given this reality, a principled and proactive approach is not only desirable but imperative. Ethical AI deployment in defence must be grounded in human dignity, legal accountability, protection of civilians, and international cooperation. Doing so requires:
Global agreement on definitions and standards for what counts as an autonomous weapon.
Guaranteed human control over lethal decisions: no machine should be allowed to kill without meaningful human oversight.
A robust international governance framework, possibly under a treaty, to regulate development, deployment, export and use of AWS.
Ethical design, transparency, and accountability built into the lifecycle of all AI systems used in defence, including auditability, explainability and oversight by independent bodies.
Inclusive stakeholder engagement involving states, international institutions, civil society, technologists, ethicists, and public voices to ensure legitimacy and global acceptance.
At stake is not only the future of warfare, but also the values of humanity that we choose to uphold, even in conflict. In a world where machines might someday make decisions about who lives and who dies, we must ask ourselves whether efficiency and strategic advantage are worth sacrificing moral responsibility, human dignity and accountability.
If we choose to navigate this path, it must be with caution, deliberation, regulation and, above all, respect for life.