The Failure of “Meaningful Human Control”
In November 2020, an AI-assisted weapon reportedly tracked and killed a nuclear scientist in Iran without a human operator issuing a direct fire command. The frameworks that were supposed to prevent this did nothing. Lethal autonomous weapons systems capable of selecting and engaging targets without meaningful human involvement already exist and are actively being developed by the world's most powerful militaries. The phrase that has emerged in debates over autonomous weapons systems (AWS), in United Nations deliberations, in Department of Defense policy, and throughout the academic literature, is "meaningful human control." It appears in these contexts as if it names something concrete, as if the ethical problem of algorithmic killing had already been resolved by the act of naming it. It has not. "Meaningful human control," as it currently exists, does not constrain the delegation of lethal decisions to autonomous systems. It permits it. The failure operates on four levels: the language used to describe human oversight is incoherent; the philosophical frameworks proposed to replace it cannot be satisfied by existing technology; even a philosophically coherent and technically feasible standard would leave an ethical objection intact; and even if all three of those problems were resolved, the states most capable of deploying these weapons have every incentive to ensure that no binding international regulation emerges. Each level forecloses one possible exit from the problem. By the time all four have been examined, what remains is not an open debate but one that has already been quietly settled by the states with the least interest in settling it honestly. For these reasons, the phrases "meaningful human control" and "human in the loop" do not deliver the oversight of lethal autonomous weapons systems that they appear to promise.
The first failure is in the language surrounding AWS. The common conception of human oversight of autonomous weapons is the "human in the loop," a phrase meant to convey that a person occupies some meaningful position in the decision chain through which lethal force is authorized. Joseph Chapa, an Air Force officer and philosopher, has shown that this framing does not accurately describe existing policy, and that its inaccuracy is not a coincidence. United States Department of Defense directives do not require a human being to be present at every point of an autonomous system's decision process. What they require is an "appropriate level of human judgment," a standard so abstract that the required judgment could be located almost anywhere (Chapa). The word "appropriate" stands in for the kind of concrete standard that genuine oversight would require, but it commits to nothing specific.
The deeper problem, Chapa argues, is structural rather than terminological. When we define the machine's decision process as "the loop" and then ask where a human fits within it, we have already accepted a system in which the machine's logic is the starting point and human agency is something to be inserted afterward. Chapa illustrates this with an analogy to anti-lock brakes. A driver who presses the brake pedal is not the agent making braking decisions in any substantive sense. The ABS makes those decisions, within parameters that the driver cannot access or modify in real time. The driver initiates, the machine executes, and the moral weight of the act gets distributed across a system that was never designed to bear it. Applied to lethal force, this logic reduces the human's role to something closer to a trigger than a decision-maker. The language of the loop has obscured the fact that no standard of any substance exists at all.
If the existing vocabulary is broken, why not replace it with something more rigorous? Philosophers Filippo Santoni de Sio and Jeroen van den Hoven have attempted exactly this. They argue that meaningful human control over autonomous systems requires two conditions to be simultaneously satisfied. The first is a "tracking" condition, which holds that an autonomous system must be responsive to the moral reasons of its human designers, capable of behaving in ways that reflect the values its designers intended to encode. The second is a "tracing" condition, which holds that the outcomes of an autonomous system's actions must be traceable back to a specific human agent who can be held morally and legally accountable for them (Santoni de Sio and van den Hoven). Without both conditions, they warn, a "responsibility gap" opens in which autonomous harm occurs but no person is identifiably responsible for it. The framework is philosophically serious, but the problem is that current technology cannot satisfy it.
Ludovic Righetti and his colleagues, writing in IEEE Robotics and Automation Magazine, examine lethal autonomous weapons systems against the requirements of International Humanitarian Law. These laws require distinction between combatants and civilians, proportionality in the use of force, and precautionary measures to limit harm. Their analysis reveals a fundamental incompatibility between how machine learning systems actually function and what Santoni de Sio's tracing condition requires. Machine learning algorithms do not follow explicit rules that map predictably from inputs to outputs. They identify patterns in training data and generalize from them, and that generalization can become unpredictable when a system confronts situations outside the distribution of its training data (Righetti et al.). In a battlefield environment, which is by definition full of novel and unanticipated situations, this unpredictability is not a temporary limitation to be engineered away. It is a feature of how these systems work. If a system's behavior in a given context cannot be reliably predicted, it cannot be reliably traced to a human author. If it cannot be traced, the responsibility gap Santoni de Sio and van den Hoven describe does not merely threaten to open. It is already open.
The third failure is ethical, and it operates independently of the first two. Even granting that the philosophical standard could be satisfied and that technology could eventually meet its requirements, a further objection remains that would survive both improvements. Leonard Kahn, a philosopher working on the ethics of autonomous weapons, has argued that sufficiently advanced systems capable of distinguishing combatants from civilians more reliably than human soldiers would not merely be permissible but morally required. His reasoning is grounded in a genuine moral commitment: if the goal of International Humanitarian Law is to protect non-combatants, and if a non-human system can do that more effectively than a human soldier, then a principled preference for human decision-making would be purchasing moral comfort at the cost of non-combatant lives (Kahn). The conclusion sounds counterintuitive, but the argument has real force. Kahn is not defending autonomous killing for its own sake. He is asking whether the insistence on human involvement is a defensible moral principle or merely a bias in favor of our own kind of decision-making. If the stakes are non-combatant lives, should we not use whatever means protects them most effectively?
Linda Eggert pushes back on precisely this move. Kahn's framework treats accuracy as the morally relevant consideration, such that the permissibility of a lethal decision turns on whether the system correctly identified its target. Eggert argues that this misunderstands what is morally at stake when one agent decides to kill another human being. The objection to purely algorithmic lethal decisions is not that machines make errors, though they do. The objection would remain even if machines were perfectly accurate, because what targets of lethal force are owed is not precision but something that precision cannot supply: individual moral regard (Eggert). A human being who decides to use lethal force against another person is capable of recognizing that person as a rights-holder, of weighing their particular circumstances, and of bearing genuine moral responsibility for what follows. An algorithm processes a data profile and returns an output. Reducing a human life to an input in that process denies that person something they are owed, not because the technology is insufficient but because the technology is, by its nature, indifferent. That indifference is the ethical problem, and no improvement in accuracy resolves it. Kahn's argument, by making accuracy the decisive ethical consideration, shifts the conversation away from dignity entirely. Once the debate moves from dignity to precision, the ethical objections to AWS are not answered; they are bypassed.
The fourth failure is political, and in some ways it is the most important, because it does not depend on conceptual confusion or technological limitation. It reflects the practical reality of the geopolitical system: the states most capable of addressing the problem are the states with the strongest reasons not to. Ingvild Bode and her colleagues have examined how the United States, China, and Russia have engaged with the United Nations Convention on Certain Conventional Weapons (CCW) debates on the regulation of lethal AWS. They found that in the absence of binding governance, these states have participated in deliberations over AWS in ways that project an image of engagement while they quietly continue to develop and field these weapons (Bode et al.). In effect, the CCW process allows states to perform good faith without committing themselves to anything enforceable.
Bode's analysis makes clear that this outcome is not an accident. The states producing and deploying the most sophisticated autonomous weapons systems are also the states whose strategic interests are most directly threatened by binding restrictions on those systems. A genuinely enforceable international agreement would require that the actors who benefit most from the current ambiguity voluntarily surrender the advantage that it provides. Bode notes that governance may still be possible if pressure comes from actors outside the major weapons-producing states, but she does not suggest that this pressure is imminent or that the institutional mechanisms for generating it are in place (Bode et al.).
Historian Thomas Hughes argued that technological systems develop through stages, and that human choices are most effective early in that development. Once a system gains momentum, steering it becomes far more difficult. The longer the CCW process stalls, the closer autonomous weapons systems come to the kind of momentum that makes binding governance not just politically unlikely but structurally too late. The philosophical arguments have been made, the technical limitations have been documented, and the ethical objections have been articulated at length, yet the states with the most advanced autonomous weapons continue to develop them. The line has not been drawn because the parties with the authority to draw it would prefer not to.
What remains is a question that none of these sources fully answer. Is it still possible for the international community to generate the kind of pressure, from outside the circle of major weapons producers, that binding governance would require? Bode thinks it might be. But the more unsettling possibility is that by the time that pressure materializes, the technology will have gained the momentum Hughes describes, the kind that makes steering extremely difficult. Langdon Winner, in his essay "Technologies as Forms of Life," calls this kind of drift sleepwalking, and we may already be sleepwalking into a world where the phrase "meaningful human control" means whatever the states with the most autonomous weapons decide it means. The question is not only where we should draw the line, but whether we can still draw it at all.
Works Cited
Bode, Ingvild, et al. "Prospects for the Global Governance of Autonomous Weapons: Comparing Chinese, Russian, and US Practices." Ethics and Information Technology, vol. 25, 2023, article 5, doi.org/10.1007/s10676-023-09678-x.
Chapa, Joseph O. "Please Stop Saying 'Human-In-The-Loop.'" Institute for Future Conflict, 3 Sept. 2024, ifc.usafa.edu/articles/please-stop-saying-human-in-the-loop.
Eggert, Linda. "Autonomous Weapons Systems and Human Rights." AI Morality, Oxford University Press, 2022, pp. 1–17.
Kahn, Leonard. "Lethal Autonomous Weapon Systems and Respect for Human Dignity." Frontiers in Big Data, vol. 5, 9 Sept. 2022, doi.org/10.3389/fdata.2022.999293.
Righetti, Ludovic, et al. "Lethal Autonomous Weapon Systems: Ethical, Legal, and Societal Issues." IEEE Robotics & Automation Magazine, vol. 25, no. 1, Mar. 2018, pp. 123–129.
Santoni de Sio, Filippo, and Jeroen van den Hoven. "Meaningful Human Control over Autonomous Systems: A Philosophical Account." Frontiers in Robotics and AI, vol. 5, 28 Feb. 2018, article 15, doi.org/10.3389/frobt.2018.00015.