Project Ploughshares researcher calls for ‘meaningful human control’ over AI-enabled warfare

A soldier operates the Turkish-made STM Kargu, a combat drone and loitering munition equipped with artificial intelligence and facial recognition technology. Photo: Armyinform.com.ua, CC BY 4.0, via Wikimedia Commons
By Matthew Puddister
Published July 17, 2025

Artificial intelligence (AI) has moved in recent years from the realm of science fiction to a part of everyday life, incorporated into everything from search engines to virtual assistants to generative tools and used in a wide variety of settings, including the church. One rapidly expanding use of AI is in autonomous weapons systems.

Branka Marijan. Photo: Contributed

Branka Marijan is a senior researcher at Project Ploughshares, the peace research institute of the Canadian Council of Churches that works with churches, governments, and non-governmental organizations across Canada and around the world to prevent armed violence and foster peace. In recent years Marijan has focused on the growing use of AI and autonomous weapons systems, and the need for “meaningful human control” or “appropriate care” over them. In November 2024 she spoke before the House of Commons national defence committee, urging that Canada’s military policy include more guidance on the deployment of these technologies.

The Anglican Journal spoke to Marijan about the significance of human control over AI-enabled warfare from the perspective of Christians striving for peace. This interview has been edited for length and clarity.

Tell us about your work as a senior researcher at Project Ploughshares.

My research area is emerging military and security technologies. About a decade ago when I started at Ploughshares, we did a scan of issues of concern. We came across this issue of autonomous weapons as one that we should be focusing on because our colleagues from other work that we’ve done, like Amnesty International [and] Human Rights Watch on the arms trade, were a part of this new campaign, Stop Killer Robots. This issue of lethal autonomous weapons systems was being discussed at the United Nations office in Geneva. Ever since 2015, I’ve focused on the issue of autonomous weapons and, increasingly, on responsible military applications of artificial intelligence.

How is AI currently used in warfare? We know that in its ongoing attacks on Palestinians in Gaza, Israel has been using U.S.-made AI models to select bombing targets.

AI is being used in a variety of ways, not all of which are concerning from an international regulatory perspective or even a humanitarian perspective. The most concerning applications are those you’ve identified: the use of AI in decision-support systems. These are systems that are used in targeting or identifying targets.

The issue is that these systems are essentially making very fast recommendations to humans who spend about 20 seconds reviewing the recommendation. What we’ve seen in Gaza in particular is an increasing and really dire impact on civilian populations. Some of these systems have allowed the Israel Defense Forces to move from having about 50 to 100 targets a year in Gaza to 50 to 100 targets a day. That increase in the scale and speed of violence is what we’re seeing when these AI decision-support systems are used.

Autonomous weapons systems, in particular, are weapons that are able to select, identify and engage targets. We’ve seen some early evidence of these types of systems, such as loitering munitions, which have been stated or alleged to act with a degree of autonomy; essentially, what these systems are doing is selecting targets and exploding upon impact. [Editor’s note: “Loitering munitions” are aerial weapons, typically drones, which can hover in an area for some time before picking and hitting a target.]

In Ukraine, we see a lot of deployment of AI for a variety of purposes, [such as] to provide information about the opposing forces and their movements to direct firepower. This is where Ukrainians have seen a lot of gains in efficiencies. This is why there’s such an appeal for militaries. For example, when they’re using these AI-enabled drones, they’re able to have a much greater military impact on their targets.

We have seen an increasing interest from a number of states around the world. There’s a lot of research and development going into AI applications in the military domain, particularly thinking through how these existing platforms and existing systems could be adapted—for example, how drones can be used to integrate AI applications or how to enable them with AI. There’s also the use of these systems in misinformation or disinformation campaigns.

What impacts are AI weapons systems having on civilians?

One of the issues we have seen is that the increase in targeting has resulted in greater civilian casualties. Because there are a number of supposedly legitimate targets, and these targets are identified as residing in particular areas of what is really a densely populated territory, the impact on civilians has of course been significant for years.

Supporters of these technologies have told us that they will lead to the protection of civilians and less loss of civilian life. But what we’ve seen in practice is quite the opposite. Civilian casualties and deaths have gone up when these systems have been deployed, particularly given the context of a densely populated area. A lot of the targeting has focused on the private residences of these targets, which then leads to greater civilian loss.

What is “meaningful human control” in the context of AI warfare?

One of the terms that we’ve been looking at in these international discussions for a decade now is this term “meaningful human control.” The idea behind meaningful human control is to identify what would be a significant enough degree of human control exercised over these systems to lead to a degree of accountability by the human operators.

We’ve clearly identified cases where humans are just rubber-stamping or approving actions recommended by an AI system of some sort. We want to come up with a way to understand how we can preserve the role of humans in warfare to ensure that we don’t see the scaling up of violence; that we don’t see further escalations or errors and mistakes that are made when increasingly autonomous systems are deployed.

Regardless of technological advancement, humans have to be accountable for actions that are taken by systems. By preserving that human accountability, we also preserve humanity in the sense of treating human life as sacred and not just a line of code.

In an article for The Ploughshares Monitor, you refer to the work of Richard Moyes and Heather Roff in describing what meaningful human control looks like.

They weren’t trying to provide limitations for states. They were just trying to say, if a system is deployed and it is acting with some degree of control, it needs to be predictable and reliable technology, have transparent systems, have users in possession of accurate information, allow an opportunity for timely human action and intervention, and have mechanisms for accountability.

If an AI system feeds you information, as the human operator who is ultimately accountable for its actions, you have to have a degree of understandability of what this system is doing. Maybe you don’t need to understand every single [artificial] neuron of its functionality. But you do have to understand how the system functions. You cannot be provided with a system you don’t understand, where you don’t know the exact parameters of its actions or how it could potentially act in different scenarios. In those cases, the human would not have a meaningful sense of control over these systems, and ultimately they could not be held accountable if the system acted in ways that the operator could not foresee and did not understand.

Some countries say existing international humanitarian laws are sufficient. They address what are proportionate attacks, what are legitimate military targets. But the issue here is that we’re dealing with systems that could potentially learn. A lot of them are black boxes, so it’s very difficult to understand how the system arrived at a particular decision.

We’ve all experienced this sense of automation bias. When you use your GPS and it leads you down roads that maybe you shouldn’t go down or there’s a better way, you assume Google knows better. When we’re handed these systems, we as humans tend to overly trust them because we think it’s math, it’s calculations.

Referring to Gaza again, how valid is the concept of meaningful human control over AI weapons in a context where military and political leaders have openly expressed genocidal intentions and commit war crimes with no accountability?

I think this is the most concerning situation that has evolved, and one that humanitarian organizations warned about from the very beginning of these international discussions. When we were being told these technologies were going to save civilian lives, we countered with exactly these kinds of scenarios, saying there are countries that can act outside of these international norms with very little impact on them and no pathways to accountability.

In these international discussions, we have not learned anything about these technologies from Israel. All of this has come from reporting and investigative journalism. This is really a challenge because countries are not providing information on how they’re using these systems. The international regulatory discussion [on] this concept of meaningful human control is a slow-moving process. The reality is that, in real time, we have people who are suffering and dying because of the deployment of some of these technologies.

What we’ve seen in Gaza, and what we’re going to see in other places that don’t receive as much media attention or don’t feature as much on the international stage, is precisely this impact on civilians with the testing and use of these technologies. We’re going to see it more and more as countries try and ensure that they have some technological edge over their adversaries in various contexts.

Because we started so early on, we could have had norms and rules and regulations in place. That missed opportunity is now showing us that these theoretical applications of the technology do not match the reality of their applications. Now I don’t know if there is a sense of urgency on the part of some states, because there is this idea that some states can just use these tools and technologies against civilian populations with seemingly very little cost to them and little impact on their international participation in these discussions.

There’s certainly a reputational cost. I don’t think that the use of these technologies and the impact on civilians has gone unnoticed. There has been a real sense that the dystopian reality we have been warned about is coming to fruition. But I do think that for some states, that reputational cost is maybe not as important as it would be for others.

That’s a major concern, because we do have an increasingly authoritarian bent internationally. That is a challenge for these multilateral efforts at building norms and rules at the international level, and it’s only going to become more difficult in the years to come as we grapple with the new U.S. administration and other shifts in global politics.

Some influential countries, particularly the United States, have opposed this idea of meaningful human control in AI warfare. Can you explain their opposition?

The United States has suggested alternate framings that seem to provide a bit more flexibility in terms of the level of involvement. They use [terms] like “appropriate levels of human involvement” or “appropriate care.” All of this provides a bit of distancing from this concept of control, because with control, it’s clear that there has to be human decision-making, particularly on the critical function that is the selection and engagement of targets.

What appropriate levels of human involvement means hasn’t been clearly defined by these states. They’ve talked about the fact that humans would be the ultimate decision-makers, and there would be great care taken in the type of systems that are developed and deployed. But when you use the term “appropriate human involvement,” that seems to hint at the possibility that in some scenarios, it could be deemed that there was no need for human involvement or that the necessary human involvement was minimal.

It could be that it’s sufficient to have this term “appropriate care” as long as it’s clearly defined—as long as there’s a norm amongst the states who are endorsing this declaration that appropriate care actually means having continuous assignments of accountability at various stages of weapons development and deployment.

For the United States, the concern is really that they will lose a technological edge in some future competition with a major power like China: that they could constrain themselves in some way [by] having the human in control, which then makes them vulnerable because they’re facing an adversary that might be more willing to remove that level of control. That speed in decision-making which AI offers would be decisive on a future battlefield.

What do you think needs to be done to ensure we have meaningful human control or appropriate care over AI weapons systems? Do you favour either of those terms?

We have to be open to whatever terminology the greatest number of states can agree to that still ensures a significant level of human control. We do need to ensure that there are humans who can be held accountable for these systems—that we should never be in a position where a human commander deployed a system that they had insufficient levels of control over and a lack of understanding of the system’s functionality, and that then resulted in a number of civilian deaths.

People generally get that there is a role for humans to play in deploying technologies. Nobody really wants a scenario where systems are acting without being clearly understood by a human who’s overseeing them. If decisions are being made autonomously that impact our lives, I think people would know at a very basic level that this is not acceptable. Whatever terminology is agreed upon at the international level, it has to ensure that we maintain that level of human control and a sense of human dignity as well, because I think we need to ensure that we aren’t treated as a set of zeros and ones.

Would you like to see some kind of international agreement or framework regarding human control over AI weapons?

Absolutely. What we really need is a multilayered governance process. We need to ensure that we have an international agreement on autonomous weapons and these AI decision-support systems that can be very broad, but still provide a sense of rules and norms for countries to follow. Then I do think we will need additional agreements. That initial broad agreement has to be legally binding.

How can we enforce international humanitarian law, though, including over control of AI warfare? I recall a quote by the Greek philosopher Anacharsis: “Written laws are like spiders’ webs; they will catch the weak and poor, but are torn in pieces by the rich and powerful.”

Yeah, I think it’s a fair criticism to say that international humanitarian law is violated constantly. It is broken. There’s no sort of sense of accountability by some states. Enforcement is a real challenge at the international level.

What we have to recognize, though, is that what we have in place is better than nothing. What we have in place still provides a sense of acting through some of these institutions, even for the less powerful, to show how they’ve been wronged. You just have to look at the lengths some states will go to in order to provide justification for their invasions, like the United States—the case it made for invading Iraq. If these international institutions did not matter at all, why go through that effort?

You’re correct that these international laws and norms and institutions can be used by the powerful against the weaker states. But at the same time, they do provide weaker states with a sense of recourse or a way to engage that would not be present without these institutions. It’s very disappointing to see violations of IHL. But there are also ways in which these systems protect civilians and protect populations and protect countries that are not always appreciated, because we focus on some of the major or constant failures.

But we don’t have alternatives. We don’t have anything other than the United Nations and the multilateral frameworks to try and work together as a number of countries with different interests and strategic goals. I think there is a need to revitalize these institutions and rebuild some of this sense of these norms. That is going to be incredibly challenging because we are in a very complex geopolitical moment globally. Even as Canada, we are in unprecedented times when it comes to our defence and security.

What’s the role of Christians when it comes to advocacy for meaningful human control or appropriate care over AI warfare?

One of the interesting things that we have observed in these discussions is the role of religious and interfaith communities in arguing for precisely this notion of human dignity and this concept that human life is sacred and shouldn’t be reduced to an algorithm. There have been stakeholders who have been very prominent, including the Vatican, who’ve been really thoughtful about the impact of these technologies on civilian populations. The World Council of Churches has had constant representation in these discussions as well.

It may seem removed from the life of ordinary Christians in Canada. These are high-level discussions [on] international security. But at the end of the day, it is about humanity and shifts that we’re seeing in warfare that are enabled by technologies which are having a profound impact on populations and civilians.

Regardless of denomination, I think people really care about this idea that we preserve human life and that we ensure there is a greater effort towards ensuring peace. These systems pose a great risk to international security, and that has to be recognized. International security, as we’re learning more and more in Canada, impacts all of us. These are not issues that are going to stay in some distant lands. These are things that are going to have an impact across the board in terms of warfare, but also in law enforcement and other areas where some of these tools and technologies are deployed.

Author

Matthew Puddister is a staff writer for the Anglican Journal. Most recently, Puddister worked as corporate communicator for the Anglican Church of Canada, a position he held since Dec. 1, 2014. He previously served as a city reporter for the Prince Albert Daily Herald. A former resident of Kingston, Ont., Puddister has a bachelor's degree in English literature from Queen’s University and a master’s degree in journalism from the University of Western Ontario.
