IRRC No. 915

Protecting the global information space in times of armed conflict

Abstract
Against the background of the digital transformation, the legal implications of information activities in the context of armed conflict have so far received only scarce attention. This article aims to fill this gap by exposing some of the legal issues arising in relation to mis- and disinformation tactics during armed conflict in order to provide a starting point for further debate in this respect. Specifically, it explores the existence and content of limits imposed by international humanitarian law on (digital) information operations and inquires whether the current framework adequately captures the humanitarian protection needs that arise from such conduct. * This is the revised and updated version of the Geneva Academy Working Paper by Robin Geiß and Henning Lahmann, “Protecting the Global Information Space in Times of Armed Conflict” (February 2021). The author wishes to thank Dr Kubo Mačák for his helpful comments on an earlier draft of the article.

Introduction

The growing number of allegations over the past couple of years of foreign influence activities, carried out by a variety of international actors and directed against democratic decision-making processes in other States, has put the problem of adversarial information operations, broadly understood as “any coordinated or individual deployment of digital resources for cognitive purposes to change or reinforce attitudes or behaviours of the targeted audience”,1 high on the international agenda. The most prominent examples include the interference in the 2016 US and the 2017 French presidential elections as well as in the 2016 Brexit referendum in the UK. The phenomenon is certainly neither abating nor geographically limited: in late 2020, for instance, Somalia expelled Kenya's diplomatic staff after accusations of electoral meddling.2 Since the beginning of 2020, an unprecedented surge of misinformation and disinformation3 surrounding the COVID-19 pandemic has added a new sense of urgency while at the same time expanding the scope of the legal questions. However, the ensuing debate among scholars and policy-makers has so far focused on international human rights law and other questions of peacetime international law, such as whether and under which circumstances an (online) disinformation campaign targeting audiences abroad may amount to a violation of the target State's sovereignty, the principle of non-intervention, or even – in extreme cases – the prohibition of the use of force.4 The legal implications of digital information warfare in the context of armed conflict, on the other hand, have so far received scarce attention.5 This contribution aims to fill this gap by exposing some of the legal issues arising in relation to mis- and disinformation tactics during armed conflict in order to serve as a starting point for further debate in this respect:

What, if any, limits exist concerning adversarial information operations in armed conflict? Does the humanitarian legal framework adequately capture the humanitarian protection needs that arise from these types of (military) conduct? Where and how should the line be drawn between effects and side-effects of digitalized information warfare that should remain either within or without the protective ambit of international humanitarian law (IHL)? What are, or what should be, the limits of disinformation campaigns, “fake news”, deep fakes, and the systematic manipulation of a given information space in times of armed conflict? Does IHL, which is traditionally and primarily focused on preventing physical harms, sufficiently account for, and is it capable of mitigating, the potentially far-reaching consequences that such types of operations can have on societies? If not, should it?

While the laws of armed conflict have proven to be flexible enough to anticipate technological innovation in general and are applicable also to new means and methods of warfare, as thoroughly discussed in relation to the application of IHL to cyber warfare,6 it is less obvious whether the protection they provide remains adequate in all instances in which novel forms of warfare are employed. And while it is certainly true that disinformation campaigns, ruses and other methods of deception and propaganda have always been part and parcel of warfare, recent technological developments, especially in the fields of cyber and artificial intelligence, are to be seen as a veritable game changer of (dis-)information warfare. Considering the scale, scope and far-reaching effects of peacetime information activities, and taking into account the constantly increasing level of military cyber capabilities, this article argues that the traditional assumption that military influence operations have always been an immanent feature of warfare and are thus generally permissible during armed conflict7 should be revisited. The intention is to start a debate and to question whether the long-standing practice of psychological and influence operations, considering how powerful and damaging some of these operations have become in the wake of global digitalization, is still to be seen as a “common feature of war” with only a few constraints in IHL as it stands today.

 

After presenting a few brief scenarios of possible (military) information operations in situations of armed conflict to illustrate what is potentially at stake, the main part examines whether and to what degree existing rules of IHL put limitations on the conduct of information warfare. A short look at international criminal law and international human rights law follows before the article concludes with an outlook on potential paths to advance the debate.

Mapping the threat landscape: Risks to the information space in contemporary armed conflict

Psychological and influence operations in armed conflict can occur in vastly different contexts and can have a variety of effects on the targeted societies and civilian populations, depending on the mode of conduct and the technologies employed, the scope, scale and sophistication of the operation or campaign, the target audience, and the aims pursued. In order to illustrate the matter, a set of hypothetical scenarios – loosely based on past events – follows below.

Scenario A: Social media-enabled foreign electoral interference

The governmental armed forces of State A are involved in a protracted, low-intensity non-international armed conflict with Insurgent Group G, which controls parts of the territory of State A. In the months prior to a general election in State A, the military cyber unit of neighbouring State B – which has been supporting Insurgent Group G with weapons, logistics and covert special forces operations over the course of the conflict – sets up a concerted disinformation campaign on social media in close coordination with domestic groups belonging to G. Employing tools such as fake accounts, bots, and micro-targeting algorithms, the operation disseminates misleading and false political content to State A's electorate in order to discredit the incumbent and boost support for her contender, who publicly supports the main demands of Insurgent Group G, including secession, and a close future alliance with State B. Despite having trailed in the polls for months, the contender surprisingly wins the election and assumes the presidency.

Scenario B: Large-scale distortion of the media ecosystem

During a situation of sustained political tension between State A and State B, the military information operations unit of State B starts an open propaganda campaign, disseminated via social media, video streaming platforms and State-owned television channels, that attempts to undermine public support in State A for the policies of its government vis-à-vis State B by highlighting arguments that contradict the official justification of the government's positions. As the campaign does not seem to yield discernible results, the military of State B launches a limited number of missiles against the territory of State A while the military information operations unit spreads a video via social media – using fake accounts that appear to belong to ordinary citizens of State A – that ostensibly shows a high-ranking political leader admitting that the armed conflict was actually initiated by State A under false pretences. Shortly thereafter, the military of State B starts a large-scale cognitive warfare operation aiming at the distortion of the entire online media ecosystem of State A. The content on the websites of all of the most important public broadcasting services and the leading newspaper publishers is subtly, and at first virtually imperceptibly, falsified and manipulated, in line with the official position of State B. At various points, the leading news websites furthermore suffer from seemingly random DDoS attacks that render them inaccessible for considerable amounts of time.8 The military information operations unit even carefully rewrites the main points of already published expert opinions and academic studies dealing with political issues that are points of contention between the two countries. The combined operation leads to a lasting corrosion of the media ecosystem of State A and results in widespread and sustained confusion among the civilian population. As the official language of State A is the lingua franca of globalized markets, science and scholarship, and international diplomacy, the manipulation of the State's news media even has ripple effects across the globe. Although the original content can gradually be restored and it eventually turns out that the video had been fabricated using “deep fake” algorithms, support for the government and the war effort in State A drops significantly. Eventually, the military of State A is forced to retreat. The upheaval in the country proves to be lasting due to the loss of public trust in both the media and political structures, resulting in a sustained period of political instability that is further exploited by State B to achieve its own goals at the expense of State A.

Scenario C: Manipulation of civilian behaviour to gain military advantage

While a severe respiratory disease pandemic is spreading across the globe, State A and State B are engaged in an armed conflict that mainly revolves around disputed territory that is a province of State A but claimed by State B. The information operations unit of the armed forces of State B gains access to private groups on a social media platform that are used and frequented mainly by members of the armed forces of State A. Pretending to be soldiers of State A, the unit disseminates the false information that ingesting methanol helps to prevent contracting the virus. Although the information is only shared within the closed groups, screenshots quickly spread all across the social network, which leads to the death of both members of the armed forces and civilians who drink pure methanol after having been exposed to the false information.

Subsequently, the information operations unit of State B disseminates via various social media platforms the false information that the contested territory has seen several large and severe outbreak clusters of the disease and that, for that reason, the authorities of State A have imposed new health guidelines for the province, including a total lockdown for fourteen days. The information leads to confusion and fear among the resident civilian population. While the government of State A tries to correct the disinformation and re-establish order, the armed forces of State B exploit the confusion and the lockdown to make extensive territorial gains.

Scenario D: Compromising and extorting civilian individuals through information warfare

During an armed conflict between State A and State B, the cyber operations unit of State A hacks into servers that store sensitive personal information about D, who is the chief executive officer (CEO) of a large defence contractor in State B. The unit subsequently starts to disseminate the information via social media platforms and to journalists working at major news outlets in State B; while most of the information is factually correct, the unit also subtly falsifies a number of documents and photographs to further compromise D. Finally, the cyber operations unit conveys the message to D that it will release the most intimate, embarrassing and humiliating information unless D agrees to delay the further development of an advanced fighter jet by his company.

Scenario E: Disinformation as incitement to violence

State A has been ravaged by a protracted civil war that has mostly been fought along ethnic lines. The military, which is primarily composed of members belonging to the majority ethnic group, starts using a social media platform, which serves as the dominant means of communication and information in State A, to disseminate dehumanizing disinformation about one of the minority ethnic groups which the government considers not to be part of the “legitimate people of State A”. At least partly as a result of the sustained disinformation campaign, openly hostile attitudes towards the minority group among the majority population increase considerably. After the military suffers some setbacks in its combat operations against various rebel groups, it begins to spread false rumours about certain members of the minority group having raped a woman belonging to the majority ethnicity. This false information, which spreads quickly and widely via the platform, leads to severe violence against the minority by civilian members of the majority population.

Protecting information spaces under existing legal frameworks

As the brief scenarios show, the manipulation of specific pieces of information and the distortion of the digital information ecosystem in an entire country, a region, or even globally can take a variety of modes and manifestations. All of the above examples are, to a greater or lesser extent, based on real-world cases, although most of them did not occur in the context of an ongoing international or non-international armed conflict. However, it is easy to imagine how such scenarios could play out as part of a military campaign, and it is only a question of time before at least some of them occur during armed conflicts. The subsequent section analyses the legal implications of such operations within the framework of existing IHL. As already mentioned in the introduction, the article thereby applies a broad understanding of the concept of “adversarial information operations” that follows the recently adopted “Oxford Statement on International Law Protections in Cyberspace: The Regulation of Information Operations and Activities”, which defines such conduct as “any coordinated or individual deployment of digital resources for cognitive purposes to change or reinforce attitudes or behaviours of the targeted audience”.9

International humanitarian law

In the following, it will be examined whether and to what extent existing IHL offers protections against adversarial information operations and other forms of cognitive warfare that target the civilian population in situations of armed conflict. For the purpose of legal analysis, a distinction between the specific elements of such operations has been suggested, as different rules and legal consequences might attach. These identifiable elements are, at least: (1) the content of the communicative act; (2) the mode of disseminating the information; (3) the target audience; and (4) the (actual or foreseeable) consequences of the communicative act.10

The correct observation that “[t]he conduct of information operations or activities in armed conflict is subject to applicable rules of [IHL]”11 notwithstanding, the pertinent legal frameworks of the laws of armed conflict address communication and information activities only tenuously and non-systematically. This is primarily a consequence of IHL's traditional focus on the physical effects of armed conflicts.12 Thus, for instance, while Article 79 of Additional Protocol (AP) I clearly states that journalists “shall be considered as civilians” and “be protected as such under the Conventions and this Protocol”, it has been pointed out that the scope of this specific protection only covers the individual journalists as natural persons, but not (at least not directly) “their journalistic activities or products, such as content posted on a website”.13 When it comes to questions regarding the content of information more broadly, the Tallinn Manual submits that the general rule is that “psychological operations such as dropping leaflets or making propaganda broadcasts are not prohibited even if civilians are the intended audience”.14 In line with this, it has been suggested that “through the longstanding, general, and unopposed practice of States, a permissive norm of customary law has emerged, which specifically permits” such operations “as long as [they] do not violate any other applicable rule of IHL”.15 For example, the German law of armed conflict manual states that “[i]t is permissible to exert political and military influence by spreading – even false – information to undermine the adversary's will to resist and to influence their military discipline (e.g. calling on them to defect, to surrender or to mutiny)”.16

At the same time, there are a number of specific rules in existing IHL that impose limits on certain forms of information operations. As will be shown below, principal among these rules are the prohibition of perfidy, the prohibition to terrorize the civilian population, the prohibition to encourage violations of IHL, the obligation to treat civilians and persons hors de combat humanely, as well as the obligation to take constant care of civilians and civilian objects during military operations. What is more, information operations that qualify as military operations, and especially information operations that amount to an attack in the sense of IHL, are subject to additional legal constraints.

The problem in all of this, however, is that many of these rules entail limiting criteria or thresholds that sit oddly with 21st-century digital disinformation campaigns. The relevant rules are anchored, understood and interpreted in light of 20th-century warfare practices. Typically, these rules are linked, in one way or another, to violent activity. Their rationale is to protect the integrity of IHL (perfidy), to limit violence and its most drastic psychological effects (prohibition of encouragement of IHL violations, prohibition of terrorizing civilians), or to protect individuals (human dignity, humane treatment). These protection rationales undoubtedly continue to be relevant and these rules impose important limits for certain types of information campaigns in times of armed conflict. However, they are not aimed at protecting national or even the global “civilian” information space as such. This is particularly relevant when discussing military information operations, the aim of which is not to terrorize, incite violence or expose targeted individuals but to degrade information spaces during armed conflict, systematically undermine public trust in a country's public institutions, media and democratic decision-making processes, and spread large-scale confusion among the civilian population (Scenario B above). Of course, in keeping with IHL's overarching rationale to mitigate the worst – but not all – humanitarian impacts of war, it may well be argued that such effects should remain outside the protective realm of IHL even under the conditions of 21st-century warfare. And clearly, noting that the first victim of war is the truth, overly restrictive limits on information operations during armed conflict would be utterly unrealistic. At the same time, the nature, scope and impact of manipulative information operations occurring in peacetime and their long-lasting divisive and corrosive effects on public trust and societal stability require that more attention be given to these types of operations during armed conflict. Does IHL impose any limits on such adversarial information operations?

Digital perfidy and ruses of war

For one, whereas generally speaking an information operation would be lawful if it were to be qualified as a permissible ruse, it would violate IHL if amounting to a (prohibited) perfidious act. “Perfidy”, in accordance with Article 37(1) of AP I, is an act that invites “the confidence of an adversary to lead him to believe that he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict, with the intent to betray that confidence”. As is quite obvious, the scope of this prohibition – especially when considered against the backdrop of modern disinformation practices as described in the scenarios above – is relatively narrow. It has been emphasized that “the perfidious act must be the proximate cause” of the death, injury or capture of a person belonging to the adversary party.17 This will only ever be relevant in relation to very specific information operations that directly aim at such (physical) consequences with a particular mode of deception. Ruses of war in the sense of Article 37(2) of AP I, on the other hand – understood as “acts intended to mislead the enemy or to induce enemy forces to act recklessly”18 – have a broader scope of application that generally includes psychological warfare activities. This is implied by the provision's phrasing, which explicitly mentions “misinformation” as a type of permissible ruse, loosely understood as the dissemination of any type of information aimed at misleading the enemy.19 Jensen and Crockett present the example of “a deep-faked video including inaccurate intelligence information [which] might significantly impact the conduct of military operations”.20 Such deception of the adversary by way of a communicative act must, however, not be in conflict with any other applicable rule of the laws of armed conflict.

Notably, however, the examples typically provided for permissible ruses of war refer to instances in which new information – in whichever form – is distributed, rather than to the manipulation or falsification of existing and trustworthy sources of information (e.g. a country's online news environment).21 Thus, when talking about a permissive norm of customary law22 it might be necessary to draw further distinctions between different types of information operations. What is more, as in the German law of armed conflict manual cited above, which speaks of “the adversary's will to resist” as well as “military discipline”, there is often a reference to an overarching military purpose of the information operation without it being clear whether such a limitation is considered to be somehow prescribed by IHL or whether it is rather to be seen as simply reflecting the typical context in which such operations are likely to occur. It is telling that the 1987 Commentary on Additional Protocol I defines a ruse of war as consisting “either of inducing an adversary to make a mistake […], or of inducing him to commit an imprudent act” and therefore appears to understand ruses of war as practices that have at least a nexus to concrete military operations against enemy forces.23 The Commentary lists “simulating the noise of an advancing column”, “creation of fictitious positions”, “circulating misleading messages” and “simulated attacks” as examples of ruses of war.24 On the basis of this definition and the examples of ruses provided above, actively corroding a civilian information space with the aim of spreading confusion and uncertainty among the civilian population and without any direct link to combat activity – e.g. by manipulating content in all major online newspapers in a given country – does not qualify as a permissible ruse of war.

Personality rights

The obligation of humane treatment might constitute one of the rules that prohibit certain types of information operations in situations of armed conflict. Pursuant to Article 27 of Geneva Convention (GC) IV, “[p]rotected persons are entitled, in all circumstances, to respect for their persons, their honour, their family rights, their religious convictions and practices, and their manners and customs. They shall at all times be humanely treated, and shall be protected especially against all acts of violence or threats thereof and against insults and public curiosity.” The International Committee of the Red Cross (ICRC) has submitted that such exposure to public curiosity is prohibited even when it “is not accompanied by insulting remarks or actions” as it is “humiliating in itself”.25 Crucially, it has clarified that “[i]n modern conflicts, the prohibition also covers … the disclosure of photographic and video images, recordings of interrogations or private conversations or personal correspondence or any other private data, irrespective of which public communication channel is used, including the internet”.26

The 1958 Commentary to the Fourth Geneva Convention calls the obligation of humane treatment the “leitmotiv” of all four Conventions.27 For this reason, “[t]he word ‘treatment’ must be understood in its most general sense as applying to all aspects of man's life”.28 Rule 87 of the ICRC Customary Law Study stipulates a general obligation to treat civilians and persons hors de combat humanely under customary international law. What is more, in the context of non-international armed conflicts, Article 3 common to the four Geneva Conventions prohibits outrages upon personal dignity, in particular humiliating and degrading treatment. The ICRC's 2016 Commentary lists, inter alia, “forced public nudity” and “enduring the constant fear of being subjected to physical, mental or sexual violence”, as relevant acts violating this prohibition.29 Therefore, one may argue that an adversarial information operation targeting a civilian and amounting to a violation of that person's personal dignity, such as the operation in Scenario D that aims at humiliating the CEO in order to blackmail him, would be in violation of the customary law obligation to treat civilians humanely. However, the pertinent treaty and customary rules relating to the obligation of humane treatment require the affected person to be “in the hands of” (Article 4 of GC IV) or “in the power of” (Article 75 of AP I; Customary Rule 87) the enemy. Prima facie, it is difficult to sustain the contention that this applies to the CEO, given that it is only his personal data that is in the hands of the adversarial party but not he himself. At the same time, considering the object and purpose of the obligation and the fact that the digital transformation has vastly expanded the possibilities to negatively impact a civilian person's dignity, an expansive interpretation that encompasses such conduct might be justifiable in light of the “leitmotiv” function of humane treatment in situations of armed conflict.

Incitement of violence

Pursuant to Article 1 common to the four Geneva Conventions as well as Article 1(1) of AP I, parties to an armed conflict are under an obligation to respect and ensure respect for the rules of IHL “in all circumstances”. While some aspects regarding the interpretation of common Article 1 remain controversial, it is widely accepted that common Article 1 entails a prohibition to encourage violations of IHL.30 According to the ICRC Commentary, the rationale of this negative obligation is that “[i]t would be contradictory if common Article 1 obliged the High Contracting Parties to ‘respect and ensure respect’ by their own armed forces while allowing them to contribute to violations by other Parties to a conflict”.31 This implies that a State would violate this rule in a situation of armed conflict if it disseminated information that induced combatants or civilians to attack and harm other civilians, for instance in inter-ethnic violence in the course of a civil war.32 Despite the fact that some existing law of war manuals of armed forces, for example the German law of armed conflict manual, employ the terminology of “instigating” (“Aufforderung”),33 it can hardly make a difference whether the encouragement to violate IHL is made explicitly or implicitly. Thus, it is argued that the inducement can be carried out by way of disseminating inciting disinformation via social media as described in Scenario E, which is modelled after recent events in Myanmar.34 There are therefore good reasons to conclude that such violence-inciting types of disinformation in armed conflict would amount to a violation of existing IHL.35

Terrorizing

The prohibition against terrorizing civilians might also provide protection against certain adversarial information operations in armed conflict.36 According to Article 51(2) of AP I, “[a]cts or threats of violence the primary purpose of which is to spread terror among the civilian population are prohibited”. This rule is furthermore accepted as part of customary IHL, applying to all kinds of armed conflicts.37 However, two aspects of this rule considerably limit its scope vis-à-vis this type of military conduct. For one, the communicative act in question must amount either to an attack within the meaning of IHL or to a threat thereof.38 Whether an information operation may constitute an attack in and of itself at all will be discussed below; either way, it seems indisputable that typically most such conduct will not reach this threshold. Thus, even if disseminated disinformation spreads fear and terror among targeted civilians, the operation will not automatically come within the protective ambit of Article 51(2) of AP I if it does not, at the same time, constitute or threaten an act of violence. A “threat” is a purposely directed speech act “that suggests to the addressee the future occurrence of a negative treatment or event”.39 The mere exploitation of a state of fear and terror or the spreading of fear for general destabilization as in Scenario C, whether related to the aim of gaining a military advantage or not, will therefore typically not suffice to trigger the prohibition in the absence of an actual or threatened act of violence. Furthermore, it must be the primary purpose of the act or threat of violence to spread terror. This implies that in situations where other motives and objectives take precedence, the prohibition (as it currently stands) is not applicable even if the result of an information operation is extreme fear among the civilian population on the receiving end.40 In light of the far-reaching and terrorizing effects that digital information warfare campaigns can have in the 21st century, it should be reconsidered whether such operations, whenever it is their (primary) purpose to spread terror among the civilian population, should not be explicitly prohibited regardless of whether or not they can be qualified as an act or threat of violence.

“Military information operations”: Constant care to spare the civilian population

Furthermore, adversarial information operations in armed conflict might violate the obligation of constant care as stipulated by Article 57(1) of AP I: “In the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects.”41 The International Law Association's Study Group on the conduct of hostilities agreed that “the obligation to take constant care to spare the civilian population applies to the entire range of military operations and not only to attacks in the sense of Art. 49 AP I”.42 Against this backdrop, at least those communicative acts by armed forces that aim at furthering military goals could be considered “military operations” within the ambit of the provision, in line with the legal position put forward in certain military manuals such as the US Department of Defense law of war manual, which deals with military operations and includes a section on “propaganda”.43 This broader reading of the notion of military operations does not align with traditional interpretations of the term that, in keeping with 20th-century warfare practices, understood it to refer to physical military operations (such as manoeuvres or troop movements). However, the term's natural meaning does not preclude interpreting it so as to include communicative acts such as military information operations affecting the civilian population. In view of the object and purpose of the precautions regime entailed in Article 57 of AP I, namely to mitigate the impact on the civilian population as much as possible, a more expansive reading seems defensible.44

At the same time, even if we accept the applicability of the obligation to take constant care to military information operations in principle, it is questionable how far-reaching this protection really is in view of the possibilities of contemporary digital technologies to deeply affect a target population in a variety of ways. Again, given that IHL is traditionally focused on the violent physical effects of warfare,45 the question is whether the existing rules still suffice. Jensen and Crockett suggest that the use of deep fake video technology to deceive the civilian population ahead of an attack with kinetic force, with the result that the number of incidental civilian casualties rises, would violate the obligation.46 However, in situations that are not followed by such destructive events, as in information operations that target democratic decision-making processes or promote a general sense of uncertainty and a loss of trust in media sources or a national information space as a whole (see Scenario B in particular), the protective reach of the rule is much less obvious. After all, even if it is accepted that the notion of military operations can be interpreted broadly to include certain types of military information operations, the question remains what “sparing the civilian population” means and whether the interpretation can be expanded beyond violent effects in a more traditional sense. While there is no conceptual barrier to such an interpretation, there is hardly any State practice to support it. Opening up the interpretation as to which effects the notion of “sparing the civilian population” might entail beyond violent effects immediately raises difficult line-drawing and definitional questions. After all, an obligation to avoid all detrimental impacts on the civilian population in times of armed conflict, even considering the relative due diligence nature of the constant care obligation, would be unrealistic and would go too far, certainly in the eyes of most States. This is not the place to flesh out these issues in full, especially considering that the obligation to exercise constant care to spare the civilian population has by and large remained somewhat underexplored. For the purposes of the present article, it suffices to conclude that while Article 57(1) of AP I and its customary law counterpart may impose limits also on military information operations, at the present juncture the exact protective reach of these provisions vis-à-vis digital disinformation campaigns is unclear.

Information operations reaching the threshold of an attack

As hinted at above, the last aspect to be considered is the question whether certain information operations may even qualify as “attacks” within the meaning of IHL, making them directly subject to the rules on targeting, such as the principle of distinction, the principle of proportionality, and the principle of precautions in attack. According to Article 49 of AP I, attacks are “acts of violence against the adversary, whether in offence or in defence”. The concept of “violence” in this regard may concern either the conduct or its effects, which implies that even means of warfare that do not by themselves use physical force, such as biological or chemical agents, fall within the scope of “attack” in this sense.47 In this context, it is noteworthy that most recently, in the context of health-related misinformation campaigns in the course of the COVID-19 pandemic, Milanovic and Schmitt argued that “[d]epending on the scale of the sickness or death caused and the directness of the causal connection, a cyber misinformation operation even could rise to the level of a use of force”.48 Whereas this contention concerns the jus ad bellum rather than the jus in bello under scrutiny here, the argument's rationale might be applied to the question at hand. As described previously, the Tallinn Manual 2.0 defines a “cyber attack” as “a cyber operation, whether offensive or defensive, that is reasonably expected to cause injury or death to persons or damage or destruction to objects”.49 While it has been argued above that information operations are per se analytically distinct from cyber operations, even if conducted by digital means, the same consideration should apply to this type of conduct.

Therefore, if the causal nexus between an instance of disinformation and physical harm is sufficiently strong so as to render such an operation an attack, it – just like other types of military violence – “must respect the distinction, precaution, and proportionality triad”.50 If this contention is accepted in principle, one might be inclined to make the argument that Scenario C involves an “attack” by means of an information operation as the false information led members of the armed forces and civilians to ingest harmful methanol. Causation is, of course, the decisive issue. Whether or not an “attack” occurred in this scenario hinges on the question of whether the causal relationship between the piece of information and the death of the persons is sufficiently direct for the operation to be considered an “attack”. After all, as opposed to a cyber operation against an information technology system that triggers a physical chain of events that leads to damage, an instance of disinformation requires the targeted audience to act upon the received information and thereby inflict harm on itself. This is in any case an entirely different type of causal connection, and it is not inherently obvious that this type of “attack” was meant to fall within the ambit of existing IHL. In the context of international criminal law in regard to “instigation” as a speech act that mentally induces the target audience to act in a harmful manner – which in this sense is similar to disinformation in its causal mechanics – the International Criminal Tribunal for the Former Yugoslavia (ICTY) and the International Criminal Tribunal for Rwanda (ICTR) have held that while it is “not necessary to demonstrate that the crime would not have occurred without the accused's involvement”,51 the (instigating) speech act needs to have been a “substantially contributing” factor for the crime to occur.52 Analogously, one may perhaps ask whether the piece of disinformation substantially contributed to the harmful event, in this case the ingestion of the methanol. To be sure, this analogy requires that the standard of causality applied by the Tribunals in the context of “instigation” be appropriate for the context under scrutiny, i.e. the necessary causal proximity between the piece of disinformation and the harmful event (ingestion of methanol) for the conduct to qualify as an “attack” within the meaning of IHL. This question does not seem to have been addressed in the literature or in State practice to date, and a different standard might be considered more suitable. At any rate, the absence of any engagement with the particularities of causation again shows that the modes of military conduct analysed in this article fall outside the ambit of what traditionally has been considered to be subject to the law of armed conflict.

If one supports the conclusion that the dissemination of disinformation might qualify as an “attack”, it must be asked whether the operation was in compliance with the rules pertaining to the conduct of hostilities. Given that the disinformation was targeted at members of the adversarial armed forces, the principle of distinction was arguably observed. At the same time, it is questionable whether the same holds true as regards proportionality and precautions in attack in view of the fact that it was probably reasonably foreseeable that the harmful disinformation would not stay confined to the soldiers’ closed groups on social media but instead further spread to civilian audiences as well. Information is by definition difficult to contain once it has been published.53

With reference to the Tallinn Manual 2.0, it has furthermore been suggested that an information operation might also amount to an “attack” within the meaning of IHL if it merely causes the psychological condition of “severe mental suffering”, which supposedly follows from the phrasing of the already mentioned Article 51(2) of AP I, prohibiting acts or threats of violence, the primary purpose of which is to spread terror among the civilian population.54 Certainly, there is no reason to exclude mental injury from the protective ambit of IHL as a matter of principle. The problem, however, is that the degree of mental suffering is difficult to establish, given that, for instance, “[i]nconvenience, irritation, stress, [and] fear are outside of the scope” of the proportionality principle.55 It should follow that at least not every psychological reaction to an information operation can be sufficient to render the conduct an attack. In order to make such an expansive interpretation of the protective rules of IHL workable, one would have to find clear, reliable and detectable criteria to enable the assessment of mental injury caused by an adversarial information operation. Either way, it is argued that many conceivable operations, as demonstrated by the above scenarios, will not lead to a sufficient degree of distress. In this context, it may be suggested that this assessment shifts towards the assumption of “severe mental suffering” if a large-scale information operation – as for example in Scenario B above – leads to widespread confusion and sustained insecurity among the civilian population of the target State. However, even in such a scenario, the rule's ambit would still be concerned with the mental well-being of (a number of) individual civilians, but not with the integrity of the targeted information space as such. Again, it may thus be asked whether existing IHL remains sufficient to adequately protect civilian societies and their digital information spaces against the perils of novel modalities of modern warfare.

International criminal law

In the context of the use of information operations in situations of armed conflict, it is worth mentioning briefly that some forms of disinformation may not merely constitute breaches of IHL but may also rise to the level of an international crime. For example, disinformation about protected individuals or groups with the aim of instigating members of the armed forces or civilians to attack them can be qualified as inducing a war crime or another crime within the jurisdiction of the International Criminal Court: Article 25(3)(b) of the Rome Statute stipulates that “a person shall be criminally responsible and liable for punishment for a crime within the jurisdiction of the Court if that person … induces the commission of such a crime which in fact occurs or is attempted”.56 Recently, the United Nations (UN) Human Rights Council presented a detailed fact-finding report on the situation of the Rohingya in Myanmar that laid out the ways in which dehumanizing disinformation can be weaponized in situations of inter-ethnic tensions.57

Relatedly, the Rome Statute furthermore provides in Article 25(3)(e) that “a person shall be criminally responsible and liable for punishment for a crime within the jurisdiction of the Court if that person … directly and publicly incites others to commit genocide”. Incitement, too, is a mode of criminality that can – and often will – be committed by way of disseminating hateful disinformation about a targeted group. Note that as opposed to instigating or inducing the commission of a crime, incitement does not require the genocide to actually have occurred; for criminal liability to be established, it is sufficient to show that the inciting speech act created the risk that genocidal acts would be carried out by the recipients.58

International human rights law

Adversarial information operations are obviously also capable of implicating the human rights of targeted civilian populations. A piece of disinformation disseminated by a State via social media that urges people to ingest methanol in order to avoid contracting a deadly virus prima facie violates the right to bodily integrity and the right to life, as guaranteed by virtually all existing human rights treaties such as the International Covenant on Civil and Political Rights (ICCPR) or the European Convention on Human Rights.59 A State-run disinformation campaign that pursues the purpose of interfering in the democratic decision-making process in another State might be considered a violation of the right to vote in elections that guarantee “the free expression of the will of the electors” (Article 25(b) of ICCPR) and of the collective right to self-determination, which is enshrined in Article 1(1) of ICCPR as well as Article 1(1) of the International Covenant on Economic, Social and Cultural Rights.60 More generally, it may even be worth inquiring whether and under which circumstances State-led adversarial disinformation from abroad interferes with a person's right to information pursuant to Article 19(2) of ICCPR.

However, the application of these and other human rights is contingent on two conditions: first, it needs to be established whether and to what extent States’ human rights obligations apply extraterritorially, in view of the fact that the ICCPR, for example, stipulates that “[e]ach State Party to the present Covenant undertakes to respect and to ensure to all individuals within its territory and subject to its jurisdiction the rights recognized in the present Covenant” (Article 2(1) of ICCPR).61 Some authors have recently re-emphasized that there are persuasive reasons to assume that States have an obligation not to infringe upon the rights of individuals located in other States given that the digital transformation as well as recent developments of weapons technologies have vastly increased the possibilities of States to endanger and compromise the enjoyment of human rights of persons abroad who otherwise possess no link to the acting State.62

Second, information operations and other forms of hybrid warfare add renewed urgency to the question of the relationship between the application of the laws of armed conflict (IHL) and international human rights law in situations of armed conflict. If the current state of the debate is that, on a case-by-case basis, the lex specialis principle “determines for each individual situation which rule prevails over another” depending on the degree of detail the rule provides vis-à-vis the situation,63 one may conclude that novel forms of warfare such as the ones presented in this article allow for a broader consideration of the human rights implications of adversarial military operations given that the law of armed conflict, as shown throughout, does not address many of the relevant legal questions that arise in the context of adversarial information activities in armed conflict. After all, at least election interference or the coercion of individual civilians by way of an information operation is not something that the Geneva Conventions or their Additional Protocols envisaged – with the possible, albeit in any case limited, exception of the law of military occupation as laid down in GC IV. Of course, a possible and potentially rather sweeping counterargument against a stronger reliance on human rights protections regarding information operations during armed conflict could be IHL's explicit recognition of permissible ruses of war. If “ruses of war are not prohibited”, as stated by Article 37(2) of AP I, IHL could potentially be invoked as the lex specialis in times of armed conflict whenever an information operation qualifies as a ruse of war. The same provision, however, clarifies that permissible ruses are limited to operations “which infringe no rule of international law applicable in armed conflict”. This forestalls any sweeping invocations of the lex specialis argument and leaves considerable room for human rights law in the assessment of wartime information operations.

Conclusion: The limits of existing law and options for advancing the debate

The protection of civilian populations against the consequences of armed conflict is a central object and purpose of IHL. IHL's anchoring in 20th-century kinetic warfare and its traditional focus on the physical impact of military operations still pervade contemporary understandings and interpretations of the humanitarian legal framework. The extent of this “physical anchoring” marks the linchpin in current debates about accommodating and mitigating the far-reaching intangible harms (potentially) inflicted by 21st-century modes of warfare.

Shifts in the nature of conflict have seen the emergence of new modes of hybrid warfare combining the employment of traditional kinetic force, cyber operations and disinformation campaigns to destabilize or gradually demoralize the adversary. Digital technologies allow for information operations that can deeply affect targeted civilian populations and public structures in ways that were hitherto inconceivable.

On the other hand, it is still very much an open question whether the adverse (intangible) consequences on modern interconnected societies and information spaces are humanitarian concerns in the sense that contemporary IHL should be the legal regime addressing them. Are the potential harms laid out in this article in fact reflective of protective gaps that humanitarian law should fill? If so, should such protection be achieved on the basis of existing rules and via links to traditional forms of violence or physical or mental impacts on individuals? Or are systemic values such as “the integrity of national or global information spaces” or “public trust” increasingly to be seen as 21st-century humanitarian values that IHL should protect as such – at least against the worst types of impact when disinformation campaigns are designed to systematically corrupt and corrode informational spaces nation-wide?

There are essentially two paths available to move forward from here: one is to accept such adverse consequences as in principle within the ambit of the raison d’être of IHL, which would imply the need for a more progressive re-interpretation and (potentially) development of the existing body of the laws of armed conflict.

The other is to consider threats from contemporary information operations as beyond the (deliberately limited) reach of IHL, given that it is the principal task of these rules to provide fundamental protection (rather than full-scale protection) against the worst (and not all) perils of war. In that case, other rules would have to step in lest civil societies be left without clear legal protection against some of the most consequential forms of modern conflict, as exemplified in Scenario B. The long-running but as-yet unsettled questions of the extraterritorial (“virtual”) and substantive reach of international human rights law in situations of armed conflict, however, suggest that States remain reluctant to proceed with the second option.

As far as information operations are concerned, however, most States so far do not seem to be prepared to treat their consequences as humanitarian concerns either.64 In part, this may be due to the difficult line-drawing and definitional questions inherent in any attempt to broaden classic IHL understandings to include intangible impacts – questions that for the time being might be seen to militate against any such ostensibly “radical” extensions. In fact, despite growing engagement within the community of international legal scholars, there is a palpable reluctance to address the issue within the framework of international law at all. While there is an increasing trend among States to publicly position themselves in regard to the application of international law to cyber operations, the same cannot be said about the growing phenomenon of adversarial conduct against a target State's information ecosystem, i.e. operations that are carried out solely on the content layer of network infrastructures without affecting the physical or logical layers. Regarding the legal implications of such operations, States have so far by and large remained silent and abstained from any nuanced categorizations.65 In line with this reluctance to employ the language of international law, States and regional organizations have so far preferred an approach that focuses on monitoring adversarial campaigns by other States and disseminating counter-information to correct distorting or false media narratives.66

The present article has shown that digital communications technologies open up entirely new possibilities to affect the adversary, societies and civilian populations in a given area, State or region, or even globally, in situations of armed conflict. The foregoing analysis of the existing legal frameworks allows for the following conclusions:

  1. Certain kinds of adversarial information operations in a situation of armed conflict and their consequences are covered by existing rules of IHL, in particular in regard to incitement, dehumanization of the adversary and the terrorization of a civilian population.

  2. The legal concepts of “constant care” and “attack” allow, in principle, for an expansive interpretation that encompasses certain modes of information operations and resulting harms, such as the spreading of false health information that prompts the target audience to engage in directly harmful behaviour, as exemplified in Scenario C and further analysed in the sections “‘Military information operations’: Constant care to spare the civilian population” and “Information operations reaching the threshold of an attack”. Such an expansive understanding is contingent on a corresponding consensus among the relevant international actors and should be supported.

  3. Adversarial conduct during armed conflict against the information space of a belligerent party beyond these relatively narrowly circumscribed scenarios finds only scarce (clear) limitations under existing legal frameworks. To a certain extent, this is a reflection of IHL's unusually explicit permissive stance on ruses of war and a widespread sentiment that information operations (against the adversary) must remain legal. At the same time, recent developments suggest a significant shift towards more pervasive, all-encompassing operations that may lead to a large-scale corrosion of public information spaces without discernible military necessity. With the ever-increasing digitalization of societies across the globe, the adverse impact of such conduct might be too sustained and too grave to remain unaddressed by IHL. Given the further observation that in information warfare the lines between times of war and times of peace become increasingly blurred, there even appears to be an emerging need – and room – for a broader rule against systematic and highly corrosive military information operations against civilian information spaces that is not limited to situations of armed conflict but spans the entire spectrum of peace and war.

To be sure, I am under no illusion about the prospects of such a rule materializing any time soon. Instead, all of this first and foremost calls for a policy debate about humanitarian values on the future digital battlefield. If anything, we need to move on from the current widespread instinctive perception that the general rule is one of permissibility of adversarial influence activities during armed conflict, as demonstrated at the outset of the article. In view of the possibilities and adverse impacts of digital information warfare in the 21st century, and for the sake of protecting civilian societies in the digital era, such an attitude can no longer reasonably be upheld. Therefore, avoiding or mitigating the worst and most disruptive impacts that digital information warfare can have on civilian populations and societies should be considered a central humanitarian objective in 21st-century warfare.

 

  • 1See Dapo Akande, Antonio Coco, Talita de Souza Dias et al., “Oxford Statement on International Law Protections in Cyberspace: The Regulation of Information Operations and Activities”, Just Security, 2 June 2021, available at: https://www.justsecurity.org/76742/oxford-statement-on-international-la… (all internet references were accessed in February 2022).
  • 2See Abdi Latif Dahir, “Somalia Severs Diplomatic Ties with Kenya”, The New York Times, 15 December 2020, available at: https://www.nytimes.com/2020/12/15/world/africa/somalia-kenya.html.
  • 3Whereas misinformation signifies information that is factually wrong yet not intentionally so, disinformation is “deliberately false or misleading”; see Caroline Jack, “Lexicon of Lies: Terms for Problematic Information”, Data & Society Research Institute, 2017, available at: https://datasociety.net/wp-content/uploads/2017/08/DataAndSociety_Lexic…, pp. 2–.
  • 4See Marko Milanovic and Michael N. Schmitt, “Cyber Attacks and Cyber (Mis)Information Operations During a Pandemic”, Journal of National Security Law & Policy, Vol. 11, No. 1, 2020.
  • 5The Oxford Statement, above note 1, notes: “The conduct of information operations or activities in armed conflict is subject to the applicable rules of International Humanitarian Law (IHL). These rules include, but are not limited to, the duty to respect and ensure respect for international humanitarian law, which entails a prohibition against encouraging violations of IHL; the duties to respect and to protect specific actors or objects, including medical personnel and facilities and humanitarian personnel and consignments; and other rules on the protection of persons who do not or no longer participate in hostilities, such as civilians and prisoners of war.”
  • 6See Laurent Gisel, Tilman Rodenhäuser and Knut Dörmann, “Twenty Years on: International Humanitarian Law and the Protection of Civilians Against the Effects of Cyber Operations During Armed Conflicts”, International Review of the Red Cross, September 2020, available at: https://international-review.icrc.org/sites/default/files/reviews-pdf/2…, pp. 11–1; Michael N. Schmitt (ed.), Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, Cambridge University Press, Cambridge, 2017, pp. 373 ff.
  • 7See e.g. Marco Sassòli and Yvette Issar, “Challenges to International Humanitarian Law”, in A. von Arnauld, N. Matz-Lück and K. Odendahl (eds), 100 Years of Peace Through Law: Past and Future, Duncker & Humblot, Berlin, 2015, pp. 219–220; M. N. Schmitt, above note 6, Commentary to rule 123, p. 495; Michael Schmitt, “France Speaks out on IHL and Cyber Operations: Part II”, EJIL Talk!, 1 October 2019, available at: https://www.ejiltalk.org/france-speaks-out-on-ihl-and-cyber-operations-…; German Ministry of Defence, Law of Armed Conflict: Manual, Joint Service Regulation (ZDv) 15/2, May 2013, available at: https://www.bmvg.de/resource/blob/93610/ae2428ce99dfa6bbd889c269ed214/b…, para. 48.
  • 8A “distributed denial of service” (DDoS) attack utilizes a number of computers to overwhelm the target systems to render them unavailable for legitimate users.
  • 9Oxford Statement, above note 1.
  • 10See Pontus Winther, International Humanitarian Law and Influence Operations: The Protection of Civilians from Unlawful Communication Influence Activities during Armed Conflict, Acta Universitatis Upsaliensis, Uppsala, 2019, 147 ff.
  • 11Oxford Statement, above note 1.
  • 12See Michael N. Schmitt, “Wired Warfare 3.0: Protecting the Civilian Population During Cyber Operations”, International Review of the Red Cross, Vol. 101, 2019, p. 344: “International humanitarian law was crafted in the context of means and methods of warfare, the effects of which were to damage, destroy, injure or kill. While the civilian population might have suffered as a result of military operations that did not cause these consequences, the threat of harm was overwhelmingly from such effects. Thus, IHL rules are grounded in the need to shield civilians and civilian objects from them, at least to the extent possible without depriving States of their ability to conduct essential military operations.” In this regard, the author speaks of the “cognitive paradigm of physicality”; see ibid., note 69.
  • 13M. N. Schmitt, above note 6, rule 9, para. 3; to clarify, this encompasses the information (as data) itself and not the physical infrastructure necessary to display the information; physically destroying the server that stores the journalistic website content, as a civilian object, would be subject to the principle of distinction just like a newspaper printing house; this is a function of the conceptual distinction between the physical and the non-physical, and IHL's principal focus on the former.
  • 14M. N. Schmitt, above note 6, rule 93, para. 5.
  • 15See International Cyber Law: Interactive Toolkit, “Scenario 12: Cyber Operations against Computer Data”, 22 May 2020, available at: https://cyberlaw.ccdcoe.org/wiki/Scenario_12:_Cyber_operations_against_….
  • 16German Ministry of Defence, above note 7 (emphasis added).
  • 17M. N. Schmitt, above note 6, rule 122, para. 5.
  • 18M. N. Schmitt, above note 6, rule 123, para. 2.
  • 19Yves Sandoz, Christophe Swinarski and Bruno Zimmermann (eds), Commentary to the Additional Protocols, ICRC, Geneva, 1987 (ICRC Commentary on APs), paras 1520–1522.
  • 20Eric Talbot Jensen and Summer Crockett, “‘Deepfakes’ and the Law of Armed Conflict: Are They Legal?”, Articles of War, 19 August , available at: https://lieber.westpoint.edu/deepfakes/.
  • 21ICRC Commentary on APs, above note 19, para. 15.
  • 22M. N. Schmitt, above note 6, rule 139, para. 3.
  • 23ICRC Commentary on APs, above note 19, para. 1515 (emphasis added).
  • 24Ibid., para. 1516.
  • 25ICRC, Convention (III) Relative to the Treatment of Prisoners of War. Geneva, 12 August 1949. Commentary of 2020. Article 13: Humane Treatment of Prisoners, available at: https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Comment.xsp?action=op…, para. 1624.
  • 26Ibid.
  • 27ICRC, Convention (IV) relative to the Protection of Civilian Persons in Time of War. Geneva, 12 August 1949. Commentary of 1958. Article 27: Treatment: General Observations, available at: https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Comment.xsp?action=op…, p. 204.
  • 28Ibid.
  • 29ICRC, Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field. Geneva, 12 August 1949. Commentary of 2016. Article 3: Conflicts not of an International Character, available at: https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Comment.xsp?action=op…, para. 672.
  • 30See Military and Paramilitary Activities in and against Nicaragua (Nicaragua v. United States of America), Merits, Judgment 27 June 1986, ICJ Rep 14, para. 220; Commentary of 2020 to GC III, above note 25, para. 191; Jean-Marie Henckaerts and Louise Doswald-Beck (eds), Customary International Humanitarian Law, Cambridge University Press, Cambridge, 2005, rule 144.
  • 31ICRC, Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field. Geneva, 12 August 1949. Commentary of 2016. Article 1: Respect for the Convention, available at: https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Comment.xsp?action=op…, para. 158.
  • 32See International Cyber Law: Interactive Toolkit, “Scenario 19: Hate Speech”, 1 October 2020, available at: https://cyberlaw.ccdcoe.org/wiki/Scenario_19:_Hate_speech, para. L16.
  • 33German Ministry of Defence, above note 7, para. 487.
  • 34See Paul Mozur, “A Genocide Incited on Facebook, with Posts from Myanmar's Military”, The New York Times, 15 October 2018, available at: https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide….
  • 35See Oxford Statement, above note 1.
  • 36See e.g. Unnati Ghia, “International Humanitarian Law in a Post-Truth World”, Cambridge International Law Journal Online, 17 December 2018, available at: http://cilj.co.uk/2018/12/17/international-humanitarian-law-in-a-post-t…; E. T. Jensen and S. Crockett, above note 20; P. Winther, above note 10, 147 ff; International Cyber Law: Interactive Toolkit, above note 32, para. L15.
  • 37See ICRC, “IHL Database: Customary IHL: Rule 2. Violence Aimed at Spreading Terror among the Civilian Population”, available at: https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule2.
  • 38See M. N. Schmitt, above note 6, rule 98, para. 3.
  • 39P. Winther, above note 10, p. 148.
  • 40Ibid., p. 152; but see ICRC, Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977. Commentary of 1987. Protection of the Civilian Population, available at: https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Comment.xsp?action=op…, para. 19, which leaves open the possibility of a broader interpretation.
  • 41See also Protocol Additional (II) to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of Non-International Armed Conflicts, 1125 UNTS 609, 8 June 1977 (entered into force 7 December 1978), Art. 13(1); J.-M. Henckaerts and L. Doswald-Beck, above note 30, rule 15.
  • 42See International Law Association Study Group on the Conduct of Hostilities in the 21st Century, “The Conduct of Hostilities and International Humanitarian Law: Challenges of 21st Century Warfare”, International Law Studies, Vol. 93, 2017, p. 380.
  • 43See P. Winther, above note 10, p. 131.
  • 44Likewise, M. N. Schmitt, above note 6, rule 92, para. 2.
  • 45See the “International Humanitarian Law” section.
  • 46E. T. Jensen and S. Crockett, above note 20.
  • 47Cordula Droege, “Get Off My Cloud: Cyber Warfare, International Humanitarian Law, and the Protection of Civilians”, International Review of the Red Cross, Vol. 94, No. 886, 2012, p. 557.
  • 48M. Milanovic and M. N. Schmitt, above note 4, p. 269.
  • 49M. N. Schmitt, above note 6, rule 92.
  • 50Vishakha Choudhary, “The Truth under Siege: Does International Humanitarian Law Respond Adequately to Information Warfare?”, GroJIL Blog, 21 March 2019, available at: https://grojil.org/2019/03/21/the-truth-under-siege-does-international-….
  • 51Prosecutor v. Kvočka et al., Judgment, IT-98-30/1-T, 2 November 2001, para. 252.
  • 52Prosecutor v. Ndindabahizi, Judgment, ICTR-2001-71-I, 15 July 2004, para. 463; Prosecutor v. Kordić and Čerkez, Appeals Judgment, IT-95-14/2-A, 17 December 2004, para. 27; Prosecutor v. Orić, Judgment, IT-03-68-T, 30 June 2006, para. 274; Prosecutor v. Nahimana et al., Judgment, ICTR-99-52-A, 28 November 2007, para. 501.
  • 53See V. Choudhary, above note 50.
  • 54See M. N. Schmitt, above note 6, rule 92, para. 8.
  • 55M. N. Schmitt, above note 6, rule 113 para. 5.
  • 56See Antonio Coco, “Instigation”, in Jérôme de Hemptinne, Robert Roth, Elies van Sliedregt, Marjolein Cupido, Manuel J. Ventura and Lachezar Yanev (eds), Modes of Liability in International Criminal Law, Cambridge University Press, Cambridge, 2019.
  • 57UN Human Rights Council, Report on the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar, UN Doc A/HRC/39/CRP.2, 17 September 2018.
  • 58Jens David Ohlin, “Incitement and Conspiracy to Commit Genocide”, in Paola Gaeta (ed.), The UN Genocide Convention: A Commentary, Oxford University Press, Oxford, 2009, p. 212.
  • 59See M. Milanovic and M. N. Schmitt, above note 4, pp. 267–269.
  • 60See on this Jens David Ohlin, Election Interference: International Law and the Future of Democracy, Cambridge University Press, Cambridge, 2020; Nicholas Tsagourias, “Electoral Cyber Interference, Self-Determination and the Principle of Non-Intervention in Cyberspace”, EJIL Talk!, 26 August 2019, available at: https://www.ejiltalk.org/electoral-cyber-interference-self-determinatio….
  • 61Also see Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, as amended), Art. 1.
  • 62In the context of disinformation and cyber operations, see M. Milanovic and M. N. Schmitt, above note 4, pp. 261–266.
  • 63See ICRC, How Does Law Protect in War? – Online Casebook, Chapter IHL and Human Rights, available at: https://casebook.icrc.org/law/ihl-and-human-rights.
  • 64There are a few noteworthy exceptions: see Norwegian Military Manual, p. 200; French Military Manual, p. 68.
  • 65See Henning Lahmann, “Information Operations and the Question of Illegitimate Interference Under International Law”, Israel Law Review, Vol. 53, No. 2, 2020, pp. 209–217.
  • 66See as an example the European External Action Service (EEAS) “EU vs. Disinfo” initiative, available at: https://euvsdisinfo.eu/.
