
Symposium | Symposium on the ICC Office of the Prosecutor’s Draft Policy on Cyber-Enabled Crimes


The Best of Intentions: Comment on the ICC’s Draft Policy on Cyber-Enabled Crimes and the Absence of Mens Rea and AI Considerations

by Christine Carpenter
Published on 15 May 2025



The International Criminal Court (ICC)’s recent Draft Policy on Cyber-Enabled Crimes could not be more timely. While, as the Draft Policy acknowledges, ‘[t]o date, the question of cyber-enabled crimes under the Statute…has only arisen at the margins of the Court’s work and has not yet been addressed in any detail,’ that is likely to change soon. In its invasion of Ukraine, Russia has conducted cyberattacks to cut Ukraine’s power and water supplies, disrupt communications to emergency responders, and disable the mobile data services responsible for transmitting air raid warnings—all of which have come under the purview of the ICC’s war crimes investigation. Israel, meanwhile, has leveraged its control of the electricity and internet cables servicing Gaza to impose internet shutdowns and communication blackouts in the territory, which scholars of international humanitarian law (IHL) have argued could itself constitute a war crime. It has likewise relied on artificial intelligence (AI) systems to produce bombing targets and other tactical choices, which experts explain has led to an increase in violence and casualties. It is clear, then, that cyber-enabled crimes are coming to the International Criminal Court, and in this Draft Policy the Court endeavours to be ready for their arrival. This comment addresses a few key considerations in how aptly it does so.

The Draft Policy Provides Clarity on International Criminal Law’s Engagement with Technological Change

As a threshold issue, the Draft Policy establishes that the Rome Statute applies to cyber-enabled conduct that otherwise falls under the ambit of international crimes. Indeed, the Draft Policy states unambiguously that: ‘As a matter of law, genocide, crimes against humanity, war crimes and aggression, as well as offences against the administration of justice, can all be perpetrated or facilitated by cyber means’ (para. 10). This represents a considerable gain in clarity over the ICC’s position to date on whether the Rome Statute encompasses cyber-enabled international crimes. For instance, the Strategic Plans intermittently published by the Office of the Prosecutor (OTP) of the ICC were notably silent on this question. Of the 2016–2018 Plan, Victor Tsilonis observed that ‘focus is placed one-dimensionally on the need to utilize more new digital technologies for “the identification, collection and presentation of evidence through technology,” with only a […] somewhat vague referral to the immediate need of gaining “insight into new possibilities and threats coming from technological evolution.”’ The 2023–2025 Plan, for its part, aims to ‘finalize a comprehensive review and consolidation of its policy framework on gravity/prioritization/completion of investigations.’ However, the 2023–2025 Plan does maintain that ‘[o]ther new policies that will be completed during the period of implementation of this strategic plan will address areas including cybercrime …’. ICC Prosecutor Karim A.A. Khan KC then clarified in January 2024 that cyber-enabled crimes ‘may fall within the ICC’s jurisdiction if the requirements of the Rome Statute are met,’ and that ‘my Office may investigate or prosecute such conduct.’ Accordingly, this Draft Policy represents a fulfilment of the position previously suggested by the OTP, and puts to rest any qualms that the Rome Statute might not apply in international cyber contexts.

While, on the whole, the Draft Policy represents a helpful step in the Court’s preparedness to grapple with the cyber-enabled international crimes that are at its doorstep, in the interest of space, the remainder of this post will call attention to two interrelated aspects of cyber-enabled international crimes that were noticeably absent from the Draft Policy—namely, the challenges posed by AI and the lack of discussion of mens rea—and advocate for their consideration by the Court. This post provides what is hopefully a helpful overview of these issues, but further analysis can be found in my recent article available here.

The Elephant in the Room: Where is Engagement with the Challenges of Artificial Intelligence?

The Draft Policy takes the overarching position, support for which it draws from the two Tallinn Manuals, that ‘while cyber-specific lawmaking can be helpful in certain areas, existing international law is largely adequate in covering cyber operations and other cyber-enabled conduct. International criminal law is no different’ (paras. 9–10, emphasis added). I, along with some others in the literature, identify certain instances where this is not the case—especially where AI is concerned.

Regarding AI, the Draft Policy states: ‘The Office is also keenly aware of rapid developments in the field of artificial intelligence (AI), which may become relevant to issues of individual criminal responsibility under the Statute’ (para. 87). Contemplating what this relevance may ultimately look like, the Draft Policy describes a spectrum of AI’s involvement: at one end, AI is ‘used as a mere tool to commit such crimes;’ at the other end is ‘the potential development of artificial general intelligence (AGI) that can surpass human cognition and achieve a level of sentience, even personhood,’ and ‘[s]omewhere in the middle of that spectrum would be the use of various autonomous AI tools that produce effects that are not intended, or even foreseen, by those who designed or used them’ (para. 87, emphasis added). This would suggest that, in the authors’ own view, AI-enabled conduct has the capacity to elude the basic elements of international criminal responsibility. Yet the Policy concludes only that, ‘While it will likely become necessary to address the question of crimes resulting from the use of such technologies, at the present stage the Office can only note that such cases before the Court would need to be resolved in accordance with the same principles as any other case including the requirement for mens rea’ (para. 30).

Given the rapid expansion in the availability and complexity of AI capabilities over the mere two and a half years since generative AI tools hit the open market, this hesitancy to address AI’s implications for the applicability of international criminal law (ICL) to cyber-enabled international crimes is perplexing. Much of the literature that has contemplated these implications—present company included—has come to the conclusion that the existing elements of international crimes under the Rome Statute may be confounded by AI-enabled conduct (for more discussion, see, e.g., here (regarding AI systems as ‘actors’), here (regarding predictability of outcomes), cf. here).

As one example, take the leadership requirement of the crime of aggression, which cybercrimes already uniquely complicate, as they are commonly undertaken by cybercriminals at the behest of a state but in a manner that is often difficult to trace back to the state’s authority (see here). On this, the Draft Policy says only that ‘[t]he leadership requirement largely does not pose issues that are unique to the cyber context,’ and that where the crime is attributable to a state, that state’s act of aggression ‘can be committed through a proxy non-State actor if a sufficient relationship of control can be shown’ (para. 76). In reaching this conclusion, the Draft Policy declines to grapple with the fact that demonstrating attribution to a state actor is uniquely challenging in the cyber context, because cyber-enabled crimes such as cyberattacks can be difficult to trace back to a state actor (or any actor). Such attribution is only made harder with the use of AI, which, for example, allows malware to evolve in real time to evade detection.

For instance, MIT’s Technology Review reported earlier this month that the commission of cyberattacks by AI agents is not a question of if, but when. Indeed, the piece quotes a cybersecurity expert as saying, ‘I think ultimately we’re going to live in a world where the majority of cyberattacks are carried out by [AI] agents.’ While there is still a leap to be made from the commission of more mundane cyberattacks to the commission of international crimes via cyber-enabled means, it is clear that the future of AI’s role in international crimes will only further confound our current understanding of criminal liability. Thus, the Draft Policy’s limited engagement with these challenges means they will require further examination in short order.

The Draft Policy Declines to Address Technology’s Challenges to Proving ‘the Requisite Mental State’

The second, and interrelated, major area of concern regarding the Draft Policy is its general avoidance of discussing how cyber tools, particularly AI, may confound conventional interpretations of the mens rea element of criminal responsibility. Where the Draft Policy does engage with the definitional challenges posed by cyber-enabled conduct—and still, not AI-enabled conduct specifically—it focuses almost exclusively on challenges to the actus reus requirement. As to mens rea, the Draft Policy makes only passing mention that a given cyber-enabled crime may incur criminal responsibility ‘if the mens rea requirements are met,’ following a detailed discussion of whether the actus reus of the crime in question can be satisfied when committed through cyber means (see, e.g., paras. 46, 50, 66, 85). The result is an incomplete discussion of ICL’s applicability to cyber-enabled crimes.

The literature has considered at length the ways in which advancements in state cyber capabilities, specifically those involving AI, can give rise to a ‘legal vacuum in which the AI can be responsible for enough of an act, or make enough of the decision to act, to render the human decision-maker not liable for the harm caused.’ The result is what the discourse (predominantly in IHL) describes as a ‘responsibility gap’ between the conduct in question and the existing legal framework’s ability to address it. The Draft Policy at present does not suggest how the ICC intends to close that gap.

That said, the literature has made a number of recommendations. Marta Bo, for instance, suggests construing the mens rea requirement for war crimes as dolus eventualis or recklessness, arguing: ‘This interpretation would allow for the ascription of criminal responsibility for indiscriminate attacks where the human operator using or deploying autonomous systems envisages the risk of directing attacks against persons or objects immune from attack and decides nonetheless to proceed with the attack.’ Anna Rosalie Greipl, meanwhile, proposes that the domestic criminal law doctrine of transferred intent could provide a solution to the responsibility gap that results where AI-enabled crimes harm an unintended target. I have recently argued that ICL should contemplate incorporating a strict liability regime where AI-enabled crimes of aggression are concerned, rather than allowing wrongful actors to evade liability due to the black-box nature of AI decision-making.

It would be beneficial for the current Draft Policy to provide further guidance on how the Court will engage with the increasing sophistication of cyber-enabled crimes, and how it will continue to pursue aims of justice and accountability for state misconduct that may, under current circumstances, be able to evade responsibility on a technicality.