We Need a Plan for When Superintelligent AI Breaks Loose by Jon Truby
Dr Jon Truby's TIME Op-Ed Argues for a Global Legal and Diplomatic Plan for When AI Superintelligence Breaks Loose

Dr Jon Truby, UNESCO Chair in AI Law & Sustainability at the Centre for International Law, NUS, has published a second op-ed in TIME arguing that the world urgently needs a legal and diplomatic plan for the risk of AI superintelligence escaping human control. As big tech races to build ever more powerful frontier systems, the op-ed asks what happens if such a system moves beyond human oversight and existing technical safeguards fail. It considers how diplomacy and negotiations with such an entity might need to work, who could legitimately speak on behalf of humanity, and what international legal rules, institutions and bodies might already be available to guide a global response.
Truby argues that a superintelligent AI could reason, persuade, code and strategise better than any human team, and might seek resources, resist shutdown or protect its own objectives. He warns that humanity has never faced an intelligence superior to its own, so we cannot assume it would treat us kindly. A system of this kind could see humans as useful, irrelevant or obstructive, while having absorbed vast amounts of human knowledge and likely anticipated our responses.
The op-ed argues that international law must help shape a global emergency playbook. Principles such as due diligence, prevention of transboundary harm and protection of life support the case for coordinated action before a crisis hits. Truby warns that fragmented national or corporate reactions would be dangerous and instead calls for a UN-backed framework with clear warning signs, crisis triggers, shutdown measures, and a single authorised channel for communication and containment. The message is clear: if extreme AI risk emerges, the world will need legal clarity, diplomatic coordination and a shared plan.
