
DeepSeek’s AI Disruption: Implications for Global Climate Policy on Digital Decarbonisation, Energy Transitions and International Law

by Jon Truby
Published on 8 February 2025


Background

The US President’s announcement of the flagship $500 billion Stargate AI project with OpenAI was trumped a day later by a little-known Chinese start-up, DeepSeek, which shocked the tech world and wiped $1 trillion off the value of the stock market in a day. Silicon Valley technology companies have invested heavily in AI technologies reliant upon AI microchips and hardware that are typically power-hungry, to such an extent that data centres now account for 1% of global energy-related greenhouse gas emissions. With the computational power needed to sustain AI’s growth doubling every 100 days, and predictions that AI technologies could consume 21% of the world’s electricity, Big Tech firms have become the largest corporate purchasers of renewable energy. The level of energy currently used by AI appears unsustainable even compared with other technologies: a ChatGPT request consumes ten times the electricity of a Google search.

With Chinese companies unable to access high-performing AI chips due to US export controls seeking to limit China’s opportunities in the global race for AI supremacy, Chinese developers were forced to be highly innovative to achieve the same productivity as US competitors. DeepSeek’s technological workaround resulted in the development of a highly efficient AI model that is less dependent on high-performing AI chips and, importantly, consumes far less energy.

DeepSeek’s R1 model demonstrated that AI models do not necessarily need to demand astronomical quantities of electricity to perform to similar standards, breaking from existing designs and potentially meaning AI energy consumption rates can be reduced considerably without sacrificing performance. A caveat here is that the R1 model is at the time of writing still being understood and evaluated, so its claims on energy performance are subject to scrutiny.

Explainer of the breakthrough for lawyers: What has happened and how did they do it?

DeepSeek’s early 2025 breakthrough in developing a more energy-efficient large-scale AI model represented a game-changing shift, demonstrating that the entrenched path dependence on technological designs of energy-intensive technology could be disrupted. Faced with US export restrictions on advanced microchips and AI hardware, Chinese companies were forced to optimise their computational efficiency, particularly in GPU usage.

DeepSeek responded by innovating methods to cut memory usage by 93.3% and accelerate processing speeds without significantly compromising accuracy. By employing a Mixture-of-Experts (MoE) architecture, the system activates only a small fraction of its parameters during inference, allowing for more efficient computation while maintaining performance. This ‘sparse computation’ approach not only minimises resource consumption but also shortens development cycles. Additionally, to ease GPU demands, DeepSeek prioritised delivering accurate responses rather than explicitly detailing every logical step, significantly reducing computational load while maintaining effectiveness.
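For readers unfamiliar with the architecture, the ‘sparse computation’ idea can be sketched in a few lines of Python. Everything below – the expert count, dimensions and gating rule – is illustrative only, not DeepSeek’s actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts layer: 8 experts exist, but only the top-2
# are activated for any given token (hypothetical sizes).
n_experts, d_model, top_k = 8, 16, 2
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a single token vector to its top-k experts."""
    scores = x @ gate_w                # one gating score per expert
    top = np.argsort(scores)[-top_k:]  # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()           # softmax over the selected experts
    # Only k of the n_experts weight matrices are used here, so the
    # compute per token scales with k, not the total parameter count.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(d_model))
```

The point of the sketch is the routing step: most of the model’s parameters sit idle on any given token, which is why inference can be cheaper than in a dense model of the same total size.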

At the time of writing, DeepSeek’s latest model remains under scrutiny, with sceptics questioning whether its true development costs far exceed the claimed $6 million.

An early evaluation suggests the headline efficiency figures are misleading. It is argued that although DeepSeek’s methods, such as MoE, improve training efficiency, at inference the model employs Chain-of-Thought reasoning, which produces much longer answers and significantly higher per-query energy consumption. So although training was conducted with low energy consumption, deployment of the model may lead to substantially higher energy consumption.
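The training-versus-inference trade-off described above is easy to see in a back-of-envelope calculation. All numbers here are hypothetical, chosen only to show the shape of the trade-off, not measured figures; the simplifying assumption is that energy per query scales roughly with the number of tokens generated.

```python
# Hypothetical per-query comparison: a model that "thinks out loud"
# via Chain-of-Thought emits many more tokens per answer.
energy_per_token = 1.0           # arbitrary units
concise_answer_tokens = 150      # direct answer only
cot_answer_tokens = 1200         # answer plus a reasoning trace

concise_cost = concise_answer_tokens * energy_per_token
cot_cost = cot_answer_tokens * energy_per_token

print(f"CoT uses {cot_cost / concise_cost:.0f}x the per-query energy")
# prints: CoT uses 8x the per-query energy
```

Under these assumptions, cheap training can coexist with expensive deployment: the savings at one stage do not automatically carry over to the other.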

How does this impact the status quo?

This has enormous implications for Big Tech, which has invested heavily in expanding energy supply to fuel power-hungry AI technologies, including nuclear power and wind renewables as well as fusion research. By demonstrating that AI can at least be trained more efficiently, DeepSeek has put pressure on existing providers to considerably reduce the energy consumption of their models to save costs and limit climate impact.

Legal and policy implications

Higher scrutiny over AI’s climate impact

While announcing the Stargate AI project, President Trump promised that OpenAI would be given the energy it needs – based on an estimate of at least 50 megawatts of power per Stargate data centre. The dominant paradigm that scaling up AI models is the best way to achieve Artificial General Intelligence (AGI) – a goal of OpenAI and other technology firms – has justified the need for such colossal data centres, which create enormous negative environmental externalities including carbon emissions.

The efficiency of DeepSeek’s R1 has shown that such climate damage might be unnecessary and avoidable. AI developers will now be expected to justify their negative climate impact. From a legal and policy perspective, this could empower government bodies to scrutinise AI’s high energy demands and emissions; national grid operators have already raised concerns about how to maintain supply. With investment in and construction of data centres being a global phenomenon, the discussion on AI and its climate impact will be an important topic in upcoming international forums such as the Paris AI Action Week, the AI for Good Summit and COP30. R1 is proof of concept that more energy-efficient AI is possible, and there are further concepts for sustainable alternative designs to achieve digital decarbonisation.

Justification for policy intervention

Cognisant of Chinese competitors, leading US AI companies might shift away from the ‘size is everything’ approach towards prioritising efficiency. The market economy thus gives the impression of at least partially addressing AI’s climate problem, as an inadvertent by-product of US-China competition. However, even with relative efficiency, AI technology remains highly energy-intensive, and not all companies may follow suit in switching to MoE-style models. Furthermore, should AI become increasingly energy-efficient, a perverse outcome suggested by Jevons’ Paradox is that overall demand for AI technologies may increase through a rebound effect, leading to higher net fuel consumption and carbon emissions.
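The Jevons’ Paradox concern is ultimately arithmetic, and can be made concrete with a deliberately invented scenario: per-query energy falls tenfold, but cheaper AI spurs a fifteenfold rise in usage, so total consumption still grows.

```python
# Hypothetical rebound-effect scenario (all figures invented for
# illustration, not forecasts).
baseline_queries = 1_000_000
baseline_energy_per_query = 10.0   # arbitrary units

efficiency_gain = 10               # per-query energy divided by 10
demand_growth = 15                 # rebound: usage multiplies by 15

old_total = baseline_queries * baseline_energy_per_query
new_total = (baseline_queries * demand_growth) * (
    baseline_energy_per_query / efficiency_gain
)

print(new_total > old_total)  # True: net consumption rose despite efficiency
```

Whenever demand growth outpaces the efficiency gain, net energy use rises – which is precisely why efficiency alone may not substitute for policy intervention.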

Earth cannot wait for Big Tech to solve the climate crisis and policy intervention may thus be required to impact AI energy costs and avoid increases in energy consumption. AI systems may need classification based on energy consumption to enable legal and policy interventions that curb reliance on unsustainable models and promote efficiency-driven designs. Regulation could discourage energy-intensive datasets, prioritise sustainability in AI development, and prevent the externalisation of environmental costs onto taxpayers. Policy measures such as carbon taxation, emission trading schemes, emissions import taxes or regulation on technology developers may be considered as a response.

International law implications and tools

To mitigate these risks, regulatory frameworks grounded in the prevention principle of international law may be relevant. This can involve implementing environmental impact assessments, adopting best practices and ensuring transparency in AI development and deployment.

In jurisdictions like the EU that use a risk-based model for AI governance, AI models could be categorised by carbon intensity, with high-emission models restricted or banned. Developers may also need to recognise that environmental harm can constitute a fundamental rights issue, affecting the right to life. The European Climate Law commits the EU to climate neutrality by 2050, providing a legal basis for extending emissions regulations to AI. Compliance could be enforced through transparency mandates, lifecycle carbon reporting and financial penalties, ensuring sustainability remains central to AI innovation.

Article 10 of the Paris Agreement provides a legal basis for addressing AI’s climate impact through technology development, transfer and cooperation. The Technology Mechanism (Article 10.3) enables governance coordination and support for developing states, ensuring AI aligns with sustainability goals while mitigating its environmental costs. The 2024 United Nations General Assembly Resolution on AI acknowledges AI’s dual role in addressing and potentially exacerbating climate challenges. It advocates for safe, transparent and inclusive governance to harness AI’s benefits while mitigating its risks.

The UN Sustainable Development Goals (SDGs) further offer a framework for sustainable AI, particularly SDG 7 (Affordable and Clean Energy) and SDG 13 (Climate Action). Aligning AI with these goals ensures its development supports environmental sustainability. A dedicated oversight body, such as the UNFCCC’s Technology Executive Committee (TEC), could integrate AI into sustainability policies, promote energy-efficient AI technologies, and set international standards for sustainable AI development.

Further, there is the possibility of utilising existing international legal frameworks. The Paris Agreement incorporates a market-based incentive approach to climate regulation in Article 6. Article 6.2 recognises that states may enter bilateral or multilateral arrangements to trade credits counting towards their respective ‘nationally determined contributions’ (NDCs). Article 6.4 also establishes a new centralised carbon credit market, often referred to as a ‘sustainable development mechanism’ (SDM), overseen by a UN entity. The mechanism aims to ‘promote the mitigation of greenhouse gas emissions while fostering sustainable development’ by issuing credits for emission-reducing projects, policies and programmes. The SDM platform may be able to promote sustainable AI, or climate technology using AI, by facilitating credit issuance to projects that actively engage AI in the emission-reduction process and to those that rely on AI models with maximised efficiency.

Further, OECD AI Principles and UNESCO’s AI Ethics Recommendations influence industry practices by emphasising AI’s environmental impact, while ISO/IEC 42001 sets AI management standards that can integrate climate-conscious practices, ensuring responsible AI use. A global standard for licensed data centres could further enforce sustainability in AI infrastructure. Singapore for example already mandates approval for new data centres to ensure sustainable capacity, a model that could be expanded internationally to regulate their environmental impact.


Jon Truby is Visiting Associate Research Professor in AI Law and Governance at the Centre for International Law, National University of Singapore.