AI Governance and Global Power: The New Digital Cold War

Tara Gunn

In the race to harness the capabilities of artificial intelligence (AI), the question isn’t simply what the technology can do but who controls it. From multinational tech firms to national governments to supranational bodies, governance of AI is rapidly becoming a central axis of global power. The stakes are high: AI systems influence labour markets, geopolitics, surveillance, democracy and economic growth. In this article, we examine the emerging players, the rules of the game, and the implications for who will shape the future. We’ll explore how governance frameworks are forming, how power is concentrated, and what that means for business leaders, policymakers and society at large.

Who Sets the Rules? The Architecture of AI Governance

The meaning of AI governance

The term AI governance refers to the processes, rules, standards and institutions that aim to ensure AI is developed and used safely, ethically and in alignment with human rights. Major players emphasise its role in mitigating risk while enabling innovation.

According to IBM, AI governance “includes oversight mechanisms that address risks such as bias, privacy infringement and misuse while fostering innovation and building trust.”

Global governance is fragmented

Despite the global nature of AI, governance is far from unified. A 2024–2025 study finds that multiple jurisdictions are designing disparate policies rather than converging on a single global standard.

A commentary by the Carnegie Endowment for International Peace states:

“In the absence of a binding international treaty, the global governance of AI has become fragmented, with different regions, organizations, tech giants and great powers… all developing their own approaches.”

Who are the key actors?

Governance of AI is shaped by several overlapping categories of actors:

  • Governments / states: They legislate, regulate, invest in infrastructure and set national strategies.
  • Industry / Big Tech firms: They control much of the compute, data, research talent and deployment of large-scale AI systems.
  • Supranational organisations & standards bodies: Bodies like Organisation for Economic Co‑operation and Development (OECD), United Nations Educational, Scientific and Cultural Organization (UNESCO) and others attempt to convene global norms and frameworks.
  • Civil society, academia and public interest groups: They push for transparency, rights-based governance, and scrutiny of power concentration.

Key frameworks & initiatives

Here are some of the major governance initiatives:

  • The OECD’s AI Principles and related policy frameworks.
  • The UN system’s White Paper on AI Governance, under UNESCO/ITU.
  • The concept of “compute governance”, targeting the infrastructure (chips, cloud and data centres) on which AI depends.
  • Efforts by the AI Now Institute to highlight how a small number of firms dominate the ecosystem.

Thus, while no single actor “controls” AI governance globally, its architecture is emerging across a plurality of markets, jurisdictions and institutional arenas.

Where the Power Lies: Data, Compute, Regulation and Influence

Concentration of industry power

According to the AI Now Institute, a “small handful of firms not only control the resources needed to build AI systems – the cloud infrastructures, data, and labour – they’ve also set the trajectory for AI development.”

Put simply: whoever controls the data, the compute infrastructure, the research talent and the deployment in the real world holds enormous power.

Sovereign control and digital sovereignty

States are increasingly aware of AI as a strategic asset tied to digital sovereignty. A recent academic essay posits:

“AI systems are becoming … pivotal geopolitical instruments. We examine how these technologies have become national assets central to sovereignty, access, and global influence.”

In practice, this means countries want to avoid falling behind technologically, to retain the ability to govern the AI systems deployed within their borders (whether home-grown or foreign), and to exert influence over supply chains (e.g., chips, data, cloud).

Where regulation intersects with power

Governments and regulatory regimes matter because they shape who can deploy AI, under what conditions, and with what accountability. One analysis warns:

“…influential actors (governments or corporations) shape AI policy in ways that are likely to sideline concern for human rights.”

So power isn’t just about technology; it’s also about who writes the rules and who enforces them.

Geopolitics of AI: The Big Players and Their Strategies

United States

The U.S. remains home to many of the world’s leading AI firms and research institutions. It has so far preferred a lighter regulatory hand compared to some regions, emphasising innovation and investment over strict controls.

China

China has taken a state-led, wide-scale approach, integrating AI into public administration, surveillance, industry and exports, and emphasising the application of AI across society. In doing so, it signals a governance model in which the state plays a central role.

European Union

The European Union sees AI as not just a technological challenge but a governance one: pushing ethical AI, digital sovereignty and regulation (for example, the AI Act). Scholars describe the EU as offering a “third way” in AI governance between the U.S. innovation-led and Chinese state-led models.

Middle East and Global South

Emerging regions are also seeking a role. For example, the United Arab Emirates has created a dedicated ministry for AI and aims to position itself as a global regulatory sandbox.

This spreads the power map beyond the usual West/East axis.

Implications for Business, Society and Global Development

For businesses

  • Firms must navigate differing regulatory regimes (U.S., EU, China) and anticipate governance risks.
  • Access to compute, data and talent remains a competitive lever; those locked out may be disadvantaged.
  • Governance frameworks will increasingly affect funding, deployment and liability (especially in regulated sectors).

For society and rights

  • Mitigating bias, privacy violations and surveillance is critical. AI governance isn’t just a technical issue; it’s a human-rights issue.
  • Concentration of AI power may lead to fewer choices for consumers and more opaque decision-making.
  • Digital divides risk being exacerbated: countries with less capability may become dependent rather than sovereign.

For global development

  • Regions in the Global South face governance capacity gaps (funds, talent, regulation). The architecture of global AI governance will affect whether they build domestic AI or remain consumers of foreign-built systems.
  • Data flows, standards and governance frameworks will shape trade and influence; in other words, they will determine who gets to decide what AI means and does.

What Holds Back a Unified Global Governance Regime?

  • Geopolitical competition: States view AI as strategic; they hesitate to cede authority to supranational governance.
  • Institutional fragmentation: Many initiatives, multiple standards, no binding treaty (yet).
  • Business incentives: Leading AI firms hold significant power and may resist governance that constrains their freedom to operate or their business models.
  • Technical complexity: AI governance isn’t just about rules; it involves compute infrastructure, data flows, cloud services and emergent capabilities such as agentic AI. Many regulatory regimes are still catching up.

Real-World Example: Governance and Power in Action

Consider how region A champions high-standard, rights-based regulation, region B pushes for state-controlled deployment, and region C focuses on rapid commercialisation. These diverging paths illustrate how governance connects to power: region A seeks to set the global “gold standard”, region B seeks domestic control and global leadership, and region C may seize a first-mover economic advantage.

A concrete case: The UN’s 39-member advisory panel on AI reflects this mix of stakeholders and the emerging attempt to coordinate global policy.

Another example: The UAE’s ambition to become a regulatory sandbox shows how smaller states may leverage governance strategy as a form of power and positioning.

Conclusion & Actionable Takeaways

Key takeaways

  • Governance of AI is not controlled by any single actor; it is shaped by a mesh of states, firms and institutions.
  • Power in the AI era resides in compute + data + talent + governance. Control of any one of these elements offers strategic advantage.
  • Divergent governance models (U.S., China, EU, Global South) mean that global rules are still taking shape; there is no unified “world regulator” yet.
  • For businesses and societies, the governance landscape is as important as the technology itself: how AI is governed will determine who benefits and who is excluded.

Forward-looking recommendations

  • For executives: Map your AI strategy to the governance terrain; anticipate regulatory shifts, invest in compliance and consider data and control dependencies.
  • For policymakers: Coordinate internationally while investing domestically in compute/data/talent to avoid being peripheral in the AI power game.
  • For global development actors: Build capacity for governance, ensure inclusive participation of the Global South, and avoid being locked into dependent roles.

In short, the future of AI isn’t just about algorithms; it’s about architecture: who builds it, who rules it and who benefits from it. The power centre of AI governance is still forming, but it will shape geopolitics, economics and society for decades to come.
