AI is Everywhere, All at Once: Is This the End of the Nation-State?

Posted by: Ana Catarina De Alencar

Rethinking the Social Contract

The challenge of governing artificial intelligence extends far beyond engineering constraints or conventional legal frameworks. As AI systems grow more autonomous, they pose a profound dilemma for democratic governance. 

In a world increasingly steered by opaque algorithms and infrastructures beyond human oversight, how can we preserve democratic agency and the foundational principle of popular sovereignty?

While the notion of the social contract has played a central role in political philosophy from Hobbes, Locke, and Rousseau to Rawls, it should be understood as a conceptual framework rather than a historical event in which individuals literally consented to governance. Despite the absence of a universally accepted foundation for this pact, modern democratic societies operate on the assumption that a basic understanding exists between the state and its citizens. This presumed agreement underpins the legitimacy of public authority, guides participatory decision-making, and defines the contours of individual rights.

Contemporary society bears almost no resemblance to the conditions envisioned by classical social contract theorists. From an individual standpoint, algorithms today do more than deliver information: they actively influence human emotions, preferences, and behavior. Artificial intelligence systems have gained the capacity to detect and manipulate emotional responses, contributing to a phenomenon described by Han (2014) as affective capture, wherein users increasingly engage with emotionally responsive technologies rather than with other individuals or forums for democratic discourse. 

This emotional mediation, coupled with the hyper-customization of content streams, reinforces cognitive isolation by confining individuals to personalized information environments (Han, 2020), thereby reducing encounters with diverse perspectives and eroding the deliberative foundations of the public sphere.

In addition, the foundational conditions for political engagement have undergone a profound transformation. Today, income, labor, goods, and services are increasingly mediated by digital platforms and dominated by centralized information-based economic systems. We now live under the logic of data capitalism and the attention economy, in which algorithms extract value from user engagement, amplifying inequalities based on who holds access to and control over key technologies.

Control over the flow of information and communication, which is essential to democracy, is now concentrated in the hands of a limited number of tech intermediaries. These entities determine how content is ranked, filtered, and disseminated, driven more by profit motives and algorithmic optimization than by any commitment to pluralism or democratic fairness.

At the societal level, public decision-making is now shaped by algorithmic processes that remain largely inscrutable to both citizens and their elected representatives. These systems introduce a new layer of technocratic authority, effectively displacing democratic sovereignty and centralizing power in the hands of the corporations that govern digital infrastructures. 

Responding to these challenges, authors such as Caron and Gupta (2020) contend that the development of a renewed social contract may be the only way to normatively anchor the integration of AI into essential domains of public and social life.

If humanity fails to construct a renewed social contract suited to the age of artificial intelligence, we may witness the consolidation of techno-authoritarian regimes in which algorithms effectively dictate public agendas, categorize individuals, allocate resources, and regulate access, all without meaningful civic oversight. The deployment of AI for surveillance, behavioral profiling, and suppression of dissent extends authoritarian practices beyond the confines of the state, embedding control mechanisms within the routines of everyday life through digital mediation.

The Race for Algorithmic Dominance

This trajectory is further exacerbated by the ongoing global race for algorithmic dominance. In this competition for informational and cognitive control, both nation-states and private entities seek to secure strategic advantages. As Pasquale (2020) warns, those who achieve primacy in developing general-purpose AI, or in integrating intelligent systems at scale, may shape the new governance model while circumventing traditional democratic constraints and mechanisms of accountability.

The consequence is not merely an intensification of technocratic neoliberalism but the rise of a novel form of infrastructural authoritarianism, in which norms are embedded and enforced by those who control the technology.

Against this backdrop, the call for a new social contract in the post-AI context goes beyond safeguarding individual rights or fine-tuning digital governance mechanisms. It is, more profoundly, a call to preserve the very conditions necessary for political life, understood as a participatory and collective endeavor of shaping a shared world.

As Dardot and Laval (2014) emphasize, “the common” should not be seen as a pre-existing resource but as a political project instituted through collective will and democratic practice. The growing reliance on opaque algorithmic systems (systems that operate autonomously and outside deliberative scrutiny) not only undermines popular sovereignty but also erodes the very infrastructure of shared meaning-making. The threat, then, is not only institutional or procedural; it is ontological and political. Without a common space sustained by collective deliberation, democratic life itself becomes untenable.

Strategies for an Algorithmic Democracy

While there is no universally accepted blueprint for legitimizing a renewed social contract in the age of artificial intelligence, current scholarly discussions and emerging governance initiatives suggest at least three converging strategies: the establishment of international multilateral agreements, the advancement of polycentric and multilayered digital governance models, and the creation of digital infrastructures embedded with constitutional democratic values.

Each of these strategies, however, faces inherent structural barriers. The voluntary nature of international agreements, combined with weak enforcement capabilities and the swift evolution of technology, often means that by the time these frameworks are adopted, they may already be outdated. Additionally, the immense concentration of power within major technology corporations, whose economic and political influence frequently surpasses that of sovereign states, poses significant constraints on the effectiveness and legitimacy of such legal instruments (Caron and Gupta, 2020).

Beyond the legal dimensions, a reimagined model of popular sovereignty must also include technological sovereignty as a fundamental precondition. As Helbing et al. (2019) argue, when control over the core digital infrastructures that mediate communication and access to information resides entirely in private hands, the notion of collective self-determination becomes largely illusory.

To begin rethinking democratic theory in the age of artificial intelligence, the current critical and interdisciplinary literature offers five foundational pillars that address the political, ethical, and subjective dilemmas posed by algorithmic systems:

  1. Algorithmic Deliberation with Public Auditability

Given the growing influence of algorithms in shaping public discourse, there is a pressing need for transparency and mechanisms of public oversight. Bruno Latour (2004) was among the first to stress that technologies function not as neutral tools but as hybrid agents that mediate and influence collective deliberation. Building on this, Helbing et al. (2019) advocate for the implementation of digital platforms that are inherently auditable, framing them as a cornerstone of digital democratic infrastructure. 

  2. Participatory Control over Technological Sovereignty

A functioning democracy cannot thrive without its citizens possessing a say over the technological frameworks that mediate their daily lives. Morozov (2013) cautions that democracy is at risk of erosion due to society’s structural reliance on privately owned digital platforms. Zuboff (2019) underscores this concern by demonstrating how surveillance capitalism extracts subjective data for economic gain. 

  3. Designing for Systemic Reversibility

To avoid irreversible technological dependencies, systems should be designed with embedded reversibility mechanisms. Lanier (2010) has warned of the dangers of infrastructures that lack such flexibility, as they risk locking societies into rigid, opaque normative frameworks. Institutions like the AI Now Institute have also highlighted reversibility as a vital safeguard in maintaining democratic control over technical systems.

  4. Safeguarding Human Subjectivity

The emotional manipulation enabled by affective AI raises serious concerns for individual autonomy. Han (2014) critiques how digital systems turn emotional expression into an exploitable economic asset within the logic of psychopolitics. Elliott (2022) introduces the concept of algorithmic intimacy, noting how AI-mediated emotional connections reshape identity and self-perception. Devillers (2017) contributes by pointing to the dangers of affective nudges, calling for a framework of emotional transparency to uphold the user’s psychological integrity.

  5. Governance that Embraces Cultural and Epistemic Diversity

A truly democratic AI framework must be rooted in the recognition of diverse cultural contexts. Benjamin (2019) reveals how algorithmic systems can perpetuate racial and structural biases when deployed without critical reflection. Kovacs (2017) highlights the imperial tendencies of digital innovation when decoupled from local realities. 

A New Paradigm for the Nation-State

The centralized architecture of artificial intelligence is gradually altering the foundations of political subjectivity and authority. As decision-making processes, flows of information, and systems of knowledge become increasingly governed by opaque technical infrastructures, the locus of sovereignty shifts away from the public sphere and toward systems that escape democratic oversight. In this context, the traditional nation-state loses control over the key infrastructures that once underpinned its legitimacy: data flows, communication networks, and algorithmic decision-making tools.

This does not necessarily imply the formal disappearance of the state, but rather signals a deeper functional erosion of its authority. The state may continue to exist symbolically and bureaucratically, yet it becomes relegated to the role of a secondary administrator, managing policy within a framework that it neither designs nor fully understands. In effect, governance becomes dislocated from the institutions of collective deliberation and absorbed into a multi-layered, transnational ecosystem of digital infrastructure.

Unless a new social contract is envisioned, one that responds to this radical transformation in how power is produced, distributed, and legitimized, the nation-state risks becoming a hollow shell. Its authority may persist in appearance, but it will lack the operational means to intervene meaningfully in a world increasingly orchestrated by technical systems that transcend territorial borders and democratic control. In such a future, the state becomes less a sovereign actor and more a functionary within a computational regime on which it, too, depends in order to keep functioning.