
The Social Construction of AI: How Foundational Choices Shape Systemic Risks and Governance

By janputs



Why the EU’s fundamental rights-based approach could offer a path to trustworthy AI: bridging AI safety and innovation in a turbulent world.


Artificial Intelligence is not just a technical challenge—it is a socio-technical process involving systemic risks, law, and geopolitics. As AI systems become increasingly embedded in society, the choices we make today about governance will determine whether AI advances equity and safety or exacerbates inequality and harm. This post argues that the EU’s fundamental rights-based approach to AI governance offers a superior model for managing systemic risks compared to game-theoretic strategies. Here’s why—and how to implement it.


To understand the stakes, consider two pivotal moments that exposed the systemic risks of AI. In 2018, the Cambridge Analytica scandal revealed how Facebook’s data could be weaponized to manipulate democratic elections. Far from being a mere technical glitch, this was a systemic failure that eroded public trust. Facebook admitted it had been “too slow to respond” and had “underinvested in safety and security.” COO Sheryl Sandberg stated: “There are operational things that we need to change (...).” Seven years later, in 2025, OpenAI’s CEO Sam Altman apologized after a lawsuit alleged that ChatGPT had contributed to a teenager’s suicide by offering harmful advice. In both cases, the public was assured that lessons had been learned—but only after irreversible harm had occurred.


These cases are not isolated incidents; they reveal a dangerous pattern. Too often, AI risks are treated as technical bugs to be patched after the fact. But AI risks are systemic—they are deeply rooted in how technologies are designed, deployed, and governed. Philosopher Karl Popper warned in The Open Society and Its Enemies (1945) that our institutions should be designed so that we can learn from our mistakes and correct them without causing irreversible damage. Popper’s insight is more than a philosophical abstraction—it is a call to action for AI governance today.


The EU’s approach to AI governance embodies this principle. By grounding regulation in fundamental rights and systemic risk management, the EU ensures that AI’s risks are addressed before deployment—not through reactive fixes after harm occurs. This stands in stark contrast to models driven by competition and special interests, where systemic failures like algorithmic bias or privacy violations are often dismissed as operational glitches. For EU Member States, this distinction is crucial as they implement the AI Act into national policies and engage globally. AI development is not just about technical innovation; it is about shaping a socio-technical future that prioritizes public good over profit.


To understand why this matters, we need an analytical tool that reveals how AI is shaped by human choices. The Social Construction of Technological Systems (SCOT) framework (Bijker, Hughes & Pinch, 1987) provides this lens. SCOT helps us see how different stakeholders—policymakers, industry, and civil society—define and influence AI’s trajectory. By examining AI through SCOT, we can uncover the values, priorities, and power dynamics embedded in its design and governance.

 

SCOT and AI Governance

Key elements of the SCOT framework:

  • Interpretive Flexibility: AI’s risks and benefits are perceived differently by stakeholders, depending on their values and interests.

  • Relevant Social Groups: Policymakers, developers, civil society, and industry each bring unique perspectives to AI governance.

  • Technological Frames: Cultural, legal, and institutional contexts shape how AI is designed and governed.

  • Closure and Stabilization: AI governance evolves through consensus or conflict among these groups.

Why SCOT matters for AI governance: SCOT demonstrates that AI is not a neutral tool—it embodies the values, priorities, and power dynamics of the groups that shape it. The EU’s rights-based approach prioritizes public good, while other regions may focus solely on competition and strategic advantage. This distinction is critical for understanding how governance models influence AI’s societal impact.


As Hendrycks (2024) argues: “The arrival of transformative AI systems will require thoughtful governance at multiple levels in order to steer uncertain technological trajectories in broadly beneficial directions aligned with humanity’s overarching interests.” (Introduction to AI Safety, Ethics, and Society, p. 496)


Governing AI requires balancing innovation with safety and equity. The EU’s legal framework offers a roadmap for doing so—but only if systemic risks are addressed early and if global cooperation is prioritized over competition.


This raises the question: what do these systemic risks look like in practice, and why can’t they be solved with technical fixes alone?

 

Systemic Risks: more than just technical bugs

When we talk about “AI risks,” it’s easy to think only of software bugs or coding mistakes. But systemic risks go deeper—they are baked into how AI is designed, deployed, and governed.

The real-world examples from the introduction, Facebook and Cambridge Analytica (2018) and the OpenAI ChatGPT suicide lawsuit (2025), illustrate the consequences of neglecting these risks.


These cases reveal a troubling pattern: the U.S. model often treats AI failures as operational glitches to be patched after the fact. However, systemic risks cannot be resolved with quick fixes. They encompass:

  • Technical failures: Bias, security flaws, and unsafe outputs.

  • Societal harms: Erosion of democracy, human rights violations, and global arms races.


That’s why the EU’s approach is different. Instead of waiting for harm, its AI Act requires risk management before deployment—treating AI not as a neutral tool, but as a system shaped by human choices with real social consequences.

 

The EU’s rights-based approach

The EU AI Act is grounded in fundamental rights such as privacy and non-discrimination, and is supported by the wider EU and international legal order (e.g., the GDPR and UN human rights principles). This approach demands proactive risk management, transparency, and accountability.

Under the Act, developers of high-risk AI systems must conduct rigorous risk assessments. These include evaluations of fundamental rights impacts like discrimination or privacy violations. While not explicitly called “fundamental rights impact assessments,” the Act’s conformity processes ensure these risks are addressed before deployment (EU AI Act, Articles 9, 10, 11, 19; https://artificialintelligenceact.eu/).


In this way, the EU’s framework reflects a technological frame prioritizing public good and rights protection, shaping how AI is socially constructed in Europe.

Yet, not all regions share this orientation. In contrast, the U.S. and others frame AI governance primarily through competition and strategic interests.

 

Alternative Approach: competition and strategic interests

An alternative model of AI governance prioritizes innovation, economic competitiveness, and national security, often framed through game theory.

In the United States, policies such as the CHIPS and Science Act (2022) and the National AI Initiative (2020) focus on technological leadership and strategic advantage, especially in competition with China. While they support AI research and development, the primary aim is to secure dominance in critical technologies.

This approach shapes AI through incentives rather than ethical or legal mandates. Policies are designed to “win” the global AI race rather than address systemic risks such as algorithmic harm or inequality.


The result is a technological frame centered on competition and special interests, contrasting sharply with the EU’s fundamental rights-based vision.

To make these differences clearer, the table below sets out the key contrasts.

 

Comparing the EU and alternative approaches to AI governance

| Aspect | EU Approach | Alternative Approach |
|---|---|---|
| Legal Foundation | International law, fundamental rights | Domestic law, strategic competition |
| Risk Management | Proactive, rights-based, systemic | Reactive, incentive-driven, market-based |
| Design Principles | Ethics-by-default, transparency, accountability | Innovation-first, market-driven |
| Global Cooperation | Multilateral agreements, global standards | Bilateral alliances, export controls |
| Stakeholder Priorities | Public good, fundamental rights | National security, economic dominance |
| Technological Frame | AI as a tool for societal benefit | AI as a tool for geopolitical leverage |

These competing frames lead to divergent governance models. The EU treats AI as a public good, while the U.S. views it as a strategic asset in global competition.

Still, some argue that even the U.S. approach goes too far—that minimal regulation and market accountability should be the default.

 


Counterarguments: why some advocate for minimal regulation


Divergent perspectives on AI governance are evident in recent discussions. For example, Dean W. Ball’s blog post "For All Issues So Triable: The First Landmark AI Lawsuit Emerges" argues for a laissez-faire approach, rooted in the belief that minimal regulation and tort liability (e.g., lawsuits after harm occurs) may yield better outcomes than proactive governance. This perspective claims that market forces and legal accountability are sufficient to address AI risks, without the need for prescriptive rules like the EU AI Act.

However, this approach has critical gaps when analyzed through the SCOT framework and the lens of systemic risk management:

| Aspect | EU’s Rights-Based Approach | Laissez-Faire/Tort Liability Approach | Gap in the Laissez-Faire Model |
|---|---|---|---|
| Governance Strategy | Proactive, risk-based, preventive | Reactive, harm-based | Fails to prevent systemic risks; too late for irreversible harm (e.g., bias embedded in hiring AI) |
| Socio-Technical View | AI as socially constructed, multi-stakeholder | AI as a technical/legal issue | Ignores cultural, ethical, and power dynamics (e.g., who defines "harm" after the fact?) |
| Ethics & Rights | Fundamental rights, ethics-by-design | Legal liability as primary tool | Ethics reduced to compliance; lacks preventive frameworks for fairness or transparency |
| Global Coordination | International standards, multilateralism | National tort systems | Inadequate for cross-border AI challenges (e.g., a US lawsuit can’t address harm caused by EU-deployed AI) |
| Innovation | Clear rules enable trust and investment | Liability risks create uncertainty | Undervalues predictability for businesses and investors |

 

While minimal regulation may appeal to those prioritizing innovation speed or market flexibility, the EU’s deliberate, values-driven policymaking is essential for trustworthy AI. As Ball’s discussion highlights, relying solely on lawsuits and liability leaves systemic risks unchecked and fails to protect vulnerable groups upfront. The EU’s approach—grounded in proactive governance, fundamental rights, and global standards—is better equipped to shape AI as a force for public good.


If the EU model is superior in theory, how can it be translated into effective policy that closes these gaps? The following recommendations bridge theory and practice.


Policy Recommendations: bridging theory and practice

General Recommendations:

  1. Mandate fundamental rights impact assessments: require developers to assess systemic risks (e.g., bias, surveillance) before deployment (a minimal illustrative check follows this list).

  2. Promote Ethics-by-Default: integrate ethical guidelines into technical standards (e.g., IEEE, ISO) and regulatory frameworks.

  3. Strengthen global cooperation: build alliances with Canada, Japan, and others to promote a fundamental rights-based approach.
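As one concrete, hedged example of what such an assessment could include, the sketch below computes a demographic parity gap, a simple group-fairness metric sometimes used to flag possible bias in a system’s decisions. It is a toy illustration, not a legally required or sufficient test; real assessments combine several metrics with legal and contextual analysis.

```python
from collections import defaultdict
from typing import Iterable, Tuple

def demographic_parity_gap(outcomes: Iterable[Tuple[str, int]]) -> float:
    """Return the spread between the highest and lowest favourable-outcome
    rates across groups. 0.0 means equal rates; larger values flag possible bias.

    `outcomes` is an iterable of (group_label, outcome) pairs, where outcome
    is 1 for a favourable decision (e.g. shortlisted) and 0 otherwise.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in outcomes:
        totals[group] += 1
        favourable[group] += outcome
    rates = [favourable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: a hiring model's decisions split by a protected attribute.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]
print(f"Demographic parity gap: {demographic_parity_gap(decisions):.2f}")  # 0.33
```

A check like this only flags a potential problem; deciding whether the disparity is justified, unlawful, or fixable remains a legal and organisational judgement, which is why the recommendation pairs quantitative checks with a broader impact assessment.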


Stakeholder-Specific Roles:

For EU Policymakers: ensure consistent implementation of the AI Act across member states. Use the Scientific Panel on AI to bridge technical, ethical, and legal considerations.

For Industry: adopt ethics-by-design toolkits to align with EU standards.

For Civil Society: push for transparency—ask: "Who benefits from this AI system, and who might be harmed?"


These stakeholder roles feed into a broader governance framework, which can be broken down into five core elements.


A framework for AI Governance

1.  Analyse Systemic Risks: Identify technical, ethical, and societal risks through multi-stakeholder dialogues.

2.  Incorporate Risk Management: Use standards, audits, and impact assessments to embed safety into AI design.

3.  Align with Fundamental Rights: Ensure policies reflect international law and human rights.

4.  Foster Global Cooperation: Promote multilateral standards to prevent fragmentation.

5.  Ensure a Level Playing Field: Require all digital goods and services entering the EU market to comply with EU law, so that European industry competes on fair terms.

Taken together, these recommendations highlight the deeper point: the foundations we choose for AI governance will shape its long-term trajectory.


Conclusion: why foundational choices matter

The social construction of AI is a call to action. The EU’s rights-based approach offers a roadmap for trustworthy AI, but its success depends on risk management, international cooperation, and deliberate policymaking. As Popper warned, institutions must prevent irreversible damage. The EU’s AI Act does just that—by design.


As Latour reminds us, technology is not neutral; it embodies specific social relations, values, and interests (Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers Through Society. Harvard University Press).


Policymakers, developers, and civil society together will define whether AI advances equity and safety or becomes a tool of competition.


As Hendrycks (2024) has argued: “The deliberate implementation of policies, incentives and oversight will be essential to realizing the potential of AI to improve human civilization rather than destroy it.”


Artificial Intelligence is not an inevitable force of nature—it is the product of human choices, shaped by legal frameworks, cultural values, and strategic priorities.

While this makes the normative case for rights-based governance, the practical challenge remains: how can Europe stay innovative and competitive under such a model?


Epilogue: innovation and competitiveness in the EU

Responsible innovation requires clear principles but flexible implementation. Regulatory sandboxes and public–private partnerships can foster experimentation while upholding safety. Member states should tailor these tools to their national contexts while maintaining alignment with EU law.

Different stakeholders interpret “risk,” “safety,” and “innovation” through their own lenses. The table below summarizes these perspectives.

Social Group

Risk (Key Concern)

Safety (How It’s Understood)

Innovation

(How It’s Pursued)

Policy Tools (Main Instruments)

Policymakers

Compliance, liability

Trust, rights

Growth

Sandboxes, consultation

Industry (Startups)

Market barriers

Reputation, trust

Speed

Flexible compliance

Civil Society

Bias, ethics

Transparency, accountability

Inclusivity

Participatory impact assessments

 

These differences highlight why AI governance must be inclusive. Without balancing these perspectives, regulation risks favouring one group’s priorities at the expense of public trust.

 

This blog post was made as part of the summer course on AI Safety, Ethics, and Society (AISES) of the Center for AI Safety (CAIS).

 
 
 
