Don't Be Fooled: The US is Regulating AI
The Guardian
Details
- Date Published
- 22 Oct 2025
- Priority Score
- 3
- Australian
- No
- Created
- 26 Oct 2025, 11:01 am
Description
Beneath the free-market rhetoric, Washington actually intervenes to control the building blocks of AI systems
Summary
The article argues that the U.S. is indeed regulating AI, not through visible applications but by restricting the foundational elements of AI systems, like AI chips, as a national security measure. It highlights that, contrary to the free-market rhetoric, U.S. policies strategically control AI's building blocks while leaving the applications relatively untouched. This approach focuses on maintaining military and technological superiority, aligning with global trends where nations like China also impose strict controls for national security. The article emphasizes that transparency in AI regulation is crucial for the efficacy of global AI governance frameworks.
Body
‘No global framework will succeed if the US, home to the world’s largest AI labs, maintains the illusion that it’s staying out of regulation entirely.’ Photograph: Narumon Bowonkitwanchai/Getty Images

By Sacha Alanoca and Maroussia Lévesque

At first glance, today’s artificial intelligence policy landscape suggests a strategic retreat from regulation. Lately, AI leaders such as the US have doubled down on this messaging. JD Vance champions AI policy with a “deregulatory flavor”. Congress considered a 10-year ban on state AI legislation. On cue, the Trump administration’s “AI action plan” warns against smothering the technology “in bureaucracy at this early stage”.

But the deregulatory narrative is a critical misconception. Though the US federal government takes a hands-off approach to AI applications such as chatbots and image generators, it is heavily involved in the building blocks of AI. For example, both the Trump and Biden administrations have been hands-on when it comes to AI chips – a crucial component of powerful AI systems. Biden restricted chip access to competing nations such as China as a matter of national security. The Trump administration has sought deals with countries such as the UAE. Both administrations have a track record of heavily shaping AI systems in their own way. The US isn’t deregulating AI – it’s regulating where most people aren’t looking. Beneath the free-market rhetoric, Washington actually intervenes to control the building blocks of AI systems.

Taking in the full range of AI’s technology stack – the collection of hardware, datacenters and software operating in the background of applications such as ChatGPT – reveals that countries target different components of AI systems. Early frameworks like the EU’s AI Act focused on highly visible applications, banning high-risk uses in health, employment and law enforcement to prevent societal harms. But countries now target the underlying building blocks of AI. China restricts models to combat deepfakes and inauthentic content. Citing national security risks, the US controls the exports of the most advanced chips and, under Biden, even model weights – the “secret sauce” that turns user queries into results. These AI regulations are hiding in dense administrative language – titles such as “Implementation of Additional Export Controls” or “Supercomputer and Semiconductor End Use” bury the lede. But behind this complex language is a clear trend: regulation is moving from AI applications to its building blocks.

The first wave of application-focused rules, in jurisdictions such as the EU, prioritized concerns such as discrimination, surveillance and environmental damage. The second wave of rules, by American and Chinese rivals, takes a national security mindset, focusing on maintaining military advantage and making sure malicious actors don’t use AI to gain nuclear weapons or spread fake news.

A third wave of AI regulation is emerging as countries address societal and security concerns in tandem. Our research shows this hybrid approach works better, as it breaks down silos and avoids duplication.

Breaking the spell of laissez-faire rhetoric requires a fuller diagnostic. Seen through the lens of the AI stack, US AI policy looks less like abdication and more like a redefinition of where regulation occurs: light touch at the surface, iron grip at the core.

No global framework will succeed if the US, home to the world’s largest AI labs, maintains the illusion that it’s staying out of regulation entirely. Its own interventions on AI chips say otherwise. US AI policy isn’t laissez-faire. It’s a strategic choice about where to intervene. Though politically expedient, the deregulation myth is more fiction than fact.

The public deserves more transparency about how – and why – governments regulate AI. It’s hard to justify a hands-off stance on societal harms while Washington readily intervenes on chips for national security. Recognizing the full spectrum of regulation, from export controls to trade policy, is the first step toward effective global cooperation. Without that clarity, the conversation on global AI governance will remain hollow.

Sacha Alanoca is a doctoral researcher at Stanford University and a former John F Kennedy fellow at Harvard University. Maroussia Lévesque is a doctoral researcher at Harvard Law School and an affiliate at the Harvard Berkman Klein Center for Internet and Society.