If the Best Defence Against AI is More AI, This Could Be Tech’s Oppenheimer Moment

The Guardian


Details

Date Published
3 Mar 2025

Description

An unsettling new book advocates a closer relationship between Silicon Valley and the US government to harness artificial intelligence in the name of national security

Summary

The article examines a new book by Alexander C. Karp and Nicholas W. Zamiska, which suggests a deeper partnership between Silicon Valley and the U.S. government to leverage artificial intelligence for national security. Karp, co-founder of Palantir, argues that the technological talents in Silicon Valley have been misdirected towards consumer products rather than crucial national interests. The book reflects on historical collaborations such as the Manhattan Project, calling for a similar relationship to harness AI against geopolitical rivals like Russia and China. This discourse contributes to the AI safety conversation by advocating for a strategic approach to AI development that balances power and addresses existential threats.

Body

Oscar Wilde’s quip, “Life imitates art far more than art imitates life”, needs updating: replace “art” with “AI”. The Amazon page for Alexander C Karp and Nicholas W Zamiska’s new book, The Technological Republic: Hard Power, Soft Belief and the Future of the West, also lists: a “workbook” containing “key takeaways” from the volume; a second volume on how the Karp/Zamiska tome “can help you navigate life”; and a third offering another “workbook” comprising a “Master Plan for Navigating Digital Age and the Future of Society”. It is conceivable that these parasitical works were written by humans, but I wouldn’t bet on it.

Mr Karp, the lead author of the big book, is an interesting guy. He has a BA in philosophy from an American liberal arts college, a law degree from Stanford and a PhD in neoclassical social theory from Goethe University in Frankfurt. So he’s not your average geek. And yet he’s an object of obsessive interest to people both inside and outside the tech industry. Why? Because in 2003 he – together with Peter Thiel and three others – founded a secretive tech company called Palantir. And some of the initial funding came from the investment arm of – wait for it – the CIA!

The name comes from palantíri, the “seeing stones” in the Tolkien fantasies. It makes sense because the USP of Palantir is its machine-learning technology – which is apparently very good at seeing patterns in, and extracting predictions from, oceans of data.
The company was founded because at the time all the Silicon Valley tech companies either disapproved of government, or were staffed by engineers who were adamantly opposed to working for the US military. This created an opening that Karp and his colleagues astutely exploited to build a company which simultaneously appears to be booming (current market capitalisation: $200bn), while also being regarded by critics of the industry as the spawn of the devil.

Those critics will disdainfully read the book as a kind of extended tender for public sector contracts. Civil servants contemplating employing Palantir may be interested in the description of the approach its employees adopt when working in a client’s organisation. Interestingly, it’s an approach borrowed from a Toyota executive, Taiichi Ohno, as a way of getting to the root cause of a problem occurring in some part of an organisation’s operations. It’s called the “Five Whys”: ask why a problem occurred, and then ask why four more times.

[Image: A still from the short newsreel film Atomic Power, in which Albert Einstein and Leo Szilard recreate the day in 1939 they drafted a letter to President Roosevelt warning him that work on the atom bomb was imperative. Photograph: Leo Szilard Papers. MSS 32. Special Collections & Archives, UC San Diego Library]

“Why did an essential update to an enterprise software platform not ship by a Friday deadline?” the co-authors write. “Because the team had only two days to review the draft code. Why did they only have two days to review? Because it had lost six software engineers in the budget review cycle late last year. Why did its budget decrease? Because the head of the group had shifted priorities elsewhere at the request of another group lead. Why was the request made to shift priorities? Because a new compensation model had been rolled out incentivising growth in certain areas. Why were certain areas selected at the expense of others?
Because of an ongoing feud at the company between two senior executives.” You get the idea. It’s not rocket science. Or AI, come to that. Maybe Keir Starmer should try it out. And it’ll be cheaper than employing McKinsey.

But I digress. The argument of the book is suffused with indignation at what Karp sees as the arrogance and small-mindedness of Silicon Valley, which has collected the greatest concentration of engineering skill the world has ever seen – and then deployed it to create consumer toys and diversions that make tech founders insanely rich rather than using that talent to create technologies that would buttress the national welfare and security of the United States. What’s particularly galling to him is the fact that the wealth of Silicon Valley was built on a technological foundation that was laid – and paid for – by the state, and yet its beneficiaries appear to have nothing but contempt for government. They have prioritised consumer gratification and their own wealth-creation over everything else.

“The grandiose rallying cry of generations of founders in Silicon Valley was simply to build,” write Karp and Zamiska. “Few asked what needed to be built, and why. For decades, we have taken this focus – and indeed obsession in many cases – by the technology industry on consumer culture for granted, hardly questioning the direction, and we think misdirection, of capital and talent to the trivial and ephemeral. Much of what passes for innovation today, of what attracts enormous amounts of talent and funding, will be forgotten before the decade is out.”

Underpinning much of the book’s lamentations are two enduring themes. The first is a kind of nostalgic longing for the wartime and postwar collaboration between the American state and the scientists and engineers which made the US a technological colossus.
For Karp, as for many other thinkers like him (including the UK’s own Dominic Cummings), the Manhattan Project that created the atomic bomb looks like a lost nirvana.

The second theme is a chronicle of what the authors call “The Hollowing Out of the American Mind”: the abandonment of belief, the agnosticism of technology, the “assumption that the correctness of one’s views from a moral or ethical perspective precludes the need to engage with the more distasteful and fundamental question of relative power with respect to a geopolitical opponent, and specifically which party has a superior ability to inflict harm on the other. The wishfulness of the current moment and many of its political leaders may in the end be their undoing.” This is the “soft belief” of the book’s subtitle, and it’s why this section of the book sometimes evokes echoes of the conservative philosopher Allan Bloom on song.

[Image: Palantir co-founder and author Alex Karp. Photograph: Bloomberg/Getty Images]

There’s a lot of hegemonic anxiety in Karp’s musings. For him, American primacy is the key to the survival of the civilisational values that he reveres. He’s also a disciple of the Nobel laureate economist Thomas Schelling, and shares his view that “to be coercive, violence has to be anticipated… The power to hurt is bargaining power. To exploit it is diplomacy – vicious diplomacy, but diplomacy.”

But the power to hurt is a prerogative of “hard” (ie military) power, and Karp seems particularly incensed by what he sees as the “precious” reservations of Google employees about the possibility that their technologies might be put into military hands. (It may also have been one of the motivations for the founding of Palantir.) His irritation seems unduly harsh to me.
All of these employees (and their parents and grandparents) have lived through an era in which the idea that the United States might again be involved in an all-out war seemed as preposterous as the idea that their inventions might be used in battle. In that sense, the west has been on an 80-year-long holiday from history, from which Putin has rudely awoken us.

The lesson that Karp and his co-author draw from all this is that “a more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the longer term. The preconditions for a durable peace often come only from a credible threat of war.” Or, to put it more dramatically, maybe the arrival of AI makes this our “Oppenheimer moment”.

In the summer of 1939, Albert Einstein and Leo Szilard sent a letter to President Roosevelt, urging him to explore the construction of an atomic bomb – and quickly. The rapid advances in the technology, the two scientists wrote, “seem to call for watchfulness and, if necessary, quick action on the part of the administration”, as well as a sustained partnership with “permanent contact maintained between the administration” and physicists.

In that historical context, maybe the arrival of this book is timely. For those of us who have for decades been critical of tech companies, and who thought that the future for liberal democracy required that they be brought under democratic control, it’s an unsettling moment. If the AI technology that giant corporations largely own and control becomes an essential part of the national security apparatus, what happens to our concerns about fairness, diversity, equity and justice as these technologies are also deployed in “civilian” life?
For some campaigners and critics, the reconceptualisation of AI as essential technology for national security will seem like an unmitigated disaster – Big Brother on steroids, with resistance being futile, if not criminal.

On the other hand, some of the west’s adversaries (Russia, China) are already using this technology against us, and we urgently need to tool up to address the threat. When these thoughts were put to Mr Karp by a New York Times reporter, he replied: “I think a lot of the issues come back to: ‘Are we in a dangerous world where you have to invest in these things?’ And I come down to yes. All these technologies are dangerous. The only solution to stop AI abuse is to use AI.” Hobson’s choice, in other words.

The Technological Republic: Hard Power, Soft Belief and the Future of the West is published by The Bodley Head (£25). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply.