Resisting AI Hyper-Optimism: Why the Artificial Intelligence Summit in Paris Should Concern Us All

ABC News


Details

Date Published
11 Mar 2025
Priority Score
4
Australian
Yes
Created
14 Mar 2025, 11:49 am

Description

Over the last two years, AI summits held in the UK and South Korea focused on safety and the need for guardrails in the development of this powerful technology — but this year’s summit in Paris gave way to an unrestrained enthusiasm.

Summary

The article examines the 2025 Artificial Intelligence Action Summit in Paris, where discussions favored unrestrained enthusiasm for AI's potential to address global challenges over serious consideration of its socio-ethical risks. Amidst the optimism, the summit overlooked the emphasis on AI safety present in prior summits in the UK and South Korea. The event, featuring prominent leaders like French President Emmanuel Macron and Indian Prime Minister Narendra Modi, highlighted AI's role in innovation and economic growth, but failed to adequately address the pressing concerns of AI-related biases, environmental impact, and necessary regulations. This discourse is crucial for global AI safety governance and frameworks, as laissez-faire attitudes towards AI regulation may exacerbate existing inequalities and social harms.

Body

In 1900, the Paris Exposition showcased the future — wireless telegraphs, moving footpaths and the cutting-edge technologies of its time. In February, the Grand Palais, which was purpose-built for the Exposition, hosted the 2025 Artificial Intelligence Action Summit, where optimism about technological progress once more filled its halls, now over a century later.

Unlike the previous summits in the United Kingdom (2023) and South Korea (2024), both of which focused heavily on AI safety, this year's gathering shifted gears. Co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, the event aggressively promoted AI's power to tackle global crises — from climate change to disease outbreaks.

While the program included a few safety events, the main stage was dedicated almost entirely to championing AI's benefits. Talks focused on how AI could drive innovation, boost economic growth, enhance the future of work and offer solutions to complex global issues. Concerns about socio-ethical risks and potential harms were relegated to the sidelines, making it clear that this year's summit was more about optimism than caution.

French President Emmanuel Macron speaks during a plenary session at the Artificial Intelligence Action Summit, at the Grand Palais, in Paris, France, on 11 February 2025. (Photo by Gao Jing / Xinhua via Getty Images)

Macron hailed AI as a "formidable technological and scientific revolution for progress". He urged European leaders to cut and "simplify" regulations to keep the continent in the AI race. However, he conceded in his final speech that "rules" would still be necessary. Modi struck a more critical tone.
Praising AI's far-reaching impact, he cautioned against bias and stressed the need to include the Global South in shaping open-source systems that foster transparency, trust and "people-centric applications".

US Vice President JD Vance, meanwhile, doubled down on AI's potential, declaring that it is "going to make us more productive, more prosperous, and more free". He warned that "excessive regulation could kill a transformative industry just as it's taking off", criticised those who were too preoccupied with risk, and condemned "ideological bias" in pushing social agendas. Yet his interest in bias did not extend to any of the well-documented algorithmic biases related to gender, race or age — key social issues that have sparked widespread debate in academic and policy circles. Instead, he reinforced the United States' resistance to an overly precautionary regulatory regime.

Indian Prime Minister Narendra Modi and French President Emmanuel Macron arrive at a plenary session at the AI Action Summit at the Grand Palais on 11 February 2025. (Photo by Press Information Bureau / Anadolu via Getty Images)

Many technology companies welcomed these messages with enthusiasm, but several researchers and technologists voiced concern. Dario Amodei, CEO of Anthropic — the company behind the chatbot Claude — called the summit a "missed opportunity" to address the serious risks posed by AI. He pointed to Anthropic's own evidence, warning that without careful training, "AI models can deceive their users and pursue goals in unintended ways even when trained in a seemingly innocuous manner".

AI bias and social harm

Research shows that AI models often reinforce harmful biases, disproportionately affecting women, people of colour and, as our studies highlight, older people. These biases stem from flawed training datasets — which tend to over-represent young white men — and from the underlying ideals and assumptions shaping AI design.
These biases are then carried over into how AI is used, worsening discrimination and deepening social inequalities. This pattern appears across diverse sectors, from the job market to aged care. For example, US researchers found that AI hiring tools, which are now widely used by companies, picked résumés with white male names in 85 per cent of cases.

Similarly, in aged care, our study shows that AI is shaped by a social deficit view that sees ageing as a burden and assumes all older people (aged 65+) are the same — costly, passive, incapable and uninterested in technology. These assumptions are frequently mirrored by aged care staff who implement AI, thereby reinforcing ageist stereotypes.

AI's environmental cost

AI also poses severe environmental challenges. The development and use of generative AI like ChatGPT require enormous computational power, relying on massive server farms housed in energy-intensive data centres. These centres not only consume vast amounts of electricity to power the hardware but also need intensive cooling systems to prevent overheating. A single ChatGPT prompt consumes 10 times more energy than a typical Google search.

Satellites and antennas on the rooftop at One Wilshire, a high-rise office building in downtown Los Angeles that has been almost entirely converted into a server farm, on 10 September 2024. (Genaro Molina / Los Angeles Times via Getty Images)

AI's significant energy consumption results in major carbon emissions, placing strain on ecosystems. The production and disposal of AI equipment further contribute to resource depletion, environmental degradation and growing e-waste. And while AI demand is driving soaring emissions, companies are failing to fully disclose the data needed to accurately calculate its environmental consequences.

These environmental and social risks were a central focus of the first International AI Safety Report. The report concluded that current mitigation efforts fall short.
As AI advances rapidly, these risks are expected only to intensify. This is concerning given the Paris summit's decision to lean toward AI accelerationism, with its evident conviction that the rapid advancement of general AI is both inevitable and beneficial for humanity.

Beyond the AI hype

As sociologists of technology, we take a critical approach not only to the moral panics surrounding AI but also to the hype and overpromising that accompany new technological advancements. Technology is not purely technical — it is deeply rooted in social, political, cultural and economic values and ideas. Its design and implementation are always influenced by historical processes, power relations and human agency. Thus, technology is not independent of society; it both reflects and reinforces existing structures of power and inequality.

Recognising this interconnectedness is essential to ensuring that AI serves diverse communities equitably rather than exacerbating social and ecological divides. This means we cannot foster innovation without putting the right safeguards in place. At the same time, we must remain vigilant about how AI shapes public discourse and resist the current hyper AI-solutionism — the belief that AI alone will solve all our problems.

The summit wrapped up with a "final statement on inclusive and sustainable artificial intelligence for people and the planet". Though it references inclusion, openness, sustainability and ethics, it lacked any concrete commitments or action.
Even as a light statement of intent, it failed to secure full backing — the United States declined to sign it, in line with Vance's speech, and the UK withheld support, citing the absence of global governance of AI and concerns over national security.

This stands in stark contrast to the European Union's stance. The EU AI Act, the world's first comprehensive AI law, took effect in August 2024 and will roll out gradually over the coming months. Despite Macron's rhetoric and Vance's dismissal of regulations, these measures aim to make AI fairer, safer and more transparent.

In this era of AI hyper-optimism, human rights along with social and environmental justice must not be sidelined in the rush for innovation. Not only must they remain part of the conversation — they must drive real action.

Barbara Barbosa Neves is a Horizon Fellow on AI and Ageing in the Sydney Centre for Healthy Societies at the University of Sydney, where she leads the AI Social Science research theme.

Geoffrey Mead is a Research Fellow in the Sydney Centre for Healthy Societies at the University of Sydney.

Posted 12 Mar 2025, updated 16 Mar 2025.