The EU's Accusation of Yielding to Big Tech on AI Regulation and Its Implications

ABC News

Details

Date Published
18 Nov 2025

Description

Artificial intelligence could be trained using the sensitive health data of European citizens under proposed changes to the EU's rules on the sector.

Summary

The EU is facing criticism for its proposed amendments to AI regulations that allegedly favor big tech companies by delaying the enforcement of stricter rules on high-risk AI systems until December 2027. This move could enable firms to use anonymised sensitive data of EU citizens, including health and biometric information, which privacy advocates regard as a substantial rollback of digital rights. The article underscores global tensions in AI governance strategies, highlighting the balance between regulation for safety and innovation. It also touches on Australia's parallel efforts in contemplating AI regulations, aiming for a balanced approach between corporate interests and worker rights.

Body

(Image caption: Large language models like OpenAI's could be trained using anonymised data of EU citizens under the proposed changes, which have been slammed by critics. AP Photo: Richard Drew)

The European Commission is proposing to delay some parts of its new guardrails on artificial intelligence, as part of a plan to streamline and ease a slew of tech regulation.

The commission has defended the changes, with a spokesperson saying that "simplification is not deregulation".

But critics argue they will give big tech companies access to vast amounts of data, including sensitive health and biometric information, to train AI models.

It comes as Australia weighs up how it intends to regulate the rapidly expanding sector, amid concerns about a lack of guardrails around the technology. The federal government has indicated it wants to tread a "sensible middle path" between business and the rights of workers.

Here's what the EU is changing, and why.

AI could be trained using EU citizen data

As part of the proposed changes, companies using what are termed "high-risk" artificial intelligence systems would get an extra 16 months before stricter regulations take effect, pushing the rules back from August 2026 to December 2027.

High-risk AI refers to uses related to law enforcement, education, justice, asylum and immigration, public services, workforce management, critical infrastructure such as water, gas or electricity, and biometric data.

(Image caption: Critics say the proposed changes are "the biggest roll back of digital fundamental rights in EU history". Reuters: Gonzalo Fuentes)

The commission has also said it wants to clarify when data stops being "personal" under privacy law, potentially making it easier for tech companies to use anonymised information from EU citizens for AI training.

Under the proposal, information that has been anonymised would not be considered personal data if the entity handling it is deemed not to have the means to re-identify the person to whom it relates.

When training AI systems, firms would be allowed to use huge datasets even if they contain sensitive personal information such as health or biometric data, as long as they make reasonable efforts to remove it.

Cutting red tape for European business

As part of the changes, European users would see far fewer pop-ups asking for cookie consent. Users would instead be able to set their cookie preferences once, either with a single click lasting six months or through browser and operating system settings that apply across all websites.

Small and medium-sized businesses developing or using AI systems would face significantly reduced documentation requirements, potentially saving at least 225 million euros ($401 million) annually, according to the commission.

The changes would also exempt small companies from some cloud-switching rules in data legislation, saving them approximately 1.5 billion euros in one-time compliance costs.

In addition, firms would receive a "European Business Wallet", essentially a digital passport that works across all 27 EU member states, allowing them to digitally sign and timestamp documents and handle filings across Europe. The commission says this could eliminate up to 150 billion euros per year in administrative costs once widely adopted.

Changes come after big tech, Trump administration push

The changes appear to be a big win for companies like Alphabet, the owner of Google, and Meta, the owner of Facebook, which have argued revisions to the AI Act were needed to make things easier for business.

The Trump administration has also been critical of the EU's push to regulate the sector, accusing the bloc of targeting US firms. The commission has rejected these charges.

(Image caption: Ursula von der Leyen and Donald Trump first announced the deal in late July, but did not provide specifics. Reuters: Evelyn Hockstein)

The proposed amendments are not yet law, and would still need to be approved by EU countries and privacy-focused members of the European Parliament before they can be implemented.

Critics slam 'roll back' in digital rights

Privacy activists such as Noyb and civil rights groups see the amendments as a dilution of EU regulations. An open letter from a group of 127 civil organisations called the proposals "the biggest roll back of digital fundamental rights in EU history".

And on Wednesday, a group of campaigners deployed four mobile billboards around Brussels, alongside hundreds of posters across the city, urging Commission President Ursula von der Leyen to stand up to big tech and the US president.

"It is disappointing to see the European Commission cave under the pressure of the Trump administration and Big Tech lobbies," Dutch MEP Kim van Sparrentak said in a statement.

Reuters/AP