Don't Fear DeepSeek – Australia Can Launch Its Own Start-Ups

The Age


Description

We shouldn’t ban China’s new AI giant – we should regulate it. Then Australia should seize the opportunity to lead this sector.

Summary

The article examines the transformative potential of open-source AI models, using China's DeepSeek as a case study: its launch wiped significant value off big-tech stocks and challenged Silicon Valley's monopolistic business models. It argues that Australia should pair rigorous AI governance with support for innovation, turning a potential threat into an opportunity. The author urges Australia to avoid exclusionary reactions against nations such as China and instead to adopt inclusive, sustainable AI practices, citing DeepSeek's model as an exemplar of both the promise and the risks inherent in open-source AI. The discussion extends to the potential misuse of such technologies in the geopolitical sphere, urging Australia to lead in setting ethical global standards for AI.

Body

February 13, 2025

DeepSeek’s rise is a David v Goliath moment. A little-known Chinese start-up launched an open-source AI model powerful enough to rattle Silicon Valley’s titans. Within hours, $US600 billion ($955 billion) vanished from Nvidia’s market value, and big tech lost over $US1 trillion combined.

DeepSeek’s success shattered the myth that artificial intelligence leadership requires billion-dollar venture capital and corporate monopolies. It proved that efficient, open-source AI is a transformative force, levelling the playing field. But DeepSeek is neither the first David nor the last. Open-source AI is a seismic shift, with platforms such as Mistral (France) and Hugging Face’s DistilBERT (US) demonstrating that cutting-edge AI can be built cheaply, efficiently and with minimal computational resources.

Australia now has a once-in-a-generation chance to lead – to become a David itself. Clear AI regulations can ensure AI serves the public good, not big-tech monopolies or foreign powers. Yet reactionary policies like the DeepSeek ban on government devices show a shortsighted focus on risk over opportunity. To achieve true digital sovereignty, Australia must move beyond Cold War rhetoric and lead responsible AI governance. The question is no longer whether AI should be regulated, but how to regulate it responsibly – without stifling progress or enabling harm.

Open-source AI, such as DeepSeek’s MIT-licensed model, is free to use. Anyone can change the code and run it on consumer-grade hardware. This is its greatest strength but also its key risk.
On the upside, it boosts competition, efficiency and transparency by activating only the essential parts of the AI model, reducing its size without sacrificing performance – much like MP3 simplified music storage. These advances make AI more cost-effective and less reliant on high-powered hardware. For Australia, this is a big opportunity to compete with global tech giants.

On the other hand, it is harder to regulate. Unlike proprietary AI, where companies face legal oversight and shareholder scrutiny, open models can be used by anyone – including bad actors. Without safeguards, they risk being weaponised for disinformation and cybercrime. Therefore, we must enforce AI regulations that ensure transparency, accountability and responsible development. The question remains: who controls AI?

The US and its allies fear that China could use AI to expand surveillance, export authoritarianism or tilt global power in its favour. Some argue an open-source AI model from China could become a Trojan horse for foreign interference. The reality is more complex. DeepSeek explicitly states that user data is stored on Chinese servers, while OpenAI’s privacy policy is more ambiguous about where user data is stored or who has access. OpenAI has repeatedly shared user data with the US government, proving that AI surveillance extends beyond China. If Australia wants true AI sovereignty, it must establish enforceable frameworks that hold all providers accountable – whether from Silicon Valley or Beijing.

Open-source AI also reignites a classic environmental dilemma. Its modular design reduces computational waste and makes AI more energy-efficient. DeepSeek consumes a fraction of ChatGPT’s resources. But efficiency does not guarantee sustainability. Efficiency gains are often offset by an exponential rise in usage, leading to a net increase in resource consumption.
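The "activating only the essential parts of the model" idea above is, in spirit, sparse expert routing: score many specialist sub-networks, but run only the few that matter for a given input. A minimal toy sketch (all names, scores and expert functions here are invented for illustration, not DeepSeek's actual architecture):

```python
# Toy sketch of sparse ("mixture-of-experts"-style) activation.
# All names and numbers are illustrative, not any real model's design.

def route(scores, k):
    """Pick the indices of the k highest-scoring experts; only those run."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

def sparse_forward(x, experts, scores, k=2):
    """Run only the top-k experts and average their outputs.

    Compute cost scales with k, not with len(experts): with 64 experts
    and k=2, roughly 1/32 of the dense per-token compute.
    """
    active = route(scores, k)
    outputs = [experts[i](x) for i in active]   # only k experts execute
    return sum(outputs) / len(outputs)

# Eight toy "experts", each just a scaling function standing in for a sub-network.
experts = [lambda x, m=m: m * x for m in range(1, 9)]
scores = [0.1, 0.9, 0.2, 0.8, 0.1, 0.1, 0.1, 0.1]  # router preferences

print(sparse_forward(10.0, experts, scores, k=2))  # only experts 1 and 3 run
```

The efficiency claim in the text falls out of the structure: six of the eight experts never execute, so memory and compute per query shrink without discarding any capability.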
This is known as Jevons’ Paradox, or the “rebound effect”: if we invent more fuel-efficient cars, people buy bigger cars, more cars, and drive them more often. If AI becomes drastically cheaper and more accessible, electricity demand could skyrocket, straining grids and increasing carbon emissions. If we fail to act, AI models could become one of the world’s biggest energy drains. Australia must align AI with renewable energy, mandate energy-use disclosures and ensure sustainability is built into regulation – not treated as an afterthought.

One of the biggest unanswered questions about open-source AI is: how will it sustain itself? DeepSeek may have rattled Silicon Valley, but disruption cuts both ways. Chinese tech giant Alibaba’s Qwen2.5-Max is already outpacing DeepSeek in efficiency and functionality, proving that no AI leader can afford to stand still. While DeepSeek is currently free, OpenAI monetises ChatGPT through subscriptions and corporate licensing. AI development is expensive. To survive, open-source AI must adopt sustainable revenue models, such as enterprise licensing, paid premium features or usage fees – strategies that have succeeded for Linux and Firefox.

But with great power comes great responsibility. If DeepSeek – or any AI provider – chooses the wrong business model, it won’t just mirror the surveillance capitalism of Facebook and Google – it will supercharge it. Relying on intrusive ads, data harvesting or manipulative algorithms risks entrenching the very exploitation that open-source AI was meant to counter. To prevent harm from escalating, Australia must proactively regulate AI business models, ensuring that monetisation strategies prioritise ethical transparency over profit extraction.
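The rebound dynamic behind Jevons’ Paradox, raised above, comes down to simple arithmetic: total consumption is per-unit cost times usage, and a drop in the first term is easily swamped by growth in the second. A back-of-envelope sketch, with all numbers invented purely for illustration:

```python
# Back-of-envelope illustration of Jevons' Paradox (the "rebound effect")
# applied to AI electricity use. All figures are hypothetical.

def total_energy_wh(per_query_wh, queries):
    """Total electricity consumed, in watt-hours."""
    return per_query_wh * queries

# Hypothetical baseline: 10 Wh per query, 1 million queries.
before = total_energy_wh(per_query_wh=10, queries=1_000_000)

# A 10x efficiency gain cuts the per-query cost to 1 Wh,
# but cheap, ubiquitous AI drives usage up 50x.
after = total_energy_wh(per_query_wh=1, queries=50_000_000)

print(before, after)  # 10000000 50000000 Wh: a 5x net rise despite efficiency
```

This is why the article argues that efficiency alone cannot substitute for mandated energy disclosures: whether the rebound outruns the efficiency gain is an empirical question that can only be answered if providers report their consumption.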
A particularly concerning trend is AI companions that simulate empathy to manipulate users into oversharing and fostering unhealthy dependence, a serious risk for minors.

DeepSeek’s rise proves AI doesn’t have to be monopolised by a handful of corporations, but without strong governance, open-source AI could lead to misuse, environmental harm or financial instability. Australia must act now to ensure AI serves society – not just commercial or foreign interests. The nation has a once-in-a-generation opportunity to lead in responsible AI governance by prioritising:

- Digital sovereignty: developing AI on Australia’s terms with strong privacy protections.
- Inclusive design: ensuring AI reflects Australia’s pluralistic society through participatory co-design that includes diverse stakeholders.
- Sustainability: mandating transparent energy disclosures and renewable energy use from AI providers.
- Ethical regulation: banning exploitative business models and implementing special safeguards for minors, neurodivergent individuals and other vulnerable groups.
- Global leadership: shaping international AI standards by contributing to global governance frameworks for digital services.

The European Union has just announced the InvestAI initiative, mobilising €200 billion ($331 billion) for development to enhance its competitiveness against the US and China. Australia must not fall behind.

The window of opportunity is wide open. The question is whether Australia is bold enough to lead AI towards serving everyone, everywhere, equitably.

Raffaele Ciriello is a senior lecturer in business information systems at the University of Sydney.