Government Appoints Microsoft's Policy Chief to Lead National AI Centre
Information Age
Details
- Date Published
- 17 Mar 2025
- Priority Score
- 4
- Australian
- Yes
- Created
- 23 Mar 2025, 04:54 pm
Description
Balancing innovation and safety will fall to Lee Hickin.
Summary
Lee Hickin, former head of AI policy for Microsoft Asia, has been appointed director of Australia's National AI Centre. His role will involve balancing AI innovation with regulatory frameworks that aim to ensure safety, particularly as he assists in implementing Australia's Voluntary AI Safety Standard. The appointment holds significance in the Australian context, as it aligns with federal investments in AI capabilities and ongoing debates about adopting EU-style AI regulations. Hickin's dual experience in both promoting AI and assessing its risks positions him as a fitting leader for navigating the tension between innovation and safety. This development signals Australia's forward momentum in strengthening its AI governance and safety measures, potentially influencing global policy discussions.
Body
By Jeremy Nadel on Mar 18 2025 11:02 AM

Lee Hickin is leaving Microsoft to take on the role of director of the National AI Centre. He will steer a $21.6 million federal government investment in Australia's AI capability and advise on the implementation of hotly debated safeguards for emerging technology.

Hickin, who confirmed that he will resign from his current role overseeing Microsoft's AI policy in Asia, said in a LinkedIn post that his primary responsibility as director of the National AI Centre would be "helping to drive Australia's AI future."

Innovation vs regulation

The National AI Centre, hosted by the Department of Industry, Science and Resources (DISR), oversees initiatives to finance local AI startups and to support small and medium businesses in harnessing machine learning (ML).

Alongside DISR's AI Expert Group, Hickin will assist industry to implement Australia's Voluntary AI Safety Standard, which may be elevated to mandatory guardrails for high-risk AI use cases, a move vendors are lobbying against over concerns that it will stifle innovation.

Hickin declined Information Age's requests for comment on whether he supports the Albanese Government's proposal for EU-style AI regulations, but said last year, at a roundtable hosted by the Malaysia Digital Economy Corporation, that "AI needs guardrails to protect society and AI needs flexibility to address global issues".

In his post yesterday, he said: "I have long been an advocate for the positive potential for AI in our lives, communities and industry."

The 'ideal fit'

Industry Minister Ed Husic said on LinkedIn that Lee's "30 years of commercial experience" at "companies like Microsoft and Amazon" made "him an ideal fit" for the role of National AI Centre director.

Hickin has had two stints at Microsoft: from 2005 to 2015 he held titles including security technology specialist and IoT product manager, then worked at AWS for two years, first as its APAC IoT business development lead and later as APAC head of platform technology business development. He returned to Microsoft in 2018 as its chief technology officer and was promoted in 2023 to Asia AI Policy Lead.

Husic added that Hickin was also "bringing… his involvement in shaping AI policies with Government" to the role, including his support in developing the "AI Action Plan" and his four years' experience as a committee member of Australia's first AI-specific government watchdog, NSW's "AI Review Committee" (AIRC).

Balancing AI innovation with risk mitigation

Hickin's dual experience makes him a safe pick: he has both encouraged companies to embrace AI before they get left behind and audited emerging technology, internally as head of Microsoft's ANZ Office of Responsible AI, which audits ML projects, and externally as a member of the NSW Government's AIRC.

The Government is at a crossroads on whether to follow the EU in passing laws to better protect citizens from the privacy and procedural fairness risks posed by the private sector's rapid uptake of AI, which regulators and civil society groups are calling for, or to side with lobbyists like the Business Council of Australia in following the US in abstaining from any regulations that could hinder AI-enabled productivity.

When Hickin simultaneously worked for AIRC and Microsoft, he deployed Microsoft technology to NSW government agencies that AIRC had expressed concerns about, highlighting the difficulty of balancing AI innovation with risk mitigation.

At the time, he said that it was "a privilege to work alongside NSW Police" when implementing the AI Insights platform because it "can speed up the analysis of evidence, accelerating justice".

However, AIRC's review of Insights warned that the AI-powered tool could bias investigations against communities more likely to feature in its surveillance feeds. Further, legal academics expressed concerns over its use of biometric ML models, which pose a proportionately higher risk of misidentifying minorities. Hickin declined to release the audits Microsoft's Office of Responsible AI conducted to risk-assess the platform, or to provide a summary of them.

Jeremy Nadel is a Melbourne-based freelancer whose work has been featured in The Guardian, The Saturday Paper and Crikey. He was most recently a staff reporter for ITnews and CRN, reporting on the tech channel and enterprise IT. In 2023, he was awarded Best New Journalist at the Australian IT Journalism Awards. For story tips, you can email Jeremy at [email protected].

Tags: national ai centre, lee hickin, microsoft, ai, innovation, risks, australia