Why AI Is a Double-Edged Sword for Cybersecurity
Australian Financial Review
SKIPPED
Details
- Date Published
- 23 Sept 2024
- Priority Score
- 3
- Australian
- Yes
- Created
- 10 Mar 2025, 10:27 pm
Description
The rapid emergence of AI as a mainstream business tool has brought both opportunities and challenges for organisations of all sizes.
Summary
AI's dual role in cybersecurity presents both challenges and opportunities, with criminals exploiting AI for sophisticated attacks like automated malware creation and personalized phishing, while defenders leverage it for real-time threat detection and vulnerability scanning. The Australian Government's initiative to introduce mandatory safeguards reflects a growing recognition of the need for robust AI governance, though there remains concern over the disregard malicious actors have for regulatory frameworks. This dialogue highlights the global imperative for comprehensive AI safety measures and emphasizes the need for organizations to integrate AI with human expertise to enhance resilience against evolving threat vectors.
Body
Technology | Check Point Software Technologies
Dr. Dorit Dor | Sep 23, 2024 – 11.55am

This content is produced in commercial partnership with Check Point Software Technologies.

The rapid emergence of AI as a mainstream business tool has brought both opportunities and challenges for organisations of all sizes.

On one hand, the technology promises to deliver significant improvements in productivity and process efficiency. On the other, it is arming cybercriminals with powerful new capabilities that enhance their ability to cause damage and lower the barrier to entry into the cybercrime ecosystem.

In an era of constant digital transformation, AI is emerging as a potent "force multiplier" for both attackers and defenders, and it will reshape the entire cybersecurity landscape.

The growing threat of AI-powered attacks

Cybercriminals are increasingly leveraging AI to enhance their capabilities and launch more sophisticated attacks. Generative AI can create highly convincing phishing emails by crafting personalised messages that are difficult to distinguish from legitimate ones, develop malicious macros in Office documents, and produce code for reverse shell operations. AI can also be used to create deepfakes: realistic but fabricated media content designed to deceive individuals and organisations.

Dr. Dorit Dor, chief technology officer, Check Point Software Technologies.

Beyond phishing and deepfakes, AI can be employed to automate testing and vulnerability scanning, allowing attackers to identify and exploit weaknesses in systems and networks more efficiently.
By analysing vast amounts of data, AI can uncover patterns and vulnerabilities that might otherwise remain undetected.

One of the most concerning developments is the increasing use of AI to automate the creation of malware. The technology can generate new strains of malware at a rapid pace, making it difficult for traditional antivirus software to keep up. This could lead to a surge in targeted attacks customised to exploit specific vulnerabilities.

AI can also be used to enhance social engineering attacks. By analysing social media profiles and other publicly available information, attackers can create highly personalised communications that are more likely to trick victims into revealing sensitive information or clicking on malicious links.

AI as a defensive shield

While AI presents significant challenges for cybersecurity, it also offers promising solutions. Defenders can use AI-powered tools to detect and respond to threats in real time. AI algorithms can analyse network traffic, identify anomalies and flag potential attacks. AI can also automate repetitive tasks, such as patch management and vulnerability scanning, freeing security teams to focus on more strategic initiatives.

AI-powered assistants can provide valuable support with tasks such as incident response, threat intelligence analysis and compliance management.

AI can also be used to develop more advanced threat detection and prevention technologies. For example, AI can be trained to recognise patterns of malicious behaviour that are difficult for humans to detect.
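The anomaly-flagging idea described above can be illustrated with a minimal sketch: a z-score test over per-interval traffic volumes. This is a toy baseline, not any vendor's method; the function name, threshold and sample data are illustrative assumptions, and real AI-driven tools replace the fixed threshold with learned models of normal behaviour.

```python
from statistics import mean, stdev

def flag_anomalies(byte_counts, threshold=2.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean (a simple z-score test)."""
    mu = mean(byte_counts)
    sigma = stdev(byte_counts)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing to flag
    return [i for i, b in enumerate(byte_counts)
            if abs(b - mu) / sigma > threshold]

# Illustrative per-minute byte counts: steady traffic with one spike
traffic = [1200, 1180, 1250, 1190, 98000, 1210, 1230]
print(flag_anomalies(traffic))  # [4] -- the spike stands out
```

The principle is the same whether the baseline is a fixed statistic, as here, or a trained model: flag what deviates from normal behaviour and hand it to an analyst.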
This can help security teams identify and stop attacks before they cause significant damage.

Regulatory challenges and ethical imperatives

Earlier this year, the Australian Government, via the Department of Industry, Science and Resources, announced its intention to implement a suite of mandatory safeguards for the development and deployment of high-risk AI use cases. It did so in its interim response to its 2023 consultation paper "Safe and Responsible AI in Australia", which set out its broad scheme for the regulation of AI in Australia.

While economy-wide regulation of AI and the potential establishment of a Voluntary AI Safety Standard are crucial for safeguarding ethical use, malicious actors may disregard these frameworks, further intensifying the race between defensive and offensive AI technologies. Businesses must therefore be aware of the regulatory landscape, but they will also need to put steps in place to address cybersecurity risks and privacy concerns, and to prevent the misuse of AI in spreading misinformation.

To stay ahead of the curve, organisations should adopt a proactive approach to cybersecurity. This involves implementing a comprehensive security strategy that encompasses all aspects of the digital infrastructure, from networks and endpoints to cloud environments.

Organisations should also foster a culture of security awareness and training among their employees, empowering them to recognise and report potential threats.
Indeed, by combining human expertise with AI-powered tools, organisations can build more resilient defences against cyber threats.

Ultimately, by understanding the capabilities and limitations of AI, organisations can take proactive steps to mitigate risks and ensure their cybersecurity resilience in an evolving threat landscape.

See Security In Action and what it means for Australian enterprises.

Sponsored by Check Point Software Technologies. This content has been funded by an advertiser and written by the Nine commercial editorial team.