Summary
IBM Consulting is advancing AI use in cybersecurity with its Cybersecurity Assistant solution, which leverages generative and agentic AI to streamline security operations and mitigate the increasing stress on IT security professionals. This technology allows AI to handle routine tasks autonomously, freeing human analysts to focus on more sophisticated threats. The discussion emphasises the importance of demonstrating AI's value through use cases, particularly in critical infrastructure sectors, to boost client comfort and trust. Additionally, IBM stresses the necessity of ethical and governance frameworks and ongoing training in AI and cybersecurity to ensure secure adoption of AI technologies. The article indirectly touches on the relevance of AI safety by discussing the reduction of human error and improvement of efficiency in cybersecurity, which aligns with global AI governance and safety efforts.
Body
Enterprise organisations may need help becoming comfortable with using artificial intelligence (AI) to mitigate the multitude of challenges posed by the evolving landscape of cyber incidents and the increasing stress placed on cyber security teams.
According to Gartner’s Predicts 2025: Privacy in the age of AI and the dawn of quantum report, more than 40 per cent of AI-related data breaches will be caused by the improper use of generative AI across borders.
However, this comes at a time when 66 per cent of IT security professionals said their job was more stressful than it was five years ago, according to a survey from the Information Systems Audit and Control Association (ISACA).
This is where generative AI and agentic AI could be the solution to these challenges, said IBM Consulting cyber security services leader Richa Arora.
Done with the right data sets and built on existing AI, this can look like IBM Consulting's Cybersecurity Assistant solution, built on the watsonx data and AI platform. Released in August 2024, it has since been used by IBM Consulting analysts to streamline security operations for clients.
The IBM Consulting Cybersecurity Assistant is designed to improve the identification and investigation of critical security threats and the response to them, streamlining security operations for clients and making the jobs of cyber security professionals more efficient so they can focus on more advanced threats.
“If generative AI systems can recognise patterns and determine whether an alert is worth escalating, they can eliminate the repetitive tasks that analysts face,” she said. "Agentic AI can handle mundane, repetitive tasks faster, allowing analysts to focus on more critical issues.
"Our assets are built on the watsonx platform for threat detection and advanced threat dispositioning. Once we get that fine-tuned for our clients we move to investigations and agentic AI."
Agentic AI takes generative AI one step further by making operational decisions, making it an ideal tool to detect and respond to threats, Arora explained. “When we talk about agentic AI, we are talking about an agent,” she said. “We're not just referring to a chat bot where you're having almost a conversation in English, which we would call an assistant.”
This type of AI is autonomous and actively takes over certain tasks, making recommendations, analysing data and even deciding what actions to take without human intervention, unless necessary, Arora said. When an agentic AI system recognises patterns and determines whether an alert is worth escalating, it can eliminate the repetitive tasks that analysts face.
“This frees [security teams] up to do more complex analysis, skill development and advanced threat hunting,” she said. “With agentic AI solutions, they can decide which alerts need to be escalated.”
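The triage behaviour described above can be pictured as a simple decision loop. The sketch below is purely illustrative: the alert fields, thresholds and routing labels are hypothetical, not IBM's actual Cybersecurity Assistant logic.

```python
# Illustrative sketch of agentic alert triage: known-benign, low-severity
# alerts are handled autonomously, while novel or critical ones are
# escalated to a human analyst. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int       # 1 (low) .. 10 (critical)
    seen_before: bool   # matches a known benign pattern

def triage(alert: Alert, escalation_threshold: int = 7) -> str:
    """Decide whether an alert needs a human analyst."""
    if alert.seen_before and alert.severity < escalation_threshold:
        return "auto-close"    # repetitive, known-benign: handled by the agent
    if alert.severity >= escalation_threshold:
        return "escalate"      # critical: hand to an analyst
    return "investigate"       # agent gathers more context first

alerts = [
    Alert("firewall", 3, True),
    Alert("endpoint", 9, False),
    Alert("email-gateway", 5, False),
]
print([triage(a) for a in alerts])  # ['auto-close', 'escalate', 'investigate']
```

The key point is the middle branch: only alerts above the threshold reach an analyst, which is what "eliminating the repetitive tasks" amounts to in practice.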
Making clients comfortable with AI
However, the question remains whether industries are "fully ready for this technology". Comfort levels around AI vary among IBM Consulting's clients, largely depending on each organisation's risk appetite. Some clients are comfortable with AI because they trust a partner, like IBM Consulting, that they have worked with for years.
“We provide them with security operations services and we continuously innovate, helping them adapt to new AI solutions," said Arora. "For example, we're working with clients in utilities and critical infrastructure, where we show the value of these systems by running them alongside existing processes, demonstrating that the output is consistent, if not better."
It’s crucial to demonstrate the effectiveness of AI tools with use cases, especially since AI is often used as just a buzzword, and organisations like IBM Consulting don't want to add to the hype around it, explained Arora.
"I was in Perth yesterday presenting to two clients in critical infrastructure," she said. “These clients already use our security services and we showed them a proof of value for agentic AI. We also demonstrated how our system handles advanced threat disposition and presented the system’s recommendations for actions, allowing clients to see what decisions the AI would make.”
According to Arora, the feedback was positive: the clients saw the value and were eager to move forward after seeing a showcase of "real proof of value", which helped them understand the tangible benefits of AI.
IBM Consulting has been using AI technology for decades and is experienced in guiding its clients to help them understand whether the actions of its products “are traceable, explainable and trustworthy”.
“To ensure our clients feel comfortable, we are inviting them to be early adopters,” Arora said. “We’re encouraging them to start with specific tasks, like analysis or investigations, before expanding to full response tasks. We're implementing a 'human in the loop' approach to assist with responses.”
Don’t take the guardrails off
Looking ahead three to five years, Arora predicted more hyper-automated environments as AI and quantum computing converge, a shift that will force people to upskill constantly.
Last year brought a lot of hype; this year, AI is moving out of the garage, from implementing promising concepts to scaling them for tasks with recognisable benefits. Over the next three to five years, the industry will see hyper-automated environments emerge as workloads shift from on-premises to cloud security.
"With agentic AI making things easier and more responsive from a security perspective, people will adopt and scale it, rather than simply extending workforces as a solution. So, I think hyper-automation is essentially where we're headed," said Arora. "Then, of course, there's the converging point between AI, quantum, cyber skills and the blending of these concepts. We’ll need to figure out how to respond to the looming threats with the help of agentic AI."
This would also mean taking a standards-based approach with ethics and governance frameworks in place.
“There’s a lot of innovation happening and models are being created and updated quickly,” Arora explained. “But as long as we ground these efforts in what’s feasible and secure, we can keep moving forward. We need to invest more in zero-trust architecture and governance solutions for AI,” she said. “It’s important to have clear guardrails and policies about what tools and technologies are acceptable.
“If you’re scaling AI solutions, what standards do you need to adhere to? We need the same principles we used before, just applied to AI. That's how we help clients adopt AI safely.”
Training is also vital for the workforce to become comfortable with these technologies so they can confidently use them and be trusted “to do the right thing”.
“You need to upskill the workforce because AI solutions are already here and the best thing we can do is train people on cyber security, agentic AI and AI-based solutions,” she added. "IBM Consulting trains people not only on how to sell these solutions but also to become at least level three experts in using AI at a technical level."