Taking Operational Risk to Resilience with Emerging AI Systems: Gartner
ARN
Details
- Date Published
- 16 Apr 2026
- Priority Score
- 3
- Australian
- Yes
- Created
- 18 Apr 2026, 06:00 am
Description
Protecting against the risks brought forward by generative artificial intelligence (GenAI) and agentic AI requires organisations to actively design controls, rules, and oversight around how AI is used. This comes as Gartner predicts that by 2028, 25 per cent of all enterprise GenAI applications will experience at least five minor security incidents per year, up from 9 […]
Summary
Agentic AI and generative AI systems introduce novel attack vectors and systemic risks that require a fundamental shift from static security controls to operational resilience. Gartner analysts warn that by 2029, 15% of enterprise GenAI applications will face major security incidents, partly due to the 'black box' nature of models and the autonomous capabilities of agentic systems. This analysis emphasizes the need for failure-mode design thinking and domain-expert defined guardrails to mitigate risks as AI moves from passive content generation to independent action. Such shifts in governance are critical for managing the potential for unintended escalations or catastrophic failures within interconnected digital infrastructures.
Body
From operational risk to resilience with emerging AI systems
Credit: Luke Ellery (Gartner)
Protecting against the risks brought forward by generative artificial intelligence (GenAI) and agentic AI requires organisations to actively design controls, rules, and oversight around how AI is used.
This comes as Gartner predicts that by 2028, 25 per cent of all enterprise GenAI applications will experience at least five minor security incidents per year, up from 9 per cent in 2025.
Both GenAI and agentic AI are hard to control and risky – but for different reasons, said Gartner vice president analyst Luke Ellery.
According to Ellery, GenAI is like a “black box” in that how it makes decisions can’t be seen clearly. With vendors usually not taking legal responsibility for what it produces, risk management becomes difficult.
Agentic AI (AI that can act on its own or take actions for users) shifts responsibility more onto the organisation using it, because its behaviour is driven by how it is set up and used.
As organisations continue to build and integrate agentic AI applications using technologies such as model context protocol (MCP), newer attack vectors and immature security practices will significantly elevate risk exposure.
“MCP was built for interoperability, ease of use and flexibility first, so security mistakes can manifest without continuous oversight for agentic AI,” said Gartner director analyst Aaron Lord.
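One way to picture that continuous oversight, as a minimal sketch rather than a real MCP integration (all tool and function names below are hypothetical): every tool call an agent attempts is checked against an explicit allowlist and logged before it executes, so an over-permissive connector cannot be invoked silently.

```python
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.oversight")

# Tools the organisation has explicitly reviewed and approved.
APPROVED_TOOLS: dict[str, Callable[..., Any]] = {
    "search_docs": lambda query: f"results for {query!r}",  # stand-in tool
}

def call_tool(name: str, **kwargs: Any) -> Any:
    """Execute an agent-requested tool only if it is on the allowlist."""
    if name not in APPROVED_TOOLS:
        log.warning("blocked unapproved tool call: %s %s", name, kwargs)
        raise PermissionError(f"tool {name!r} is not approved")
    log.info("tool call: %s %s", name, kwargs)  # audit trail for later review
    return APPROVED_TOOLS[name](**kwargs)

print(call_tool("search_docs", query="MCP security"))  # allowed and logged
# call_tool("delete_records", table="customers")  # would be blocked and logged
```

The same chokepoint can carry rate limits or argument validation; the point is that approval and audit sit outside the agent, not inside its prompt.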
“The rate of minor security incidents within GenAI applications is set to accelerate.
“We will eventually see 15 per cent of all enterprise GenAI applications experience at least one major security incident per year by 2029, up from 3 per cent in 2025.”
The increase in security incidents within GenAI applications reflects a wider trend in how organisations think about risk, Ellery noted.
“As systems become more connected and rely on more external tools and providers, there are simply more points where things can go wrong,” he said. “There’s a lot more focus on the user from a risk perspective. The control and risk frameworks really need to be built around them to minimise risk or to manage risk.”
That risk pattern follows the broader shift over the past few years toward operational resilience in third-party risk management.
“That started during the pandemic, when organisations had done all of this risk work but realised that bad things sometimes happen to vendors anyway,” said Ellery, adding that organisations have started to look at how to be more resilient.
“Also, organisations suffered in quite a few third-party breaches. That encouraged them to put mechanisms in place to be more resilient, and to identify the key points of failure that impact them the most.”
Keeping it safe
This is why establishing rigorous security review processes, prioritising low-risk use cases, mitigating known threat patterns, and empowering domain experts to define guardrails that keep GenAI and agentic AI both powerful and safe is paramount, Ellery added.
These systems also need failure-mode design thinking applied to model behaviour and autonomous actions, not just infrastructure.
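As a minimal sketch of what that can look like in practice (the generate() function below is a hypothetical stand-in for a real model call, and the guardrail values are illustrative): domain-expert rules are encoded as explicit checks on model output, and anything that fails them degrades to human review instead of flowing downstream.

```python
import re

# Guardrails a domain expert might define for a customer-facing reply.
MAX_LENGTH = 500
FORBIDDEN = re.compile(r"\b(refund guaranteed|legal advice)\b", re.IGNORECASE)

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real GenAI model call."""
    return "Here is a draft response to your question."

def safe_generate(prompt: str) -> tuple[str, bool]:
    """Return (text, approved); unapproved text is routed to a human."""
    text = generate(prompt)
    if len(text) > MAX_LENGTH or FORBIDDEN.search(text):
        # Anticipated failure mode: degrade to manual handling, not silence.
        return text, False
    return text, True

text, approved = safe_generate("How do I reset my password?")
if not approved:
    print("Escalating to human review:", text)
```

The design choice is that the failure path is planned at build time, so a bad output triggers a known fallback rather than an incident.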
“The key thing is that security controls are there to help an organisation be resilient, but it is sometimes missed that cyber security sits under operational resilience,” said Ellery. “Ideally, CISOs should be thinking about how they can maintain operational resilience. One [option] is the technical.
“The second is having redundancy. With identity and access management, for example, they might be able to place some of that in their demilitarised zone with their provider, which gives optionality.”
They then have to work with their provider to set up that resiliency so that if their cloud services go down or they lose connectivity, they are not impacted, noted Ellery.
“They want services there, or they might have a cold standby, so they might have VPNs [virtual private networks] that let them connect directly to services,” he said. “[Although] that costs a bit of money to keep them up all the time.
“They can spin these up so that staff can still access the resources that they need.”
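A minimal sketch of that optionality, with hypothetical endpoints: authentication tries the provider-hosted identity service first and falls back to a standby instance in the organisation’s demilitarised zone, so losing the provider or connectivity does not lock staff out.

```python
import urllib.request

# Hypothetical endpoints: primary provider-hosted IdP, standby in the DMZ.
IDP_ENDPOINTS = [
    "https://idp.provider.example/token",
    "https://idp.dmz.internal/token",
]

def authenticate(credentials: bytes) -> bytes:
    """Try each identity endpoint in order; return the first token issued."""
    last_error = None
    for endpoint in IDP_ENDPOINTS:
        try:
            with urllib.request.urlopen(endpoint, data=credentials, timeout=5) as resp:
                return resp.read()  # token from whichever IdP answered
        except OSError as err:  # URLError subclasses OSError
            last_error = err  # endpoint unreachable; try the standby
    raise ConnectionError("all identity providers unreachable") from last_error
```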
The third option is people: training and scenario testing.
“A good example of this is vendor management teams using external parties to help with scenarios and run scenario role plays,” said Ellery. “Also, when events occur, doing post-incident reviews to learn from them.”
This helps to build a culture where, if an incident happens, rather than being caught in a fight-or-flight response, cyber security teams go straight into problem solving: working as a team to figure it out, and escalating the problem straight away to the right people so that broader programs around enterprise risk management and operational resilience at a business level are invoked.
For managed service providers working with cyber security leaders, ensuring a level of transparency and working together can build trust. That might be one of those areas incorporated into the account management and vendor management relationships that organisations have, in addition to reporting.
“One of the hardest things that CISOs struggle with is actually getting insight into whether the controls are actually in place or not,” added Ellery. “That can feel quite vulnerable for those organisations to actually enact, because it requires that transparency and maybe revealing when things aren’t perfect.”
Security | Artificial Intelligence