Cisco AI Defence to Help Partners Navigate Security Challenges
ARN
Details
- Date Published
- 28 Jan 2025
- Priority Score
- 3
- Australian
- Yes
- Created
- 8 Mar 2025, 02:41 pm
Description
Cisco is urging channel partners to develop competencies in AI security, ethical AI implementation and regulatory compliance, to align with the vendor’s recent launch of Cisco AI Defence. The product was designed to keep pace with advances in artificial intelligence, combining the ability to detect and protect against threats when developing and accessing AI applications, without trade-offs. […]
Summary
Cisco has introduced its AI Defence product to enhance security for AI applications. The initiative aims to address trust and security concerns by integrating with Cisco's existing infrastructure to provide advanced threat intelligence and comprehensive security across AI lifecycles. Emphasizing the importance of partner involvement, the product aims to facilitate secure and responsible AI deployment, addressing potential vulnerabilities and compliance issues. This development holds significant relevance for global companies seeking to protect AI systems from unauthorized access and ensure ethical implementation.
Body
Cisco is urging channel partners to develop competencies in AI security, ethical AI implementation and regulatory compliance, to align with the vendor’s recent launch of Cisco AI Defence.
The product was designed to keep pace with advances in artificial intelligence, combining the ability to detect and protect against threats when developing and accessing AI applications, without trade-offs.
Rodney Hamill, managing director, partner and routes to market at Cisco Australia and New Zealand, told ARN that Cisco AI Defence was integrated into the fabric of the network, leveraging Cisco’s extensive network visibility and control capabilities.
With this newly released product, Cisco partners can play a crucial role in helping customers build, operate and secure their AI environments.
“Trust and security are significant concerns for the deployment of AI in enterprises, stemming from several key factors such as data vulnerability, lack of transparency, bias and discrimination, privacy concerns, operational risks, and regulatory compliance,” said Hamill.
AI Defence also provides advanced threat intelligence by integrating with Cisco Talos, one of the largest commercial threat intelligence teams globally. This, coupled with machine learning models and algorithmic security validation, provides a unique offering in the market.
“This integration allows for comprehensive visibility across distributed cloud environments, enforcement of security policies at the network level and seamless integration with existing Cisco security products and infrastructure,” said Hamill.
Hamill reiterated that partners were essential in addressing these concerns and ensuring the responsible and secure deployment of AI technologies.
“Partners who excel in this area will not only influence the adoption of AI but also facilitate its acceleration,” he said.
Unlike point solutions, AI Defence provides comprehensive security across the entire AI lifecycle, from the development and deployment of AI applications to runtime protection for AI models in production and ongoing monitoring and validation of AI systems.
Partners can leverage the solution in a number of ways: developing expertise in AI security through Cisco’s training programs, integrating AI Defence into existing Cisco security offerings, providing consulting and managed security services, and educating customers on AI security challenges and solutions.
“Partners can differentiate themselves in the market, address the growing demand for AI security, and establish themselves as trusted advisors in customers’ AI transformation journeys,” explained Hamill.
Cisco also offers several training programs and certifications to help partners stay ahead in deploying and managing secure AI solutions, including AI Fundamentals for Partners, AI-Ready Infrastructure Solution Specialisation and the Cisco 360 Partner Program, which will launch in February 2026 with an initial US$80 million investment.
Hamill noted that the stakes of something going wrong with AI are incredibly high. According to Cisco's 2024 AI Readiness Index, only 29 per cent of those surveyed feel fully equipped to detect and prevent unauthorised tampering with AI.
“The security challenges are also new and complex, with AI applications being multi-model and multi-cloud,” he said. “Vulnerabilities can occur at model or app level, while responsibility lies with different owners including developers, end users and vendors.
“As enterprises move beyond public data and begin training models on proprietary data, the risks only grow.”