Despite widespread use of AI, only 6% of organizations have implemented a comprehensive, AI-native security strategy
SandboxAQ released its inaugural AI Security Benchmark Report, revealing a major disconnect between enterprise AI adoption and cybersecurity readiness. While 79% of organizations are already using AI in production environments, only 6% have implemented a comprehensive, AI-native security strategy, leaving the vast majority of enterprises vulnerable to threats they are not yet equipped to detect or mitigate.
Based on a survey of more than 100 senior security leaders across the US and EU, the report highlights widespread concern about the risks AI introduces, from model manipulation and data leakage to adversarial attacks and the misuse of non-human identities (NHIs). Yet despite growing anxiety among CISOs, only 28% of organizations have conducted a full AI-specific security assessment, and most are still relying on traditional, rule-based tools that were never designed to address dynamic, machine-speed systems.
Key findings include:
- Only 6% of organizations have implemented AI-native security protections across both IT and AI systems.
- 74% of security leaders are highly concerned about AI-enhanced cyberattacks, and 69% are highly concerned about AI uncovering new vulnerabilities in their environments.
- Just 10% of companies have a dedicated AI security team; in most organizations, responsibility falls to traditional IT or security teams.
The rise of NHIs, which include autonomous AI agents, services, and machine accounts, has further complicated the security landscape. These systems often operate independently, holding and exchanging cryptographic credentials, accessing sensitive resources, and interacting with other software without human oversight. Most security teams lack visibility into these entities or control over their behavior, undermining core principles of Zero Trust and exposing gaps in identity governance and cryptographic hygiene.
The report's findings mirror what SandboxAQ has observed across large-scale cryptographic environments and AI deployments: enterprises are struggling to extend core security practices like automated inventory, visibility, and policy enforcement to the identities and assets that AI systems depend on. Through solutions like AQtive Guard, enterprises can modernize cryptographic and identity governance in this new layer of infrastructure with the same urgency they once applied to traditional IT.
"This isn't just a solution gap, it's a conceptual one," said Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ. "AI is radically changing the cybersecurity paradigm at an unprecedented pace. This report highlights a growing recognition among security leaders that defending against evolving threats requires new assumptions and approaches, not just new layers or patches on top of existing tooling."
Despite these gaps, investment is accelerating. Eighty-five percent of organizations plan to increase AI security spending in the next 12 to 24 months, with a quarter planning significant increases. Areas of focus include protecting training data and inference pipelines, securing non-human identities, and deploying automated incident response capabilities tailored to AI-driven infrastructure.
