Articles by Lara Iglesias

Lara Iglesias is a Computer Science and Artificial Intelligence student with a strong interest in the intersection of AI, security, and innovation. Her work centers on the practical challenges of building reliable and responsible AI.

- [News] OpenAI Introduces GPT-5.4 Cyber: The Rise of Defensive, Domain-Specific AI
  OpenAI launches GPT-5.4 Cyber, a defensive AI model for cybersecurity. Explore its capabilities, risks, and impact on enterprise security.

- [Glossary] The Rise of Agent IAM
  AI agents are entering enterprise systems without identity. Discover why Agent IAM is becoming the biggest security challenge of 2026.

- [Posts] Memory Is the New Attack Surface in AI Agents
  AI agent memory creates a new persistent attack surface. Discover how memory poisoning, context drift, and long-term state risks impact enterprise AI security.

- [Posts] Why AI Agents Need a Kill Switch
  Why AI agents need kill switches. Explore the risks of autonomous systems without control and what enterprises must implement before deployment.

- [Posts] When AI Agents Talk to Each Other: The New Security Risks of Multi-Agent Systems
  Explore the emerging security risks of multi-agent AI systems and how agent-to-agent interactions introduce new attack surfaces in enterprise environments.

- [Benchmarks] What the 2026 OWASP GenAI Data Security Guide Means for Enterprise AI Security
  OWASP's 2026 GenAI data security guide reveals the top enterprise AI data risks and how organizations can mitigate them as GenAI moves into production.

- [Events] AI Agent Security Takes RSAC 2026
  AI agent security takes center stage at RSAC 2026 as enterprises address new risks from autonomous systems and shift toward runtime governance and data control.

- [News] Inside the McKinsey AI Chatbot Hack: How an Autonomous Agent Gained Read-Write Access
  An autonomous AI agent hacked McKinsey's internal chatbot in two hours. Learn how the vulnerability worked and what it reveals about enterprise AI security risks.

- [Posts] The Hidden Risk of Alignment Faking in Enterprise Systems
  Alignment faking in AI creates hidden enterprise risk. Learn how deceptive model behavior bypasses safety checks and exposes new attack surfaces.

- [Benchmarks] Top 10 Guardian Agents for Securing Enterprise AI Systems in 2026
  Explore the top 10 Guardian Agents securing enterprise AI in 2026. Compare runtime governance, AI agent protection, and control platforms.

- [Posts] Why Your Enterprise Cannot Treat AI Agents Like Traditional IT Assets
  AI agents are not traditional IT assets. Discover why enterprises need runtime governance and behavioral controls to secure autonomous systems.

- [Posts] Preventing Shadow AI Agents in Your Company: A Security Framework for Enterprise AI Governance
  Prevent shadow AI agents in your company with strong AI agent security, governance controls, runtime monitoring, and enterprise-wide visibility.

- [Events] MWC 2026 Highlights the Rise of AI Agent Security in Enterprise AI
  MWC 2026 spotlights the rise of AI agent security as autonomous AI scales across enterprise systems, with NeuralTrust winning Digital Horizons.

- [Posts] The CISO Checklist for Securing Enterprise AI Agents
  AI agents create a new enterprise attack surface. Discover the CISO checklist for governing, securing, and monitoring autonomous systems at scale.

- [Posts] Cursor Security Risks and the Expanding Attack Surface of AI-Driven Development
  Cursor security risks are growing as AI-driven development expands. Learn how AI coding agents reshape the enterprise attack surface.

- [Posts] Agent Forensics: How to Investigate Incidents in Autonomous AI Systems
  AI agent incidents break traditional IR. Learn agent forensics to trace decisions, audit memory and tools, and prove what happened and why.

- [Glossary] Why RBAC Is Not Enough for AI Agents
  RBAC was built for humans, not autonomous AI agents. Learn why static permissions fail and how runtime authorization secures agentic systems.

- [Posts] Memory Poisoning in Autonomous AI Agents
  Discover how memory poisoning attacks corrupt autonomous AI agents' long-term memory, causing persistent misbehavior and bypassing traditional security defenses.

- [News] Moltbook: The AI-Only Social Network and Its Risks
  Moltbook, an AI-only social network powered by OpenClaw agents, is trending and exposing critical security and governance gaps in autonomous ecosystems.

- [Posts] Why 2026 Is the Year of AI Agents
  Discover why 2026 is the breakout year for AI agents in enterprise. Learn about adoption trends, security risks, and how to deploy autonomous agents safely.