Articles by: Rodrigo Fernández

Rodrigo Fernández

Rodrigo Fernández is a published author who writes and speaks about generative AI, agent security, and the hard truths of deploying these systems at scale. His work turns risk into method through evaluation, red teaming, policy design, and observability. He helps teams replace hype with metrics, align with frameworks such as the NIST AI Risk Management Framework and OWASP guidance, and ship AI that is measurable, reliable, and safe.

Use this detailed OpenAI AgentKit guide to plan, build, test, and ship AI agents using Agent Builder, Agents SDK, ChatKit, and Evals. Start building now.

Learn how red teaming uncovers vulnerabilities in AI agents and strengthens security, trust, and compliance across autonomous systems.

Learn how MCP authentication secures AI models by verifying identity, enforcing permissions, and preventing unauthorized context access with this guide.

Compare the best MCP scanners to secure agent workflows, covering static scans, runtime guardrails, approvals, and CI support. See the updated 2025 ranking.

Understand the OWASP Agentic AI Security Guidelines and learn how organizations can identify, mitigate, and govern emerging risks in autonomous AI systems.

Discover the best MCP Gateways of 2025, ranked by reliability, observability, and security for teams scaling AI agent infrastructure safely.

Learn how to deploy an AI agent securely and efficiently, from setup to lifecycle management, across cloud, hybrid, and on-premise environments.

Discover how to secure AI agents and use agents for security with a lifecycle model, best practices, and measurable KPIs for safer automation.

Discover the top AI gateways driving secure, scalable, and compliant AI operations. Compare features, performance, and governance trends in 2025.

Discover the 10 most critical MCP vulnerabilities, how they emerge, and the practical steps organizations can take to prevent them before they escalate.

Learn the most critical threats to autonomous AI, from identity spoofing to memory poisoning, and get practical mitigations to secure agents in production.

Compare leading tools to protect AI agents at runtime, with threat coverage, policy control, and observability to stop prompt attacks and unsafe tool use.

Multi-agent LLM systems often fail due to coordination debt, protocol drift, and looping. Benchmarks, failure modes, and a triage playbook for engineers.

Understand how agents shift risks from outputs to actions, and learn the runtime controls, identity checks, and observability to govern agent behavior.

Prevent data leaks from AI agents with fixes for APIs, memory, and tools. Add masking, context-aware access, and runtime monitoring to keep data safe.

Map the EU AI Act to agentic workflows: set system boundaries, enforce layered controls and oversight, and log immutable, audit-ready evidence.

Evaluate AI agent security platforms by runtime policy, tool least-privilege, DLP, and full traceability, aligned to OWASP, NIST AI RMF, and MITRE ATLAS.

Stop prompt injection in agentic AI with structural patterns, runtime policies, and tests mapped to OWASP LLM01 and MITRE ATLAS. Deploy safer workflows.

Choose AI agent building platforms with state control, typed tool calls, guardrails, and tracing. See who each suits, key tradeoffs, and a 14-day proof plan.
