Articles by: Rodrigo Fernández

Rodrigo Fernández

Rodrigo Fernández is a published author who writes and speaks about generative AI, agent security, and the hard truths of deploying these systems at scale. His work turns risk into method through evaluation, red teaming, policy design, and observability. He helps teams replace hype with metrics, align with frameworks such as the NIST AI Risk Management Framework and OWASP guidance, and ship AI that is measurable, reliable, and safe.

Discover the 10 most critical MCP vulnerabilities, how they emerge, and the practical steps organizations can take to prevent them before they escalate.

Learn the most critical threats to autonomous AI, from identity spoofing to memory poisoning, and get practical mitigations to secure agents in production.

Compare leading tools to protect AI agents at runtime, with threat coverage, policy control, and observability to stop prompt attacks and unsafe tool use.

Multi-agent LLM systems often fail due to coordination debt, protocol drift, and looping. Get benchmarks, failure modes, and a triage playbook for engineers.

Understand how agents shift risks from outputs to actions, and learn the runtime controls, identity checks, and observability needed to govern agent behavior.

Prevent data leaks from AI agents with fixes for APIs, memory, and tools. Add masking, context-aware access, and runtime monitoring to keep data safe.

Map the EU AI Act to agentic workflows: set system boundaries, enforce layered controls and oversight, and log immutable, audit-ready evidence.

Evaluate AI agent security platforms by runtime policy, tool least-privilege, DLP, and full traceability, aligned to OWASP, NIST AI RMF, and MITRE ATLAS.

Stop prompt injection in agentic AI with structural patterns, runtime policies, and tests mapped to OWASP LLM01 and MITRE ATLAS. Deploy safer workflows.

Choose AI agent building platforms with state control, typed tool calls, guardrails, and tracing. See who each platform suits, key tradeoffs, and a 14-day proof plan.
