AI Security Researcher building offensive and defensive tools for LLM and ML systems.
I publish datasets, break guardrails, and secure AI infrastructure. 25+ years across networking, security, cloud, and AI — now focused entirely on making AI systems harder to attack.
| Area | Focus | Examples |
|---|---|---|
| Attack Research | Original research on RAG poisoning, model extraction, and reasoning chain attacks | semantic-chameleon, hybrid-rag-pivot-attacks, Chain-of-Thought-Reasoning-Attacks |
| Security Tools | Red team frameworks, agent scanners, guardrail testers | red-sentinel, vulnerable-chat, atlas |
| Training Data | Security datasets for fine-tuning AI coding assistants | securecode (2,185 examples), securecode-aiml (750 examples) |
| Integrations | Securing LLMs with Palo Alto Networks Prisma AIRS | panw-unified-sdk, model-security-pipeline-integration, prisma-airs-n8n |
- Hardening RAG systems against retrieval-stage attacks (signature scan sketched below)
- Building detection signatures for prompt injection variants (first sketch below)
- CI/CD pipelines for ML model security scanning (second sketch below)
- Custom guardrail tuning: drove the attack success rate from 8.7% to 1.0% (third sketch below)
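
A minimal sketch of what the retrieval-stage hardening and signature work look like together: regex signatures applied to retrieved chunks before they reach prompt assembly. The patterns and the `scan_chunks` helper are illustrative assumptions, not the production signature set.

```python
import re

# Illustrative signatures for common prompt injection phrasings.
# A real signature set needs far broader coverage: encodings,
# homoglyphs, multilingual variants, instruction smuggling via markup.
INJECTION_SIGNATURES = [
    re.compile(r"ignore\s+(all\s+)?(previous|prior|above)\s+instructions", re.I),
    re.compile(r"(reveal|print|repeat)\s+(your\s+)?system\s+prompt", re.I),
    re.compile(r"you\s+are\s+now\s+in\s+developer\s+mode", re.I),
]

def scan_chunks(chunks: list[str]) -> list[tuple[int, str]]:
    """Flag retrieved chunks that match an injection signature.

    Returns (chunk_index, pattern) pairs so flagged chunks can be
    dropped or quarantined before the prompt is assembled.
    """
    hits = []
    for i, chunk in enumerate(chunks):
        for pattern in INJECTION_SIGNATURES:
            if pattern.search(chunk):
                hits.append((i, pattern.pattern))
    return hits
```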
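For the CI/CD scanning item, one common check is opcode-level scanning of pickled model artifacts before they are promoted. A sketch using the standard library's `pickletools`; the denylist and the exit-code convention are assumptions, not a fixed pipeline contract.

```python
import sys
import pickletools

# Modules whose appearance in a pickle stream usually signals an embedded
# code-execution payload. Denylist is illustrative, not exhaustive.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "socket", "sys"}

def scan_pickle(payload: bytes) -> list[str]:
    """Walk the pickle opcode stream and collect suspicious findings.

    Coarse by design: legitimate model pickles also import callables, so
    a real scanner correlates imports with the call sites they feed.
    """
    findings = []
    for opcode, arg, pos in pickletools.genops(payload):
        # GLOBAL imports a callable by "module name"; REDUCE calls it.
        if opcode.name == "GLOBAL" and arg:
            module = str(arg).split(" ", 1)[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"offset {pos}: GLOBAL {arg!r}")
    return findings

if __name__ == "__main__":
    findings = scan_pickle(open(sys.argv[1], "rb").read())
    print("\n".join(findings))
    # Non-zero exit fails the CI stage when anything is flagged.
    sys.exit(1 if findings else 0)
```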
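The 8.7% to 1.0% figure comes from replaying an attack corpus through the guardrail and measuring the bypass rate. A hedged sketch of that measurement loop; `guardrail_blocks` is a hypothetical stand-in for whatever classifier or policy engine is under test.

```python
def attack_success_rate(attacks: list[str], guardrail_blocks) -> float:
    """Fraction of attack prompts that slip past the guardrail.

    guardrail_blocks(prompt) -> bool is a hypothetical hook for the
    guardrail under test; success means the attack was NOT blocked.
    """
    bypasses = sum(1 for prompt in attacks if not guardrail_blocks(prompt))
    return bypasses / len(attacks) if attacks else 0.0

# e.g. 87 bypasses over a 1,000-prompt corpus gives 8.7%; tuning that
# drives bypasses down to 10 gives the 1.0% figure.
```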