Skip to content
View scthornton's full-sized avatar

Block or report scthornton

Block user

Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.

You must be logged in to block users.

Maximum 250 characters. Please don't include any personal information such as legal names or email addresses. Markdown supported. This note will be visible to only you.
Report abuse

Contact GitHub support about this user’s behavior. Learn more about reporting abuse.

Report abuse
scthornton/README.md

Scott Thornton

AI Security Researcher building offensive and defensive tools for LLM and ML systems.

I publish datasets, break guardrails, and secure AI infrastructure. 25+ years across networking, security, cloud, and AI — now focused entirely on making AI systems harder to attack.

What I Build

Area What Examples
Attack Research Original research on RAG poisoning, model extraction, and reasoning chain attacks semantic-chameleon, hybrid-rag-pivot-attacks, Chain-of-Thought-Reasoning-Attacks
Security Tools Red team frameworks, agent scanners, guardrail testers red-sentinel, vulnerable-chat, atlas
Training Data Security datasets for fine-tuning AI coding assistants securecode (2,185 examples), securecode-aiml (750 examples)
Integrations Securing LLMs with Palo Alto Networks Prisma AIRS panw-unified-sdk, model-security-pipeline-integration, prisma-airs-n8n

Current Focus

  • Hardening RAG systems against retrieval-stage attacks
  • Building detection signatures for prompt injection variants
  • CI/CD pipelines for ML model security scanning
  • Custom guardrail tuning — drove attack success rate from 8.7% to 1.0%

Stats

Connect

LinkedIn Blog

Pinned Loading

  1. securecode-web securecode-web Public

    Real-World Security Conversations for AI Training

    Python 7

  2. Chain-of-Thought-Reasoning-Attacks Chain-of-Thought-Reasoning-Attacks Public

    Breaking Chain-of-Thought: A Comprehensive Taxonomy of Reasoning Vulnerabilities in Production AI Systems

    Jupyter Notebook 2

  3. semantic-chameleon semantic-chameleon Public

    Dual-Stage Temporal Poisoning Attack on RAG Systems

    Python 2

  4. vulnerable-chat vulnerable-chat Public

    🚨 Intentionally vulnerable AI chatbot for Prisma AIRS Red Teaming testing. Docker-containerized Flask app with OWASP Top 10 LLM vulnerabilities (prompt injection, data leakage, jailbreaks). FREE mo…

    Python 2

  5. hybrid-rag-pivot-attacks hybrid-rag-pivot-attacks Public

    Retrieval Pivot Attacks in Hybrid RAG (Vector → Graph)

    Python 2

  6. panw-unified-sdk panw-unified-sdk Public

    Unified Python SDK for Palo Alto Networks AI Security — wraps AIRS Runtime API (text/prompt scanning) and WildFire API (file/malware analysis) behind a single interface with smart content routing a…

    Python 1