✨ Fully autonomous AI Agents system capable of performing complex penetration testing tasks
Updated Feb 28, 2026 · Go
HexStrike AI MCP Agents is an advanced MCP server that lets AI agents (Claude, GPT, Copilot, etc.) autonomously run 150+ cybersecurity tools for automated pentesting, vulnerability discovery, bug bounty automation, and security research. Seamlessly bridge LLMs with real-world offensive security capabilities.
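A minimal sketch of the pattern such a server relies on: a registry maps tool names to command templates, and a dispatcher validates arguments before running anything. This is an illustrative assumption, not HexStrike's actual API; the tool names and templates here are hypothetical.

```python
import shlex
import subprocess

# Hypothetical registry mapping tool names to command templates.
# A real MCP server would also expose JSON schemas for each tool.
TOOL_REGISTRY = {
    "nmap_scan": "nmap -sV {target}",
    "whois_lookup": "whois {target}",
}

def build_command(tool: str, target: str) -> list[str]:
    """Resolve a registered tool into an argv list, rejecting unknown tools."""
    if tool not in TOOL_REGISTRY:
        raise KeyError(f"unknown tool: {tool}")
    # shlex.quote keeps an agent-supplied target from injecting shell syntax
    return shlex.split(TOOL_REGISTRY[tool].format(target=shlex.quote(target)))

def run_tool(tool: str, target: str) -> str:
    """Execute the tool and return captured stdout (stub for a real server)."""
    cmd = build_command(tool, target)
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    return result.stdout
```

The allow-list plus argument quoting is the key design choice: the LLM can only select from registered tools, never compose raw shell strings.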
PentestAgent is an AI agent framework for black-box security testing, supporting bug bounty, red-team, and penetration testing workflows.
CyberStrikeAI is an AI-native security testing platform built in Go. It integrates 100+ security tools, an intelligent orchestration engine, role-based testing with predefined security roles, a skills system with specialized testing skills, and comprehensive lifecycle management capabilities.
AI Red Teaming Range
The CoSAI Risk Map is a framework for identifying, analyzing, and mitigating security risks in Artificial Intelligence systems. As traditional software security practices are not always sufficient for AI, this project provides a shared understanding and a common language for addressing the unique security challenges of the AI development lifecycle.
MCP Security Solution for Agentic AI — real-time proxying, behavior analysis, and malicious tool detection
A comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.
Build secure MCP infrastructure to audit and control every data access by AI agents with minimal effort
The first open-source static analyzer purpose-built for AI agent code. Maps findings to OWASP Agentic Top 10 (2026). 40+ rules. Taint analysis. MCP config auditing.
Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.
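The core idea of a retrieval firewall can be sketched in a few lines: screen each retrieved chunk for injection phrases and secret-shaped strings before it reaches the model's context window. The patterns below are illustrative assumptions, not the project's actual rule set.

```python
import re

# Illustrative screening patterns; a production firewall would use a much
# larger, maintained rule set plus re-ranking of stale or untrusted content.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now", re.I),
]
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key id shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def screen_chunks(chunks: list[str]) -> list[str]:
    """Return only chunks that trip neither injection nor secret patterns."""
    safe = []
    for chunk in chunks:
        if any(p.search(chunk) for p in INJECTION_PATTERNS + SECRET_PATTERNS):
            continue  # drop the chunk instead of forwarding it to the model
        safe.append(chunk)
    return safe
```

Running this client-side, before retrieval results are serialized into the prompt, is what keeps all data inside the caller's environment.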
Jibril: a performant, low-impact Linux runtime security agent.
Next-Gen Secret Scanner powered by Local AI (Ollama). Filters false positives by understanding code context.
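A sketch of the two-stage approach such a scanner implies: a broad regex finds candidate tokens, then a second pass judges them in context. Here a Shannon-entropy check stands in for the LLM-based filter the description mentions; the regex and threshold are assumptions for illustration only.

```python
import math
import re

# Broad candidate pattern: assignment of an 8+ character token to a
# key/token/secret-named variable. Intentionally loose; the second
# stage is what suppresses false positives.
CANDIDATE = re.compile(
    r"(?:api[_-]?key|token|secret)\s*[=:]\s*['\"]?([A-Za-z0-9_\-]{8,})"
)

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random keys score above real words."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan(text: str, min_entropy: float = 3.5) -> list[str]:
    """Return candidate secrets whose entropy suggests real key material."""
    return [m for m in CANDIDATE.findall(text) if shannon_entropy(m) >= min_entropy]
```

A placeholder like `secret: changeme1` matches the regex but scores low entropy and is discarded, which is the false-positive filtering the local model would generalize with full code context.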
AIDEFEND MCP is a local-first AI Security Defensive Assistant that brings the full AIDEFEND countermeasure library into your environment and turns static knowledge into actionable protection for LLMs and agentic AI systems — privately, securely, and on-device.
Local open-source dev tool to debug, secure, and evaluate LLM agents. Provides static analysis, dynamic security checks, and runtime monitoring - integrates with Cursor and Claude Code.
Contexi lets you interact with your entire codebase or data, with context, using a local LLM on your system.
Security Dashboard for OpenClaw AI Agents - intercept, monitor, and control what OpenClaw does on your system.
Comprehensive LLM AI Model protection | Protect your production GenAI LLM applications | cybersecurity toolset aligned with addressing OWASP vulnerabilities in Large Language Models - https://genai.owasp.org/llm-top-10/
Comprehensive LLM protection toolset aligned with addressing OWASP vulnerabilities - https://genai.owasp.org/llm-top-10/
Repository for machine-readable AI system cards