# I Built an AI Bug Bounty Hunter. Here's What It Actually Found.

- **Category:** AI
- **Date:** 2026-02-24
- **Read time:** 8 min

Most AI demos are impressive until you ask "does it actually work?" I built an autonomous bug bounty hunting system — one command, six specialist AI agents, real vulnerabilities found on real programs. Then I open-sourced it.

## The architecture

A single Claude Opus orchestrator delegates to six specialist agents, each with restricted tool access: Scope Analyzer, Recon Agent, Web Vuln Agent, API Vuln Agent, Source Review Agent, and Report Writer. Built on the Claude Agent SDK with 19 custom MCP tools across 6 servers.
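The load-bearing idea is that each specialist's tool access is data, not prompt text: the orchestrator can only hand an agent the tools on its allowlist. A minimal sketch of that registry (agent and tool names here are illustrative, not the project's actual identifiers):

```python
# Per-agent tool allowlists: restriction is data the runtime enforces,
# not a polite request buried in the system prompt.
AGENT_TOOLS = {
    "scope-analyzer": ["read_scope", "parse_policy"],
    "recon":          ["dns_lookup", "http_probe", "subdomain_enum"],
    "web-vuln":       ["http_probe", "browser_fetch"],
    "api-vuln":       ["http_probe", "openapi_parse"],
    "source-review":  ["fetch_source_map", "grep_source"],
    "report-writer":  ["read_findings", "write_report"],
}

def allowed(agent: str, tool: str) -> bool:
    """True only if the tool appears on the agent's allowlist."""
    return tool in AGENT_TOOLS.get(agent, [])

# The recon agent may probe hosts but can never write the report:
print(allowed("recon", "http_probe"))    # True
print(allowed("recon", "write_report"))  # False
```

The real system wires this into the Claude Agent SDK's per-agent tool configuration, but the enforcement shape is the same: an unknown agent or an unlisted tool simply resolves to "not allowed."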

## What it found

On the first successful run against a real HackerOne program: 30 findings (7 medium, 16 low, 7 informational) in 22 minutes and 36 turns, for $8.91 in API cost. We're not naming the program or specific endpoints; findings were reported through proper disclosure and some are still in triage. Categories included exposed source maps, endpoints leaking configuration data, XML-RPC vulnerabilities, and exposed CA keys, each backed by HTTP request/response evidence.

## Key technical decisions

- Orchestrator-worker pattern: Opus for strategy, Sonnet for execution
- Tool restrictions enforced at SDK level, not by prompting
- Incremental persistence: findings written to disk immediately
- Evidence validation: no finding stored without concrete proof
- AsyncIterable prompt fix to keep MCP transport alive
- Response truncation at 500KB to stay under SDK buffer limits
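The last three bullets combine naturally into a single write path: clip oversized tool output before it hits the SDK buffer, refuse to persist a finding without concrete evidence, and flush each accepted finding to disk immediately so a crash mid-run loses nothing. A sketch under those constraints (the 500 KB limit comes from the bullet above; field names and helpers are illustrative, not the project's actual code):

```python
import json
from pathlib import Path

MAX_RESPONSE_BYTES = 500 * 1024  # stay under the SDK buffer limit

def truncate(body: str) -> str:
    """Clip tool output so a huge HTTP response can't blow the transport."""
    raw = body.encode()
    if len(raw) <= MAX_RESPONSE_BYTES:
        return body
    return raw[:MAX_RESPONSE_BYTES].decode(errors="ignore") + "\n[truncated]"

def store_finding(finding: dict, out_dir: Path) -> bool:
    """Persist a finding to disk immediately, but only with evidence attached."""
    evidence = finding.get("evidence", {})
    if not (evidence.get("request") and evidence.get("response")):
        return False  # no concrete proof, no stored finding
    finding["evidence"]["response"] = truncate(evidence["response"])
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / f"{finding['id']}.json").write_text(json.dumps(finding, indent=2))
    return True
```

Writing one JSON file per finding, at the moment it's validated, is what makes the run resumable: the orchestrator's state can be reconstructed from the directory alone.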

## Why open source

The architecture patterns — orchestrator-worker with MCP tool restrictions, evidence validation, transport fixes — apply well beyond bug bounty. The full source is at github.com/eliBenven/aibugbounty (MIT licensed).
