Autonomous Tactical Hacking
& Exploration Network Agent
M. Eiszner · 2025 · For authorized security testing only
Section One
Democratizing advanced security testing through AI
About ATHENA
Core Philosophy
Section Two
Three layers, one unified framework
Component 01
The primary execution engine. Orchestrates workflow execution, manages agent lifecycles, and provides real-time terminal feedback.
Dependencies: litellm · langchain · rich · crawl4ai · fastapi
Modules: runner · agent_factory · llm_adapter · tool_executor · event_emitter · config
Configuration & Execution
# Execute a workflow
$ athena \
--workflow workflows/webapp \
--scope scope/target \
--config configs/config \
--run-id pentest-2025-001
# config — plug in any OpenAI-compatible LLM
llm:
provider: openrouter # or: openai, llama.cpp, anthropic
model: claude-3.5-sonnet
api_key: ${OPENROUTER_API_KEY}
temperature: 0.7
max_tokens: 4000
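The `${OPENROUTER_API_KEY}` placeholder in the config above is resolved from the environment at load time. A minimal sketch of that substitution (the helper name is illustrative, not ATHENA's actual loader):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["OPENROUTER_API_KEY"] = "sk-or-demo"  # stand-in for a real key
print(expand_env("${OPENROUTER_API_KEY}"))       # -> sk-or-demo
```

Unset variables expand to an empty string here; a real loader would likely fail fast instead, so a missing key is caught before the run starts.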
Component 02
RESTful API layer with real-time WebSocket streaming for remote assessment management and monitoring.
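A run can be started from any HTTP client; a minimal Python sketch, assuming the request body mirrors the CLI flags (the field names and port are assumptions, not a documented schema):

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed local ATHENA API server

# Hypothetical payload mirroring the CLI flags shown earlier.
payload = {
    "workflow": "workflows/webapp",
    "scope": "scope/target",
    "config": "configs/config",
    "run_id": "pentest-2025-001",
}

req = urllib.request.Request(
    f"{BASE}/api/run",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit the run; live output then
# streams over the WebSocket endpoint keyed by the same run_id.
```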
POST /api/run — start assessment remotely
WS /api/stream/{run_id} — real-time output
GET/PUT /api/workflows — full CRUD for configs
GET/PUT /api/agents · /api/scopes
Component 03
Modern browser-based interface built with Alpine.js + Monaco Editor + Tailwind CSS for configuration and live monitoring.
Section Three
How all the pieces connect at runtime
System Overview
Agent Model
The orchestrator is a manager/planner LLM agent that delegates tasks to specialized phase agents — each phase runs a dedicated AI worker with its own toolset.
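The planner/worker split above can be sketched in a few lines; every name here is illustrative, not ATHENA's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class PhaseAgent:
    """A dedicated worker for one phase, with its own toolset."""
    name: str
    tools: list[str]

    def run(self, task: str) -> str:
        # A real agent would drive an LLM with its tools; this stub
        # just records that the phase handled the task.
        return f"{self.name} completed: {task}"

@dataclass
class Orchestrator:
    """Manager/planner that routes each task to the right phase agent."""
    agents: dict[str, PhaseAgent]
    log: list[str] = field(default_factory=list)

    def dispatch(self, phase: str, task: str) -> str:
        result = self.agents[phase].run(task)
        self.log.append(result)
        return result

recon = PhaseAgent("recon", ["nmap", "crawl4ai"])
orch = Orchestrator({"recon": recon})
orch.dispatch("recon", "enumerate target scope")
```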
Live Demo
Section Four
Three powerful deployment strategies
Use Case 01
Transform ATHENA into a universal security testing platform capable of handling any target type or vulnerability class.
Challenges to address:
Use Case 02
Purpose-built assessment frameworks for specific technologies or attack surfaces — narrow scope, maximum effectiveness.
Example: WordPress Security Tester
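A dedicated workflow for that example might be declared as follows; the file name, phase names, and keys are assumptions extrapolated from the config format shown earlier, not ATHENA's actual schema:

```yaml
# workflows/wordpress - hypothetical WordPress assessment workflow
phases:
  - name: fingerprint        # identify WP core version and theme
    agent: recon
    tools: [wpscan, crawl4ai]
  - name: plugin_audit       # enumerate plugins, match known CVEs
    agent: vuln_analysis
    tools: [wpscan]
  - name: report             # summarize findings
    agent: reporter
```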
Use Case 03
Balance automation with human expertise — implement strategic checkpoints where the system requests expert guidance before proceeding.
Intervention points:
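One way to sketch such a checkpoint (all names illustrative, not ATHENA's API): the run blocks until the operator explicitly approves the next phase.

```python
def checkpoint(phase: str, ask=input) -> bool:
    """Pause the run and proceed only on explicit operator approval."""
    answer = ask(f"Proceed with phase '{phase}'? [y/N] ")
    return answer.strip().lower() == "y"

# In a live run `ask` is interactive; a canned reply keeps the sketch testable.
approved = checkpoint("exploitation", ask=lambda _: "y")
print(approved)  # -> True
```

Defaulting to "no" means an accidental Enter never advances the run into a high-impact phase.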
Section Five
Choosing the right AI brain for the mission
Cloud LLMs
ADVANTAGES
CHALLENGES
Local LLMs
ADVANTAGES
CHALLENGES
LLM Integration
| Provider | Type | Config Value | Best For | Status |
|---|---|---|---|---|
| OpenRouter | Cloud Gateway | openrouter | Access 100+ models via a single API | Recommended |
| OpenAI | Cloud | openai | GPT-4 / GPT-4o deployments | Supported |
| Anthropic | Cloud | anthropic | Claude 3.5 Sonnet — strong reasoning | Supported |
| llama.cpp | Local | llama.cpp | Privacy-first, uncensored models | 150B+ params needed |
| Venice AI | Local proxy | venice | Uncensored cloud execution | Supported |
| Any OpenAI-compat | Any | custom base_url | Custom deployments / vLLM | Pluggable |
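For the last row, pointing ATHENA at any OpenAI-compatible endpoint such as vLLM might look like this; the provider value and URL are assumptions following the config format shown earlier:

```yaml
llm:
  provider: custom
  base_url: http://localhost:8000/v1   # vLLM's OpenAI-compatible server
  model: my-local-model
  api_key: not-needed                  # many local servers ignore the key
  temperature: 0.7
```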
Framework Stats
Enable the creation of arbitrary workflows that leverage any number of AI agents without writing a single line of code.
— ATHENA Core Philosophy · M. Eiszner · 2025