Archexa Documentation

Archexa is a CLI tool for architectural intelligence. It analyzes any repository — statically, then semantically — and gives you instant insight into structure, patterns, dependencies, and change impact.

Overview

Archexa works in two phases: a fast static analysis pass using Tree-sitter AST parsing across 15+ languages, followed by an optional LLM reasoning layer that interprets architectural intent. The result is structured Markdown with Mermaid diagrams, tables, and file citations.

Binary distribution: Archexa is distributed as a standalone binary — no Python installation required.
About the examples: All examples below were run against the FastAPI repository using OpenRouter as the LLM provider (models: Gemini 2.5 Flash, Claude Sonnet 4, GPT-4.1). The API key is set via the OPENAI_API_KEY environment variable — no config file changes needed.

Quick Start

```shell
# 1. Install
curl -fsSL https://raw.githubusercontent.com/ereshzealous/archexa/main/install.sh | bash

# 2. Set your API key
export OPENAI_API_KEY=sk-...

# 3. Run your first gist
archexa gist

# 4. Ask a question
archexa query "How does auth work?"

# 5. Check your setup
archexa doctor
```

Installation

macOS

macOS 12+ · Intel & Apple Silicon

Shell
```shell
curl -fsSL https://raw.githubusercontent.com/ereshzealous/archexa/main/install.sh | bash
```

Linux

Ubuntu 20+, Debian, RHEL, Alpine

Shell
```shell
curl -fsSL https://raw.githubusercontent.com/ereshzealous/archexa/main/install.sh | bash
```

Windows

Windows 10+ · WSL2

WSL2
```shell
curl -fsSL https://raw.githubusercontent.com/ereshzealous/archexa/main/install.sh | bash
```
VS Code users: The VS Code extension auto-downloads the binary on first use — no separate CLI install needed. See the VS Code Extension section.
Verify
```shell
archexa --version
# archexa 0.5.3-beta.1

archexa doctor
# Config valid, API key set, Endpoint reachable
```
| Requirement | Minimum | Status |
|---|---|---|
| Operating System | macOS 12, Ubuntu 20.04, Windows 10 | Supported |
| Architecture | x86_64, arm64 (Apple Silicon) | Supported |
| LLM API Key | OpenAI-compatible or Ollama | For AI features |
| Disk space | ~20 MB | Minimal |
API Key Setup
```shell
# OpenAI (works out of the box)
export OPENAI_API_KEY=sk-...

# OpenRouter (access multiple providers)
export OPENAI_API_KEY=sk-or-...

# Ollama (local, no key needed)
archexa init
# Set endpoint to http://localhost:11434/v1/ in archexa.yaml
```

Configuration

Archexa reads from archexa.yaml in your project root. Run archexa init to generate a starter config.

archexa.yaml (minimal)
```yaml
archexa:
  source: "./src"
  openai:
    model: "gpt-4o"
    endpoint: "https://api.openai.com/v1/"
  deep:
    enabled: false
    max_iterations: 15
  limits:
    prompt_budget: 128000
    max_files: 100
  exclude_patterns:
    - "*.test.ts"
    - "vendor/**"
```
LLM API Key: Set OPENAI_API_KEY in your environment. Works with any OpenAI-compatible API — OpenAI, Anthropic (via OpenRouter), Azure, Ollama, vLLM, LM Studio.

Configuration Reference

Full archexa.yaml with all available options and documentation.

| Section | Key Fields | Description |
|---|---|---|
| project | base_path, entry_files | Repository root and optional entry points |
| llm | model, base_url, ssl_verify | LLM provider settings (any OpenAI-compatible API) |
| limits | max_files, max_prompt_tokens, safety_margin | Token budget and file limits |
| evidence | max_bytes_per_file, max_blocks_per_file, block_lines | How source code is parsed and sampled |
| output | out, show_evidence_summary | Output directory and format options |
| prompts | gist_prompt, query_prompt, impact_prompt, review_prompt, diagnose_prompt | Custom instructions per command |
| query | question, target | Default question and target for query command |
| review | target | Default target files for review |
| diagnose | logs, trace, error, last, tz | Log file, stack trace, time window, timezone |
| service | focus | Limit analysis to specific directories |
| agent | enabled, max_iterations | Deep mode (--deep) settings |
| cache | enabled | Evidence caching for repeated runs |
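
Combining entries from the reference table, a minimal sketch that turns on deep mode and evidence caching might look like the following. The exact nesting is an assumption drawn from the section names above, not a verified schema:

```yaml
archexa:
  agent:
    enabled: true        # deep mode (--deep) settings
    max_iterations: 20
  cache:
    enabled: true        # reuse evidence across repeated runs
```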

Commands

archexa gist

Generates a high-level architectural summary in 5–15 seconds. Detects stack, patterns, module boundaries, and coupling hotspots.

Usage
```shell
archexa gist
archexa gist --path ./src --format json
archexa gist --deep
```

archexa query

Ask natural-language questions and get answers grounded in precise architectural context, with file citations.

Usage
```shell
archexa query "Where does authentication happen?"
archexa query "What modules depend on the database?" --deep
archexa query "Show me all entry points" --target src/api/
```

archexa analyze

Two-phase pipeline for comprehensive architecture documentation with Mermaid diagrams, component tables, and data flows.

Usage
```shell
archexa analyze
archexa analyze --deep --output architecture.md
```

archexa impact

Traces the full blast radius of a file change. Shows direct dependents, transitive impact, and risk assessment.

Usage
```shell
archexa impact --file src/auth/session.py
archexa impact --file billing/processor.py --deep
```

archexa review

Architecture-aware code review. Finds security vulnerabilities, resource leaks, interface mismatches. Every file gets a verdict.

Usage
```shell
archexa review --changed
archexa review src/billing/processor.py
archexa review --branch feature/auth --deep
archexa review --changed --json-findings 2> findings.json
```

archexa diagnose

Feed it a stack trace, log file, or error message. Traces call chains and explains the root cause with fix recommendations.

Usage
```shell
archexa diagnose "ConnectionRefusedError on line 42"
archexa diagnose --log app.log --last 1h
archexa diagnose --trace stacktrace.txt
```

archexa chat

Interactive multi-turn codebase exploration with memory. Auto-detects topic switches.

Usage
```shell
archexa chat

# In-session commands:
# /deep   — toggle agentic mode for next turn
# /format — set custom output structure
# /save   — save last response to file
# /exit   — end session
```

Advanced

Deep Mode (--deep)

Every command supports --deep — a multi-step agentic reasoning pass that reads actual file content for richer output.

| Aspect | Pipeline (default) | Deep (--deep) |
|---|---|---|
| Speed | 5–15 seconds | 30–120 seconds |
| LLM calls | 1–2 | 10–50+ |
| Agent tools | None | read_file, grep_codebase, list_directory, find_references |
| Best for | High-level overviews | Tracing execution flows, code-level detail |

JSON Output

All commands support --format json for machine-readable output. Review findings emit structured JSON with --json-findings for IDE integration.

JSON Output
```shell
archexa review --changed --format json
archexa review --changed --json-findings 2> findings.json
archexa impact --file auth.py --format json --output report.json
```
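
The JSON findings stream can be post-processed by CI gates or IDE tooling. A minimal Python sketch, assuming the findings file holds a list of objects that each carry a severity field (the field name is an assumption for illustration, not a documented schema):

```python
import json

def count_by_severity(findings):
    """Tally findings per severity level.

    Assumes each finding is a dict with a "severity" key;
    that key name is hypothetical, not taken from the Archexa docs.
    """
    counts = {}
    for finding in findings:
        sev = finding.get("severity", "unknown")
        counts[sev] = counts.get(sev, 0) + 1
    return counts

# Example against a hypothetical findings.json payload:
sample = json.loads(
    '[{"file": "auth.py", "severity": "high"},'
    ' {"file": "db.py", "severity": "low"}]'
)
print(count_by_severity(sample))
```

A CI job could fail the build when the "high" count is nonzero, turning review output into a hard gate.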

CI/CD Integration

.github/workflows/archexa.yml
```yaml
- name: Archexa Architecture Review
  run: archexa review --changed --deep --format json
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```
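
In context, a complete workflow needs checkout and install steps around that review step. A sketch under the assumption that the install script from the Installation section works on GitHub's Ubuntu runners (job and step names are illustrative):

```yaml
name: Archexa Review
on: [pull_request]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Archexa
        run: curl -fsSL https://raw.githubusercontent.com/ereshzealous/archexa/main/install.sh | bash
      - name: Archexa Architecture Review
        run: archexa review --changed --deep --format json
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```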

LLM Configuration

Archexa works with any OpenAI-compatible API. The model you choose directly determines the accuracy, depth, and quality of the generated architecture documentation — stronger models produce more insightful analysis with better architectural reasoning.

| Model | Output Quality | Best for |
|---|---|---|
| GPT-4o | Excellent | Strong all-round analysis with detailed diagrams and tables |
| Claude Sonnet 4 | Excellent | Highly detailed technical docs with nuanced architectural reasoning |
| GPT-4.1 | Excellent | Well-structured output with clean tables and precise references |
| GPT-4o-mini | Good | Quick gists and simple queries where speed matters more than depth |
| Gemini 2.5 Flash | Good | Fast analysis with solid architectural understanding |
| Ollama (local) | Varies | Air-gapped environments where no data can leave your machine |
Tip: For full architecture documentation (analyze) and deep investigations, use a high-capability model like GPT-4o or Claude Sonnet 4. For quick gists and simple queries, a smaller model works well.

Custom Prompts

Override system prompts per command in archexa.yaml. Each command has its own prompt key: gist, query, user (analyze), impact, review, diagnose.

archexa.yaml
```yaml
archexa:
  prompts:
    query: |
      Focus on security implications.
      Always mention OWASP categories.
    review: |
      Check for our team's patterns:
      - Repository pattern in data layer
      - DI for all services
    gist: |
      Include mermaid architecture diagram.
      Focus on request lifecycle.
```

VS Code Extension

The Archexa extension for VS Code is available now and provides:

  • Inline findings — editor squiggles in Problems panel
  • Command Palette — Gist, Query, Impact, Review, Diagnose
  • Sidebar Panel — command wizard, chat, settings, history
  • Context menu — right-click any file to review, diagnose, or query
  • Keyboard shortcuts — Cmd+Shift+D, R, Q, I
  • Auto binary download — ~20 MB on first use, setup wizard included
Recommended: Install directly from VS Code — open Extensions (Ctrl+Shift+X), search “Archexa”, click Install.
Install, Configure, Run

Extension Settings

| Setting | Default | Description |
|---|---|---|
| archexa.apiKey | | API key (or use env var) |
| archexa.model | gpt-4o | LLM model name |
| archexa.endpoint | https://api.openai.com/v1/ | API base URL |
| archexa.deepByDefault | true | Use deep mode by default |
| archexa.showInlineFindings | true | Show editor squiggles |
| archexa.autoReviewOnSave | false | Auto-review on file save |
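
These keys map onto VS Code's settings.json. A sketch overriding a few defaults from the table (values illustrative, taken from the defaults listed above):

```json
{
  "archexa.model": "gpt-4o",
  "archexa.endpoint": "https://api.openai.com/v1/",
  "archexa.deepByDefault": false,
  "archexa.showInlineFindings": true
}
```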

Sidebar Panel

Sidebar Demo

Review Findings

Findings appear as inline editor squiggles and in the Problems panel with severity levels.
