## Basic usage

```shell
kong analyze ./path/to/binary
```

That's it. Kong opens the binary in Ghidra, runs the five-phase pipeline (triage → analysis → cleanup → synthesis → export), and writes results to `./kong_output/`.
## TUI vs. headless mode

By default, Kong runs with a terminal UI that shows real-time progress: current phase, function count, confidence distribution, and running cost.

For CI pipelines, Docker containers, or environments without a terminal, use `--headless`:

```shell
kong analyze ./binary --headless
```

Headless mode prints events to stdout instead of rendering the TUI.
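If you wrap Kong in your own scripts, one option is to pick the mode based on whether stdout is attached to a terminal. This is a wrapper-level sketch only; whether Kong does its own TTY detection is not documented here.

```shell
# Wrapper sketch (assumption: you control the invocation, not Kong itself):
# add --headless when stdout is not a terminal, e.g. inside a CI job or pipe
flags=""
if [ ! -t 1 ]; then
  flags="--headless"
fi
echo "kong analyze ./binary $flags"
```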
## Choosing a provider and model

Kong uses your default provider (set during `kong setup`). Override it at runtime:

```shell
# Use OpenAI instead of your default
kong analyze ./binary --provider openai

# Use a specific model
kong analyze ./binary --provider openai --model gpt-4o-mini

# Use a local model via Ollama
kong analyze ./binary --provider custom --base-url http://localhost:11434/v1 --model mistral
```

See LLM Providers for setup details.
## Output control

### Directory

Results go to `./kong_output` by default. Override with `--output`:

```shell
kong analyze ./binary --output ./my_results
```
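Because repeated runs write to the same directory, a timestamped output path keeps earlier results intact. The naming convention below is an assumption of this sketch, not a Kong feature.

```shell
# Sketch: give each run its own timestamped directory so earlier results
# are never overwritten (directory layout is this script's convention)
out="./kong_output/$(date +%Y%m%d-%H%M%S)"
echo "kong analyze ./binary --output $out"
```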
### Formats

Kong supports three output formats. The CLI default is `source` + `json`. Specify formats explicitly with `-f`:

```shell
# Just JSON
kong analyze ./binary -f json

# All three formats
kong analyze ./binary -f source -f json -f ghidra
```
| Format | What it produces |
|---|---|
| `source` | Annotated C file with recovered names and JSDoc comments |
| `json` | Structured `analysis.json` with full metadata |
| `ghidra` | Writes results back into the Ghidra program database |

See Output Formats for details on each.
## Advanced flags

| Flag | Description |
|---|---|
| `--ghidra-dir` | Override the Ghidra installation path (normally auto-detected) |
| `--verbose` / `-v` | Enable debug logging |
| `--max-prompt-chars` | Override the maximum prompt size in characters |
| `--max-chunk-functions` | Override the maximum number of functions per LLM batch |
| `--max-output-tokens` | Override the maximum number of output tokens |

The `--max-*` flags are mainly useful for custom endpoints, where local models may have smaller context windows.
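As a rough way to pick a `--max-prompt-chars` value for a small local model, you can work back from the model's context window. Both numbers below are assumptions for illustration: ~4 characters per token is a common English-text rule of thumb, and reserving a quarter of the window for the model's reply is an arbitrary but conservative margin.

```shell
# Rule-of-thumb sizing for --max-prompt-chars (assumptions: ~4 chars per
# token; reserve 25% of the context window for the model's output)
ctx_tokens=8192                          # example context window
prompt_tokens=$(( ctx_tokens * 3 / 4 ))  # leave room for the reply
max_prompt_chars=$(( prompt_tokens * 4 ))
echo "$max_prompt_chars"   # 24576
```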
## Common workflows

```shell
# Quick analysis with the cheapest model
kong analyze ./binary --provider openai --model gpt-4o-mini

# CI pipeline: no TUI, JSON only
kong analyze ./binary --headless -o ./results -f json

# Local model via Ollama
kong analyze ./binary --provider custom \
  --base-url http://localhost:11434/v1 \
  --model mistral \
  --max-prompt-chars 100000

# Maximum quality: all output formats
kong analyze ./binary -f source -f json -f ghidra
```
## What to expect

Analysis time and cost scale with function count and binary complexity:

| Binary size | Typical time | Typical cost (Claude Opus) |
|---|---|---|
| ~300 functions | 5-15 min | $10-50 |
| ~1000 functions | 20-60 min | $40-150 |
| ~3000+ functions | 1-3 hours | $150-500+ |

See the XZ Backdoor case study for a real-world example: 396 functions analyzed in 15 minutes for $6.63.
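For budgeting, a back-of-envelope scaling from the XZ case study's numbers (396 functions, $6.63) can be useful, though real cost depends heavily on model choice and binary complexity, as the table above shows. The 1,200-function target below is a hypothetical example.

```shell
# Back-of-envelope: per-function cost implied by the XZ case study,
# scaled to a hypothetical 1,200-function binary (assumes linear scaling
# and the same model; Opus-class models cost considerably more)
awk 'BEGIN {
  per_fn = 6.63 / 396          # ~ $0.017 per function
  printf "~$%.2f\n", per_fn * 1200
}'
```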
## Further reading