
Basic usage

kong analyze ./path/to/binary
That’s it. Kong opens the binary in Ghidra, runs the five-phase pipeline (triage → analysis → cleanup → synthesis → export), and writes results to ./kong_output/.
[Screenshot: Kong TUI showing real-time analysis progress]

TUI vs headless mode

By default, Kong runs with a terminal UI that shows real-time progress — current phase, function count, confidence distribution, and running cost. For CI pipelines, Docker containers, or environments without a terminal, use --headless:
kong analyze ./binary --headless
Headless mode prints events to stdout instead of rendering the TUI.
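In a CI job you may want to act on those events programmatically. Here is a minimal sketch for filtering the stream; it assumes (this is an assumption, not documented kong behavior) that headless mode emits one JSON object per line with an `event` field, so check your version's actual output format first:

```python
import json

def filter_events(lines, wanted):
    """Yield parsed events whose 'event' field is in `wanted`.

    Assumes each event is a standalone JSON object on its own line --
    a hypothetical format standing in for kong's real headless output.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip plain log lines that aren't JSON
        if event.get("event") in wanted:
            yield event

# Hand-written sample lines standing in for `kong analyze --headless` output:
sample = [
    '{"event": "phase_start", "phase": "triage"}',
    'plain log line',
    '{"event": "phase_end", "phase": "triage"}',
]
starts = list(filter_events(sample, {"phase_start"}))
print(starts[0]["phase"])  # triage
```

In a pipeline you would feed `sys.stdin` to `filter_events` instead of the sample list.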

Choosing a provider and model

Kong uses your default provider (set during kong setup). Override at runtime:
# Use OpenAI instead of your default
kong analyze ./binary --provider openai

# Use a specific model
kong analyze ./binary --provider openai --model gpt-4o-mini

# Use a local model via Ollama
kong analyze ./binary --provider custom --base-url http://localhost:11434/v1 --model mistral
See LLM Providers for setup details.

Output control

Directory

Results go to ./kong_output by default. Override with --output (or -o):
kong analyze ./binary --output ./my_results

Formats

Kong supports three output formats. The CLI default is source + json. Specify explicitly with -f:
# Just JSON
kong analyze ./binary -f json

# All three formats
kong analyze ./binary -f source -f json -f ghidra
| Format | What it produces |
| --- | --- |
| `source` | Annotated C file with recovered names and JSDoc comments |
| `json` | Structured analysis.json with full metadata |
| `ghidra` | Writes results back into the Ghidra program database |
See Output Formats for details on each.
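The JSON output is the easiest format to post-process. A short sketch of summarizing it with the standard library, using a hypothetical schema (the `functions`, `name`, and `confidence` fields below are placeholders, not the documented analysis.json layout):

```python
import json
import tempfile
from collections import Counter
from pathlib import Path

# Hypothetical excerpt of analysis.json -- the real schema may differ,
# so treat these field names as placeholders.
sample = {
    "functions": [
        {"name": "parse_header", "confidence": "high"},
        {"name": "sub_401230", "confidence": "low"},
        {"name": "decrypt_block", "confidence": "high"},
    ]
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp, "analysis.json")
    path.write_text(json.dumps(sample))

    # Summarize how confident the analysis was across recovered functions.
    data = json.loads(path.read_text())
    counts = Counter(f["confidence"] for f in data["functions"])

print(dict(counts))  # {'high': 2, 'low': 1}
```

In practice you would point `path` at `./kong_output/analysis.json` instead of writing a sample.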

Advanced flags

| Flag | Description |
| --- | --- |
| `--ghidra-dir` | Override Ghidra installation path (normally auto-detected) |
| `--verbose` / `-v` | Enable debug logging |
| `--max-prompt-chars` | Override maximum prompt size in characters |
| `--max-chunk-functions` | Override maximum functions per LLM batch |
| `--max-output-tokens` | Override maximum output tokens |
The --max-* flags are mainly useful for custom endpoints where local models may have smaller context windows.
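To see why the prompt-size and batch-size budgets interact, here is a rough sketch of greedy batching under both limits. This is an illustration of the general technique, not Kong's actual implementation:

```python
def chunk_functions(functions, max_chunk_functions=20, max_prompt_chars=100_000):
    """Greedily group decompiled functions into LLM batches.

    A batch closes when adding the next function would exceed either
    the function-count limit or the character budget -- whichever is
    hit first. Illustrative only; Kong's real batching may differ.
    """
    batches, current, chars = [], [], 0
    for name, code in functions:
        if current and (
            len(current) >= max_chunk_functions
            or chars + len(code) > max_prompt_chars
        ):
            batches.append(current)
            current, chars = [], 0
        current.append(name)
        chars += len(code)
    if current:
        batches.append(current)
    return batches

# 50 small functions of 400 chars each: the count limit dominates.
funcs = [(f"sub_{i:04x}", "x" * 400) for i in range(50)]
print(len(chunk_functions(funcs, max_chunk_functions=20)))  # 3 batches: 20 + 20 + 10
```

With a local model, lowering `max_prompt_chars` makes the character budget the binding limit instead, producing more, smaller batches.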

Common workflows

# Quick analysis with cheapest model
kong analyze ./binary --provider openai --model gpt-4o-mini

# CI pipeline — no TUI, JSON only
kong analyze ./binary --headless -o ./results -f json

# Local model via Ollama
kong analyze ./binary --provider custom \
  --base-url http://localhost:11434/v1 \
  --model mistral \
  --max-prompt-chars 100000

# Maximum quality — all output formats
kong analyze ./binary -f source -f json -f ghidra

What to expect

Analysis time and cost scale with function count and binary complexity:
| Binary size | Typical time | Typical cost (Claude Opus) |
| --- | --- | --- |
| ~300 functions | 5-15 min | $10-50 |
| ~1000 functions | 20-60 min | $40-150 |
| ~3000+ functions | 1-3 hours | $150-500+ |
See the XZ Backdoor case study for a real-world example: 396 functions analyzed in 15 minutes for $6.63.
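For budgeting, the table's ranges reduce to a back-of-the-envelope per-function rate. The $0.10/function figure below is eyeballed from the midpoints of the Claude Opus column above (e.g. ~300 functions ≈ $30), not official pricing, and cheaper models come in far lower:

```python
def estimate_cost(function_count, cost_per_function=0.10):
    """Rough USD cost estimate for an analysis run.

    0.10 USD/function is derived from the table midpoints above
    (Claude Opus); actual cost varies with model choice, binary
    complexity, and prompt sizes.
    """
    return function_count * cost_per_function

print(estimate_cost(1000))  # 100.0 -- within the table's $40-150 range
```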


Last modified on March 20, 2026