This guide gets you through your first Kong analysis as fast as possible. Five steps, five minutes.
1. Install Kong

uv pip install kong-re
Need more detail? See the full Installation guide.
2. Set your API key

Kong needs at least one LLM API key. Pick whichever provider you prefer:
# Anthropic (Claude)
export ANTHROPIC_API_KEY="sk-ant-..."

# or OpenAI (GPT-4o)
export OPENAI_API_KEY="sk-..."
Add the export line to your ~/.zshrc or ~/.bashrc so it persists across terminal sessions.
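If you are unsure which keys are actually visible in your current session, a quick check works. This is plain Python, not part of Kong itself:

```python
import os

# Report which LLM provider keys the current environment exposes.
for key in ("ANTHROPIC_API_KEY", "OPENAI_API_KEY"):
    status = "set" if os.environ.get(key) else "missing"
    print(f"{key}: {status}")
```

Kong needs at least one of the two to report as set.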
3. Run the setup wizard

The setup wizard configures which LLM providers to use and verifies your environment:
kong setup
It will detect your Ghidra installation, check your API keys, and save your preferences. This only needs to run once.
Kong setup wizard showing successful configuration
For a detailed walkthrough of every option, see Setup Wizard.
4. Analyze a binary

Point Kong at any stripped binary:
kong analyze ./path/to/stripped_binary
Kong will load the binary into an in-process Ghidra instance and run the full pipeline: triage, analysis, cleanup, synthesis, and export. You will see a live TUI showing progress as functions are analyzed.
Kong TUI showing live analysis progress
You can also specify a provider or model explicitly:
kong analyze ./binary --provider openai
kong analyze ./binary --provider anthropic --model claude-sonnet-4-20250514
5. Review the output

When analysis completes, Kong writes results to ./kong_output_{binary_name}/:
kong_output_{binary_name}/
├── analysis.json         # Recovered function names, types, parameters
└── events.log            # Pipeline execution trace
Open analysis.json to see every recovered function with its name, return type, parameters, and a confidence indicator. Kong also writes everything back to Ghidra’s program database, so you can open the binary in Ghidra and see real names instead of FUN_ labels.

What Just Happened?

Behind the scenes, Kong ran a five-phase pipeline:
  1. Triage enumerated all functions, classified them by complexity, built the call graph, and matched known library signatures
  2. Analysis processed functions bottom-up from the call graph, building rich context windows for each LLM call
  3. Cleanup normalized results and unified struct proposals
  4. Synthesis took a global view across all functions to unify naming conventions
  5. Export wrote analysis.json and applied everything back to Ghidra
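The "bottom-up" ordering in phase 2 is the key idea: callees are analyzed before their callers, so each LLM call can reference already-recovered names instead of FUN_ labels. A minimal sketch of that ordering, using a toy call graph (not Kong's internal representation):

```python
from graphlib import TopologicalSorter

# Map each function to the functions it calls. Treating callees as
# "dependencies" means a topological order visits leaves first, so every
# caller is analyzed only after its callees already have names.
call_graph = {
    "main": {"parse_args", "run"},
    "run": {"helper"},
    "parse_args": set(),
    "helper": set(),
}

order = list(TopologicalSorter(call_graph).static_order())
print(order)  # leaf functions come first; main is analyzed last
```

Real binaries have cycles (mutual recursion), which a plain topological sort cannot handle, so any production pipeline has to break or cluster strongly connected components first.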

Next Steps

  • Understand the output — See Output Formats for a detailed breakdown of analysis.json
  • Try different providers — Configure multiple LLM providers with the Setup Wizard
  • See it in action — Read the XZ Backdoor case study to see what Kong recovers from a real-world binary
  • Go deeper — Learn how Call-Graph Analysis orders functions for maximum context
Last modified on March 20, 2026