Config resolution

ntrp resolves configuration in this order (highest priority first):
  1. Environment variables — NTRP_* prefix or standard provider key names
  2. .env files — loaded from ~/.ntrp/.env and ./.env (project directory)
  3. Settings file — ~/.ntrp/settings.json (persisted via TUI settings)
  4. Auto-detection — models resolved from available provider keys
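For example, a variable exported in the shell sits at the top of the resolution order, so it beats the same key set anywhere else:

```shell
# This overrides chat_model in ~/.ntrp/settings.json and any .env entry.
export NTRP_CHAT_MODEL=claude-haiku-4-5
```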

LLM providers

Set at least one provider API key. ntrp auto-selects models based on what’s available.
| Env var | Chat model | Memory model | Embedding model |
| --- | --- | --- | --- |
| ANTHROPIC_API_KEY | claude-sonnet-4-6 | claude-sonnet-4-6 | — |
| OPENAI_API_KEY | gpt-5.2 | gpt-5.2 | text-embedding-3-small |
| GEMINI_API_KEY | gemini-3-pro-preview | gemini-3-flash-preview | gemini-embedding-001 |
Override auto-detection with explicit model IDs:
```shell
export NTRP_CHAT_MODEL=claude-sonnet-4-6
export NTRP_MEMORY_MODEL=claude-haiku-4-5
export NTRP_EMBEDDING_MODEL=text-embedding-3-small
```
Embedding is optional. Without it, vector search is disabled but ntrp still works with full-text search.

Server

| Variable | Default | Description |
| --- | --- | --- |
| NTRP_HOST | 127.0.0.1 | Server bind address |
| NTRP_PORT | 8000 | Server port |
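To serve beyond localhost, override both variables before starting (a sketch; 0.0.0.0 binds all interfaces, so make sure that is what you want):

```shell
# Bind to all interfaces on a non-default port, then start the server.
export NTRP_HOST=0.0.0.0
export NTRP_PORT=9000
ntrp-server serve
```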

Features

| Variable | Default | Description |
| --- | --- | --- |
| NTRP_MEMORY | true | Enable persistent memory |
| NTRP_GMAIL | false | Enable Gmail integration |
| NTRP_GMAIL_DAYS | 30 | Days of email history to index |
| NTRP_CALENDAR | false | Enable Google Calendar |
| NTRP_VAULT_PATH | — | Path to Obsidian vault |
| NTRP_BROWSER | — | Browser type: chrome, safari, or arc |
| NTRP_BROWSER_DAYS | 30 | Days of browser history |
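For instance, enabling Gmail with a longer history window and pointing ntrp at a vault (the path is illustrative):

```shell
export NTRP_GMAIL=true
export NTRP_GMAIL_DAYS=60            # index 60 days instead of the default 30
export NTRP_VAULT_PATH="$HOME/notes" # illustrative vault location
```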

Optional integrations

| Variable | Description |
| --- | --- |
| EXA_API_KEY | Exa.ai for web search |
| TELEGRAM_BOT_TOKEN | Telegram notifications |

Agent behavior

| Variable | Default | Description |
| --- | --- | --- |
| NTRP_MAX_DEPTH | 8 | Max agentic recursion depth |

Authentication

On first ntrp-server serve, an API key is generated and printed. The server stores only a salted SHA-256 hash — the plaintext key is never persisted.
```shell
# Generate a new key
ntrp-server serve --reset-key
```
The TUI client stores the key in your OS keychain (macOS Keychain, Linux libsecret). Fallback: ~/.ntrp/settings.json.
| Variable | Description |
| --- | --- |
| NTRP_WEBHOOK_TOKEN | Separate token for webhook endpoints (email notifications) |

Custom models

Add OpenRouter, Ollama, vLLM, or any OpenAI-compatible endpoint via ~/.ntrp/models.json:
```json
[
  {
    "id": "deepseek-r1",
    "provider": "custom",
    "api_base": "https://openrouter.ai/api/v1",
    "api_key_env": "OPENROUTER_API_KEY",
    "max_output_tokens": 8192,
    "max_context_tokens": 64000
  }
]
```
Or use the /add-model skill in the TUI to configure interactively.
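Once an entry exists in ~/.ntrp/models.json, selecting it works the same as for built-in models (the key value below is a placeholder):

```shell
export OPENROUTER_API_KEY="<your OpenRouter key>"
export NTRP_CHAT_MODEL=deepseek-r1   # matches the "id" field in models.json
```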

Settings file

~/.ntrp/settings.json stores persisted configuration. Fields set here act as defaults that environment variables can override.
```json
{
  "chat_model": "claude-sonnet-4-6",
  "memory_model": "claude-sonnet-4-6",
  "embedding_model": "text-embedding-3-small",
  "vault_path": "/Users/you/notes",
  "api_key_hash": "a1b2c3:..."
}
```

.env file

Copy .env.example to .env (or ~/.ntrp/.env for global config):
```shell
cp .env.example .env
```
Both locations are loaded automatically. Project-level .env takes precedence.
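A minimal project .env might look like this (keys and values are illustrative; an exported environment variable still takes precedence over anything set here):

```shell
# ./.env — overrides ~/.ntrp/.env, overridden by exported env vars
ANTHROPIC_API_KEY=<your key>
NTRP_MEMORY=true
NTRP_BROWSER=chrome
```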