

Overview

ntrp supports any OpenAI-compatible API endpoint. Register custom models via ~/.ntrp/models.json or the /add-model skill.
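Any server that implements the OpenAI chat completions API should work. As a quick sanity check outside of ntrp, you can list the models an endpoint exposes; the URL and key name below are simply the OpenRouter values used in the example that follows:
# Most OpenAI-compatible servers expose GET /v1/models for listing available models
curl -H "Authorization: Bearer $OPENROUTER_API_KEY" https://openrouter.ai/api/v1/models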

models.json

Create ~/.ntrp/models.json with model IDs as top-level keys:
{
  "deepseek-r1": {
    "base_url": "https://openrouter.ai/api/v1",
    "api_key_env": "OPENROUTER_API_KEY",
    "context_window": 64000,
    "max_output_tokens": 8192
  },
  "llama-local": {
    "base_url": "http://localhost:11434/v1",
    "context_window": 128000,
    "max_output_tokens": 4096
  },
  "embedding": {
    "nomic-embed-text": {
      "base_url": "http://localhost:11434/v1",
      "dim": 768
    }
  }
}

Fields

| Field | Required | Description |
| --- | --- | --- |
| top-level key | Yes | Model identifier used in settings, e.g. deepseek-r1 |
| base_url | Yes | Base URL of the OpenAI-compatible API |
| context_window | Yes | Maximum context window for chat/completion models |
| max_output_tokens | No | Maximum output tokens; defaults to 8192 |
| api_key_env | No | Name of the environment variable containing the API key |
| price_in / price_out | No | Price per million tokens, used for usage estimates |
| embedding.<id>.dim | Yes | Embedding vector dimension for custom embedding models |
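price_in and price_out are used only for usage estimates. A sketch of an entry with pricing set (the numbers are illustrative, not actual provider pricing):
{
  "deepseek-r1": {
    "base_url": "https://openrouter.ai/api/v1",
    "api_key_env": "OPENROUTER_API_KEY",
    "context_window": 64000,
    "max_output_tokens": 8192,
    "price_in": 0.5,
    "price_out": 2.0
  }
}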

Using custom models

After adding a model to models.json, select it via an environment variable or in settings:
export NTRP_CHAT_MODEL=deepseek-r1
Or change it in the TUI via /settings.

Providers

OpenRouter

{
  "openrouter/deepseek-r1": {
    "base_url": "https://openrouter.ai/api/v1",
    "api_key_env": "OPENROUTER_API_KEY",
    "context_window": 64000,
    "max_output_tokens": 8192
  }
}
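Assuming the entry above, export your OpenRouter key and select the model (the key value is a placeholder):
export OPENROUTER_API_KEY=sk-or-...
export NTRP_CHAT_MODEL=openrouter/deepseek-r1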

Ollama

{
  "llama3": {
    "base_url": "http://localhost:11434/v1",
    "context_window": 128000,
    "max_output_tokens": 4096
  }
}
No api_key_env needed for local Ollama.
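Ollama only serves models that have already been pulled, so a minimal setup sketch (assuming Ollama is running on its default port 11434) is:
ollama pull llama3
export NTRP_CHAT_MODEL=llama3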

vLLM

{
  "my-model": {
    "base_url": "http://localhost:8080/v1",
    "context_window": 32000,
    "max_output_tokens": 4096
  }
}
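The entry above assumes a vLLM OpenAI-compatible server listening on port 8080. A minimal sketch of starting one, where the Hugging Face model ID is a placeholder (recent vLLM versions provide vllm serve; older ones use python -m vllm.entrypoints.openai.api_server):
vllm serve <huggingface-model-id> --port 8080
export NTRP_CHAT_MODEL=my-model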

Interactive setup

Use the /add-model skill in chat for guided model registration:
/add-model
The agent will walk you through configuring the endpoint.