# Configuration
Configure model providers and API keys for the CL-SDK MCP server
## Model configuration
The extraction tools (`classify_document`, `extract_policy`, `extract_quote`) require a configured model and API key. The server supports two configuration methods: environment variables (the simplest) or a config file. A minimal config file:
```json
{
  "provider": "anthropic",
  "model": "claude-haiku-4-5-20251001",
  "apiKey": "${ANTHROPIC_API_KEY}"
}
```
## Fields

| Field | Description | Default |
|---|---|---|
| `provider` | AI provider: `anthropic`, `openai`, or `google` | `anthropic` |
| `model` | Model ID for the provider | `claude-haiku-4-5-20251001` |
| `apiKey` | API key; supports `${ENV_VAR}` syntax | `${ANTHROPIC_API_KEY}` |
## Environment variable expansion

The `apiKey` field supports `${ENV_VAR}` syntax. The server resolves these references from `process.env` at startup, so secrets can stay in your shell environment rather than in the config file.
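The expansion step can be sketched as a small helper. This is an illustrative assumption about how the resolution works, not the server's actual implementation; `expandEnvVars` is a hypothetical name:

```typescript
// Resolve ${ENV_VAR} references in a config value from process.env.
// Sketch only; the real server may handle missing variables differently.
function expandEnvVars(value: string): string {
  return value.replace(/\$\{([A-Z_][A-Z0-9_]*)\}/g, (_match, name: string) => {
    const resolved = process.env[name];
    if (resolved === undefined) {
      throw new Error(`Environment variable ${name} is not set`);
    }
    return resolved;
  });
}
```

Failing fast on an unset variable surfaces configuration mistakes at startup instead of at the first extraction call.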
## Provider examples

```json
{
  "provider": "anthropic",
  "model": "claude-haiku-4-5-20251001",
  "apiKey": "${ANTHROPIC_API_KEY}"
}
```

Set the environment variable:

```shell
export ANTHROPIC_API_KEY=sk-ant-...
```

## Environment variables
Without a config file, the server reads directly from environment variables:
| Variable | Description | Default |
|---|---|---|
| `CL_MCP_PROVIDER` | AI provider: `anthropic`, `openai`, or `google` | `anthropic` |
| `CL_MCP_MODEL` | Model ID | `claude-haiku-4-5-20251001` |
| `ANTHROPIC_API_KEY` | Anthropic API key (checked first) | — |
| `OPENAI_API_KEY` | OpenAI API key (fallback) | — |
| `GOOGLE_API_KEY` | Google API key (fallback) | — |
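Switching to a non-default provider combines the same variables. The model ID below is an illustrative placeholder, not a verified ID for this server:

```shell
# Point the server at OpenAI using environment variables only.
export CL_MCP_PROVIDER=openai
export CL_MCP_MODEL=gpt-4o-mini   # example model ID; substitute your own
export OPENAI_API_KEY=sk-...
```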
The simplest setup is to export your API key and rely on the defaults:

```shell
export ANTHROPIC_API_KEY=sk-ant-...
```
Documentation tools and pure SDK tools (prompt builders, `apply_extracted`, etc.) work without any model configuration. Only the extraction and classification tools require a valid API key.
## Uniform model config

The server uses `createUniformModelConfig()` from the SDK, which assigns the same model to all extraction pipeline roles (classification, metadata, sections, enrichment). For production use with large documents, consider using the SDK directly with per-role model configuration; see Model configuration.
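The uniform mapping can be pictured with a short sketch. The role names come from the list above, but the types and function shape are assumptions for illustration, not the SDK's actual API:

```typescript
// Illustrative sketch only: assumed config shape, not the real CL-SDK types.
type Role = "classification" | "metadata" | "sections" | "enrichment";

interface ModelConfig {
  provider: string;
  model: string;
}

// Assign one model to every pipeline role, mirroring what the doc says
// createUniformModelConfig() does. Hypothetical helper name.
function uniformModelConfig(base: ModelConfig): Record<Role, ModelConfig> {
  const roles: Role[] = ["classification", "metadata", "sections", "enrichment"];
  return Object.fromEntries(
    roles.map((role) => [role, { ...base }]),
  ) as Record<Role, ModelConfig>;
}
```

A per-role configuration would instead let you give the cheap classification role a small model while sections and enrichment use a larger one.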