Provider Integration

For Providers

If you’d like to become a model provider and sell inference on OpenRouter, fill out our form to get started.

To be eligible to provide inference on OpenRouter, you must have the following:

1. List Models Endpoint

You must implement an endpoint that returns all models on your platform that OpenRouter should serve. Below is an example of the response format:

{
  "data": [
    {
      // Required
      "id": "anthropic/claude-sonnet-4",
      "hugging_face_id": "", // required if the model is on Hugging Face
      "name": "Anthropic: Claude Sonnet 4",
      "created": 1690502400,
      "input_modalities": ["text", "image", "file"],
      "output_modalities": ["text", "image", "file"],
      "quantization": "fp8",
      "context_length": 1000000,
      "max_output_length": 128000,
      "pricing": {
        "prompt": "0.000008", // pricing per 1 token
        "completion": "0.000024", // pricing per 1 token
        "image": "0", // pricing per 1 image
        "request": "0", // pricing per 1 request
        "input_cache_read": "0", // pricing per 1 token
        "input_cache_write": "0" // pricing per 1 token
      },
      "supported_sampling_parameters": ["temperature", "stop"],
      "supported_features": [
        "tools",
        "json_mode",
        "structured_outputs",
        "web_search",
        "reasoning"
      ],
      // Optional
      "description": "Anthropic's flagship model...",
      "openrouter": {
        "slug": "anthropic/claude-sonnet-4"
      },
      "datacenters": [
        {
          "country_code": "US" // `Iso3166Alpha2Code`
        }
      ]
    }
  ]
}

NOTE: pricing fields are strings to avoid floating-point precision issues, and all prices must be in USD.

Valid quantization values are: int4, int8, fp4, fp6, fp8, fp16, bf16, fp32.

Valid sampling parameters are: temperature, top_p, top_k, repetition_penalty, frequency_penalty, presence_penalty, stop, seed.

Valid features are: tools, json_mode, structured_outputs, web_search, reasoning.
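
For reference, the example above maps to roughly the following TypeScript shape. This is a sketch inferred from the example, not an official schema, and the type names are ours:

// Sketch of the List Models response, inferred from the example above.
// This is illustrative, not an official schema.
type Iso3166Alpha2Code = string; // e.g. "US"

interface ProviderModel {
  // Required
  id: string;
  hugging_face_id: string; // required if the model is on Hugging Face
  name: string;
  created: number; // Unix timestamp (seconds)
  input_modalities: Array<"text" | "image" | "file">;
  output_modalities: Array<"text" | "image" | "file">;
  quantization: "int4" | "int8" | "fp4" | "fp6" | "fp8" | "fp16" | "bf16" | "fp32";
  context_length: number;
  max_output_length: number;
  pricing: {
    prompt: string;            // USD per token, as a string
    completion: string;        // USD per token, as a string
    image: string;             // USD per image
    request: string;           // USD per request
    input_cache_read: string;  // USD per token
    input_cache_write: string; // USD per token
  };
  supported_sampling_parameters: string[];
  supported_features: string[];
  // Optional
  description?: string;
  openrouter?: { slug: string };
  datacenters?: Array<{ country_code: Iso3166Alpha2Code }>;
}

interface ListModelsResponse {
  data: ProviderModel[];
}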

2. Auto Top Up or Invoicing

For OpenRouter to use your platform, we must be able to pay for inference automatically, either via auto top up or invoicing.

3. Uptime Monitoring & Traffic Routing

OpenRouter automatically monitors provider reliability and adjusts traffic routing based on uptime metrics. Your endpoint’s uptime is calculated as: successful requests ÷ total requests (excluding user errors).

Errors that affect your uptime:

  • Authentication issues (401)
  • Payment failures (402)
  • Model not found (404)
  • All server errors (500+)
  • Mid-stream errors
  • Successful requests with error finish reasons

Errors that DON’T affect uptime:

  • Bad requests (400) - user input errors
  • Oversized payloads (413) - user input errors
  • Rate limiting (429) - tracked separately
  • Geographic restrictions (403) - tracked separately

Traffic routing thresholds:

  • Minimum data: 100+ requests required before uptime calculation begins
  • Normal routing: 95%+ uptime
  • Degraded status: 80-94% uptime → receives lower priority
  • Down status: <80% uptime → only used as fallback

This system ensures traffic automatically flows to the most reliable providers while giving temporary issues time to resolve.
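
As a rough sketch of how these rules combine (the thresholds are from above; the function is illustrative, not OpenRouter's actual code):

// Illustrative routing classification based on the thresholds above.
type RoutingStatus = "insufficient_data" | "normal" | "degraded" | "down";

function routingStatus(successfulRequests: number, totalRequests: number): RoutingStatus {
  // Fewer than 100 requests: uptime is not yet calculated.
  if (totalRequests < 100) return "insufficient_data";

  // Requests excluded from uptime (400, 413, 429, 403) are assumed to
  // have been filtered out of both counts already.
  const uptime = successfulRequests / totalRequests;

  if (uptime >= 0.95) return "normal";   // full routing priority
  if (uptime >= 0.8) return "degraded";  // lower priority
  return "down";                         // used only as a fallback
}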

4. Performance Metrics

OpenRouter publicly tracks TTFT (time to first token) and throughput (tokens/second) for all providers on each model page.

Throughput is calculated as: output tokens ÷ generation time, where generation time includes fetch latency (time from request to first server response), TTFT, and streaming time. This means any queueing on your end will show up in your throughput metrics.
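
To make the arithmetic concrete, here is an illustrative calculation (the variable names are ours, not part of any OpenRouter API):

// Illustrative throughput calculation; generationTimeMs spans from sending
// the request to receiving the last token, so it includes fetch latency,
// TTFT, and streaming time.
function throughputTokensPerSecond(outputTokens: number, generationTimeMs: number): number {
  return outputTokens / (generationTimeMs / 1000);
}

// Example: 512 output tokens over 8 seconds end-to-end => 64 tokens/s.
// If 3 of those seconds were spent queueing before the first byte, they
// still count against the reported throughput.
console.log(throughputTokensPerSecond(512, 8000)); // 64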

To keep your metrics competitive:

  • Return early 429s if under load, rather than queueing requests
  • Stream tokens as soon as they’re available
  • If processing takes time (e.g. reasoning models), send SSE comments as keep-alives so we know you’re still working on the request. Otherwise we may cancel the request with a fetch timeout and fall back to another provider.
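
For example, a streaming handler might emit SSE comment lines (lines starting with ":") while the model is still working. The following is a minimal Node.js sketch, not a full chat-completions implementation; the response shape and timings are placeholders:

// Minimal SSE keep-alive sketch (Node.js); payloads and timings are placeholders.
import { createServer } from "node:http";

createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive",
  });

  // While the model is still working (e.g. a long reasoning phase),
  // send SSE comments so the client knows the connection is alive.
  const keepAlive = setInterval(() => res.write(": keep-alive\n\n"), 5000);

  // Placeholder for real inference: stream tokens as soon as they exist.
  setTimeout(() => {
    clearInterval(keepAlive);
    res.write(`data: ${JSON.stringify({ choices: [{ delta: { content: "Hello" } }] })}\n\n`);
    res.write("data: [DONE]\n\n");
    res.end();
  }, 15000);
}).listen(8080);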