January Release Spotlight

Our monthly Product Spotlight highlights a few of our biggest releases from the past month.
Partition Sorting: prioritize fast providers
Set a minimum throughput or maximum latency threshold and OpenRouter will deprioritize providers that don't meet it, with no latency hit on your requests. Combine with partition: "none" across fallback models to find the cheapest option that still meets your performance floor.
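That combination might look like the request body sketched below. The `partition: "none"` value and the idea of a performance floor come from the note above; the threshold field names (`min_throughput`, `max_latency`), their placement under `provider`, and the model slugs are illustrative assumptions, not the documented API.

```python
import json

# Sketch of a request body combining a performance floor with fallback models.
# "partition": "none" is from the release note; "min_throughput" and
# "max_latency" are hypothetical placeholder field names.
payload = {
    "models": [  # fallback chain: the cheapest option meeting the floor wins
        "meta-llama/llama-3.1-8b-instruct",
        "mistralai/mistral-small",
    ],
    "provider": {
        "partition": "none",    # rank providers across all fallback models
        "min_throughput": 50,   # hypothetical: tokens/sec floor
        "max_latency": 2.0,     # hypothetical: max seconds to first token
    },
    "messages": [{"role": "user", "content": "Hello"}],
}

print(json.dumps(payload, indent=2))
```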
Provider Explorer
Explore all providers on OpenRouter in one place. DeepInfra has the most models, and OpenAI has the most proprietary ones.
Bug & Feedback Reporting
Report bugs or feedback on any generation from the Chatroom, your Activity page, or via API. We'll use these reports to help quantify provider degradation, with more applications to come.
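Via the API, a report might be a small JSON body tied to a generation, as in the sketch below. Only the capability (reporting on any generation via API) comes from the note; the field names and values here are hypothetical placeholders, not the documented endpoint.

```python
# Sketch only: every field name below is a hypothetical placeholder,
# not the actual OpenRouter reporting API.
report = {
    "generation_id": "gen-abc123",  # the generation being reported
    "type": "bug",                  # e.g. "bug" or "feedback"
    "comment": "Response was truncated mid-sentence.",
}

print(report["type"])
```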
Auto Router customization
Auto Router now supports 58 models including Opus 4.5, works with tool calling, and lets you customize allowed models using wildcard syntax (e.g. anthropic/*). No markup over the routed model’s market price. Per-request API support included.
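A restricted Auto Router request could look like the sketch below. The `openrouter/auto` model slug and the `anthropic/*` wildcard come from the note; the field name carrying the allowlist (`models`) is an assumption for illustration.

```python
# Sketch of an Auto Router request with a wildcard allowlist.
# The "models" field name is assumed, not confirmed by the release note.
payload = {
    "model": "openrouter/auto",
    "models": ["anthropic/*"],  # wildcard: allow any Anthropic model
    "messages": [{"role": "user", "content": "Summarize this ticket."}],
}

print(payload["model"])
```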
SDK Skills Loader
Load encapsulated, composable skills into any model's context via the OpenRouter SDK. Skills inject domain-specific instructions automatically, with built-in idempotency so the same skill is never loaded twice.
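The idempotency guarantee can be illustrated with a minimal loader sketch. The class and method names below are hypothetical stand-ins for the concept, not the actual OpenRouter SDK surface.

```python
class SkillLoader:
    """Minimal sketch of an idempotent skill loader (names are hypothetical)."""

    def __init__(self) -> None:
        self._loaded: set[str] = set()
        self.context: list[str] = []  # instructions injected so far

    def load(self, name: str, instructions: str) -> bool:
        """Inject a skill's instructions once; repeat loads are no-ops."""
        if name in self._loaded:
            return False  # idempotent: skill already in context
        self._loaded.add(name)
        self.context.append(instructions)
        return True


loader = SkillLoader()
loader.load("sql-review", "You are an expert SQL reviewer...")
loader.load("sql-review", "You are an expert SQL reviewer...")  # no-op
print(len(loader.context))  # the skill appears in context exactly once
```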
LLM Leaderboard over 50% faster
The LLM Leaderboard is now more than 50% faster, using IntersectionObserver-based lazy loading combined with code-splitting to cut total blocking time in half.
70% faster gateway
Major p99 latency improvements across the gateway, making it the fastest gateway in our benchmarks.
Are we missing something you want to see? Let us know on Discord.