# First Request
The SolRouter API is fully OpenAI-compatible. Point your existing code at `https://api.solrouter.io/v1` and swap in your `sr_` key — nothing else needs to change.
## Using the OpenAI SDK

```bash
npm install openai
```

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.solrouter.io/v1",
  apiKey: process.env.SOLROUTER_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [
    {
      role: "user",
      content: "What is the meaning of life?",
    },
  ],
});

console.log(completion.choices[0].message.content);
```
## Using fetch directly

```typescript
const response = await fetch("https://api.solrouter.io/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.SOLROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini",
    messages: [
      {
        role: "user",
        content: "What is the meaning of life?",
      },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```
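The fetch example above assumes a successful response; with a missing or invalid key the API returns a non-2xx status, and parsing the error body as a completion will fail confusingly. A minimal guard you could add before `response.json()` — the helper name is ours, not part of any SDK:

```typescript
// Throw early on non-2xx responses instead of treating an error body as a completion.
function ensureOk(response: { ok: boolean; status: number }) {
  if (!response.ok) {
    throw new Error(`SolRouter request failed with status ${response.status}`);
  }
  return response;
}

// Works on any object with the fetch Response shape:
ensureOk({ ok: true, status: 200 }); // passes through unchanged

try {
  ensureOk({ ok: false, status: 401 });
} catch (err) {
  console.log((err as Error).message); // "SolRouter request failed with status 401"
}
```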
## Using Python

```python
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI(
    base_url="https://api.solrouter.io/v1",
    api_key=os.environ["SOLROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?",
        }
    ],
)

print(completion.choices[0].message.content)
```
## Using curl

```bash
curl https://api.solrouter.io/v1/chat/completions \
  -H "Authorization: Bearer $SOLROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }'
```
## Understanding the response

Every successful call to `/v1/chat/completions` returns a JSON object:

```json
{
  "id": "chatcmpl-a1b2c3d4e5f6",
  "object": "chat.completion",
  "created": 1748000000,
  "model": "openai/gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The meaning of life is a deeply philosophical question..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 83,
    "total_tokens": 98,
    "cost": 0.0000148
  }
}
```
### Field-by-field breakdown

| Field | Type | Description |
|---|---|---|
| `id` | string | A unique identifier for this completion. |
| `object` | string | Always `"chat.completion"` for non-streaming requests. |
| `created` | number | Unix timestamp of when the completion was generated. |
| `model` | string | The exact model used, including the provider prefix. |
| `choices` | array | An array of completion candidates. |
| `choices[n].message.content` | string \| null | The text generated by the model. |
| `choices[n].finish_reason` | string | Why the model stopped generating. |
| `usage.prompt_tokens` | number | Tokens consumed by your input messages. |
| `usage.completion_tokens` | number | Tokens generated by the model. |
| `usage.cost` | number | Total cost in USD for this request. |
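The `usage` block makes per-request spend tracking straightforward. As a quick sanity check using the sample response above — `total_tokens` is the sum of the other two counts, and dividing `cost` by it gives an effective per-token rate:

```typescript
// Sample usage object copied from the response above.
const usage = {
  prompt_tokens: 15,
  completion_tokens: 83,
  total_tokens: 98,
  cost: 0.0000148,
};

// total_tokens is always prompt_tokens + completion_tokens.
console.log(usage.prompt_tokens + usage.completion_tokens === usage.total_tokens); // true

// Effective cost per 1K tokens for this request.
const costPer1k = (usage.cost / usage.total_tokens) * 1000;
console.log(costPer1k.toFixed(6)); // "0.000151"
```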
### `finish_reason` values

| Value | Meaning |
|---|---|
| `stop` | The model reached a natural stopping point. |
| `length` | The output was cut off at `max_tokens`. |
| `tool_calls` | The model called one or more tools. |
| `content_filter` | The output was blocked by a content policy. |
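Application code typically branches on these values — for example, retrying with a larger `max_tokens` when the output was truncated. A minimal sketch (the helper name and return strings are ours, purely illustrative):

```typescript
// Map a finish_reason to a suggested next step for the caller.
// The reason strings match the table above.
function nextStepFor(finishReason: string): string {
  switch (finishReason) {
    case "stop":
      return "done"; // natural end — use the content as-is
    case "length":
      return "retry-larger"; // raise max_tokens and retry
    case "tool_calls":
      return "run-tools"; // execute the requested tools, then continue
    case "content_filter":
      return "blocked"; // surface a policy error to the caller
    default:
      return "unknown"; // future-proofing for values not listed here
  }
}

console.log(nextStepFor("length")); // "retry-larger"
```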
## Accessing the response in TypeScript

```typescript
const completion = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "What is the meaning of life?" }],
});

const message = completion.choices[0].message.content; // string | null
const tokenCount = completion.usage?.total_tokens; // number | undefined
const reason = completion.choices[0].finish_reason; // "stop" | "length" | ...

console.log(`Response: ${message}`);
console.log(`Tokens used: ${tokenCount}`);
console.log(`Finished because: ${reason}`);
```
## Next steps
- System Prompts & Conversations — build multi-turn chats
- Models — browse the full model catalogue
- Streaming — receive tokens in real time