First Request


The SolRouter API is fully OpenAI-compatible. Point your existing code at https://api.solrouter.io/v1 and swap in your sr_ key — nothing else needs to change.


Using the OpenAI SDK

npm install openai

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.solrouter.io/v1",
  apiKey: process.env.SOLROUTER_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [
    {
      role: "user",
      content: "What is the meaning of life?",
    },
  ],
});

console.log(completion.choices[0].message.content);

Using fetch directly

const response = await fetch("https://api.solrouter.io/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.SOLROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini",
    messages: [
      {
        role: "user",
        content: "What is the meaning of life?",
      },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
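The fetch call above reads the body without checking the HTTP status; a non-2xx reply still resolves, and `data.choices` would then be undefined. A minimal guard, sketched here as a hypothetical `parseCompletion` helper (the error message format is an assumption, not a documented SolRouter error shape):

```javascript
// Hypothetical helper: reject non-2xx responses before reading the body.
async function parseCompletion(response) {
  if (!response.ok) {
    // Surface the raw error body; the exact error shape is an assumption.
    const errText = await response.text();
    throw new Error(`SolRouter request failed (${response.status}): ${errText}`);
  }
  const data = await response.json();
  return data.choices[0].message.content;
}
```

Call it as `parseCompletion(await fetch(...))` in place of the bare `response.json()` above.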

Using Python

from openai import OpenAI
import os
from dotenv import load_dotenv

load_dotenv()

client = OpenAI(
    base_url="https://api.solrouter.io/v1",
    api_key=os.environ["SOLROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?",
        }
    ],
)

print(completion.choices[0].message.content)

Using curl

curl https://api.solrouter.io/v1/chat/completions \
  -H "Authorization: Bearer $SOLROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }'

Understanding the response

Every successful call to /v1/chat/completions returns a JSON object:

{
  "id": "chatcmpl-a1b2c3d4e5f6",
  "object": "chat.completion",
  "created": 1748000000,
  "model": "openai/gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The meaning of life is a deeply philosophical question..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 83,
    "total_tokens": 98,
    "cost": 0.0000148
  }
}

Field-by-field breakdown

| Field | Type | Description |
| --- | --- | --- |
| id | string | A unique identifier for this completion. |
| object | string | Always "chat.completion" for non-streaming requests. |
| created | number | Unix timestamp of when the completion was generated. |
| model | string | The exact model used, including the provider prefix. |
| choices | array | An array of completion candidates. |
| choices[n].message.content | string \| null | The text generated by the model. |
| choices[n].finish_reason | string | Why the model stopped. |
| usage.prompt_tokens | number | Tokens consumed by your input messages. |
| usage.completion_tokens | number | Tokens generated by the model. |
| usage.total_tokens | number | Sum of prompt and completion tokens. |
| usage.cost | number | Total cost in USD for this request. |

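As a sanity check on the usage fields, total_tokens is the sum of the prompt and completion counts, and dividing cost by total_tokens gives an average per-token price (a derived figure for illustration, not a field the API returns):

```javascript
// Usage figures copied from the sample response above.
const usage = {
  prompt_tokens: 15,
  completion_tokens: 83,
  total_tokens: 98,
  cost: 0.0000148,
};

// total_tokens equals prompt_tokens + completion_tokens.
const computedTotal = usage.prompt_tokens + usage.completion_tokens; // 98

// Derived average USD per token; just arithmetic, not an API field.
const avgCostPerToken = usage.cost / usage.total_tokens;
```
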
finish_reason values

| Value | Meaning |
| --- | --- |
| stop | The model reached a natural stopping point. |
| length | The output was cut off at max_tokens. |
| tool_calls | The model called one or more tools. |
| content_filter | The output was blocked by a content policy. |

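A sketch of branching on these values; `describeFinish` is a hypothetical helper, and the summary strings are assumptions, not API output:

```javascript
// Hypothetical helper mapping finish_reason to a human-readable summary.
function describeFinish(reason) {
  switch (reason) {
    case "stop":
      return "completed at a natural stopping point";
    case "length":
      return "truncated at max_tokens";
    case "tool_calls":
      return "model called one or more tools";
    case "content_filter":
      return "output blocked by content policy";
    default:
      return `unknown finish_reason: ${reason}`;
  }
}
```
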
Accessing the response in TypeScript

const completion = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "What is the meaning of life?" }],
});

const message = completion.choices[0].message.content; // string | null
const tokenCount = completion.usage?.total_tokens;      // number | undefined
const reason = completion.choices[0].finish_reason;     // "stop" | "length" | ...

console.log(`Response: ${message}`);
console.log(`Tokens used: ${tokenCount}`);
console.log(`Finished because: ${reason}`);
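Because message.content is typed string | null (it can be null when, for example, the model returns tool calls instead of text), narrow it before use. A minimal sketch; `getText` and its fallback string are hypothetical:

```javascript
// Hypothetical helper: fall back to a placeholder when content is null.
function getText(completion) {
  const content = completion.choices[0]?.message?.content;
  return content ?? "(no text content returned)";
}
```
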

Next steps