Next.js Guide
This guide shows how to use SolRouter in a Next.js application for both server-side and client-facing workflows.
You will learn how to:
- call the SolRouter API from Route Handlers
- keep your API key on the server
- stream responses in real time
- use tool calling and structured output
- work with images and multimodal requests
- integrate with the Vercel AI SDK
- avoid common security mistakes
Base URL
https://api.solrouter.io/ai
Core principle: keep your API key server-side
In Next.js, your SolRouter API key should stay on the server.
Use it in:
- Route Handlers
- Server Actions
- Server Components
- backend jobs
Do not expose it to the browser with NEXT_PUBLIC_.
Good
const apiKey = process.env.SOLROUTER_API_KEY;
Bad
const apiKey = process.env.NEXT_PUBLIC_SOLROUTER_API_KEY;
Anything prefixed with NEXT_PUBLIC_ can end up in the browser bundle.
Environment setup
Create a .env.local file in your Next.js app:
SOLROUTER_API_KEY=sr_your_api_key
Read it only in server-side code:
const apiKey = process.env.SOLROUTER_API_KEY;
if (!apiKey) {
throw new Error("Missing SOLROUTER_API_KEY");
}
Basic Route Handler proxy
The safest and most flexible integration pattern is to proxy requests through a Next.js Route Handler.
Create src/app/api/chat/route.ts:
import { NextResponse } from "next/server";
export async function POST(req: Request) {
const apiKey = process.env.SOLROUTER_API_KEY;
if (!apiKey) {
return NextResponse.json(
{ error: "Missing SOLROUTER_API_KEY" },
{ status: 500 },
);
}
const body = await req.json();
const response = await fetch("https://api.solrouter.io/ai/chat/completions", {
method: "POST",
headers: {
"Authorization": `Bearer ${apiKey}`,
"Content-Type": "application/json",
},
body: JSON.stringify(body),
});
const data = await response.json();
return NextResponse.json(data, { status: response.status });
}
Now your frontend can call /api/chat instead of calling SolRouter directly.
Why this is better
- your API key stays private
- you can add auth and rate limits
- you can log usage safely
- you can inject system prompts
- you can validate or transform requests
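As a sketch of the last two points, the Route Handler can validate and reshape each request before forwarding it. The helper name `shapeRequest`, the allowed-model list, and the system prompt below are all illustrative, not SolRouter requirements:

```typescript
// Hypothetical request-shaping helper for the proxy route.
// The allowed models and the system prompt are illustrative values.
const ALLOWED_MODELS = new Set(["openai/gpt-4o-mini", "openai/gpt-4o"]);

type ChatMessage = { role: string; content: unknown };
type ChatBody = { model?: string; messages?: ChatMessage[] };

export function shapeRequest(body: ChatBody): ChatBody {
  if (!body.model || !ALLOWED_MODELS.has(body.model)) {
    throw new Error("Model not allowed");
  }
  if (!Array.isArray(body.messages) || body.messages.length === 0) {
    throw new Error("messages must be a non-empty array");
  }
  // Prepend a server-controlled system prompt so clients cannot override it.
  return {
    model: body.model,
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      ...body.messages,
    ],
  };
}
```

The handler would then forward `JSON.stringify(shapeRequest(body))` instead of the raw client body.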
Client-side fetch example
Once the Route Handler exists, a client component can call it safely.
"use client";
import { useState } from "react";
export function ChatBox() {
const [prompt, setPrompt] = useState("");
const [answer, setAnswer] = useState("");
const [loading, setLoading] = useState(false);
async function send() {
setLoading(true);
setAnswer("");
const response = await fetch("/api/chat", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "openai/gpt-4o-mini",
messages: [
{
role: "user",
content: prompt,
},
],
}),
});
if (!response.ok) {
setAnswer(`Request failed: ${response.status}`);
setLoading(false);
return;
}
const data = await response.json();
setAnswer(data.choices?.[0]?.message?.content ?? "No response");
setLoading(false);
}
return (
<div>
<textarea value={prompt} onChange={(e) => setPrompt(e.target.value)} />
<button onClick={send} disabled={loading}>
{loading ? "Sending..." : "Send"}
</button>
<pre>{answer}</pre>
</div>
);
}
Streaming with Route Handlers
Streaming works very well with Next.js Route Handlers because they can forward the upstream event stream directly.
Create src/app/api/chat/stream/route.ts:
export async function POST(req: Request) {
const apiKey = process.env.SOLROUTER_API_KEY;
if (!apiKey) {
return new Response("Missing SOLROUTER_API_KEY", { status: 500 });
}
const body = await req.json();
const response = await fetch("https://api.solrouter.io/ai/chat/completions", {
method: "POST",
headers: {
"Authorization": `Bearer ${apiKey}`,
"Content-Type": "application/json",
"Accept": "text/event-stream",
},
body: JSON.stringify({
...body,
stream: true,
}),
});
return new Response(response.body, {
status: response.status,
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache, no-transform",
"Connection": "keep-alive",
},
});
}
Client-side streaming reader
"use client";
import { useState } from "react";
export function StreamingChat() {
const [output, setOutput] = useState("");
const [loading, setLoading] = useState(false);
async function run() {
setLoading(true);
setOutput("");
const response = await fetch("/api/chat/stream", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "openai/gpt-4o-mini",
messages: [
{ role: "user", content: "Write a short paragraph about API design." },
],
}),
});
if (!response.ok || !response.body) {
setLoading(false);
throw new Error(`Request failed: ${response.status}`);
}
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";
while (true) {
const { value, done } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const events = buffer.split("\n\n");
buffer = events.pop() ?? "";
for (const event of events) {
const line = event.trim();
if (!line.startsWith("data: ")) continue;
const payload = line.slice(6);
if (payload === "[DONE]") continue;
const chunk = JSON.parse(payload);
const delta = chunk.choices?.[0]?.delta?.content ?? "";
if (delta) {
setOutput((prev) => prev + delta);
}
}
}
setLoading(false);
}
return (
<div>
<button onClick={run} disabled={loading}>
{loading ? "Streaming..." : "Start"}
</button>
<pre>{output}</pre>
</div>
);
}
Using the OpenAI SDK in Next.js
Because SolRouter is OpenAI-compatible, you can use the official OpenAI SDK from server-side Next.js code.
Server Action example
"use server";
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://api.solrouter.io/ai",
apiKey: process.env.SOLROUTER_API_KEY,
});
export async function generateSummary(text: string) {
const completion = await client.chat.completions.create({
model: "openai/gpt-4o-mini",
messages: [
{
role: "system",
content: "You summarize text concisely.",
},
{
role: "user",
content: text,
},
],
});
return completion.choices[0].message.content ?? "";
}
Calling the Server Action from a component
"use client";
import { useState, useTransition } from "react";
import { generateSummary } from "./actions";
export function SummaryForm() {
const [text, setText] = useState("");
const [result, setResult] = useState("");
const [pending, startTransition] = useTransition();
return (
<form
onSubmit={(e) => {
e.preventDefault();
startTransition(async () => {
const summary = await generateSummary(text);
setResult(summary);
});
}}
>
<textarea value={text} onChange={(e) => setText(e.target.value)} />
<button disabled={pending}>
{pending ? "Generating..." : "Generate"}
</button>
<pre>{result}</pre>
</form>
);
}
Tool calling in a Route Handler
Tool calling should usually happen on the server, because your server owns the application logic and private data access.
Example route
import OpenAI from "openai";
import { NextResponse } from "next/server";
const client = new OpenAI({
baseURL: "https://api.solrouter.io/ai",
apiKey: process.env.SOLROUTER_API_KEY,
});
async function getWeather(city: string) {
return {
city,
temperature_c: 18,
condition: "Cloudy",
};
}
const tools = [
{
type: "function" as const,
function: {
name: "get_weather",
description: "Returns the current weather for a city",
parameters: {
type: "object",
properties: {
city: { type: "string" },
},
required: ["city"],
additionalProperties: false,
},
},
},
];
export async function POST() {
const first = await client.chat.completions.create({
model: "openai/gpt-4o-mini",
messages: [
{ role: "user", content: "What's the weather in Berlin?" },
],
tools,
});
const assistantMessage = first.choices[0].message;
const toolCall = assistantMessage.tool_calls?.[0];
if (!toolCall) {
return NextResponse.json(first);
}
const args = JSON.parse(toolCall.function.arguments);
const toolResult = await getWeather(args.city);
const second = await client.chat.completions.create({
model: "openai/gpt-4o-mini",
messages: [
{ role: "user", content: "What's the weather in Berlin?" },
assistantMessage,
{
role: "tool",
tool_call_id: toolCall.id,
content: JSON.stringify(toolResult),
},
],
tools,
});
return NextResponse.json(second);
}
Why server-side tool calling is best
- tools may require secrets
- tools may query private databases
- tools may trigger side effects
- argument validation belongs on the server
- authorization checks belong on the server
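Following the argument-validation point, the `get_weather` arguments from the route above can be checked before the tool runs. This is a hand-rolled guard (Zod, shown in the next section, is a good alternative); the helper name `parseWeatherArgs` is illustrative:

```typescript
// Runtime guard for the get_weather tool's arguments.
// Tool arguments come from the model, so never trust their shape.
type WeatherArgs = { city: string };

export function parseWeatherArgs(rawArguments: string): WeatherArgs {
  let parsed: unknown;
  try {
    parsed = JSON.parse(rawArguments);
  } catch {
    throw new Error("Tool arguments are not valid JSON");
  }
  const city = (parsed as { city?: unknown })?.city;
  if (typeof city !== "string" || city.length === 0) {
    throw new Error("Tool arguments must be { city: string }");
  }
  return { city };
}
```

In the route, replace the bare `JSON.parse(toolCall.function.arguments)` with this helper so malformed arguments fail before `getWeather` executes.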
Structured output in Next.js
Structured output works especially well in Next.js because server code can validate model responses before returning them to the client.
Example with Zod
import OpenAI from "openai";
import { z } from "zod";
const TicketSchema = z.object({
category: z.enum(["billing", "technical", "account"]),
email: z.string().email(),
summary: z.string(),
});
const client = new OpenAI({
baseURL: "https://api.solrouter.io/ai",
apiKey: process.env.SOLROUTER_API_KEY,
});
export async function extractTicket(input: string) {
const completion = await client.chat.completions.create({
model: "openai/gpt-4o-mini",
messages: [
{
role: "user",
content: `Extract a support ticket from: ${input}`,
},
],
response_format: {
type: "json_schema",
json_schema: {
name: "ticket",
schema: {
type: "object",
properties: {
category: {
type: "string",
enum: ["billing", "technical", "account"],
},
email: { type: "string" },
summary: { type: "string" },
},
required: ["category", "email", "summary"],
additionalProperties: false,
},
},
},
});
const raw = completion.choices[0].message.content ?? "{}";
return TicketSchema.parse(JSON.parse(raw));
}
This pattern is ideal for:
- form automation
- metadata extraction
- content moderation labels
- CMS ingestion
- workflow classification
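When a Route Handler calls a function like `extractTicket`, you may prefer a non-throwing result so a malformed model response becomes a clean error response instead of an unhandled exception. A minimal sketch (the helper name `parseModelJson` is illustrative; with Zod, `TicketSchema.safeParse` gives you an equivalent shape):

```typescript
// A tiny result wrapper so callers can branch on success without try/catch.
type ParseResult<T> = { ok: true; value: T } | { ok: false; error: string };

export function parseModelJson<T>(
  raw: string,
  validate: (value: unknown) => value is T,
): ParseResult<T> {
  try {
    const value: unknown = JSON.parse(raw);
    if (!validate(value)) {
      return { ok: false, error: "Response did not match the expected schema" };
    }
    return { ok: true, value };
  } catch {
    return { ok: false, error: "Response was not valid JSON" };
  }
}
```

A handler can then map `{ ok: false }` to a 502 and log the raw response for debugging.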
Vision and multimodal in Next.js
If users upload images through your app, the usual pattern is:
- upload or receive the file
- convert it to a signed URL or data URL
- send it to SolRouter from server-side code
Example with a data URL
import fs from "node:fs";
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://api.solrouter.io/ai",
apiKey: process.env.SOLROUTER_API_KEY,
});
export async function analyzeLocalImage(path: string) {
const bytes = fs.readFileSync(path);
const base64 = bytes.toString("base64");
const dataUrl = `data:image/jpeg;base64,${base64}`;
const completion = await client.chat.completions.create({
model: "openai/gpt-4o",
messages: [
{
role: "user",
content: [
{
type: "text",
text: "Extract the invoice number and total from this image.",
},
{
type: "image_url",
image_url: {
url: dataUrl,
detail: "high",
},
},
],
},
],
});
return completion.choices[0].message.content;
}
Practical advice
- do heavy file work on the server
- avoid exposing private file URLs publicly
- use signed URLs when possible
- validate uploads before forwarding them
- use structured output for extraction workflows
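Validating an upload before forwarding can be as simple as checking MIME type and size. A sketch; the 10 MB limit and the allowed types below are illustrative choices, not SolRouter requirements:

```typescript
// Illustrative upload guard: the size limit and allowed types are examples.
const MAX_BYTES = 10 * 1024 * 1024; // 10 MB
const ALLOWED_TYPES = new Set(["image/jpeg", "image/png", "image/webp"]);

export function validateUpload(mimeType: string, byteLength: number): void {
  if (!ALLOWED_TYPES.has(mimeType)) {
    throw new Error(`Unsupported image type: ${mimeType}`);
  }
  if (byteLength > MAX_BYTES) {
    throw new Error("Image exceeds the 10 MB limit");
  }
}
```

Run this in the Route Handler or Server Action before building the data URL, so oversized or unexpected files never reach the model.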
Vercel AI SDK integration
If you use the Vercel AI SDK, SolRouter can sit behind the same OpenAI-compatible interface.
Example pattern
import OpenAI from "openai";
export const client = new OpenAI({
baseURL: "https://api.solrouter.io/ai",
apiKey: process.env.SOLROUTER_API_KEY,
});
In practice, the exact Vercel AI SDK adapter setup depends on the version you are using, but the important part is the same:
- point the provider to https://api.solrouter.io/ai
- keep the key server-side
- use Route Handlers for streaming UI responses
When integrating the Vercel AI SDK:
- use server-side provider initialization
- stream through App Router Route Handlers
- validate structured output after parsing
- keep tool execution in backend code
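As one version-dependent sketch: recent AI SDK releases expose `createOpenAI` (from `@ai-sdk/openai`) and `streamText` (from `ai`), which can target SolRouter through `baseURL`. Verify these names against the SDK version you actually have installed; the `solrouter` identifier is illustrative:

```typescript
// Route Handler sketch for recent Vercel AI SDK versions.
// Check createOpenAI, streamText, and the response helper against
// the @ai-sdk/openai and ai versions in your lockfile.
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

const solrouter = createOpenAI({
  baseURL: "https://api.solrouter.io/ai",
  apiKey: process.env.SOLROUTER_API_KEY,
});

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: solrouter("openai/gpt-4o-mini"),
    messages,
  });
  // Newer SDK versions name this response helper differently;
  // consult the docs for the one you use.
  return result.toDataStreamResponse();
}
```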
Common mistakes
1. Exposing the API key to the browser
Never do this:
const apiKey = process.env.NEXT_PUBLIC_SOLROUTER_API_KEY;
2. Calling SolRouter directly from public client code
For production apps, prefer:
- Route Handlers
- Server Actions
- backend services
instead of calling the upstream API directly from the browser.
3. Forgetting to handle non-2xx responses
Always check:
if (!response.ok) {
throw new Error(`Request failed: ${response.status}`);
}
4. Not validating structured output
Even if you use schema-constrained output, still validate with Zod or another runtime validator.
5. Treating tool arguments as trusted
Tool arguments come from the model. Validate them before execution.
6. Using text-only models for image input
Always verify that the selected model supports the modality you need.
7. Ignoring streaming edge cases
Streams can:
- end early
- fail before [DONE]
- omit final usage if interrupted
Your UI should handle incomplete responses gracefully.
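The buffer-splitting logic from the streaming reader earlier in this guide can be isolated into a small pure helper so incomplete trailing events are carried over instead of dropped. The name `extractSseData` is illustrative:

```typescript
// Hypothetical helper: split an SSE buffer into complete "data:" payloads.
// Any incomplete trailing event is returned as `rest` so the caller can
// prepend it to the next decoded chunk.
export function extractSseData(buffer: string): {
  payloads: string[];
  rest: string;
} {
  const events = buffer.split("\n\n");
  const rest = events.pop() ?? "";
  const payloads: string[] = [];
  for (const event of events) {
    const line = event.trim();
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice(6);
    if (payload === "[DONE]") continue;
    payloads.push(payload);
  }
  return { payloads, rest };
}
```

Because it is pure, this helper is easy to unit test against truncated buffers, which is exactly the edge case interrupted streams produce.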
Recommended architecture patterns
Pattern 1 — Simple chat app
- client component sends prompt
- Route Handler calls SolRouter
- response is returned to client
Best for:
- internal tools
- small assistants
- prototypes
Pattern 2 — Streaming chat UI
- client component reads streamed response
- Route Handler proxies SSE from SolRouter
- UI appends chunks live
Best for:
- chat products
- writing assistants
- real-time UX
Pattern 3 — Server-side extraction pipeline
- client uploads text, image, or file
- Server Action or Route Handler calls SolRouter
- server validates structured output
- parsed object is stored or returned
Best for:
- invoices
- forms
- moderation
- workflow automation
Pattern 4 — Agent or tool workflow
- client sends a request
- server calls SolRouter with tools
- server executes selected tools
- server sends tool results back
- final answer is returned
Best for:
- enterprise assistants
- CRM automations
- internal copilots
- live data access
Production checklist
Before shipping your Next.js integration, make sure you:
- store SOLROUTER_API_KEY only in server-side env vars
- use Route Handlers or Server Actions for inference
- validate all tool inputs
- validate structured outputs
- handle 401, 402, 429, and 5xx errors
- support streaming interruption gracefully
- keep large file and image handling on the server
- avoid leaking sensitive prompts or files in logs
- choose the correct model for the input modality
Minimal production-ready Route Handler
import { NextResponse } from "next/server";
export async function POST(req: Request) {
const apiKey = process.env.SOLROUTER_API_KEY;
if (!apiKey) {
return NextResponse.json(
{ error: "Missing SOLROUTER_API_KEY" },
{ status: 500 },
);
}
try {
const body = await req.json();
const response = await fetch("https://api.solrouter.io/ai/chat/completions", {
method: "POST",
headers: {
"Authorization": `Bearer ${apiKey}`,
"Content-Type": "application/json",
},
body: JSON.stringify(body),
});
const data = await response.json().catch(() => ({}));
return NextResponse.json(data, { status: response.status });
} catch (error) {
return NextResponse.json(
{
error: error instanceof Error ? error.message : "Unknown server error",
},
{ status: 500 },
);
}
}
This is a strong default starting point for most applications.
Next steps
- API Reference — complete request and response schema
- Streaming — deeper SSE handling patterns
- Tool Calling — server-side function execution flows
- Structured Output — JSON mode and schema-constrained parsing
- Vision & Multimodal — image, file, audio, and video input
- Errors — retries, rate limits, and failure handling