Tool Calling


Tool calling lets a model ask your application to execute a function, fetch external data, or trigger an internal workflow.

Instead of forcing the model to guess facts like weather, prices, account state, or database results, you define a set of tools and let the model decide when to call them. Your application executes the selected tool, returns the result, and the model uses that result to continue the conversation.

SolRouter supports OpenAI-compatible tool calling through the standard chat completions API.

Base URL

https://api.solrouter.io/ai

How tool calling works

A typical tool-calling interaction has four steps:

  1. You send a user prompt along with one or more tool definitions
  2. The model decides whether a tool is needed
  3. Your application executes the requested tool
  4. You send the tool result back as a tool message so the model can produce the final answer

At a high level, the flow looks like this:

User → Your app → SolRouter → Model
                      ↓
               tool call requested
                      ↓
              Your app executes tool
                      ↓
          tool result sent back to model
                      ↓
               final assistant answer

This pattern is useful for:

  • Weather, news, and finance lookups
  • Database queries
  • Search and retrieval
  • Internal APIs and business logic
  • Action-taking agents
  • Structured workflows with validation
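The four steps above can be sketched as a generic loop. The helper below is illustrative and not part of any SDK: `createCompletion` stands in for your POST /chat/completions call, and `executeTool` dispatches to your own tool implementations.

```typescript
// Minimal message and tool-call shapes (a subset of the chat completions API).
type ToolCall = {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
};

type Message =
  | { role: "user" | "system"; content: string }
  | { role: "assistant"; content: string | null; tool_calls?: ToolCall[] }
  | { role: "tool"; tool_call_id: string; content: string };

// Runs the request → tool-execution → follow-up loop until the model
// stops requesting tools. Both callbacks are supplied by your app.
async function runToolLoop(
  messages: Message[],
  createCompletion: (messages: Message[]) => Promise<Message>,
  executeTool: (name: string, args: unknown) => Promise<unknown>,
  maxRounds = 5,
): Promise<string | null> {
  for (let round = 0; round < maxRounds; round++) {
    const assistant = await createCompletion(messages);
    messages.push(assistant);

    const calls =
      assistant.role === "assistant" ? assistant.tool_calls ?? [] : [];
    if (calls.length === 0) {
      // No tool requested: this is the final answer.
      return assistant.role === "assistant" ? assistant.content : null;
    }

    // Execute every requested tool and append one tool message per call.
    for (const call of calls) {
      const result = await executeTool(
        call.function.name,
        JSON.parse(call.function.arguments),
      );
      messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(result),
      });
    }
  }
  throw new Error("Tool loop did not produce a final answer");
}
```

The loop shape also covers the case the single-round examples later in this page do not: a model that chains several tool calls before answering.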

Basic request structure

To enable tool calling, include a tools array in your POST /chat/completions request.

Example request

{
  "model": "openai/gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": "What's the weather in Berlin?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Returns the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "City name"
            }
          },
          "required": ["city"],
          "additionalProperties": false
        }
      }
    }
  ]
}

Tool object format

Each item in tools must be a function tool.

Field                 Type    Required  Description
type                  string  Yes       Must be "function"
function.name         string  Yes       Unique tool name
function.description  string  Yes       What the tool does
function.parameters   object  Yes       JSON Schema describing tool arguments

Tool schema design

A tool definition is only as good as its schema. Clear schemas help the model choose the right tool and produce valid arguments.

Good schema principles

  • Use clear, specific names
  • Write descriptions that explain when to use the tool
  • Keep argument structures simple
  • Mark required fields correctly
  • Set additionalProperties: false when possible
  • Prefer enums for constrained choices

Good example

{
  "type": "function",
  "function": {
    "name": "create_support_ticket",
    "description": "Creates a customer support ticket for billing or technical issues",
    "parameters": {
      "type": "object",
      "properties": {
        "category": {
          "type": "string",
          "enum": ["billing", "technical", "account"]
        },
        "subject": {
          "type": "string"
        },
        "message": {
          "type": "string"
        },
        "priority": {
          "type": "string",
          "enum": ["low", "normal", "high"]
        }
      },
      "required": ["category", "subject", "message"],
      "additionalProperties": false
    }
  }
}

Poor example

{
  "type": "function",
  "function": {
    "name": "tool1",
    "description": "Does stuff",
    "parameters": {
      "type": "object"
    }
  }
}

The second version gives the model almost no guidance. It does not explain when the tool should be used or what arguments it expects.


First full example

Here is the complete lifecycle of a tool call.

Step 1 — Send the request with tools

{
  "model": "openai/gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": "What's the weather in Berlin?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Returns the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string"
            }
          },
          "required": ["city"],
          "additionalProperties": false
        }
      }
    }
  ]
}

Step 2 — Model responds with a tool call

{
  "id": "chatcmpl_123",
  "object": "chat.completion",
  "model": "openai/gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"city\":\"Berlin\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 67,
    "completion_tokens": 18,
    "total_tokens": 85,
    "cost": 0.0000121
  }
}

At this point, the model is not done answering the user. It is asking your application to execute get_weather.

Step 3 — Your application executes the tool

Your backend or application code parses the arguments string and runs the matching function.

const toolCall = completion.choices[0].message.tool_calls?.[0];
const args = JSON.parse(toolCall.function.arguments);

const result = await getWeather(args.city);

Example tool result:

{
  "temperature_c": 18,
  "condition": "Cloudy",
  "wind_kph": 12
}

Step 4 — Send the tool result back

Now include:

  • the original user message
  • the assistant message containing tool_calls
  • a new tool message with the function result

{
  "model": "openai/gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": "What's the weather in Berlin?"
    },
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        {
          "id": "call_abc123",
          "type": "function",
          "function": {
            "name": "get_weather",
            "arguments": "{\"city\":\"Berlin\"}"
          }
        }
      ]
    },
    {
      "role": "tool",
      "tool_call_id": "call_abc123",
      "content": "{\"temperature_c\":18,\"condition\":\"Cloudy\",\"wind_kph\":12}"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Returns the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string"
            }
          },
          "required": ["city"],
          "additionalProperties": false
        }
      }
    }
  ]
}

Step 5 — Model produces the final answer

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The weather in Berlin is currently 18°C and cloudy, with winds around 12 kph."
      },
      "finish_reason": "stop"
    }
  ]
}

TypeScript example

OpenAI SDK

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.solrouter.io/ai",
  apiKey: process.env.SOLROUTER_API_KEY,
});

async function getWeather(city: string) {
  return {
    temperature_c: 18,
    condition: "Cloudy",
    wind_kph: 12,
    city,
  };
}

const tools = [
  {
    type: "function" as const,
    function: {
      name: "get_weather",
      description: "Returns the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: {
            type: "string",
            description: "City name",
          },
        },
        required: ["city"],
        additionalProperties: false,
      },
    },
  },
];

async function run() {
  const first = await client.chat.completions.create({
    model: "openai/gpt-4o-mini",
    messages: [
      { role: "user", content: "What's the weather in Berlin?" },
    ],
    tools,
  });

  const assistantMessage = first.choices[0].message;
  const toolCall = assistantMessage.tool_calls?.[0];

  if (!toolCall) {
    console.log(assistantMessage.content);
    return;
  }

  if (toolCall.function.name !== "get_weather") {
    throw new Error(`Unknown tool: ${toolCall.function.name}`);
  }

  const args = JSON.parse(toolCall.function.arguments);
  const toolResult = await getWeather(args.city);

  const second = await client.chat.completions.create({
    model: "openai/gpt-4o-mini",
    messages: [
      { role: "user", content: "What's the weather in Berlin?" },
      assistantMessage,
      {
        role: "tool",
        tool_call_id: toolCall.id,
        content: JSON.stringify(toolResult),
      },
    ],
    tools,
  });

  console.log(second.choices[0].message.content);
}

run().catch(console.error);

Using fetch

const tools = [
  {
    type: "function",
    function: {
      name: "lookup_stock_price",
      description: "Returns the latest stock price for a ticker symbol",
      parameters: {
        type: "object",
        properties: {
          ticker: { type: "string" },
        },
        required: ["ticker"],
        additionalProperties: false,
      },
    },
  },
];

const response = await fetch("https://api.solrouter.io/ai/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.SOLROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini",
    messages: [
      { role: "user", content: "What's the latest price for AAPL?" },
    ],
    tools,
  }),
});

const data = await response.json();
console.log(data);

Python example

from openai import OpenAI
import json
import os

client = OpenAI(
    base_url="https://api.solrouter.io/ai",
    api_key=os.environ["SOLROUTER_API_KEY"],
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Returns the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["city"],
                "additionalProperties": False
            }
        }
    }
]

def get_weather(city: str) -> dict:
    return {
        "city": city,
        "temperature_c": 18,
        "condition": "Cloudy",
        "wind_kph": 12,
    }

first = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[
        {"role": "user", "content": "What's the weather in Berlin?"}
    ],
    tools=tools,
)

assistant_message = first.choices[0].message

if not assistant_message.tool_calls:
    # The model answered directly without requesting a tool
    print(assistant_message.content)
    raise SystemExit(0)

tool_call = assistant_message.tool_calls[0]

args = json.loads(tool_call.function.arguments)
tool_result = get_weather(args["city"])

second = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[
        {"role": "user", "content": "What's the weather in Berlin?"},
        assistant_message,
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(tool_result),
        },
    ],
    tools=tools,
)

print(second.choices[0].message.content)

tool_choice

By default, the model decides whether to call a tool or answer directly.

You can control that behavior with tool_choice.

Let the model choose

{
  "tool_choice": "auto"
}

Force a specific tool

{
  "tool_choice": {
    "type": "function",
    "function": {
      "name": "get_weather"
    }
  }
}

Disable tool calling for a request

{
  "tool_choice": "none"
}

When to use each mode

Value                     When to use
"auto"                    Normal conversational tool use
"none"                    Force plain-language output only
Specific function object  When your application already knows which tool should run
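To show where the field sits relative to the rest of the request, here is a small hypothetical builder; the `buildRequest` name and the empty tools array are placeholders for your own code.

```typescript
// Builds a chat completions request body, optionally forcing one tool.
// tool_choice sits at the top level of the body, alongside tools.
function buildRequest(prompt: string, forceTool?: string) {
  return {
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user" as const, content: prompt }],
    tools: [] as unknown[], // your tool definitions go here
    ...(forceTool
      ? { tool_choice: { type: "function", function: { name: forceTool } } }
      : { tool_choice: "auto" as const }),
  };
}
```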

Streaming tool calls

Tool calls can also be streamed. In streamed responses, the model may emit partial tool_calls fragments over multiple chunks.

Example SSE chunks:

data: {"choices":[{"delta":{"tool_calls":[{"index":0,"id":"call_1","type":"function","function":{"name":"get_weather","arguments":""}}]}}]}

data: {"choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\"city\":\"Ber"}}]}}]}

data: {"choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"lin\"}"}}]}}]}

data: {"choices":[{"delta":{},"finish_reason":"tool_calls"}]}

In streamed mode, your client must reconstruct the full argument string by appending partial function.arguments fragments in order.

TypeScript reconstruction example

const toolCalls: Record<number, { id?: string; name?: string; arguments: string }> = {};

for await (const chunk of stream) {
  const calls = chunk.choices?.[0]?.delta?.tool_calls ?? [];

  for (const call of calls) {
    const index = call.index ?? 0;

    if (!toolCalls[index]) {
      toolCalls[index] = { arguments: "" };
    }

    if (call.id) {
      toolCalls[index].id = call.id;
    }

    if (call.function?.name) {
      toolCalls[index].name = call.function.name;
    }

    if (call.function?.arguments) {
      toolCalls[index].arguments += call.function.arguments;
    }
  }
}

After reconstruction, parse the final argument string as JSON and continue the normal tool execution flow.
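As a sketch, that finishing step over the accumulated map might look like this (`finalizeToolCalls` is a hypothetical helper matching the `toolCalls` shape used above):

```typescript
type PartialToolCall = { id?: string; name?: string; arguments: string };

// Turns the accumulated stream fragments into complete, parsed tool calls,
// failing loudly if any call is still missing its id or name.
function finalizeToolCalls(
  partial: Record<number, PartialToolCall>,
): { id: string; name: string; args: unknown }[] {
  return Object.values(partial).map((call) => {
    if (!call.id || !call.name) {
      throw new Error("Incomplete tool call in stream");
    }
    return { id: call.id, name: call.name, args: JSON.parse(call.arguments) };
  });
}
```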


Security and validation

Tool calling is powerful, but your application must stay in control. Never treat model-generated tool arguments as trusted input.

Required safeguards

  • Validate all tool arguments before executing the function
  • Check tool names explicitly — never dynamically execute arbitrary names
  • Apply authorization checks for tools that read or modify protected data
  • Rate limit expensive tools such as web search or external APIs
  • Sanitize tool results before sending them back to the model if they may contain sensitive content
  • Log tool calls for debugging and auditability

Example validation

function assertGetWeatherArgs(args: unknown): { city: string } {
  if (
    typeof args !== "object" ||
    args === null ||
    typeof (args as { city?: unknown }).city !== "string"
  ) {
    throw new Error("Invalid arguments for get_weather");
  }

  return { city: (args as { city: string }).city };
}

Never do this

// ❌ Unsafe: blindly dispatching a tool call by name
const fn = (globalThis as Record<string, unknown>)[toolCall.function.name];
await (fn as (...args: unknown[]) => Promise<unknown>)(toolCall.function.arguments);

The model should suggest tool usage, but your application must remain the final authority on what actually executes.


Common mistakes

1. Forgetting to send the assistant tool call message back

When you make the second request, you must include the assistant message containing tool_calls. If you omit it, the model loses the context of why the tool message exists.

2. Returning plain text instead of structured JSON

Tool results should usually be returned as a JSON string in the tool message content. This keeps the structure clear and easier for the model to use.

Good:

{
  "role": "tool",
  "tool_call_id": "call_abc123",
  "content": "{\"temperature_c\":18,\"condition\":\"Cloudy\"}"
}

Less reliable:

{
  "role": "tool",
  "tool_call_id": "call_abc123",
  "content": "It is 18 degrees and cloudy."
}

3. Defining too many overlapping tools

If several tools appear to do the same thing, the model may choose inconsistently. Keep tool responsibilities distinct.

4. Using vague descriptions

A tool description like "Gets stuff" gives the model almost no guidance. Write descriptions as if you were explaining the tool to another engineer.

5. Ignoring argument validation

Models can generate malformed or incomplete arguments. Always validate before executing the tool.

6. Overloading a single tool

If one tool has too many optional fields and multiple unrelated responsibilities, it becomes harder for the model to use correctly. Split complex workflows into clearer, narrower tools.


Best practices

Keep tools narrow and focused

Good tools usually do one thing well:

  • get_weather
  • search_docs
  • create_invoice
  • lookup_customer

This is better than a single catch-all tool like do_business_action.

Prefer structured outputs

Tool results should be machine-readable. JSON is usually the best format.

Reuse schemas

If your application already has validation schemas in Zod, JSON Schema, or Pydantic, generate tool definitions from those schemas instead of maintaining two copies.
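As a minimal sketch of the single-source-of-truth idea, without any schema library, one field list can drive both the `parameters` object and a runtime check (`FieldSpec`, `toParameters`, and `validateArgs` are illustrative names, not SolRouter APIs):

```typescript
// One field list drives both the tool definition and runtime validation.
type FieldSpec = {
  type: "string" | "number";
  description?: string;
  required: boolean;
};

// Derives the JSON Schema parameters object from the field list.
function toParameters(fields: Record<string, FieldSpec>) {
  return {
    type: "object",
    properties: Object.fromEntries(
      Object.entries(fields).map(([name, f]) => [
        name,
        { type: f.type, ...(f.description ? { description: f.description } : {}) },
      ]),
    ),
    required: Object.entries(fields)
      .filter(([, f]) => f.required)
      .map(([name]) => name),
    additionalProperties: false,
  };
}

// Checks model-generated arguments against the same field list.
function validateArgs(fields: Record<string, FieldSpec>, args: unknown): boolean {
  if (typeof args !== "object" || args === null) return false;
  const obj = args as Record<string, unknown>;
  return Object.entries(fields).every(([name, f]) =>
    name in obj ? typeof obj[name] === f.type : !f.required,
  );
}

const weatherFields: Record<string, FieldSpec> = {
  city: { type: "string", description: "City name", required: true },
};

const weatherTool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Returns the current weather for a city",
    parameters: toParameters(weatherFields),
  },
};
```

Libraries like Zod or Pydantic fill the same role with richer types; the point is that the schema is defined once, so the tool definition and the validator cannot drift apart.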

Use tool calling for data access, not reasoning

Let the model handle reasoning and language. Let tools handle:

  • real-time lookups
  • deterministic business logic
  • database access
  • external APIs
  • side effects

Handle errors explicitly

If a tool fails, return a structured error payload rather than throwing away the conversation.

Example:

{
  "error": "city_not_found",
  "message": "No weather data available for the requested city"
}

Then send that as the tool result and let the model explain the failure to the user.


End-to-end example with failure handling

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.solrouter.io/ai",
  apiKey: process.env.SOLROUTER_API_KEY,
});

const tools = [
  {
    type: "function" as const,
    function: {
      name: "get_weather",
      description: "Returns the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string" },
        },
        required: ["city"],
        additionalProperties: false,
      },
    },
  },
];

async function getWeather(city: string) {
  if (city.toLowerCase() === "unknown") {
    return {
      error: "city_not_found",
      message: "No weather data available for the requested city",
    };
  }

  return {
    city,
    temperature_c: 18,
    condition: "Cloudy",
    wind_kph: 12,
  };
}

async function ask(prompt: string) {
  const first = await client.chat.completions.create({
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
    tools,
  });

  const assistantMessage = first.choices[0].message;
  const toolCall = assistantMessage.tool_calls?.[0];

  if (!toolCall) {
    return assistantMessage.content;
  }

  let toolResult: unknown;

  try {
    if (toolCall.function.name !== "get_weather") {
      throw new Error(`Unsupported tool: ${toolCall.function.name}`);
    }

    const args = JSON.parse(toolCall.function.arguments);

    if (typeof args.city !== "string") {
      throw new Error("Invalid city argument");
    }

    toolResult = await getWeather(args.city);
  } catch (error) {
    toolResult = {
      error: "tool_execution_failed",
      message: error instanceof Error ? error.message : "Unknown error",
    };
  }

  const second = await client.chat.completions.create({
    model: "openai/gpt-4o-mini",
    messages: [
      { role: "user", content: prompt },
      assistantMessage,
      {
        role: "tool",
        tool_call_id: toolCall.id,
        content: JSON.stringify(toolResult),
      },
    ],
    tools,
  });

  return second.choices[0].message.content;
}

Next steps

  • API Reference — request and response schema details
  • Streaming — handling streamed text and streamed tool calls
  • Structured Output — JSON mode and schema-constrained outputs
  • Vision & Multimodal — combine image input with tools for OCR and extraction flows
  • Errors — retries, validation failures, and operational debugging