
Chat Completions

Send messages to your fine-tuned models via the chat completions endpoint.

Endpoint

POST /v1/chat/completions

Request format

Python:

from openai import OpenAI

client = OpenAI(
    base_url="https://app.commissioned.tech/v1",
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="your-model-id",
    messages=[
        {"role": "user", "content": "Write a summary of our Q4 results."}
    ],
)

print(response.choices[0].message.content)

TypeScript:

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://app.commissioned.tech/v1",
  apiKey: "your-api-key",
});

const response = await client.chat.completions.create({
  model: "your-model-id",
  messages: [
    { role: "user", content: "Write a summary of our Q4 results." }
  ],
});

console.log(response.choices[0].message.content);

cURL:

curl https://app.commissioned.tech/v1/chat/completions \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-id",
    "messages": [
      {"role": "user", "content": "Write a summary of our Q4 results."}
    ]
  }'

Parameters

Parameter | Type | Required | Description
model | string | Yes | The ID of your fine-tuned model
messages | array | Yes | Array of message objects

Message format

Each message in the messages array is an object with these fields:

Field | Type | Values | Description
role | string | "system", "user", "assistant" | Who sent this message
content | string | Any text | The message content
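For example, a request can pair a system message, which steers the model's behavior, with the user's actual question. A minimal sketch (the system prompt text here is an illustrative assumption):

```python
# Hypothetical example: a system message sets the tone, the user message
# carries the request. Order matters: system first, then the conversation.
messages = [
    {"role": "system", "content": "You are a concise financial analyst."},
    {"role": "user", "content": "Write a summary of our Q4 results."},
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user']
```

Pass this list as the messages argument to the chat completions call shown above.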

Response format

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here's a summary of the Q4 results..."
      },
      "finish_reason": "stop"
    }
  ]
}
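Reading the reply out of this structure is a matter of indexing into choices. A minimal sketch that parses the example payload above as a raw JSON string (SDK clients do this for you and expose the same fields as attributes):

```python
import json

# The example response from above, parsed as a client would receive it.
raw = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here's a summary of the Q4 results..."
      },
      "finish_reason": "stop"
    }
  ]
}
"""

data = json.loads(raw)
choice = data["choices"][0]
print(choice["message"]["content"])  # Here's a summary of the Q4 results...
print(choice["finish_reason"])       # stop
```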

Multi-turn conversations

The API is stateless — it doesn't remember previous messages. To have a multi-turn conversation, include the full message history with each request:

Python:

messages = []

# Turn 1
messages.append({"role": "user", "content": "What were our top products last quarter?"})
response = client.chat.completions.create(model="your-model-id", messages=messages)

assistant_reply = response.choices[0].message.content
messages.append({"role": "assistant", "content": assistant_reply})

# Turn 2
messages.append({"role": "user", "content": "How did they compare to Q3?"})
response = client.chat.completions.create(model="your-model-id", messages=messages)

print(response.choices[0].message.content)

TypeScript:

const messages: OpenAI.ChatCompletionMessageParam[] = [];

// Turn 1
messages.push({ role: "user", content: "What were our top products last quarter?" });
let response = await client.chat.completions.create({
  model: "your-model-id",
  messages,
});

messages.push({ role: "assistant", content: response.choices[0].message.content! });

// Turn 2
messages.push({ role: "user", content: "How did they compare to Q3?" });
response = await client.chat.completions.create({
  model: "your-model-id",
  messages,
});

console.log(response.choices[0].message.content);

Each request includes the full conversation. This means token usage grows with conversation length. For long conversations, consider summarizing earlier messages or trimming the history.
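One trimming strategy is sketched below (an illustrative assumption, not a built-in feature): keep any system message, which anchors the model's behavior, and drop all but the most recent user/assistant turns.

```python
# Sketch: keep system messages plus the last `max_turns` user/assistant pairs.
def trim_history(messages, max_turns=4):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # Each turn is one user message plus one assistant reply.
    return system + rest[-max_turns * 2:]

history = [{"role": "system", "content": "Be brief."}]
for i in range(10):
    history.append({"role": "user", "content": f"Question {i}"})
    history.append({"role": "assistant", "content": f"Answer {i}"})

trimmed = trim_history(history, max_turns=4)
print(len(trimmed))  # 9: the system message plus 4 user/assistant pairs
```

Summarizing the dropped turns into a single assistant message preserves more context than trimming alone, at the cost of an extra API call.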

Finding your model ID

Get your model ID from:

  • Dashboard — shown on each model card
  • /v1/models endpoint — returns all your models with their IDs
  • Model card details — click into any model to see its full ID
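To fetch IDs programmatically, query the /v1/models endpoint. A sketch of parsing its payload, assuming the standard OpenAI-style list shape (the model IDs below are placeholders, not real values):

```python
# Hypothetical sample of an OpenAI-style /v1/models response;
# the real payload comes back from GET /v1/models.
sample = {
    "object": "list",
    "data": [
        {"id": "your-model-id", "object": "model"},
        {"id": "another-model-id", "object": "model"},
    ],
}

# Each entry's "id" is the value to pass as the model parameter.
model_ids = [m["id"] for m in sample["data"]]
print(model_ids)  # ['your-model-id', 'another-model-id']
```

With the SDK client configured above, iterating client.models.list() and collecting each model's id should yield the same list.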

Error responses

Status | Cause
400 | Invalid request body (missing model or messages)
401 | Missing or invalid API key
404 | Model not found (check the model ID)
429 | Rate limit exceeded
500 | Server error (retry the request)
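As the table suggests, 429 and 500 are worth retrying while the 4xx client errors are not. A minimal retry-decision sketch (an illustrative assumption, not part of the API itself):

```python
# Retry 429 and 500 with exponential backoff; 400/401/404 indicate a
# problem with the request itself that a retry won't fix.
RETRYABLE = {429, 500}

def should_retry(status, attempt, max_attempts=3):
    return status in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt, base=1.0):
    # Doubles per attempt: 1s, 2s, 4s, ...
    return base * (2 ** attempt)

print(should_retry(429, attempt=0))  # True
print(should_retry(404, attempt=0))  # False: check the model ID instead
print(backoff_seconds(2))            # 4.0
```

In a real loop you would sleep for backoff_seconds(attempt) between attempts and re-raise once should_retry returns False.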
