
SDKs & Libraries

Use Commissioned with popular AI SDKs and frameworks.

Because Commissioned is OpenAI-compatible, it works with any SDK or framework that supports custom base URLs.
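Every SDK on this page ultimately issues the same HTTP request. As a minimal sketch of what "OpenAI-compatible" means here (assuming the standard chat-completions route, with `your-api-key` and `your-model-id` as placeholders), you can build that request with nothing but the Python standard library:

```python
import json
import urllib.request

BASE_URL = "https://app.commissioned.tech/v1"

# The request body every OpenAI-compatible SDK sends under the hood.
payload = {
    "model": "your-model-id",  # placeholder
    "messages": [{"role": "user", "content": "Hello!"}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your-api-key",  # placeholder
    },
    method="POST",
)

# Uncomment with a real key to actually send the request:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["message"]["content"])
```

An SDK adds retries, typing, and streaming on top of this, but the URL, headers, and JSON body are the whole contract.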

Official OpenAI SDKs

Python:

pip install openai

from openai import OpenAI

client = OpenAI(
    base_url="https://app.commissioned.tech/v1",
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="your-model-id",
    messages=[{"role": "user", "content": "Hello!"}],
)

TypeScript:

npm install openai

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://app.commissioned.tech/v1",
  apiKey: "your-api-key",
});

const response = await client.chat.completions.create({
  model: "your-model-id",
  messages: [{ role: "user", content: "Hello!" }],
});

Vercel AI SDK

The Vercel AI SDK supports custom OpenAI-compatible providers.

import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const commissioned = createOpenAI({
  baseURL: "https://app.commissioned.tech/v1",
  apiKey: "your-api-key",
});

const { text } = await generateText({
  model: commissioned("your-model-id"),
  prompt: "Hello!",
});

This works with all Vercel AI SDK features: generateText, streamText, generateObject, the React hooks (useChat, useCompletion), and more.

LangChain

Python:

pip install langchain-openai

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://app.commissioned.tech/v1",
    api_key="your-api-key",
    model="your-model-id",
)

response = llm.invoke("Hello!")
print(response.content)

TypeScript:

npm install @langchain/openai

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  configuration: {
    baseURL: "https://app.commissioned.tech/v1",
  },
  openAIApiKey: "your-api-key",
  modelName: "your-model-id",
});

const response = await llm.invoke("Hello!");
console.log(response.content);

LlamaIndex

from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    api_base="https://app.commissioned.tech/v1",
    api_key="your-api-key",
    model="your-model-id",
)

response = llm.complete("Hello!")

Any library that supports custom OpenAI-compatible endpoints will work. If your library exposes a base_url, api_base, or baseURL parameter, point it at https://app.commissioned.tech/v1.
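If you would rather not change application code at all, the official OpenAI SDKs also read their configuration from the environment. A small sketch, assuming a recent SDK version (openai-python >= 1.0 and openai-node >= 4 both honor OPENAI_BASE_URL and OPENAI_API_KEY):

```python
import os

# Set these before the client is constructed; OpenAI() with no
# arguments will then pick both values up automatically.
os.environ["OPENAI_BASE_URL"] = "https://app.commissioned.tech/v1"
os.environ["OPENAI_API_KEY"] = "your-api-key"  # placeholder

# from openai import OpenAI
# client = OpenAI()  # now points at Commissioned
```

This is handy for switching an existing OpenAI-based app over to Commissioned via deployment config alone.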
