Models Endpoint
List your fine-tuned models via the API.
Endpoint
```
GET /v1/models
```

Request
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://app.commissioned.tech/v1",
    api_key="your-api-key",
)

models = client.models.list()
for model in models.data:
    print(model.id)
```

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://app.commissioned.tech/v1",
  apiKey: "your-api-key",
});

const models = await client.models.list();
for (const model of models.data) {
  console.log(model.id);
}
```

```bash
curl https://app.commissioned.tech/v1/models \
  -H "Authorization: Bearer your-api-key"
```

Response
```json
{
  "object": "list",
  "data": [
    {
      "id": "ft:gpt-4.1-mini:abc123",
      "object": "model",
      "owned_by": "user"
    },
    {
      "id": "ft:gemini-2.5-flash:def456",
      "object": "model",
      "owned_by": "user"
    }
  ]
}
```

Only models with Succeeded status are returned. Models that are still training, queued, or failed won't appear in this list.
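If you aren't using an OpenAI client library, you can call the endpoint with a plain HTTP request. The sketch below uses Python's `requests` package and assumes only what is shown above (the `/v1/models` endpoint and Bearer-token auth); it simply collects the `id` fields from the response body.

```python
import requests

# GET /v1/models directly; authentication is a standard Bearer token.
resp = requests.get(
    "https://app.commissioned.tech/v1/models",
    headers={"Authorization": "Bearer your-api-key"},
    timeout=30,
)
resp.raise_for_status()

# The body matches the Response example above: {"object": "list", "data": [...]}
model_ids = [m["id"] for m in resp.json()["data"]]
print(model_ids)
```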
Using model IDs
The `id` from this response is what you pass as the `model` parameter to chat completions:
```python
# List models and use the first one
models = client.models.list()
model_id = models.data[0].id

response = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "Hello!"}],
)
```
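Taking `data[0]` is fine for a quick test but fragile once you have several fine-tuned models. One option is to select a model by its base-model prefix; the sketch below is illustrative, and the `ft:gpt-4.1-mini` prefix is just the example ID format from the response above.

```python
# Pick a specific fine-tune by prefix instead of taking the first result.
models = client.models.list()
candidates = [m.id for m in models.data if m.id.startswith("ft:gpt-4.1-mini")]

if not candidates:
    raise RuntimeError("No succeeded fine-tunes found for this base model.")

response = client.chat.completions.create(
    model=candidates[0],
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```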