
Models Overview

Providers, base models, fine-tuning, and model management on Commissioned.

Commissioned supports fine-tuning across multiple providers and model families. You upload your data, pick a base model, and we handle training and deployment.

At a glance

| Model | Provider | Plan | Training time | Best for |
| --- | --- | --- | --- | --- |
| GPT-4.1 | OpenAI | Free | 30–45 min | High-quality general-purpose tasks |
| GPT-4.1 Mini | OpenAI | Free | 30–45 min | Fast, cost-effective, great default |
| Gemini 2.5 Flash | Google | Free | 30–45 min | Long documents, multimodal data |
| Gemini 2.5 Flash Lite | Google | Free | 30–45 min | Lightweight tasks, fast inference |
| Gemini 2.5 Pro | Google | Pro | 30–45 min | Highest quality Gemini model |
| Qwen 3 8B | GPU (LoRA) | Free | ~5 min | Self-hosting, rapid iteration |

How to choose

Start here: What's your priority?

├── Speed of iteration → Qwen 3 8B (~5 min training)
├── Self-hosting → Qwen 3 8B (downloadable adapter)
├── Best quality → GPT-4.1 or Gemini 2.5 Pro
├── Long documents → Gemini 2.5 Flash
├── Cost-effective → GPT-4.1 Mini
└── Not sure → GPT-4.1 Mini (best default)
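The decision tree above can be expressed as a small lookup. This is an illustrative helper, not part of the Commissioned API; the priority keys are hypothetical labels, while the model names come from the table above.

```python
def pick_model(priority: str) -> str:
    """Map a stated priority to a suggested base model (hypothetical helper)."""
    table = {
        "speed": "Qwen 3 8B",           # ~5 min training
        "self-hosting": "Qwen 3 8B",    # downloadable LoRA adapter
        "quality": "GPT-4.1",           # or Gemini 2.5 Pro on the Pro plan
        "long-documents": "Gemini 2.5 Flash",
        "cost": "GPT-4.1 Mini",
    }
    # GPT-4.1 Mini is the recommended default when you're not sure.
    return table.get(priority, "GPT-4.1 Mini")
```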

Key concepts

Base model — the pre-trained LLM you're fine-tuning. Determines general capabilities, context window, and which provider hosts it.

Fine-tuned model — the result of training a base model on your data. Retains general knowledge while learning your specific domain, tone, and patterns.

LoRA adapter — a lightweight set of trained weights produced when fine-tuning an open-weight model (Qwen 3 8B). Can be downloaded and self-hosted alongside the base model.
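Why the adapter file is small enough to download: LoRA trains a low-rank update to each weight matrix rather than the matrix itself. A generic numerical sketch (standard LoRA math, not Commissioned-specific; dimensions and values are arbitrary):

```python
import numpy as np

# Instead of updating the full weight matrix W (d_out x d_in), LoRA learns
# two small factors A (r x d_in) and B (d_out x r) with rank r << d_in.
# The adapter file stores only A and B for each adapted layer.
d_out, d_in, r, alpha = 64, 64, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # learned factor (random stand-in)
B = rng.standard_normal((d_out, r))     # learned factor (random stand-in)

# At inference the adapter can be merged into the base weight:
W_merged = W + (alpha / r) * (B @ A)

# The adapter holds far fewer parameters than the full matrix:
full_params = d_out * d_in        # 4096 per layer
lora_params = r * (d_in + d_out)  # 1024 per layer
```

The same trade-off is why Qwen 3 8B fine-tunes finish in minutes: only the small factors are updated during training.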

Training job — a single fine-tuning run. Tracked by status: Queued → Validating → In Progress → Succeeded/Failed.
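The lifecycle above can be modeled as a simple state machine. This is an illustrative sketch, not the Commissioned client; the assumption that validation failures move a job straight to Failed is ours.

```python
from enum import Enum

class JobStatus(Enum):
    QUEUED = "Queued"
    VALIDATING = "Validating"
    IN_PROGRESS = "In Progress"
    SUCCEEDED = "Succeeded"
    FAILED = "Failed"

# Forward transitions in the lifecycle described above. Failure edges
# from Validating and In Progress are assumptions for illustration.
TRANSITIONS = {
    JobStatus.QUEUED: {JobStatus.VALIDATING},
    JobStatus.VALIDATING: {JobStatus.IN_PROGRESS, JobStatus.FAILED},
    JobStatus.IN_PROGRESS: {JobStatus.SUCCEEDED, JobStatus.FAILED},
    JobStatus.SUCCEEDED: set(),
    JobStatus.FAILED: set(),
}

def is_terminal(status: JobStatus) -> bool:
    """A job stops changing once it succeeds or fails."""
    return not TRANSITIONS[status]
```

A polling loop would simply re-fetch the job until `is_terminal` returns true.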
