Models Overview
Providers, base models, fine-tuning, and model management on Commissioned.
Commissioned supports fine-tuning across multiple providers and model families. You upload your data, pick a base model, and we handle training and deployment.
At a glance
| Model | Provider | Plan | Training time | Best for |
|---|---|---|---|---|
| GPT-4.1 | OpenAI | Free | 30–45 min | High-quality general-purpose tasks |
| GPT-4.1 Mini | OpenAI | Free | 30–45 min | Fast, cost-effective, great default |
| Gemini 2.5 Flash | Google | Free | 30–45 min | Long documents, multimodal data |
| Gemini 2.5 Flash Lite | Google | Free | 30–45 min | Lightweight tasks, fast inference |
| Gemini 2.5 Pro | Google | Pro | 30–45 min | Highest quality Gemini model |
| Qwen 3 8B | GPU (LoRA) | Free | ~5 min | Self-hosting, rapid iteration |
How to choose
Start here: What's your priority?
│
├── Speed of iteration → Qwen 3 8B (~5 min training)
├── Self-hosting → Qwen 3 8B (downloadable adapter)
├── Best quality → GPT-4.1 or Gemini 2.5 Pro
├── Long documents → Gemini 2.5 Flash
├── Cost-effective → GPT-4.1 Mini
└── Not sure → GPT-4.1 Mini (best default)
Key concepts
Base model — the pre-trained LLM you're fine-tuning. Determines general capabilities, context window, and which provider hosts it.
Fine-tuned model — the result of training a base model on your data. Retains general knowledge while learning your specific domain, tone, and patterns.
LoRA adapter — a lightweight set of adapter weights produced when fine-tuning an open-source model (Qwen). Can be downloaded and self-hosted.
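To see why a LoRA adapter is "lightweight", here is a minimal NumPy sketch of the underlying idea: the adapter stores two small low-rank matrices that are added to a frozen base weight at merge time. The dimensions and rank below are illustrative, not Commissioned's actual training configuration.

```python
import numpy as np

d, k, r = 4096, 4096, 16  # hidden dims and LoRA rank (illustrative values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen base weight (never trained)
A = rng.standard_normal((r, k)) * 0.01   # LoRA down-projection (trained)
B = np.zeros((d, r))                     # LoRA up-projection (trained, zero-init)

# Self-hosting merges the adapter back into the base weight:
W_merged = W + B @ A

full_params = d * k            # parameters in the full weight matrix
adapter_params = r * (d + k)   # parameters the adapter actually stores
print(adapter_params / full_params)  # → 0.0078125 (under 1% of the full matrix)
```

This is why the adapter download is a small file rather than a full model checkpoint: only `A` and `B` are shipped, and the base model supplies `W`.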
Training job — a single fine-tuning run. Tracked by status: Queued → Validating → In Progress → Succeeded/Failed.
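The job lifecycle above can be sketched as a small state machine. The status names come from the docs; the set of legal transitions below (e.g. that validation can fail) is an assumption for illustration, not Commissioned's actual implementation.

```python
# Assumed legal moves between the documented statuses.
TRANSITIONS = {
    "Queued": {"Validating"},
    "Validating": {"In Progress", "Failed"},
    "In Progress": {"Succeeded", "Failed"},
    "Succeeded": set(),  # terminal
    "Failed": set(),     # terminal
}

def is_valid_run(statuses):
    """Return True if each consecutive pair of statuses is a legal transition."""
    return all(b in TRANSITIONS[a] for a, b in zip(statuses, statuses[1:]))

print(is_valid_run(["Queued", "Validating", "In Progress", "Succeeded"]))  # True
print(is_valid_run(["Queued", "Succeeded"]))  # False: skips validation
```

Polling code that tracks a job can use a check like this to treat any other status sequence as an error worth surfacing.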