# Quick Start

This guide takes you from sign-up to chatting with your own fine-tuned model in under five minutes. No machine learning experience is needed: if you can drag and drop a file, you can fine-tune a model.
## Prerequisites
- A web browser
- Data you want to train on (documents, conversations, code — any text)
- That's it
## The 5-minute walkthrough
### 1. Create your account
Go to app.commissioned.tech and sign up with email, Google, or GitHub. No credit card required for the free tier.
### 2. Upload your data
From the dashboard, drag files into the upload area. Commissioned accepts:
- JSONL / JSON — structured data, conversation logs
- PDF — documents, papers, reports
- TXT / Markdown — plain text, notes, articles
Files can be up to 5 GB each. Upload as many as you need.
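If your data is conversational, JSONL is the most direct format: one JSON object per line. The exact schema Commissioned expects isn't fixed here, but a common chat-style layout (the `messages`/`role`/`content` convention used by most fine-tuning APIs) would look like:

```jsonl
{"messages": [{"role": "user", "content": "How do I reset my password?"}, {"role": "assistant", "content": "Go to Settings > Account and click Reset password."}]}
{"messages": [{"role": "user", "content": "Can I change my billing email?"}, {"role": "assistant", "content": "Yes, you can update it under Settings > Billing."}]}
```

Don't worry about getting the shape perfect: Commissioned restructures your data for the target provider during training.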
### 3. Describe what you want
In the description field, write a plain-English explanation of your goal:
- "I want a writing assistant that matches the tone and style of these blog posts"
- "Create a support agent that can answer questions based on our help documentation"
This description guides how Commissioned cleans and structures your data for training.
### 4. Pick a base model
Select a base model from the dropdown:
| Model | Best for | Training time |
|---|---|---|
| GPT-4.1 Mini | Getting started, general use | 30–45 min |
| GPT-4.1 | Complex tasks, highest quality | 30–45 min |
| Gemini 2.5 Flash | Long documents | 30–45 min |
| Qwen 3 8B | Fast training, self-hosting | ~5 min |
If you're unsure, pick GPT-4.1 Mini — it's fast, capable, and free.
### 5. Click "Create a Custom Model"
Commissioned takes over from here:
- Parses and extracts text from your files
- Cleans and deduplicates the content
- Formats it for the target provider
- Submits the training job
- Monitors until completion
You'll get an email when it's ready. GPU models (Qwen) finish in ~5 minutes; cloud models take 30–45 minutes.
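To make the cleaning step concrete, here is a rough sketch of what deduplication can look like (this is an illustration, not Commissioned's actual pipeline code): hash a normalized form of each record and keep only the first occurrence.

```python
import hashlib

def dedupe(records):
    """Keep the first occurrence of each record, treating texts that
    differ only in whitespace or letter case as duplicates."""
    seen = set()
    unique = []
    for text in records:
        # Normalize: collapse whitespace, fold case, then hash.
        key = hashlib.sha256(" ".join(text.split()).casefold().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(text)
    return unique

docs = ["Hello  world", "hello world", "Another doc"]
print(dedupe(docs))  # -> ['Hello  world', 'Another doc']
```

Near-duplicates inflate training time without adding signal, which is why deduplication comes before formatting and submission.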
### 6. Start chatting
Once your model's status shows "Succeeded", click "Open in chat". You're now talking to a model trained on your data.
Try asking it something specific to your domain; it should answer with your context built in, without any extra prompting.
## What's next
Now that you have a working model, you can:
- Integrate via API — call your model from any app using the OpenAI-compatible endpoint
- Try a different base model — compare how GPT-4.1 and Gemini handle your data
- Download the LoRA adapter — self-host your Qwen fine-tune
- Follow a guide — step-by-step tutorials for specific use cases
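As a sketch of the API route, an OpenAI-compatible endpoint accepts a standard chat-completion request. The base URL, model ID, and `COMMISSIONED_API_KEY` variable below are placeholders (check your dashboard for the real values); the request shape is the standard one.

```python
import json
import os
import urllib.request

API_BASE = "https://api.commissioned.tech/v1"  # placeholder: use the base URL from your dashboard
MODEL_ID = "my-fine-tune"                      # placeholder: your model's ID

def build_chat_request(model, messages):
    """Build an OpenAI-style chat completion request: URL, headers, JSON body."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('COMMISSIONED_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages}).encode()
    return f"{API_BASE}/chat/completions", headers, body

def chat(model, user_text):
    """Send one user message and return the assistant's reply text."""
    url, headers, data = build_chat_request(
        model, [{"role": "user", "content": user_text}]
    )
    req = urllib.request.Request(url, data=data, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With a real key and model ID set:
# print(chat(MODEL_ID, "Summarize our refund policy."))
```

Because the endpoint is OpenAI-compatible, any OpenAI client library should also work by pointing its base URL at Commissioned.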