Prompting Tips

Get better results from your fine-tuned models with effective prompts.

Fine-tuned models behave differently from base models. The whole point of fine-tuning is that the model already knows your domain — you don't need to explain everything from scratch.

What changes after fine-tuning

| With a base model | With your fine-tuned model |
| --- | --- |
| Need long system prompts | Domain knowledge is built in |
| Must explain terminology | Already knows your vocabulary |
| Generic tone and style | Matches your data's tone |
| Requires examples in-context | Has seen hundreds of examples during training |

Tips

Be direct

Skip the context-setting that you'd normally need with a generic model. Instead of:

"You are a customer support agent for a SaaS company that makes project management tools. You should respond in a professional but friendly tone..."

Just ask:

"How do I add a team member to my project?"

Your fine-tuned model already knows the product, the tone, and the audience.
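As a concrete sketch, the difference comes down to how much you pack into the request. The message format below follows the common OpenAI-style chat layout; the field names are illustrative, not tied to any particular provider:

```python
# With a base model, you'd front-load context in a system prompt:
base_model_messages = [
    {
        "role": "system",
        "content": (
            "You are a customer support agent for a SaaS company that makes "
            "project management tools. Respond in a professional, friendly tone."
        ),
    },
    {"role": "user", "content": "How do I add a team member to my project?"},
]

# With a fine-tuned model, the domain knowledge is baked in,
# so the question alone is enough:
fine_tuned_messages = [
    {"role": "user", "content": "How do I add a team member to my project?"},
]
```

The payload shrinks to just the question; everything the system prompt used to carry now lives in the model's weights.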

Use your vocabulary

The model learned from your data, so it understands your internal terminology. Use the same language you used in your training data:

  • Product names, feature names, internal terms
  • Abbreviations and acronyms from your domain
  • Technical jargon specific to your field

Start simple, then iterate

Begin with straightforward requests. If the model doesn't get it right, refine in the same conversation:

  1. "Write a summary of our Q4 results"
  2. "Make it more concise — 3 bullet points max"
  3. "Add the revenue numbers from the report"
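The refinement loop above can be sketched as a single growing message history. The `ask` helper here is hypothetical; a real implementation would call your provider's chat API where the placeholder comment indicates:

```python
def ask(history, prompt):
    """Append a user turn, get a reply, and keep both in the same history."""
    history.append({"role": "user", "content": prompt})
    # reply = client.chat.completions.create(model=..., messages=history)  # real API call
    reply = {"role": "assistant", "content": "<model reply>"}  # placeholder
    history.append(reply)
    return history

history = []
ask(history, "Write a summary of our Q4 results")
ask(history, "Make it more concise — 3 bullet points max")
ask(history, "Add the revenue numbers from the report")
```

Because every follow-up is appended to the same history, the model sees the earlier draft and your corrections together, which is what makes in-conversation refinement work.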

Use different conversations for different tasks

Don't mix unrelated topics in one conversation. The model uses conversation history as context — mixing topics can confuse it.
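One simple way to enforce this separation is to keep an independent message history per task. This is a minimal sketch; the dict-of-lists layout and message format are assumptions mirroring common chat APIs:

```python
# Keep one message history per task so unrelated context never mixes.
histories = {
    "q4-summary": [],
    "support-drafts": [],
}

def add_turn(task, role, content):
    """Record a turn in the history for one task only."""
    histories[task].append({"role": role, "content": content})

add_turn("q4-summary", "user", "Write a summary of our Q4 results")
add_turn("support-drafts", "user", "Draft a reply about adding team members")

# Each request then sends only the matching history, never both.
```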

When the model falls short

If your model consistently misses something:

  • It probably wasn't in the training data. The model can only learn from what you gave it.
  • Add more relevant data and retrain. Create a new fine-tune with additional examples covering the gap.
  • Try a different base model. Some models handle certain tasks better than others.

Fine-tuning is iterative. Your first model won't be perfect. Use what you learn from chatting to improve your data and train again.
