Fine-Tuning

Antoni Kozelski
CEO & Co-founder
July 3, 2025
Glossary Category: LLM

Fine-Tuning is a machine learning technique that adapts a pre-trained model to a specific task or domain by continuing training on a smaller, task-specific dataset while leveraging the representations the model has already learned. The process updates model parameters through supervised learning on labeled examples, enabling specialization for applications such as sentiment analysis, question answering, or domain-specific text generation.

Because it starts from a pre-trained model, fine-tuning requires significantly fewer computational resources and far less training data than training from scratch, while typically achieving better performance on the target task. In practice it involves choosing an appropriate learning rate, deciding which layers to update, and applying regularization strategies to prevent overfitting on limited training data (a sketch of a typical workflow is shown below).

Advanced approaches include parameter-efficient methods such as LoRA, adapter layers, and prompt tuning, which modify only a small subset of model parameters (or small added modules) while preserving the model's general capabilities across diverse applications and deployment scenarios (see the second sketch below).
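
To make the workflow concrete, here is a minimal sketch of supervised fine-tuning for sentiment analysis using the Hugging Face Transformers and Datasets libraries; the base model, dataset, subset sizes, and hyperparameters are illustrative assumptions, not a prescribed recipe.

```python
# Minimal supervised fine-tuning sketch (illustrative choices throughout).
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # assumed pre-trained base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small labeled dataset for the target task (here: IMDB sentiment labels).
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# A low learning rate and weight decay help avoid overfitting the small dataset.
args = TrainingArguments(
    output_dir="finetuned-sentiment",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```

Note that all of the model's parameters are updated here; the learning rate is kept small precisely so the pre-trained representations are adjusted rather than overwritten.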
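For the parameter-efficient case, the following sketch applies LoRA to a causal language model using the PEFT library; the base model choice, rank, scaling factor, and target modules are assumptions made for illustration.

```python
# Minimal LoRA sketch with the PEFT library (illustrative hyperparameters).
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed base model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# LoRA injects small low-rank update matrices into selected weight matrices,
# so only a tiny fraction of parameters is trained while the rest stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor applied to the updates
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused attention projection
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

The wrapped model can then be trained with the same kind of training loop or Trainer setup as in the full fine-tuning sketch, but only the LoRA matrices receive gradient updates, which keeps memory and storage costs low.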