Prompt Tuning

Wojciech Achtelik
AI Engineer Lead
July 4, 2025

Prompt Tuning is a parameter-efficient fine-tuning technique that optimizes a small set of continuous prompt tokens prepended to the input sequence while keeping the pre-trained model's parameters frozen. Instead of modifying the underlying transformer weights, it learns task-specific soft prompts through gradient descent. Because these prompts typically account for only about 0.01-0.1% of total model parameters, the approach suits deployment scenarios with limited compute and storage.

The technique is particularly effective for natural language understanding tasks, where the learned prompt representations steer the frozen model toward the desired outputs. Advanced implementations add prompt initialization strategies (e.g., initializing the soft prompt from embeddings of relevant vocabulary tokens), multi-task prompt sharing, and prompt ensembling to improve performance across diverse applications.

Because each task needs only its own small prompt, prompt tuning enables rapid model customization and lets a single frozen model serve multiple concurrent tasks through different prompt sets. It preserves the original model's general capabilities while achieving task-specific performance comparable to full fine-tuning, at a fraction of the training cost and storage.
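The core mechanism can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not a production recipe: it assumes a Hugging Face causal language model ("gpt2" is just a placeholder backbone, and the prompt length and learning rate are illustrative). It freezes every pre-trained weight and trains only a small matrix of prompt embeddings that is prepended to the token embeddings.

```python
# Minimal sketch of soft prompt tuning; model name and hyperparameters
# are illustrative assumptions, not prescriptions.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every pre-trained weight; only the soft prompt will be trained.
for param in model.parameters():
    param.requires_grad = False

num_virtual_tokens = 20
embed_dim = model.get_input_embeddings().embedding_dim

# Learnable continuous prompt. Random init is shown here; initializing
# from embeddings of real vocabulary tokens is a common alternative.
soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

def forward_with_prompt(input_ids):
    token_embeds = model.get_input_embeddings()(input_ids)        # (B, T, D)
    batch_size = token_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)  # (B, P, D)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)      # (B, P+T, D)
    # Note: when computing a loss, labels must be padded/ignored over
    # the P prompt positions so only real tokens contribute.
    return model(inputs_embeds=inputs_embeds)

# Only the soft prompt receives gradients.
optimizer = torch.optim.AdamW([soft_prompt], lr=3e-2)
```

Only `soft_prompt` is updated during training: for a 20-token prompt on a backbone with a 768-dimensional embedding space, that is 15,360 trainable parameters, versus over a hundred million in the frozen model.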
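In practice, libraries such as Hugging Face PEFT package this pattern behind a configuration object. The sketch below shows an equivalent setup; the task type, prompt length, and initialization text are placeholder choices for illustration.

```python
# Sketch of prompt tuning via the Hugging Face PEFT library;
# the init text and prompt length are assumed example values.
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Classify the sentiment of this review:",
    tokenizer_name_or_path="gpt2",
)
peft_model = get_peft_model(base, config)
peft_model.print_trainable_parameters()  # reports the tiny trainable fraction
```

Because each trained prompt is just a small tensor, many task-specific prompts can be stored alongside one shared frozen model, which is what makes serving multiple concurrent tasks from a single backbone practical.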