What are Adapters

Bartosz Roguski
Machine Learning Engineer
Published: July 25, 2025
Glossary Category: NLP

Adapters are lightweight neural network modules inserted into pre-trained models to enable task-specific adaptation without modifying the original model parameters. These parameter-efficient fine-tuning techniques add small trainable layers while keeping the base model frozen, allowing rapid customization for new domains or tasks.

Common adapter architectures include bottleneck adapters with down-projection and up-projection layers, Low-Rank Adaptation (LoRA), which decomposes weight updates into low-rank matrices, and prefix tuning, which prepends learnable vectors to the model's attention layers. Adapters typically add less than 5% additional parameters while achieving performance comparable to full fine-tuning.

Benefits include reduced computational cost, faster training, prevention of catastrophic forgetting, and support for multi-task learning, where different adapters handle different capabilities. For AI agents, adapters enable efficient personalization, domain adaptation, and skill acquisition without expensive retraining, making systems more modular and cost-effective.
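To make the two most common architectures concrete, here is a minimal, illustrative sketch in PyTorch of a bottleneck adapter and a LoRA-wrapped linear layer. The class names, dimensions, and hyperparameters (`bottleneck_dim`, `rank`, `alpha`) are assumptions chosen for the example, not a reference implementation; the key idea is simply that the base weights stay frozen while a small number of new parameters are trained.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, added residually to the input."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Zero-init the up-projection so the adapter starts as an identity mapping.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # keep the pre-trained weight frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # the low-rank update starts at zero
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))


if __name__ == "__main__":
    hidden = torch.randn(2, 16, 768)  # (batch, sequence length, hidden size)
    adapter = BottleneckAdapter(hidden_dim=768, bottleneck_dim=64)
    lora = LoRALinear(nn.Linear(768, 768), rank=8)
    print(adapter(hidden).shape, lora(hidden).shape)  # both preserve the hidden size
```

In both sketches only the small added matrices receive gradients, which is why the trainable parameter count stays a few percent of the base model while the output shape, and therefore the rest of the architecture, is unchanged.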

Want to learn how these AI concepts work in practice?

Understanding AI is one thing. Explore how we apply these principles to build scalable, agentic workflows that deliver real ROI for organizations.

Last updated: July 25, 2025