What is Stacking?

Antoni Kozelski
CEO & Co-founder
Published: July 29, 2025
Glossary Category: ML

Stacking is an ensemble learning technique that combines the predictions of multiple diverse base models through a meta-learner, achieving better performance than any individual model. Several heterogeneous models are trained on the same dataset, and their predictions then serve as input features for a higher-level meta-model that learns the optimal way to combine them.

To train the meta-learner without data leakage or overfitting, stacking uses cross-validation to generate out-of-fold predictions: each base model contributes predictions only on data it never saw during training. Base models typically span different algorithm families, such as random forests, support vector machines, gradient boosting, and neural networks, so the ensemble captures diverse patterns. The meta-model, often a linear regression or a small neural network, learns how to best weight and combine the base models' outputs. By exploiting model diversity, stacking reduces both prediction variance and bias. For AI agents, stacking enables more robust decision-making through ensemble intelligence.
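This workflow maps directly onto scikit-learn's StackingClassifier. The sketch below is a minimal illustration, assuming a synthetic dataset and arbitrary hyperparameters; the base models and the logistic-regression meta-learner mirror the algorithm families named above, and the cv parameter handles the out-of-fold prediction step internally.

```python
# Minimal stacking sketch with scikit-learn's StackingClassifier.
# The dataset, model choices, and hyperparameters are illustrative
# assumptions, not prescriptions from this article.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    GradientBoostingClassifier,
    RandomForestClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Heterogeneous base models capture different patterns in the data.
base_models = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
    ("gb", GradientBoostingClassifier(random_state=42)),
    ("svm", SVC(probability=True, random_state=42)),
]

# The meta-learner is trained on out-of-fold predictions from the base
# models; cv=5 generates those folds internally, preventing the data
# leakage described above.
stack = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(),
    cv=5,
)

stack.fit(X_train, y_train)
print(f"Stacked ensemble accuracy: {stack.score(X_test, y_test):.3f}")
```

In practice, the gain over the best single base model is often modest but consistent, and it grows when the base models make uncorrelated errors.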

Want to learn how these AI concepts work in practice?

Understanding AI is one thing; applying it is another. Explore how we put these principles to work building scalable, agentic workflows that deliver real ROI for organizations.

Last updated: July 31, 2025