One-shot prompting

Bartosz Roguski
Machine Learning Engineer
July 3, 2025

One-shot prompting is a technique in which a large language model receives exactly one worked example alongside the task instruction before generating its own answer. The single demonstration teaches the model the desired format, style, or reasoning pattern, bridging the gap between zero-shot prompting (no examples) and few-shot prompting (multiple examples).

A typical one-shot prompt bundles a system role ("You are a financial analyst"), one complete Q&A pair serving as the exemplar, and a new user query; the model then applies the demonstrated pattern to that new, similar query.

The payoff is higher accuracy on structured outputs, code generation, and niche domains while token costs stay low. The main pitfalls are overfitting to the lone example and poor generalization when the exemplar is unrepresentative. In practice, developers iterate on exemplar quality, sampling temperature, and retrieval-based exemplar insertion to optimize metrics such as exact-match accuracy and latency. In Retrieval-Augmented Generation (RAG) pipelines, one-shot prompting grounds the LLM and clarifies the expected response schema without bloating the context window.
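To make that structure concrete, here is a minimal sketch of a one-shot prompt, assuming the OpenAI Python SDK (v1+); the model name and the financial-analyst content are illustrative, and any chat-completion client with role-based messages would work the same way.

```python
# One-shot prompt sketch, assuming the OpenAI Python SDK (v1+).
# The message layout is what matters: a system role, one exemplar
# Q&A pair, then the new query.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # System role: persona plus the output schema we want enforced.
    {"role": "system",
     "content": ("You are a financial analyst. Answer in one sentence "
                 "and end with a [BUY], [HOLD], or [SELL] tag.")},
    # The single worked example (the "one shot"): a complete Q&A pair.
    {"role": "user",
     "content": "Revenue grew 40% YoY, but free cash flow is negative. Outlook?"},
    {"role": "assistant",
     "content": ("Growth is strong, but cash burn keeps the stock risky "
                 "until margins stabilize. [HOLD]")},
    # The new, similar query; the model imitates the demonstrated pattern.
    {"role": "user",
     "content": "Margins expanded and the company raised full-year guidance. Outlook?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model name
    messages=messages,
    temperature=0.2,       # low temperature favors format fidelity
)
print(response.choices[0].message.content)
```

Because the exemplar carries so much weight, swapping it for a better-matched one is often the highest-leverage change when iterating on output quality.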
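The retrieval-based insertion mentioned above can be as simple as choosing, per query, the most similar exemplar from a small pool. The sketch below is a hypothetical illustration using token-overlap (Jaccard) similarity; production systems typically retrieve by embedding similarity instead.

```python
# Hypothetical sketch of retrieval-based exemplar insertion: pick the
# stored exemplar most lexically similar to the incoming query and
# splice it into the one-shot prompt as the worked example.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two strings, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

EXEMPLAR_POOL = [
    {"q": "Revenue grew 40% YoY, but free cash flow is negative. Outlook?",
     "a": "Growth is strong, but cash burn is a risk. [HOLD]"},
    {"q": "The company cut its dividend after a credit downgrade. Outlook?",
     "a": "Balance-sheet stress outweighs the yield. [SELL]"},
]

def build_one_shot(query: str) -> list[dict]:
    """Return a one-shot message list using the best-matching exemplar."""
    best = max(EXEMPLAR_POOL, key=lambda ex: jaccard(ex["q"], query))
    return [
        {"role": "system", "content": "You are a financial analyst."},
        {"role": "user", "content": best["q"]},
        {"role": "assistant", "content": best["a"]},
        {"role": "user", "content": query},
    ]

if __name__ == "__main__":
    prompt = build_one_shot(
        "Free cash flow turned negative despite revenue growth. Outlook?")
    for msg in prompt:
        print(msg["role"], "->", msg["content"])
```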