Zero-shot prompting

Wojciech Achtelik
AI Engineer Lead
July 3, 2025

Zero-shot prompting is the technique of giving a large language model (LLM) a novel task with only an instruction, no labeled examples, relying on the model's pre-training to infer the correct pattern. A typical prompt states the role ("You are a helpful tutor"), the task ("Translate the sentence into French"), and any constraints ("Keep it under 15 words"). Because no demonstrations are provided, the model depends entirely on its internal knowledge and the clarity of the instruction.

Zero-shot prompting offers rapid iteration and minimal token cost, making it ideal for dynamic or one-off queries in chatbots and APIs. Precision can drop on complex or niche tasks, so developers often pair zero-shot prompts with Retrieval-Augmented Generation (RAG), tool calls, or follow-up clarification.

Key metrics such as exact-match accuracy and latency guide refinements, while prompt libraries and automated evaluators accelerate A/B testing. By distilling instructions to their essence, zero-shot prompting turns an LLM into a flexible Swiss-army knife without retraining or examples.
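The role/task/constraint structure and the exact-match metric described above can be sketched in a few lines of Python. This is an illustrative sketch, not tied to any particular SDK: `build_zero_shot_prompt` and `exact_match` are hypothetical helper names, and in practice the prompt string would be sent to whatever LLM API you use.

```python
def build_zero_shot_prompt(role: str, task: str, constraint: str) -> str:
    """Combine the three instruction parts (role, task, constraints)
    into a single zero-shot prompt string with no examples."""
    return f"{role}\n\n{task}\n{constraint}"


def exact_match(predictions: list[str], references: list[str]) -> float:
    """Exact-match accuracy: fraction of model outputs that equal the
    reference answer after trimming whitespace and lowercasing."""
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return matches / len(references)


# Assemble a prompt matching the example in the text above.
prompt = build_zero_shot_prompt(
    role="You are a helpful tutor.",
    task="Translate the sentence into French: 'Good morning, everyone.'",
    constraint="Keep it under 15 words.",
)
print(prompt)
```

Scoring a batch of outputs with `exact_match` and tracking the result alongside latency is one simple way to A/B test prompt wordings without any model retraining.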