Overview
Direct Answer
Few-shot prompting is a technique in which a language model receives a small number of demonstration examples (typically 2–10) embedded directly within a prompt to establish a pattern for generating contextually appropriate responses. This method leverages in-context learning without requiring model retraining or fine-tuning.
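The pattern described above can be sketched as a small helper that assembles demonstrations and a new query into a single prompt. The "Input:"/"Output:" labels and the sentiment-classification examples are illustrative conventions chosen for this sketch, not a requirement of any particular model.

```python
# Minimal sketch of a few-shot prompt builder. The label format and the
# demonstration pairs are made up for illustration.

def build_few_shot_prompt(examples, query):
    """Concatenate (input, output) demonstrations, then the new query."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model is expected to complete this line
    return "\n".join(lines)

examples = [
    ("The film was a masterpiece.", "positive"),
    ("I want my money back.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Best purchase I've made all year.")
print(prompt)
```

The trailing bare "Output:" is the conventional cue: the model continues the established pattern rather than answering free-form.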
How It Works
The model observes the provided input–output pairs and infers the desired task structure, tone, and format from those examples. During inference, the model applies this learned pattern to new, unseen inputs within the same prompt. The proximity and ordering of examples significantly influence the model's behaviour, as the demonstrations provide implicit instruction through pattern recognition rather than explicit algorithmic rules.
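Because ordering matters, it helps to represent demonstrations in a structure that preserves it. A common convention in chat-style interfaces is to express each demonstration as a user/assistant turn pair, with the new input last; the sketch below builds that structure only and calls no real API.

```python
# Sketch: few-shot demonstrations as ordered chat messages. The system
# instruction and examples are hypothetical; no API is called here.

def as_chat_messages(examples, query,
                     system="Classify the sentiment as positive or negative."):
    messages = [{"role": "system", "content": system}]
    for inp, out in examples:
        messages.append({"role": "user", "content": inp})       # demonstration input
        messages.append({"role": "assistant", "content": out})  # demonstration output
    messages.append({"role": "user", "content": query})         # new, unseen input
    return messages

msgs = as_chat_messages(
    [("Great service.", "positive"), ("Never again.", "negative")],
    "Exceeded my expectations.",
)
```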
Why It Matters
Organisations adopt this approach to reduce engineering overhead and deployment latency—no retraining cycles or specialised datasets are required. It enables rapid adaptation to domain-specific tasks, improved accuracy on niche problems, and cost-effective customisation without infrastructure investment.
Common Applications
Applications include customer service chatbots performing intent classification, legal document analysis extracting specific clause types, financial services automating transaction categorisation, and healthcare systems interpreting clinical notes for structured data extraction.
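For the intent-classification case, a few-shot prompt might look like the sketch below. The intent labels and customer utterances are invented for illustration; a production demonstration set would be drawn from real, reviewed transcripts.

```python
# Illustrative few-shot prompt for support-chatbot intent classification.
# Labels and utterances are hypothetical.

DEMONSTRATIONS = [
    ("Where is my order?", "order_status"),
    ("I was charged twice this month.", "billing_issue"),
    ("How do I reset my password?", "account_help"),
]

def intent_prompt(utterance):
    parts = ["Classify the customer's intent."]
    for text, intent in DEMONSTRATIONS:
        parts.append(f"Customer: {text}\nIntent: {intent}")
    parts.append(f"Customer: {utterance}\nIntent:")  # model completes the label
    return "\n\n".join(parts)

p = intent_prompt("My package never arrived.")
```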
Key Considerations
Performance gains from additional demonstrations plateau quickly, and the benefit varies with model size and task complexity; some tasks improve little beyond the first few examples. Token consumption grows linearly with demonstration count, raising per-request inference cost and consuming context-window space in resource-constrained deployments.
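The linear cost can be estimated before deployment. The sketch below uses a rough chars-per-token heuristic (around 4 characters per token for English text) rather than a real tokeniser; use your provider's tokeniser for billing-grade estimates.

```python
# Back-of-envelope estimate of prompt token cost vs. demonstration count.
# The chars_per_token ratio is a rough heuristic, not a real tokeniser.

def approx_tokens(text, chars_per_token=4):
    return len(text) // chars_per_token

demo = "Input: example utterance goes here\nOutput: label\n"   # one demonstration
query = "Input: new utterance\nOutput:"

for k in (0, 2, 4, 8):
    prompt = demo * k + query
    print(f"{k} demonstrations ≈ {approx_tokens(prompt)} tokens")
```

Each added demonstration contributes a fixed token increment, so total prompt cost scales linearly, exactly the trade-off noted above.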