Overview
Direct Answer
In-context learning is the ability of large language models to adapt behaviour and perform new tasks by conditioning on examples or instructions embedded directly in the input prompt, without modifying the model's underlying parameters. This represents a fundamental departure from traditional machine learning approaches that require retraining or fine-tuning.
How It Works
When examples, instructions, or few-shot demonstrations are included in the prompt, transformer-based models use their attention mechanisms to recognise the demonstrated input-output pattern and apply it during inference, without any gradient update. The model's internal representations shift to accommodate the new task context: later token predictions are conditioned on the earlier demonstrations through learned attention weights.
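The mechanism above is easiest to see in the prompt itself. The sketch below assembles a few-shot prompt from labelled demonstrations; the `Input:`/`Output:` template and the sentiment examples are illustrative assumptions, and the string would be sent to any language model as-is.

```python
def build_few_shot_prompt(demonstrations, query):
    """Assemble a few-shot prompt: labelled examples followed by the query.

    The model infers the input->output mapping purely from these
    in-context demonstrations; no parameters are updated.
    """
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    # The unfinished final pair is what the model is asked to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

demos = [("The movie was wonderful", "positive"),
         ("I want my money back", "negative")]
prompt = build_few_shot_prompt(demos, "Absolutely loved it")
```

The trailing, unfinished `Output:` is the key design choice: the model's next-token prediction at that position is steered by the preceding demonstrations, which is the whole of the "learning" taking place.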
Why It Matters
This capability dramatically reduces deployment friction and cost by eliminating the need for task-specific fine-tuning cycles, which require labelled data, computational resources, and model versioning overhead. Organisations can iterate rapidly on use cases, adapt to domain shifts, and deploy generalised models across diverse applications without retraining.
Common Applications
Practical applications include zero-shot and few-shot classification in customer support (routing queries by providing category examples), legal document analysis where domain-specific terminology is demonstrated in-prompt, and cross-lingual translation tasks where bilingual examples guide behaviour. Content moderation systems and domain-specific question-answering similarly rely on contextual demonstration.
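A hedged sketch of the first application, customer-support routing: the category names, example tickets, and prompt template below are hypothetical assumptions, intended only to show how category examples are provided in-prompt rather than learned via fine-tuning.

```python
# Hypothetical support categories and demonstration tickets.
CATEGORIES = ["billing", "technical", "account"]

DEMONSTRATIONS = [
    ("I was charged twice this month", "billing"),
    ("The app crashes when I upload a file", "technical"),
    ("How do I change my registered email?", "account"),
]

def routing_prompt(ticket: str) -> str:
    """Build a few-shot classification prompt for routing one ticket."""
    header = f"Classify each ticket as one of: {', '.join(CATEGORIES)}.\n\n"
    shots = "\n".join(f"Ticket: {t}\nCategory: {c}"
                      for t, c in DEMONSTRATIONS)
    # The model completes the final 'Category:' line with its prediction.
    return f"{header}{shots}\nTicket: {ticket}\nCategory:"

prompt = routing_prompt("Please reset my password")
```

Swapping the demonstrations is all it takes to retarget the same model to a new taxonomy, which is the deployment advantage described above.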
Key Considerations
Performance remains sensitive to example selection, example ordering, and prompt formulation; strong in-prompt guidance does not guarantee task mastery. Context-window limits and brittleness on complex reasoning tasks make careful validation necessary before production deployment.
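The ordering sensitivity noted above can be checked empirically. The sketch below scores every permutation of a demonstration set on a small dev set; `classify` is a stand-in for a real model call (an assumption, not a specific API), and the toy classifier used here is deliberately order-dependent to make the effect visible.

```python
from itertools import permutations

def evaluate_orderings(demonstrations, dev_set, classify):
    """Score each ordering of the demonstrations on a labelled dev set.

    `classify(ordered_examples, query)` stands in for a real model call
    and must return a predicted label for `query`.
    """
    scores = {}
    for order in permutations(demonstrations):
        correct = sum(classify(list(order), x) == y for x, y in dev_set)
        scores[order] = correct / len(dev_set)
    return scores

# Toy stand-in classifier: always predicts the label of the FIRST
# demonstration, so accuracy depends entirely on ordering -- an
# exaggerated illustration of the brittleness being measured.
demos = [("great", "positive"), ("awful", "negative")]
dev = [("superb", "positive"), ("fine", "positive")]
scores = evaluate_orderings(demos, dev, lambda exs, q: exs[0][1])
```

In practice the permutation count grows factorially, so production validation typically samples orderings rather than enumerating them.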
Referenced By — 2 terms mention In-Context Learning
Other entries in the wiki whose definition references In-Context Learning — useful for understanding how this concept connects across Artificial Intelligence and adjacent domains.
More in Artificial Intelligence
AI Robustness
Safety & Governance — The ability of an AI system to maintain performance under varying conditions, adversarial attacks, or noisy input data.
Causal Inference
Training & Inference — The process of determining cause-and-effect relationships from data, going beyond correlation to establish causation.
AI Accelerator
Infrastructure & Operations — Specialised hardware designed to speed up AI computations, including GPUs, TPUs, and custom AI chips.
Artificial Superintelligence
Foundations & Theory — A theoretical level of AI that surpasses human cognitive abilities across all domains, including creativity and social intelligence.
AI Agent Orchestration
Infrastructure & Operations — The coordination and management of multiple AI agents working together to accomplish complex tasks, routing subtasks between specialised agents based on capability and context.
Precision
Evaluation & Metrics — The ratio of true positive predictions to all positive predictions, measuring accuracy of positive classifications.
Planning Algorithm
Reasoning & Planning — An AI algorithm that generates a sequence of actions to achieve a specified goal from an initial state.
State Space Search
Reasoning & Planning — A method of problem-solving that represents all possible states of a system and searches for a path from initial to goal state.