
Few-Shot Learning

Overview

Direct Answer

Few-shot learning is a machine learning paradigm in which models learn to perform a task from only a small number of labelled examples—typically between two and ten instances per class. This approach differs fundamentally from traditional supervised learning, which requires thousands of examples, and leverages transfer learning or in-context learning mechanisms to generalise from minimal data.

How It Works

The mechanism relies on the model's pre-trained representations and ability to recognise patterns from limited exemplars. In large language models, few-shot capability emerges through in-context learning, where examples are provided within the input prompt without parameter updates. Meta-learning approaches train models explicitly to adapt quickly to new tasks, whilst metric-learning methods learn similarity functions that can classify unseen data points based on proximity to support examples.
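The two mechanisms above can be illustrated with a minimal sketch. The first function assembles an in-context prompt, where labelled exemplars are placed directly in the model's input with no parameter updates; the second is a metric-learning toy in the style of nearest-prototype classification, assigning a query to the class whose averaged support embedding is closest. The function names, label strings, and embedding vectors are illustrative assumptions, not part of any specific library.

```python
import numpy as np

def build_few_shot_prompt(examples, query):
    """In-context learning sketch: labelled exemplars followed by the query.
    The examples live entirely in the prompt; no weights are updated."""
    lines = [f"Text: {text}\nLabel: {label}" for text, label in examples]
    lines.append(f"Text: {query}\nLabel:")
    return "\n\n".join(lines)

def prototype_classify(support_embeddings, support_labels, query_embedding):
    """Metric-learning sketch: average each class's support embeddings into a
    prototype, then assign the query to the nearest prototype (Euclidean)."""
    classes = sorted(set(support_labels))
    prototypes = {
        c: np.mean([e for e, l in zip(support_embeddings, support_labels) if l == c], axis=0)
        for c in classes
    }
    return min(classes, key=lambda c: np.linalg.norm(query_embedding - prototypes[c]))

# Illustrative usage with hypothetical customer-service intents
examples = [
    ("Where is my parcel?", "shipping"),
    ("I want my money back", "refund"),
]
print(build_few_shot_prompt(examples, "My order never arrived"))
```

In practice the prompt would be sent to a pre-trained language model, and the support embeddings would come from a learned encoder; both are deliberately left abstract here.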

Why It Matters

Organisations benefit significantly from reduced labelling costs, faster deployment timelines, and the ability to address long-tail problems where collecting abundant training data is infeasible. In regulated industries and specialised domains—such as medical imaging or legal document analysis—few-shot methods accelerate model development whilst maintaining data privacy and compliance requirements.

Common Applications

Applications include intent classification in customer service chatbots, rapid personalisation in recommendation systems, and medical diagnosis from limited patient records. Few-shot techniques are particularly valuable in rare disease detection, multilingual natural language processing, and content moderation where class distributions are highly imbalanced.

Key Considerations

Performance often remains lower than that of fully supervised baselines, and the quality of the selected examples disproportionately influences outcomes. Practitioners must carefully curate exemplars and recognise that success depends heavily on the model's pre-training quality and on how similar the target task is to the pre-training distribution.
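One common curation heuristic is to select the exemplars most similar to the incoming query. The sketch below ranks candidate exemplars by cosine similarity of their embeddings and keeps the top k; the embeddings themselves are an assumption here and would come from whatever sentence encoder the practitioner has chosen.

```python
import numpy as np

def select_exemplars(candidate_embeddings, query_embedding, k=3):
    """Rank candidate exemplars by cosine similarity to the query embedding
    and return the indices of the top k. A small epsilon guards against
    division by zero for degenerate (all-zero) vectors."""
    c = np.asarray(candidate_embeddings, dtype=float)
    q = np.asarray(query_embedding, dtype=float)
    sims = (c @ q) / (np.linalg.norm(c, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(sims)[::-1][:k].tolist()
```

Similarity-based selection is only one heuristic; diversity-aware or label-balanced selection strategies may work better when class distributions are imbalanced.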

Cross-References

Artificial Intelligence
Machine Learning
