Overview
Direct Answer
An attention mechanism is a neural network component that dynamically weights input elements to selectively focus on the most relevant information when computing each output representation. It enables models to learn which parts of the input to prioritise, rather than treating all inputs equally.
How It Works
The mechanism computes attention scores as scaled dot products between query and key vectors, normalises the scores with a softmax to obtain attention weights, then takes a weighted sum of the value vectors. This allows the network to assign higher importance to semantically relevant positions whilst suppressing irrelevant ones, producing context-dependent output representations.
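The computation above can be sketched in a few lines of NumPy. This is a minimal illustration of single-head scaled dot-product attention; the shapes, random inputs, and function names are illustrative, and learned projection matrices are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> output (n_q, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights          # weighted sum of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries, dimension 4
K = rng.normal(size=(3, 4))   # 3 keys
V = rng.normal(size=(3, 4))   # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

Note that each row of `w` sums to 1: every output is a convex combination of the value vectors, with more weight on positions whose keys align with the query.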
Why It Matters
Attention significantly improves model accuracy on sequence-to-sequence tasks, enables parallel computation across positions (reducing training time compared with recurrent models), and aids interpretability by revealing which input regions influenced specific predictions. These improvements directly enhance performance in translation, summarisation, and question-answering systems.
Common Applications
Machine translation (encoder-decoder architectures), natural language understanding in transformer-based models, image captioning, speech recognition, and clinical text analysis. Multi-head variants are standard in contemporary large language models and vision transformers.
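Since multi-head variants are the standard in contemporary models, here is a hedged sketch of the head-splitting idea: the model dimension is divided into independent subspaces that attend separately and are then concatenated. Learned projection matrices are omitted for brevity, and all names and shapes are illustrative.

```python
import numpy as np

def multi_head_attention(Q, K, V, n_heads):
    """Illustrative multi-head attention without learned projections.
    Q, K, V: (seq_len, d_model); d_model must be divisible by n_heads."""
    n_q, d_model = Q.shape
    d_head = d_model // n_heads
    outputs = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        q, k, v = Q[:, sl], K[:, sl], V[:, sl]     # this head's subspace
        scores = q @ k.T / np.sqrt(d_head)
        e = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = e / e.sum(axis=-1, keepdims=True)
        outputs.append(weights @ v)
    # Concatenating the heads restores the model dimension.
    return np.concatenate(outputs, axis=-1)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))                 # self-attention: Q = K = V = X
out = multi_head_attention(X, X, X, n_heads=2)
print(out.shape)  # (5, 8)
```

Each head can specialise in a different relationship (e.g. syntactic vs. positional), which is part of why the multi-head form dominates in large language models and vision transformers.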
Key Considerations
Computational complexity scales quadratically with sequence length, limiting applicability to very long documents without approximation techniques. Practitioners must balance interpretability gains against increased model complexity and memory requirements during inference.
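The quadratic cost is easy to see concretely: the attention score matrix alone is seq_len × seq_len per head. A back-of-the-envelope sketch, with hypothetical batch size, head count, and float32 precision chosen purely for illustration:

```python
def attention_matrix_bytes(seq_len, batch=1, heads=12, dtype_bytes=4):
    # One (seq_len x seq_len) float score matrix per head per batch element.
    return batch * heads * seq_len * seq_len * dtype_bytes

for n in (512, 2048, 8192):
    print(n, f"{attention_matrix_bytes(n) / 2**20:.0f} MiB")
# 512  -> 12 MiB, 2048 -> 192 MiB, 8192 -> 3072 MiB
```

Doubling the sequence length quadruples the memory for the score matrices, which is why long-document settings rely on approximation or IO-aware techniques such as Flash Attention (listed below).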
Cross-References (1)
Referenced By: 3 terms mention Attention Mechanism
Other entries in the wiki whose definition references Attention Mechanism — useful for understanding how this concept connects across Deep Learning and adjacent domains.
More in Deep Learning
Word Embedding (Language Models)
Dense vector representations of words where semantically similar words are mapped to nearby points in vector space.
Layer Normalisation (Training & Optimisation)
A normalisation technique that normalises across the features of each individual sample rather than across the batch.
Diffusion Model (Generative Models)
A generative model that learns to reverse a gradual noising process, generating high-quality samples from random noise.
Flash Attention (Architectures)
An IO-aware attention algorithm that reduces memory reads and writes by tiling the attention computation, enabling faster training of long-context transformer models.
Softmax Function (Training & Optimisation)
An activation function that converts a vector of numbers into a probability distribution, commonly used in multi-class classification.
Sigmoid Function (Training & Optimisation)
An activation function that maps input values to a range between 0 and 1, useful for binary classification outputs.
Adapter Layers (Language Models)
Small trainable modules inserted between frozen transformer layers that enable task-specific adaptation without modifying the original model weights.
Gradient Clipping (Training & Optimisation)
A technique that caps gradient values during training to prevent the exploding gradient problem.