Gradient Clipping

Overview

Gradient clipping is a training stabilisation technique that constrains the magnitude of gradients during backpropagation, preventing their unbounded growth. By capping the gradient norm or individual gradient values at a specified threshold, it stabilises training in deep networks prone to exploding gradients.

How It Works

During each backpropagation pass, gradients are computed through the network layers. If the gradient norm exceeds a predefined threshold, the gradient vector is rescaled proportionally so its norm equals the threshold; alternatively, element-wise clipping bounds each gradient component directly to a fixed range. L2 norm clipping preserves the gradient's direction, whereas element-wise clipping can alter it.
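The two approaches above can be sketched as follows. This is a minimal NumPy illustration, not taken from any particular framework; the function names `clip_by_global_norm` and `clip_by_value` and the small epsilon guard are illustrative choices.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """L2 norm clipping: if the combined L2 norm of all gradients exceeds
    max_norm, rescale every gradient by max_norm / total_norm, so the
    direction of the overall update is preserved."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total_norm + 1e-12))  # epsilon avoids division by zero
    return [g * scale for g in grads]

def clip_by_value(grads, clip):
    """Element-wise clipping: bound each gradient component to [-clip, clip].
    Components are clipped independently, so the direction may change."""
    return [np.clip(g, -clip, clip) for g in grads]
```

For example, a gradient `[3.0, 4.0]` has L2 norm 5; norm clipping with threshold 1 rescales it to `[0.6, 0.8]` (same direction, unit norm), whereas value clipping with bound 1 would produce `[1.0, 1.0]` (a different direction).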

Why It Matters

Exploding gradients destabilise training, cause numerical overflow, and prevent convergence, particularly in recurrent neural networks and very deep architectures. Clipping enables reliable training in these scenarios, permits larger learning rates than would otherwise be safe, and improves robustness across diverse initialisation schemes.

Common Applications

The technique is standard in natural language processing, particularly in sequence-to-sequence architectures and transformers. It is also employed in reinforcement learning policy-gradient methods and in models processing variable-length sequences, where gradient magnitudes can vary widely between batches.

Key Considerations

Aggressive (low) clipping thresholds may distort gradient information and slow convergence, whilst lenient (high) thresholds offer minimal protection. The optimal threshold depends on the dataset and architecture, and typically requires empirical tuning guided by monitoring of gradient norm statistics.
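One way to ground that tuning is to log the per-step gradient norm and pick a threshold from its distribution. The sketch below assumes a percentile heuristic (clip only outlier spikes); the function name and the choice of the 90th percentile are illustrative, not prescribed by the source.

```python
import numpy as np

def suggest_clip_threshold(norm_history, percentile=90):
    """Summarise observed per-step gradient norms and suggest a clipping
    threshold at a high percentile, so only rare spikes are rescaled
    and typical gradients pass through unchanged."""
    norms = np.asarray(norm_history, dtype=float)
    return {
        "mean": float(norms.mean()),
        "max": float(norms.max()),
        "suggested_threshold": float(np.percentile(norms, percentile)),
    }
```

With this heuristic, a history dominated by norms near 1 with an occasional spike to 10 yields a threshold slightly above the typical norm, leaving ordinary updates untouched while taming the spikes.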
