Deep Learning Architectures

Batch Normalisation

Overview

Direct Answer

Batch normalisation is a technique that rescales and recentres the inputs to each layer during neural network training by normalising activations across a mini-batch. This approach reduces internal covariate shift—the phenomenon where the distribution of layer inputs changes during training—thereby enabling faster convergence and improved stability.

How It Works

For each mini-batch, the technique computes the mean and variance of activations across training samples, then standardises these values using z-score normalisation. Learnable scale and shift parameters (gamma and beta) are then applied per feature, allowing the network to recover expressivity. During inference, a running estimate of population statistics computed from training batches replaces the mini-batch statistics.
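The per-feature computation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation; the function name `batch_norm_train` and the `eps` default are illustrative choices.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Batch-normalise a (batch, features) activation matrix.

    Statistics are computed per feature across the mini-batch, then
    the learnable scale (gamma) and shift (beta) are applied.
    """
    mean = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                      # per-feature mini-batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # z-score normalisation
    return gamma * x_hat + beta

# Example: a mini-batch of 4 samples with 3 features
x = np.random.randn(4, 3) * 5 + 2
y = batch_norm_train(x, gamma=np.ones(3), beta=np.zeros(3))
# With gamma = 1 and beta = 0, each feature of y has
# approximately zero mean and unit variance across the batch
```

With gamma and beta left at their identity values the output is purely standardised; training adjusts them so the network can restore any scale and offset that normalisation removed.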

Why It Matters

Normalisation dramatically accelerates training convergence, reduces sensitivity to weight initialisation, and enables use of higher learning rates, directly reducing time-to-deployment and computational cost. Organisations deploying deep learning systems benefit from improved model stability and generalisation performance, particularly when training on large datasets.

Common Applications

The technique is standard in convolutional neural networks for image classification, object detection, and broader computer vision pipelines. Recurrent architectures and transformer-based language models, by contrast, typically rely on layer normalisation, a per-sample variant that does not depend on batch statistics and is better suited to variable sequence lengths.

Key Considerations

Batch normalisation introduces a dependency on batch size; very small batches produce unreliable statistics whilst very large batches reduce computational efficiency. The distinction between training and inference behaviour requires careful implementation, and layer normalisation or group normalisation may be preferable in certain contexts such as recurrent networks or variable-batch settings.
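The train/inference distinction noted above can be made concrete with a small sketch: during training the layer normalises with mini-batch statistics and updates exponential running estimates; at inference the running estimates are used instead. The class name `BatchNorm1D` and the `momentum` value are illustrative assumptions.

```python
import numpy as np

class BatchNorm1D:
    """Minimal batch-norm sketch showing the train/inference split."""

    def __init__(self, num_features, momentum=0.9, eps=1e-5):
        self.gamma = np.ones(num_features)   # learnable scale
        self.beta = np.zeros(num_features)   # learnable shift
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.momentum = momentum
        self.eps = eps

    def __call__(self, x, training=True):
        if training:
            # Normalise with mini-batch statistics and update the
            # exponential running estimates of the population statistics
            mean, var = x.mean(axis=0), x.var(axis=0)
            m = self.momentum
            self.running_mean = m * self.running_mean + (1 - m) * mean
            self.running_var = m * self.running_var + (1 - m) * var
        else:
            # Inference: replace batch statistics with running estimates
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta
```

Because inference reads the stored running estimates rather than recomputing statistics, a single sample (or any batch size) produces deterministic output, which is why forgetting to switch modes is a common source of train/serve skew.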

Cross-References

Deep Learning
