
Matrix Factorisation

Overview

Direct Answer

Matrix factorisation is a mathematical technique that approximates a large matrix as a product of two or more smaller factor matrices, typically to uncover latent patterns or reduce computational complexity. It is foundational to collaborative filtering and dimensionality reduction in machine learning applications.
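As a minimal sketch of the idea, a small matrix can be split into two low-rank factors with NumPy's SVD routine; the 4x4 rating matrix and the choice of two latent factors below are invented for illustration:

```python
import numpy as np

# Hypothetical 4x4 user-item rating matrix (rows: users, columns: items).
ratings = np.array([
    [5.0, 4.0, 1.0, 1.0],
    [4.0, 5.0, 1.0, 2.0],
    [1.0, 1.0, 5.0, 4.0],
    [1.0, 2.0, 4.0, 5.0],
])

# Decompose into two rank-2 factor matrices via truncated SVD:
# ratings ~ user_factors (4x2) @ item_factors (2x4),
# where k = 2 is the number of latent factors.
u, s, vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
user_factors = u[:, :k] * s[:k]   # each row: a user's position in latent space
item_factors = vt[:k, :]          # each column: an item's position in latent space

approx = user_factors @ item_factors
# By the Eckart-Young theorem, the Frobenius reconstruction error
# equals the energy in the discarded singular values.
error = np.linalg.norm(ratings - approx)
```

The two factor matrices together hold 16 numbers here, the same as the original, but for a realistic matrix of a million users and thousands of items the factors are vastly smaller than the original.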

How It Works

The method approximates an original matrix (often sparse, such as user-item ratings) as the product of two or more smaller matrices whose shared inner dimension corresponds to the number of latent factors. Singular Value Decomposition (SVD) computes the factors in closed form, whilst methods such as non-negative matrix factorisation iteratively adjust the factor matrices to minimise reconstruction error; either way, the factors expose hidden features that explain observed data structure.

Why It Matters

The technique reduces memory footprint and computational cost whilst preserving predictive signal, enabling scalable processing of sparse datasets. In recommendation engines and information retrieval, it delivers significant accuracy gains and faster inference compared to dense similarity approaches, directly impacting user engagement and operational efficiency.
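A rough back-of-envelope calculation illustrates the memory saving; the user, item, and factor counts below are hypothetical:

```python
# Hypothetical scale: one million users, one hundred thousand items,
# fifty latent factors.
n_users, n_items, k = 1_000_000, 100_000, 50

dense_params = n_users * n_items            # full matrix: 1e11 entries
factored_params = (n_users + n_items) * k   # two factor matrices: 5.5e7 entries

compression = dense_params / factored_params  # roughly 1800x fewer parameters
```

Inference is cheaper for the same reason: scoring one user-item pair is a k-length dot product rather than a lookup into (or similarity scan over) the dense matrix.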

Common Applications

Primary use cases include collaborative filtering for e-commerce and streaming platforms (predicting user preferences from implicit feedback), topic modelling in document analysis, and feature extraction in image and signal processing. It is also employed in link prediction for social networks and latent semantic indexing for search systems.
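The collaborative-filtering use case can be sketched with plain stochastic gradient descent over only the observed entries; the tiny ratings table, learning rate, and regularisation strength below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sparse ratings as (user, item, rating) triples;
# the pair (user 1, item 2) is deliberately unobserved.
observed = [(0, 0, 5.0), (0, 1, 4.0), (0, 2, 1.0),
            (1, 0, 4.0), (1, 1, 5.0),
            (2, 0, 1.0), (2, 1, 1.0), (2, 2, 5.0),
            (3, 1, 1.0), (3, 2, 4.0)]

n_users, n_items, k = 4, 3, 2
P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors

def sse():
    # Sum of squared errors over observed entries only.
    return sum((r - P[u] @ Q[i]) ** 2 for u, i, r in observed)

initial = sse()
lr, reg = 0.05, 0.01
for _ in range(2000):
    for u, i, r in observed:
        err = r - P[u] @ Q[i]
        pu = P[u].copy()  # use the pre-update value for Q's gradient
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

# Predict the unobserved rating from the learned factors.
prediction = P[1] @ Q[2]
```

Fitting only observed entries is what lets this scale to extremely sparse matrices, and it is why the learned factors generalise to pairs never seen during training.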

Key Considerations

Practitioners must balance model complexity and overfitting risk, select appropriate factorisation rank (number of latent factors), and handle sparsity carefully. Cold-start problems persist when users or items lack historical interactions, and interpretability of discovered latent factors remains challenging.
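One common heuristic for selecting the rank is to look for an "elbow" in the reconstruction error as the rank grows; the synthetic low-rank-plus-noise data below is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data whose true latent rank is 3, plus a little noise.
signal = rng.random((20, 3)) @ rng.random((3, 15))
X = signal + 0.01 * rng.standard_normal((20, 15))

u, s, vt = np.linalg.svd(X, full_matrices=False)

# The best rank-k reconstruction error equals the energy left in the
# discarded singular values, so a full error-vs-rank curve is cheap.
errors = [np.sqrt((s[k:] ** 2).sum()) for k in range(1, 6)]
# The error drops sharply until the rank reaches the true latent rank,
# then flattens: the "elbow" suggests a good factorisation rank.
```

Pushing the rank past the elbow mostly fits noise, which is precisely the overfitting risk noted above; in practice, held-out validation error is the more reliable guide when one exists.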

Cross-References

Machine Learning
