Overview
Direct Answer
AI bias refers to systematic disparities in model predictions or outputs that disadvantage particular groups or outcomes, stemming from non-representative training data, encoded human prejudices, or algorithmic design choices that amplify historical inequities. These errors are distinct from random model noise and propagate through downstream decisions.
How It Works
Bias emerges when training datasets reflect historical imbalances—for example, loan approval systems trained on decades of discriminatory lending practices. Algorithms optimise to minimise loss across aggregate populations, inadvertently learning to replicate or magnify disparities present in source data. Feature selection, sampling strategies, and loss function design further influence which groups experience worse performance or harmful outcomes.
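The aggregate-versus-group distinction above can be made concrete with a small sketch. The labels, predictions, and group assignments below are invented for illustration; the point is only that a single headline accuracy can mask a large gap between groups of unequal size.

```python
# Illustrative (hypothetical) labels, predictions, and group membership.
# Group A is the majority (6 examples); group B is underrepresented (4).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1]
group  = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"]

def accuracy(yt, yp):
    """Fraction of predictions matching the labels."""
    return sum(t == p for t, p in zip(yt, yp)) / len(yt)

# Aggregate accuracy looks passable, but it is dominated by group A.
overall = accuracy(y_true, y_pred)  # 0.6

# Disaggregating by group exposes the disparity.
by_group = {
    g: accuracy([t for t, gg in zip(y_true, group) if gg == g],
                [p for p, gg in zip(y_pred, group) if gg == g])
    for g in set(group)
}  # A ≈ 0.83, B = 0.25
```

Because the loss is averaged over all examples, a model can reach this kind of result while optimising "correctly" on aggregate, which is exactly how disparities in source data are replicated rather than surfaced.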
Why It Matters
Organisations face regulatory exposure under anti-discrimination law, operational risk from public backlash, and accuracy degradation in underrepresented segments. Financial services, healthcare, recruitment, and criminal justice systems experience material harm when biased models deny loans, misdiagnose conditions, reject qualified candidates, or influence sentencing recommendations.
Common Applications
Facial recognition systems exhibit higher error rates on darker skin tones; hiring algorithms have screened out female candidates; medical risk scores underestimate disease burden in Black patients; credit scoring models perpetuate lending disparities across protected groups.
Key Considerations
Detecting and correcting bias requires multi-stage governance—auditing training data composition, validating performance across demographic segments, and accepting that mitigation often involves accuracy-fairness tradeoffs. No single metric captures bias comprehensively across all stakeholder perspectives.
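One of the segment-level validations mentioned above can be sketched as a simple audit of selection rates per group. This is a minimal illustration of one fairness metric (the demographic-parity gap, i.e. the spread in positive-prediction rates across groups); the group labels and predictions are hypothetical, and as noted, no single metric of this kind captures bias comprehensively.

```python
from collections import defaultdict

def selection_rates(preds, groups):
    """Fraction of positive (1) predictions per group."""
    pos, tot = defaultdict(int), defaultdict(int)
    for p, g in zip(preds, groups):
        tot[g] += 1
        pos[g] += p
    return {g: pos[g] / tot[g] for g in tot}

def demographic_parity_gap(preds, groups):
    """Largest spread in selection rates across groups (0 = parity)."""
    rates = selection_rates(preds, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: group A is selected far more often than B.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

In practice an audit would run several such metrics (e.g. equalised odds, calibration by group) side by side, since they can disagree and often cannot all be satisfied at once.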