By Jenny L. Davis, Gertrude Conaway Vanderbilt Chair and Professor of Sociology at Vanderbilt University
Algorithmic bias is a perpetual problem. It is a problem rooted in history, manifesting in the present, and shaping the future into troubling form. This is not a problem with a technical fix, a box to be ticked, or an obvious set of actors to blame. It’s diffuse, entrenched, and the subject of significant attention.
That attention, framed through the prism of ‘fairness’, has not been especially effective, if effectiveness is measured in greater justice and less harm. With each new advance—automated decision systems, facial recognition, generative AI—social stratifications replicate, amplify, and scale.
The fairness paradigm isn’t working. It’s time for something else. Here, I pose algorithmic reparation as an orienting framework and worldbuilding project, displacing fairness in favour of redress. This draws from a burgeoning movement across fields and domains.
Algorithmic bias: From novel insight to social fact
Algorithmic bias is not a new issue. Journalists and academics have been writing about it for years, not to mention advocacy groups and those who have been directly targeted, erased, or otherwise affected. Many of us may have experienced some form of algorithmic bias ourselves. What started as a scholarly insight, journalistic revelation, and activist refrain is now common knowledge and everyday fact.
But while the problem is well worn, the question of what to do about it remains unresolved. This question has been met with a flurry of responses. By and large, these are driven by an imperative towards fairness.
Before we go any further, some definitions. These are, of course, working definitions.
- Computing – a field of study and practice concerned with the processing of information through hardware, software, and data.
- Algorithms – sets of rules that instruct computational processes.
- Artificial Intelligence – machinic entities that mimic human behaviour and decision-making, animated by rule-based and machine-learning algorithms.
- Fairness – the moral imperative to treat people the same as one another in a neutral and objective process.
- Reparation – redress for systemic and/or systematic harms through attention to social categories and how those social categories matter.
The fairness paradigm
So, back to fairness. The logic of fairness in computing is deeply embedded and widespread. It features in, and frames, flagship organisations and academic conferences. A quick search for ‘fair machine learning’ will leave a screen plastered with results, while major companies hire teams and form departments to execute fairness initiatives.
In a computing context, in the most basic sense, fairness means erasing socially relevant differences from mathematical models so that those models treat everyone the same.
This manifests through algorithmic fairness, an umbrella concept for algorithmic tools that use computation and statistics to neutralise bias and remove both direct and proxy indicators of protected-class attributes such as race, class, gender, disability, and geography.
These efforts occupy practitioners of Fair Machine Learning (FML), a technical field working to optimize fair processes and outcomes when inferring and predicting with algorithmic applications.
This field operationalises fairness through three broad definitions: anti-classification, classification parity, and calibration. Without getting too technical, these refer to the removal of protected-class attributes, equal error rates between groups, and models calibrated to base-level differences between groups, respectively. In practice, this might look like removing race and gender from a predictive model (anti-classification), ensuring that the model is equally accurate across racial and gender categories (classification parity), or ensuring the model treats racial and gender groups the same while accounting for empirical differences between them (calibration).
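To make these operationalisations concrete, here is a minimal, hypothetical sketch in Python (NumPy only). The synthetic data, variable names, toy scoring rule, and thresholds are invented purely for illustration rather than drawn from the FML literature or any particular system: dropping the protected column illustrates anti-classification, comparing false-positive rates across groups illustrates a classification-parity check, and comparing predicted positive rates with observed base rates within each group gestures at calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a binary protected attribute ("group"), two other
# features, and a binary outcome. Entirely illustrative.
n = 1000
group = rng.integers(0, 2, size=n)
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([group, x1, x2])
y = (x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Anti-classification: drop the protected attribute (column 0) before modelling.
X_anticlass = X[:, 1:]

# A toy "model": a fixed linear score, thresholded at zero.
scores = 0.9 * X_anticlass[:, 0] + 0.4 * X_anticlass[:, 1]
y_hat = (scores > 0).astype(int)

# Classification parity: compare error rates (here, false-positive rates) across groups.
for g in (0, 1):
    negatives = (group == g) & (y == 0)
    print(f"group {g}: false-positive rate = {y_hat[negatives].mean():.3f}")

# Calibration (roughly): within each group, compare predicted positives to observed base rates.
for g in (0, 1):
    members = group == g
    print(f"group {g}: predicted positive rate = {y_hat[members].mean():.3f}, "
          f"observed base rate = {y[members].mean():.3f}")
```

In practice, FML researchers apply these criteria to trained models and formalise them mathematically; the sketch only shows where each definition would bite.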
This all seems quite reasonable, virtuous, even rigorous. So, what’s wrong with ‘fair’?
The problem
The biggest problem with fair machine learning is that fairness isn’t working. The smartest minds, most prestigious institutions, millions of grant dollars, and reams of articles are all geared towards making AI and algorithmic systems fairer. Yet, bias and discrimination persist.
A few years back, I published a paper with Apryl Williams and Michael Yang, making a general case for why we think fairness isn’t up to the task of ameliorating algorithmic biases and related discriminatory effects. What it boils down to is a faulty assumption of meritocratic social systems. We refer to this misguided view as algorithmic idealism, or the erroneous notion that with good enough data and precise enough math, computation can unlock society’s latent meritocracy.
The trouble is that society is not inherently meritocratic, nor has it ever been; it is deeply stratified, and historically and systemically so. Algorithmic idealism is animated by an interrelated set of fallacies and flaws, such as techno-solutionism, a commitment to neutrality, and inattention to power and privilege when considering disadvantage.
An alternative proposal: Algorithmic reparation
Algorithmic reparation is an alternative to the fairness paradigm, focusing not on erasure or neutrality but on historical and systemic redress.
Resolving the fallacies and flaws of fairness, algorithmic reparation casts a critical lens on algorithmic systems, keeping open the possibility that computing and its applications are ill suited to the problem and purpose at hand. Rather than revealing and achieving neutrality, a reparative approach starts with a baseline recognition of structural inequity to be observed and corrected. Demographics are not to be ignored or erased but attended to as anchors of identity and conduits of opportunity.
Here, technologies and technical fixes are never the start or end point, but one (often minor) piece of wraparound, holistic, and community-driven solutions. And while fairness focuses primarily on the means and modes of disadvantage, algorithmic reparation factors in the ways disadvantage feeds, and follows from, privilege and power.
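The reparative approach is deliberately not a technical recipe, and no snippet can capture its wraparound, community-driven character. Still, as a rough and purely hypothetical contrast with fairness-as-erasure, one small technical gesture in this direction is to keep group membership in view and weight for under-representation rather than delete the group column. The sketch below (Python with NumPy, synthetic data, an invented example of my own) is meant only to show the difference in orientation, not to stand in for the framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic records in which one group is under-represented in the data,
# standing in for histories of exclusion from the record itself.
n = 1000
group = (rng.random(n) < 0.2).astype(int)   # group 1 is ~20% of records

# Fairness-as-erasure would simply drop `group`. A group-aware alternative
# keeps it and upweights records from the under-represented group so each
# group carries equal aggregate weight in training or evaluation.
counts = np.bincount(group, minlength=2)
weights = (n / (2 * counts))[group]          # inverse-frequency weight per record

print("records per group:", counts)
print("total weight per group:",
      [float(weights[group == g].sum()) for g in (0, 1)])
```

Even then, on the terms of this article such a reweighting is at most one (often minor) piece of a response; whether any such adjustment amounts to redress depends on the historical, institutional, and community context that surrounds it.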
Why algorithms?
At the start, I positioned algorithmic reparation as a worldbuilding project. This may seem odd. How can algorithmic interventions refashion whole worlds? The focus on algorithms is strategic, resting on the notion that algorithms themselves are not always (or even often) the point, but rather the point of access for social, cultural, and institutional structures.
Algorithms are the linchpin of AI and computing, which are now socially integral and transformative in their effects. These algorithms are everywhere – in organisations and institutions, and throughout the mundanities of social life. Data, algorithms, and the AI they enable thus touch the myriad places where inequality lives and festers. Moreover, algorithms serve as a strategic entry point for reparative justice at a time when sweeping claims to reparation and redress may be too big, too much, too ideologically aggressive, falling away without ever taking hold.
The fairness paradigm dominates today, but its record leaves much to be desired. A reparative approach suggests another way forward. For more on algorithmic reparation, you can read Big Data & Society’s special collection on the topic, and stay tuned for the book, which Apryl Williams and I promise will be fully drafted before the first new year’s chime of 2025 rings.
For more information on the work of the Centre for Sociodigital Futures, join our mailing list, follow us on X and LinkedIn, or visit our webpage.
Jenny L. Davis is the Gertrude Conaway Vanderbilt Chair and Professor of Sociology at Vanderbilt University, with an Honorary Professorship at the Australian National University. She works at the intersection of social psychology and technology studies, focusing on the ways social forces embed within and are affected by technological systems. Read more about Jenny’s work at jennyldavis.com