Algorithmic Ethics in Public Services

Governments around the world are increasingly relying on automated decision-making systems to allocate public resources. Algorithms now help determine who receives welfare payments, which families are flagged for child protection investigations, who qualifies for public housing, and how healthcare resources are distributed. These systems promise efficiency, consistency, and the elimination of human bias. But they also raise profound questions about moral responsibility that our existing frameworks struggle to answer.

When an algorithm denies someone access to disability support, who is morally responsible for that decision? The software developer who wrote the code? The data scientist who trained the model? The government official who approved its deployment? The politician who cut the budget that made automation seem necessary? Or is the responsibility so diffused across so many actors that no one is truly accountable?

The Problem of Many Hands

Philosophers have long recognised what Dennis Thompson called "the problem of many hands" — the difficulty of assigning moral responsibility when outcomes result from the actions of multiple agents. This problem is not unique to algorithmic systems. It arises whenever complex institutional structures mediate between individual decisions and their consequences. But algorithmic decision-making intensifies the problem in several important ways.

First, the technical complexity of machine learning systems makes it genuinely difficult to explain why a particular decision was made. When a neural network trained on historical data denies someone a welfare payment, the "reasoning" behind that decision may be opaque even to the system's developers. This opacity undermines the most basic requirement of accountability: that the decision-maker be able to explain and justify their decision.

Second, algorithmic systems create temporal distance between the decision to deploy the system and its downstream effects. The choices made during system design — what data to use, what outcomes to optimise for, how to handle edge cases — shape thousands or millions of subsequent decisions, often in ways that were not anticipated or intended. This makes it difficult to draw a clear causal line between any individual choice and any particular harm.

Third, algorithmic systems are often presented as neutral tools that merely implement existing policy. This framing obscures the fact that design choices are inevitably value-laden. The decision to optimise for fraud detection rather than benefit access, for example, reflects a normative judgement about which errors are more acceptable — false positives (denying benefits to eligible people) or false negatives (providing benefits to ineligible people). These trade-offs are moral choices disguised as technical specifications.
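To make the trade-off concrete, the sketch below (in Python, with invented risk scores and labels rather than data from any real system) shows how moving a single decision threshold on a fraud-risk score shifts errors from one category to the other.

```python
# Illustrative sketch only: invented scores and labels showing how a single
# decision threshold trades false positives against false negatives.

# Each case pairs a hypothetical fraud-risk score with the ground truth
# (True = genuinely ineligible, False = eligible).
cases = [
    (0.92, True), (0.81, False), (0.77, True), (0.64, False),
    (0.58, False), (0.41, True), (0.33, False), (0.12, False),
]

def error_counts(threshold):
    """Count both error types when claims scoring at or above `threshold` are denied."""
    false_positives = sum(1 for score, ineligible in cases
                          if score >= threshold and not ineligible)  # eligible person denied
    false_negatives = sum(1 for score, ineligible in cases
                          if score < threshold and ineligible)       # ineligible person paid
    return false_positives, false_negatives

for threshold in (0.3, 0.6, 0.9):
    fp, fn = error_counts(threshold)
    print(f"threshold {threshold:.1f}: eligible people denied = {fp}, ineligible people paid = {fn}")
```

Choosing where to set that threshold is the normative judgement in question: a lower threshold catches more ineligible claims at the price of denying more eligible people, and no purely technical criterion can settle which price is acceptable.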

Existing Frameworks and Their Limits

Consequentialist approaches to responsibility focus on outcomes. On this view, responsibility falls on whoever could most effectively have prevented the harmful outcome. This has the advantage of directing attention to practical questions about system design and oversight. But it struggles with the fact that harmful outcomes in algorithmic systems are often statistical rather than individual — the system produces a predictable rate of errors, but which specific individuals will be harmed is not knowable in advance.

Deontological approaches focus on duties and obligations. Public officials have a duty to treat citizens fairly, to respect their dignity, and to provide adequate justification for decisions that affect their welfare. Algorithmic systems that cannot provide such justification arguably violate these duties, regardless of their aggregate accuracy. But this framework has difficulty specifying whose duty is violated when the system operates as designed but produces unjust outcomes for particular individuals.

Virtue ethics approaches focus on the character traits and dispositions that should guide institutional decision-making. A virtuous institution would exercise appropriate caution in deploying automated systems, maintain genuine concern for the people affected by its decisions, and cultivate the intellectual humility to recognise the limits of its technical capabilities. This approach has the advantage of directing attention to institutional culture and practices, but it can seem frustratingly vague when specific accountability is needed.

Toward a Framework for Algorithmic Accountability

None of these frameworks alone is adequate to the challenge of algorithmic accountability. What is needed is an approach that draws on insights from each while addressing the specific features of algorithmic decision-making that make traditional accountability mechanisms insufficient.

Such a framework would need to address at least four dimensions. First, prospective responsibility: the obligation to anticipate and mitigate potential harms before a system is deployed. This includes rigorous testing for bias (of the kind sketched after the fourth dimension below), meaningful stakeholder consultation, and honest assessment of the risks of automation.

Second, retrospective accountability: clear mechanisms for identifying who is responsible when harms occur, including the ability to audit algorithmic decisions, explain their basis, and provide effective remedies to affected individuals.

Third, structural responsibility: recognition that algorithmic harms are often the product of institutional structures and incentives rather than individual failures. Addressing these harms may require changes to procurement processes, regulatory frameworks, and the political economy of government technology.

Fourth, democratic legitimacy: the principle that decisions affecting people's fundamental interests should be subject to democratic oversight and control. Algorithmic systems that make consequential decisions about welfare, housing, and healthcare should not be treated as purely technical matters to be delegated to specialists. They involve value choices that belong in the public domain.
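One concrete form the bias testing mentioned under the first dimension can take is a disaggregated error audit: measuring, before deployment, how often eligible people are wrongly denied within each affected group. The sketch below uses invented group names and counts purely to illustrate the shape of such an audit; a real one would involve larger samples, multiple fairness metrics, and tests of statistical significance.

```python
# Minimal sketch of a disaggregated error audit, with invented figures.
# For each group: (eligible people wrongly denied, eligible people assessed).
audit_sample = {
    "metro":    (12, 400),
    "regional": (31, 380),
    "remote":   (27, 190),
}

for group, (wrongly_denied, eligible_total) in audit_sample.items():
    rate = wrongly_denied / eligible_total
    print(f"{group:>9}: {rate:.1%} of eligible applicants wrongly denied")

# A large gap between groups (here, remote applicants are wrongly denied at
# several times the metro rate) is the kind of finding that prospective
# responsibility obliges an agency to investigate before deployment.
```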

The Stakes

These are not abstract philosophical questions. In Australia, the Robodebt scheme used an automated system to issue debt notices to welfare recipients based on averaged income data, resulting in hundreds of thousands of incorrect demands for repayment. The human cost — financial hardship, psychological distress, and in some cases loss of life — was immense. The Royal Commission into Robodebt found systematic failures of governance, accountability, and concern for the people affected.
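The central flaw in the income-averaging method can be shown with simple arithmetic. The figures below are invented, but the mechanism mirrors the one described above: annual income reported to the tax office was spread evenly across every fortnight and compared with what the person had reported while on payments, even when their earnings were in fact concentrated in part of the year.

```python
# Illustrative arithmetic only: invented figures showing how income averaging
# can manufacture an apparent debt for someone whose earnings were uneven.

annual_income = 26_000          # income reported to the tax office for the year
fortnights_in_year = 26
fortnights_on_payments = 13     # the person only claimed support for half the year

# While on payments the person genuinely earned nothing, and reported nothing.
actual_income_per_payment_fortnight = 0

# The averaging method spreads the annual figure evenly across every fortnight,
# including the ones in which the person was unemployed.
averaged_income_per_fortnight = annual_income / fortnights_in_year   # 1000.0

# Each payment fortnight therefore shows $1,000 of "undeclared" income that
# never existed, and the apparent overpayment accumulates into a debt notice.
apparent_undeclared = (averaged_income_per_fortnight
                       - actual_income_per_payment_fortnight) * fortnights_on_payments

print(f"Averaged income per fortnight: ${averaged_income_per_fortnight:,.0f}")
print(f"Apparent undeclared income over {fortnights_on_payments} fortnights: "
      f"${apparent_undeclared:,.0f}")
```

The apparent debt is an artefact of assuming income was uniform across the year, which is why the method produced incorrect notices at scale.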

Robodebt demonstrated what happens when algorithmic efficiency is pursued without adequate attention to the moral dimensions of automated decision-making. The efficiency gains were real, but so were the harms — harms that fell disproportionately on vulnerable people who lacked the resources to challenge incorrect decisions.

As governments continue to adopt automated systems, the philosophical questions raised by algorithmic accountability will only become more pressing. We need frameworks that are sophisticated enough to handle the complexity of these systems and robust enough to ensure that the most vulnerable members of our society are not made to bear the costs of technological progress. Philosophy cannot build those frameworks alone, but it can — and must — contribute the moral clarity that the conversation demands.
