Robodebt and the Dangers of Datafication: Fast Machines and Fallible Humans

In today’s Five-Minute Friday Read, Mark Andrejevic, Professor in Media, Film and Journalism at Monash University, explores the robodebt controversy and the tendency of automated systems to exacerbate pre-existing imbalances of power.

The Federal Government recently announced that the Royal Commission’s investigation into Centrelink’s automated debt recovery system (aka ‘robodebt’) will focus, among other things, on how to avoid similar debacles in the future. This is a crucially important point of inquiry, given the ongoing automation of bureaucratic and administrative operations.

We live in an era of voracious datafication, driven, at least in part, by the promise of enhanced efficiency and convenience.1 However, the accelerating collection and accumulation of data result in cascading pressures to automate data processing and any resulting decision-making processes. It is hard to imagine this process reversing itself any time in the foreseeable future.

The advantage of datafication in its contemporary digital form is that it is machine-readable, enabling more-than-human speeds of sorting and analysis. The potential drawback is that machines have no understanding of what they are doing – and sometimes humans do not either.

Automated debt recovery, for example, promised a cash windfall by accelerating the process of detecting and reclaiming overpayments to Centrelink beneficiaries. Although there is plenty for the Royal Commission still to investigate about the implementation of this system, it is clear that the failure was not simply the result of automation but of decisions designed to place the burden of contesting debt claims on recipients – and to render that burden as heavy as possible.

In other words, the “robotic” part of the system was just one component in a series of decisions crafted to weaponise automated debt collection against those with the fewest resources for defending themselves. As the legal scholar Terry Carney has noted, the program (and some of the early news coverage) reflected the “tolerance, especially in some media quarters, of a ‘culture’ of political and public devaluing of the significance of breaches of the rule of law and rights of vulnerable welfare clients.”2

The system relied upon what the government eventually conceded was a flawed (and non-transparent) method for detecting overpayment – that is, a default procedure of income averaging that would reliably produce errors.3 It compounded this design flaw by making the process of detecting and querying those errors punishingly difficult for recipients. A 2017 report by the Commonwealth Ombudsman, for example, noted that debt notification letters “did not include crucial information” such as a contact phone or helpline number.4 Even those who eventually tracked down a help number were thwarted by the fact that service centre staff were inadequately trained to field queries and complaints about the system.5
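The arithmetic behind income averaging can be illustrated with a brief sketch. The figures and the eligibility threshold below are invented for illustration (the real rules were considerably more complex), but the sketch shows why averaging an annual income evenly across the year misfires for anyone whose earnings were lumpy:

```python
# Illustrative sketch of why income averaging produces errors.
# All figures here are hypothetical; the actual Centrelink rules were
# more complex. Robodebt-style averaging spread ATO *annual* income
# evenly across 26 fortnights, rather than using actual fortnightly income.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income):
    """The flawed assumption: income was earned evenly all year."""
    return annual_income / FORTNIGHTS_PER_YEAR

# A hypothetical recipient: earned $1,300 per fortnight for 10 fortnights,
# then had zero income (and legitimately received benefits) for the rest.
actual_fortnightly = [1300] * 10 + [0] * 16
annual_income = sum(actual_fortnightly)  # $13,000 for the year

avg = averaged_fortnightly_income(annual_income)  # $500 every fortnight

# Suppose (hypothetically) benefits cut out above $400 per fortnight.
THRESHOLD = 400

# Reality: ineligible only during the 10 working fortnights.
truly_ineligible = sum(1 for f in actual_fortnightly if f > THRESHOLD)

# Averaging: every fortnight looks like $500, so all 26 appear ineligible,
# manufacturing a "debt" for the 16 fortnights of genuine entitlement.
flagged_ineligible = sum(
    1 for f in [avg] * FORTNIGHTS_PER_YEAR if f > THRESHOLD
)

print(truly_ineligible, flagged_ineligible)  # 10 vs 26
```

The gap between the two counts is the phantom overpayment: the averaged figure flags fortnights in which the recipient was, in fact, fully entitled to support.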

Thus, a series of human, ‘non-automated’ decisions – decisions that rendered the system both erroneous and difficult to challenge – contributed to the systemic failure of debt recovery. These decisions flowed from the overall intent of the system: to recapture as much money from Centrelink recipients as quickly as possible, with the least potential resistance.

It is possible to imagine such a system working the other way around – automatically compensating recipients for underpayment, for example – but this would not have aligned with the system’s revenue goals.

It would have been possible for humans at Centrelink, following the same procedures, to make the same set of financial miscalculations, but this would have been a slower process and perhaps more open to human contestation. Embedded in bureaucracy, automated systems can be used as an alibi for opacity and recalcitrance: “Why do you owe this much? Well, because the computer said so.” This is obviously an unfair and unsatisfactory approach that shifts the burden of proof to the recipient. Still, it is abetted by the tendency to frame computers (however inaccurately) as powerful, ‘objective’ information sources.

The robodebt debacle raises the important question of which decision-making processes should be subject to automation and datafication. It is one thing to let algorithms decide what movies or music to recommend and quite another to use them to impose life-changing financial obligations on vulnerable people.

Automated assessment systems are in use for various purposes in both the public and private sectors, ranging from law enforcement’s use of automated threat assessment scores to the evaluation of the creditworthiness of loan applicants and the riskiness of insurance applicants.

The case of robodebt has demonstrated the tendency of such systems to be deployed in ways that burden the already disadvantaged. It is cheaper and easier to experiment on those without economic and legal resources. It also illustrates the crucial importance of rigorous trialling before implementation. Crucially, it highlights the asymmetry between automated action and human response. Hundreds of millions of dollars of debt could be generated in a very short period; sorting out the damage caused by the system has taken a toll measured in years and in human lives.

The broader lesson is that automated systems, unless they are deliberately crafted to do otherwise, tend to exacerbate power imbalances. This understanding should shape the Commission’s inquiry, as well as the broader deployment of automated systems in both the public and private sectors.

References

1. Ulises A. Mejias and Nick Couldry. 2019. “Datafication.” Internet Policy Review 8 (4). DOI: 10.14763/2019.4.1428. https://policyreview.info/concepts/datafication
2. Terry Carney. 2019. “Robo-debt illegality: The seven veils of failed guarantees of the rule of law?” Alternative Law Journal 44 (1): 4-10, p. 4. DOI: 10.1177/1037969X18815913. https://journals.sagepub.com/doi/full/10.1177/1037969X18815913
3. Luke Henriques-Gomes. 2020. “Coalition says it has no duty of care for welfare recipients over robodebt.” The Guardian, 18 February. Available online at: https://www.theguardian.com/australia-news/2020/feb/18/coalition-says-it-has-no-duty-of-care-for-welfare-recipients-over-robodebt
4. Richard Glenn. 2017. Centrelink’s automated debt raising and recovery system. Report by the Acting Commonwealth Ombudsman, April, p. 2. Available online at: https://www.ombudsman.gov.au/__data/assets/pdf_file/0022/43528/Report-Centrelinks-automated-debt-raising-and-recovery-system-April-2017.pdf
5. Ibid.

About the author

Professor Mark Andrejevic FAHA

Mark Andrejevic is a Professor in the School of Media, Film and Journalism at Monash University, contributing expertise in the social and cultural implications of data mining and online monitoring. He writes about monitoring and data mining from a socio-cultural perspective and is the author of three monographs and more than 60 academic articles and book chapters. He was the Chief Investigator for an ARC QEII Fellowship investigating public attitudes toward the collection of personal information online.

He tweets at @MarkAndrejevic and blogs at The Digital Enclosure.

Acknowledgement of Country

The Australian Academy of the Humanities recognises Australia’s First Nations Peoples as the traditional owners and custodians of this land, and their continuous connection to country, community and culture.