How Algorithmic Experiments Harm People Living in Poverty

Virginia Eubanks made the same mistake most would. In her job working with low-income women struggling to afford housing, she assumed they also struggled with access to vital technology, like the internet. But this technology isn’t just accessible; it permeates access to the basic resources people in poverty need to survive, and it’s often rigged against them. Her new book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, is about how technology has come to define people touched by poverty.

“In fact, technology was really ubiquitous,” Eubanks tells Gizmodo in a phone interview. Many of the low-income women she helped find housing told her that it often “felt like the technology was surveilling them, punishing them, or diverting them from accessing resources.” As these women tried to navigate poverty and its many attendant crises—whether going to the welfare office, dealing with the criminal justice system, or applying for public housing—they found automation was increasingly replacing human connection and understanding.

This became the crux of Automating Inequality, out today. As need for public resources increases, Eubanks discovered, more states are automating the process of applying for public services—welfare, food stamps, housing, etc. Eubanks opens the book in Indiana in 2007, where the governor signed a contract with IBM to automate the food stamp and Medicaid application process by replacing local caseworkers with online applications, statistical models, and a regional call center.

“What that system did was quite explicitly sever the link between local caseworkers and the district that they served,” she says. “The result of that was not people getting off welfare and finding ways to self-sufficiency; the result was [a rise in] denials of benefits for basic human rights like food and medical care, a rise in extreme poverty in Indiana, and even death.”

In 2009, Indiana pulled out of the IBM contract, alleging improper rejections, missing documents, and increased wait times. During the transition, severely ill recipients taking seizure, heart, and lung medications were denied benefits and told they’d have to reapply via the online system. One 80-year-old woman was denied because she didn’t re-register during the re-registration period, while she was hospitalized for heart failure. Eubanks tells the story of Omega Young, an elderly woman who lost benefits as she was dying of cancer. An appeals court in 2012 found that IBM breached its contract with the state by failing to automate the system. Six years later, IBM is still litigating its battle with Indiana over the failed automation gambit.

Automating Inequality uses these case studies to chart several societal shifts: the increase in poverty in the country, the decrease in resources allocated to helping the poor, and the rise of automated decision-making. As Eubanks explains, these new, technologically aided methods of evaluating people in need are rooted in decades of disparaging rhetoric about poverty itself and the belief that poor people deserve poverty. Speaking to Medicaid applicants in Indiana and housing applicants in LA, Eubanks observed that the data collection process of applying for benefits is now even more invasive, impersonal, and unforgiving. Without a human caseworker who makes decisions based on variable circumstances, answers to highly personal online questionnaires—about drug use, education level, marital status, whether or not you use condoms during sex—become mere data points in the statistical models transforming public services.

“We believe, as a nation, that poverty is an individual failing and that it only happens to a tiny minority of people,” Eubanks continues. “And probably people are pathological or they made bad choices. So we are determining whether or not individual poverty is their own fault rather than spending time and effort on supporting self-determination or unleashing human capacity. And so these tools have evolved to do that.”

Data collection is not only skewed against the poor; it punishes them further. The book’s third case study is an algorithmic model in Pittsburgh meant to predict the likelihood of child abuse. The model weighs 131 factors when determining whether workers need to open cases on households where abuse or neglect has been alleged. The cyclical nature of the system risks what Eubanks calls “poverty profiling.”

This process “over-surveils working families because it is only using the county and state data about folks who access public programs,” Eubanks says. In determining whether a case needs to be opened, the algorithmic model excludes important information that would be in reports from babysitters or Alcoholics Anonymous. Instead, it looks for things like the education level of family members, marital status, and which public resources they depend on. In attempting to standardize these decisions by applying the same metric to all families, the algorithmic model ignores the fact that the conditions and circumstances behind the statistics vary from family to family. If a family has the money to treat an addiction problem at a private facility (off the public record), for example, or can borrow cash from friends to make ends meet instead of applying for food stamps, their scores stay artificially low. For those relying on public assistance, each interaction with public services marks the family with suspicion. “That creates a feedback loop where poor working families are seen as risky to their children because they use these resources more and then they’re surveilled more.”
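To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch. It is not the Allegheny County tool itself; the feature names, weights, and threshold are hypothetical. It shows how a scorer that only sees public-record data makes families who can pay privately look “safe,” while each flagged investigation becomes another record that raises a poor family’s future score.

```python
# Toy sketch of "poverty profiling" (not the real model, which weighs 131 factors):
# the scorer's only inputs are interactions logged in public systems.
from dataclasses import dataclass, field

# Hypothetical weights on public-record features.
WEIGHTS = {
    "food_stamp_enrollments": 0.8,
    "public_clinic_visits": 0.5,
    "prior_welfare_referrals": 1.2,
}

THRESHOLD = 2.0  # hypothetical cutoff for opening a case


@dataclass
class Family:
    name: str
    public_record: dict = field(default_factory=dict)

    def risk_score(self) -> float:
        # The score counts only what appears in county/state databases.
        return sum(WEIGHTS[k] * self.public_record.get(k, 0) for k in WEIGHTS)


# Two families coping with similar problems: one pays privately (off the record),
# the other relies on public assistance, so only the second accrues "risk".
private_payer = Family("A", {"public_clinic_visits": 0})
public_reliant = Family("B", {"food_stamp_enrollments": 2, "public_clinic_visits": 3})

for fam in (private_payer, public_reliant):
    score = fam.risk_score()
    flagged = score >= THRESHOLD
    print(f"Family {fam.name}: score={score:.1f}, flagged={flagged}")
    if flagged:
        # Being investigated generates another public-record entry, which raises
        # the family's future score: the feedback loop Eubanks describes.
        fam.public_record["prior_welfare_referrals"] = (
            fam.public_record.get("prior_welfare_referrals", 0) + 1
        )

print(f"Family B score after one investigation: {public_reliant.risk_score():.1f}")
```

Family A never registers in the data at all, while Family B crosses the threshold, gets investigated, and ends up with an even higher score the next time the model runs.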

“One of the folks I talked to in the book was investigated for medical neglect because he couldn’t afford his daughter’s prescription after taking her to the emergency room,” Eubanks explains. A social worker, in this case, would hopefully determine that this happened because the family needs financial assistance; instead, a statistical model simply flagged that the child was not provided with medical treatment. “These are large scale social crises, like not having affordable medical care for folks, that the results end up landing on individual families.” It faults the person who cannot afford medication, not the system that makes medical care unaffordable.

When determining the fitness of a family, the Pittsburgh statistical model runs a score not just on every immediate family member but on extended family as well. (Lower-income families often have extended family living in the same household.) Eubanks compares this to a “virus” because of how bias spreads along familial networks, judging their every interaction with public services. The social crises that create the gaps in resources, filled in momentarily by public assistance, are made invisible. There are simply data points: families that seem like they can care for their children and those that can’t.

“The nation needs to get its soul right around poverty,” Eubanks says. “And I think we’re really facing that crisis right now.”

Referring to the emergent field of research on algorithmic injustice and the increased interest in how technology impacts poverty, Eubanks says she’s hopeful.

“In a time of deep scarcity, where a lot of families are suffering deeply, I believe that these tools have come [to the front] at this moment is not accidental. It’s very much a response to the politics of scarcity,” she says. As the tide turns toward permanent automation of allocating public resources, now is the time to address embedded biases and face these issues head-on, before it’s too late. “It really offers us this moment—because it makes these inequalities so visible—to really attack the roots of the problem.”
