Big data and the creation of a self-fulfilling prophecy

Opinion by Jasmine Liu
April 5, 2017, 12:05 a.m.

On Sunday, journalist Noam Scheiber published an article in The New York Times exposing the various techniques Uber uses to induce its drivers to work in ways favorable to the company but often harmful to the drivers themselves. These tactics rely on features informed by social psychology and behavioral economics, smoothly integrated into a game-like system that rewards workers with digital images and baits them with pop-up notifications. Exploiting psychological tricks to maximize profit is not new, but the sheer amount of data companies have accumulated is: now more than ever, information is power. Data scientists at Uber, for example, can easily identify trends and target features to individuals based on particular behaviors.

While the notorious ride-sharing conglomerate has recently been entangled in multiple ethical transgressions and deserves special scrutiny for its practices, Uber is only one of many tech companies that operate this way. These companies have the capability not only to monopolize the market but to hold a monopoly on certain types of data about individuals. Scheiber’s article serves as a reminder of the increasingly blurred line between free will and manipulation in the digital age.

Moreover, big-data systems may quietly circumvent anti-discrimination regulations and the legal presumption of innocence. Because algorithms often weigh a confluence of factors rather than just one, it is difficult to tie a complaint to any specific component of a program. For example, Kaveh Waddell wrote an article in The Atlantic last year about the ways in which government service agencies and surveillance may inadvertently trap low-income communities in cycles of poverty through the use of big data. He cites the case of a homeless man who accumulated a lengthy arrest record from instances of loitering. Subsequently, he was routinely denied housing and employment opportunities by algorithms that were not nuanced enough to extend the empathy a human reviewer might have. As Waddell summarizes, “When the data is about humans – especially those who lack a strong voice – those algorithms can become oppressive rather than liberating … Absent a human touch, its single-minded efficiency can further isolate groups that are already at society’s margins.”
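To see how such a system fails by construction, consider a deliberately simplified screening rule of the kind Waddell describes. Everything in this Python sketch is hypothetical (the threshold, the applicant, the rule itself), but it illustrates how the length of a record can eclipse its content:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    arrests: list[str]  # offense labels pulled from a public-records feed

def passes_screening(applicant: Applicant, max_arrests: int = 2) -> bool:
    """Reject anyone with a long record, regardless of what it contains."""
    return len(applicant.arrests) <= max_arrests

# Ten loitering citations look no different to this rule than ten robberies:
applicant = Applicant("J. Doe", arrests=["loitering"] * 10)
print(passes_screening(applicant))  # False: denied, with no human review
```

A human reviewer might ask why the arrests happened; the rule cannot. And because the decision emerges from the program as a whole, there is no single component for the applicant to contest.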

These errors can at least be caught by human review, since the information collected is by nature descriptive of past events. More alarming is the case of algorithms used to predict future events. Most notorious is the rise of predictive policing systems that have already been implemented in many cities. Using existing data about the probability distribution of crime across the zones of a city, conditioned on variables like time and weather, police officers patrol with maps that flag the locations deemed most crime-prone. Proponents argue that such systems replace subjective bias with objective empirical fact, but crime data is itself biased: arrests are more likely to occur in neighborhoods that are monitored more heavily to begin with. The U.S. Department of Justice found that from 2006 to 2010, 52 percent of all violent crimes went unreported. As Logan Koepke of Slate puts it, “Historical crime data is a direct record of how law enforcement responds to particular crimes, rather than the true rate of crime.” Justifying newer policing techniques with seemingly objective, data-based methods veils the flaws in current practice. As today’s practices become tomorrow’s data, current biases will only be intensified over time.
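That feedback loop is easy to reproduce. The following minimal simulation is invented for illustration: two zones with identical true crime rates, where patrols are dispatched each year to whichever zone the historical record labels most crime-prone. The initial disparity compounds on its own:

```python
import random

random.seed(0)

# Two zones with the SAME underlying crime rate; zone A merely starts
# with more recorded arrests because it was watched more heavily.
TRUE_RATE = 0.3                 # chance that one patrol observes a crime
arrests = {"A": 10, "B": 5}     # the "historical crime data"
PATROLS_PER_YEAR = 50

for year in range(1, 6):
    # The "predictive" step: patrol wherever the data says crime is.
    hot_zone = max(arrests, key=arrests.get)
    arrests[hot_zone] += sum(
        random.random() < TRUE_RATE for _ in range(PATROLS_PER_YEAR)
    )
    print(f"year {year}: {arrests} (patrols sent to {hot_zone})")
```

Zone A’s recorded crime balloons while zone B’s never moves, even though nothing distinguishes them on the ground: the record documents where the police looked, not where crime occurred.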

Conclusions drawn about areas based on location may create hostile relationships with authority and harm the local neighborhood. Even more discomforting is the notion that big data can help police departments flag individuals at risk of re-offending. The Chicago Police Department ran a pilot program that used big data to compile a shortlist of 426 people categorized as at high risk of being involved in a shooting. The pilot was small, and the results weren’t promising: a RAND Corporation study found that those shortlisted were no more likely to become homicide or shooting victims. And although no successful interventions were deployed to help extricate these individuals from the complex situations they were in, those on the list were three times more likely to be arrested for committing a shooting. Such results raise important questions about the limits of the power authorities should be given, especially because the policies that people in power put in place today are powerful drivers of future patterns of inequality and crime.

While the examples above center on modern-day policing and its integration with data, questionable uses of big data pervade many other aspects of day-to-day life. One common application is the personality test administered as part of the interview process. These tests often comprise upward of 50 questions with ambiguous implications (what is the right answer to “Which appeals to you more, consistency of thought or harmonious human relationships?”), and neither the employer nor the candidate really knows what magical concoction of answers makes someone a good fit. As Melissa Korn of The Wall Street Journal explains, “companies often find out what traits their high performers display, and then test for those characteristics.” Like the programs that make quantified claims about an individual’s likelihood of committing a crime, these tests claim to predict the return on investment a particular personality type will yield. This approach says more about the culture of the organization administering the test than about the individual applying. Instead of cultivating a culture of growth and learning, the government and these businesses subscribe to what is known as a “fixed” mindset. According to Maria Popova, founder of the popular philosophy blog Brain Pickings, such a mindset “assumes that our character, intelligence, and creative ability are static givens which we can’t change in any meaningful way.”
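A hypothetical sketch of the trait-matching Korn describes makes the circularity plain. The trait names, scales, and numbers below are all invented; the point is that “fit” is scored as resemblance to the people already hired:

```python
# Invented trait profiles for current "high performers" (scores in [0, 1]).
HIGH_PERFORMERS = [
    {"consistency": 0.9, "sociability": 0.3, "risk_tolerance": 0.7},
    {"consistency": 0.8, "sociability": 0.4, "risk_tolerance": 0.6},
]

# The "ideal" profile is simply the average of the incumbents.
ideal = {
    trait: sum(p[trait] for p in HIGH_PERFORMERS) / len(HIGH_PERFORMERS)
    for trait in HIGH_PERFORMERS[0]
}

def fit_score(candidate: dict[str, float]) -> float:
    """Score 'fit' as closeness to the incumbent average: any deviation
    is penalized, so difference itself is what the test screens out."""
    return 1 - sum(abs(candidate[t] - ideal[t]) for t in ideal) / len(ideal)

candidate = {"consistency": 0.4, "sociability": 0.9, "risk_tolerance": 0.5}
print(f"fit score: {fit_score(candidate):.2f}")  # well below an incumbent's
```

Nothing in such a score measures a candidate’s capacity to learn or grow; it measures resemblance, which is exactly the static view of character that Popova’s “fixed” mindset describes.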

Undoubtedly, big data is an extremely flexible and potent tool with many positive applications that have gone unmentioned here. But it is important not to overlook its ethical gray areas. Most demoralizing of all, these big-data techniques rely, to varying degrees, on a fundamentally fatalistic view of the world. In the face of powerful groups that have devised systems for synthesizing and understanding personal information, the individual is a relatively powerless, vulnerable unit. What is utilitarian (and perhaps unfortunate) about crunching numbers into general trends is that it often produces accurate (and, in the case of private actors, profitable) outcomes. What is slowly lost in the process is the beauty of the illusion of free will.


Contact Jasmine Liu at jliu98 ‘at’ stanford.edu.

Jasmine Liu is a senior staff writer and writes for Opinions and Arts & Life.
