Uber: Addressing inadvertent bias

In a recent analysis of public ridesharing and Census data in Chicago, Akshat Pandey and Aylin Caliskan at The George Washington University explore correlations between ridesharing fares and neighborhood characteristics. In addition to finding that average per-mile fares are higher for trips to and from areas with higher home prices, higher education levels, and younger residents, the authors also claim a small but measurable fare bias when the destination is a neighborhood with a higher proportion of nonwhite residents.

While we commend the Pandey & Caliskan study for investigating this important issue, we disagree with how some of the results have been interpreted. First and foremost, the analysis explores correlations but does not prove causality. We can guess why some of the correlations exist: for instance, higher concentrations of (and local demand from) ridesharing users may lead to higher average fares in younger, more educated, and more affluent neighborhoods. The methods used, however, cannot reveal the underlying causes. Unfortunately, some media outlets mistakenly reported the opposite findings about age, education, and home value.
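To illustrate the correlation-versus-causation point, here is a minimal, purely hypothetical simulation (every variable name and number below is invented for illustration, not drawn from the study or from Uber data) in which a single confounder, local rider demand, produces a strong correlation between fares and demographics even though fares never depend on demographics:

```python
# Hypothetical sketch: a confounder (rider demand) can create a strong
# fare/demographics correlation with no causal link between the two.
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical neighborhoods

# Demand is the common cause: it drives both the demographic mix of riders
# (younger, more educated areas use ridesharing more) and demand-driven fares.
demand = rng.normal(size=n)
share_young_educated = 0.5 + 0.1 * demand + rng.normal(scale=0.05, size=n)
fare_per_mile = 2.0 + 0.3 * demand + rng.normal(scale=0.1, size=n)

# Fare is never a function of demographics, yet the two correlate strongly.
r = np.corrcoef(share_young_educated, fare_per_mile)[0, 1]
print(f"corr(demographics, fare) = {r:.2f}")  # ~0.85, entirely via demand
```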

Cities and their transportation networks are incredibly complex — reflecting not just physical infrastructure, but decades of often racist and exclusionary urban planning policy. Gaining a deeper understanding of pricing bias requires simultaneously accounting for variations in land use, zoning, housing and employment density, transit availability, road infrastructure, car ownership, typical work and leisure trip patterns, airports (which are often situated in low-income communities of color and generate disproportionate ridesharing demand), and many other factors that correlate with demographics. This study does not take these factors into account, and according to the paper itself, the underlying methodology “cannot determine how much a change in the bias scores exhibited by one feature (pickup counts) would affect bias scores of other features (ethnicity).”
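Continuing the hypothetical sketch above, omitted-variable bias makes this concrete: a regression of fare on demographics alone assigns demographics a large spurious coefficient, while adding the confounder as a covariate collapses that coefficient toward zero. Again, this is an illustrative sketch with invented data, not a reanalysis of the study:

```python
# Hypothetical sketch of omitted-variable bias: the "demographic effect" on
# fares vanishes once the confounder (demand) is included as a covariate.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
demand = rng.normal(size=n)
share_young_educated = 0.5 + 0.1 * demand + rng.normal(scale=0.05, size=n)
fare_per_mile = 2.0 + 0.3 * demand + rng.normal(scale=0.1, size=n)

ones = np.ones(n)
# Naive model: fare ~ demographics (confounder omitted)
X1 = np.column_stack([ones, share_young_educated])
b1 = np.linalg.lstsq(X1, fare_per_mile, rcond=None)[0]

# Adjusted model: fare ~ demographics + demand (confounder included)
X2 = np.column_stack([ones, share_young_educated, demand])
b2 = np.linalg.lstsq(X2, fare_per_mile, rcond=None)[0]

print(f"naive demographic coefficient:    {b1[1]:.2f}")  # ~2.4 (spurious)
print(f"adjusted demographic coefficient: {b2[1]:.2f}")  # ~0.0
```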

Uber prohibits discrimination on its platform (see our Community Guidelines), and we remain vigilant about the potential for disparate impacts. Of course, it’s well known that biased outcomes can still result from seemingly neutral algorithms. As access to Uber’s platform rapidly grows in areas historically underserved by public transit and taxis, we are mindful of the dangers of inadvertent bias and participate in research partnerships to inform product development. Our internal Fairness Working Group, which includes product, operations, policy, and legal stakeholders, meets regularly to help address these issues. Finally, we’re committed to bringing on an Inclusivity and Accessibility Product Lead to steer our efforts to build products and features that meet the needs of our users.

There is always more to be done, and we aim to deepen our understanding and engagement in the months to come. We welcome an opportunity to review a final version of the study and discuss the outcomes with its researchers.

