Who would possibly want to have AI making banking and rental decisions for them?
AI Do Not Consent
So-called “automated decision-making” is being heralded as the next big thing — but it turns out that many consumers are disgusted by the idea of AI making choices for them.
As the Electronic Frontier Foundation recently highlighted, a Consumer Reports survey this summer found that a broad majority of American respondents aren’t comfortable with AI making decisions about job hiring, banking, renting, medical diagnoses, and surveillance.
Of the more than 2,000 people CR surveyed, a whopping 72 percent said they’d be “uncomfortable” having AI scan their faces and answers during job interviews, and 45 percent said they were “very uncomfortable” with the concept.
When it comes to banking, meanwhile, roughly two-thirds of respondents said they weren’t comfortable with financial institutions using AI to determine if they were eligible for loans. That same percentage said they were uncomfortable with landlords using AI to decide whether they were eligible as renters, and nearly 40 percent said they were “very uncomfortable” with that potential application.
More than half of those 2,000 Americans also said they were uncomfortable with AI facial recognition surveillance, and about one-third of those respondents said they were “very uncomfortable” with it. When asked whether they would be comfortable with AI being used in medical diagnosing and treatment planning, half said they were not.
And a whopping majority of the people surveyed by CR — some 83 percent — said they would want to know what data the algorithms making decisions about them were trained on, and 91 percent said they’d want a way to correct that data when it was wrong. Given that AI frequently screws up, often in a discriminatory way, they’re probably not wrong to be concerned.
Sector Selector
Despite these rational and common-sense concerns, some businesses — and even some governments — are pushing full speed ahead to implement this nascent and error-riddled technology and save on human labor while they’re at it.
As the EFF notes, for instance, California’s Gov. Gavin Newsom announced earlier this year that the Golden State would be partnering with five AI firms to “test” generative AI within government agencies that cover transportation, public health, housing, and taxes. A similar project undertaken by New York City’s housing department was met with a successful tenant protest.
In the private sector, meanwhile, consultant groups like McKinsey and firms like Deutsche Bank seem all-in on the use of these new technologies that could easily slip into an algorithmic version of the sort of racist redlining policies financial institutions have undertaken for decades.
While the public and private sectors would do well to heed these obvious and overwhelming preferences against decision-making AI, recent history suggests they’ll charge ahead with these technologies regardless.
More on bad AI: Government Test Finds That AI Wildly Underperforms Compared to Human Employees.