LOS ANGELES, Jan. 11, 2024 /PRNewswire/ — Major Wall Street banks have poured billions of dollars into AI development, research, and patents without adequate safeguards, according to an investigation by Consumer Watchdog.
But California is developing rules in 2024 that, if properly implemented, would address AI in financial services. Under the California Consumer Privacy Act (CCPA), proposed rules would require businesses to provide an opt-out for any automated decision related to personal information and financial services. And in what appears to be a first, the rules would require businesses using language models such as ChatGPT to disclose whether the models are trained on personal information, and to allow consumers to opt out of the use of their personal data. The privacy agency will meet Friday.
Consumer Watchdog detailed its concerns and the need for strong regulation in a new report, “Hallucinating Risk.” You can watch our consumer alert video here.
Wall Street banks are seeking patents and trademarks for a range of uses, with JPMorgan Chase, Goldman Sachs and Morgan Stanley seeking AI patents and trademarks for investment, analyzing securities, and predicting stock prices and portfolios, according to an analysis of patents filed with the United States Patent and Trademark Office (USPTO).
Spending on AI by the financial services sector is the largest of any industry, including tech, said the nonprofit. JPMorgan spent $12 billion on technology in 2022, a nearly 10 percent increase from 2020, and the company said it has over 300 AI use cases in production. Over 90 percent of AI patents in the banking sector were filed by five investment banks, according to Evident, which monitors AI implementation in the private sector.
Virtually every bank will have its own version of ChatGPT that will give financial advice to employees and customers. Because of AI's lack of transparency and potential for bias, an opaque AI could push risky investments and loans, or hallucinate bad advice on managing debt, without a consumer even knowing AI was involved, said Consumer Watchdog. Hallucinating occurs when an AI presents false information as fact.
“Absent proper regulation, the next financial crisis could be caused by AI,” said Justin Kloczko, tech advocate for Consumer Watchdog. “A recession could ignite in the housing or equity market due to a handful of powerful banks relying on a couple of biased algorithms.”
Consumer Watchdog called for strong oversight at the state and federal level to protect against dangers to individuals and systemic risk.
California's government has already begun the former, while the federal government needs to undertake the latter through new auditing and oversight functions, said Consumer Watchdog. The group pointed to new draft rules developed by the California Privacy Protection Agency (CPPA) as a model for the nation that would give people the right to opt out of automated decision-making technology in areas of finance.
The rules present the country’s strongest safeguards against AI’s risks and biases regarding personal data. Through a “pre-use notice,” a business would have to tell consumers that it uses automated decision-making, provide details about the algorithm’s logic, and offer a chance to opt out of the decision. The rules would require businesses to provide an opt-out for any automated decision that “results in access to, or the provision or denial of, financial or lending services.”
The major concerns about AI in the financial services industry are algorithmic complexity, a lack of transparency, and biased or false information, according to Consumer Watchdog.
Consumer Watchdog’s review of applications with the USPTO identified the following AI products in development or already in use by investment banks:
Goldman Sachs is seeking to patent AI that will synthesize virtually all the data a trader would need to predict stock prices, as well as a patent to predict a hedging portfolio.
JPMorgan has applied for a trademark called “IndexGPT” that would dole out financial recommendations, and another AI patent that would match companies with investors.
Deutsche Bank is using artificial intelligence to scan the portfolios of clients.
Wells Fargo is using generative AI to help decide what to disclose to regulators.
Morgan Stanley is using AI to translate “Fedspeak,” so banks can tell if statements by regulators such as the Federal Reserve are “dovish” or “hawkish.” It is also rolling out its own generative AI for financial advice that was developed with the help of OpenAI.
ING Group uses AI to screen for potential defaulters.
Deep learning could create complex new derivatives that allow investors to gamble with billions of dollars in the form of junk bonds, leading to an economic crash similar to the financial crisis of 2008, said Consumer Watchdog.
“The more these companies use automation, and they start to use it on bigger markets, then there is some systemic risk,” Gerard Hoberg, a University of Southern California professor of finance who studies artificial intelligence, told Consumer Watchdog.
AI is likely to worsen the impact of algorithmic bias and deny people loans based on race, class, and location, said Consumer Watchdog.
AI can also “drift,” meaning it can deviate from its intended use and perpetuate bias.
“An AI could start thinking a certain race or address equals bad credit and deny loans based on that,” said Kloczko.
Algorithms can also lead to financial group-think known as herding, because individuals or companies rely on the same dataset or model. Herding helped create the Savings and Loan Crisis of the 1980s, the dot-com bubble of the early 2000s, and the 2007 Quant Meltdown.
More sophisticated monitoring, auditing and limits are needed for AI in the securities space to avoid systemic risks, Consumer Watchdog said, pointing to model federal recommendations developed by Public Citizen to guard against the dangers. Among the actions regulators should take:
Ensure that AI investment systems are held to the same standard as human investment brokers/advisors.
Mandate the external review of black box data of AI investment algorithms to ensure the AI serves the best interest of investors.
Require AI investment system models to be regularly externally audited for groupthink pattern biases to prevent AI models from “herding.”
Powerful banks are seeking to incorporate language models such as ChatGPT despite OpenAI itself acknowledging the dangers. AI has a “high risk of economic harm” due to a “tendency to hallucinate,” and should not be used for financial advice, according to OpenAI’s own usage policy and system card.
“Consumer-facing uses of our models in medical, financial, and legal industries; in news generation or news summarization; and where else warranted, must provide a disclaimer to users informing them that AI is being used and of its potential limitations,” according to OpenAI’s usage policy.
In 2020, nearly 80,000 AI-related patent applications were published, the most in history, according to the USPTO. Since 2017, the top five banks produced half of AI investment deals and 67 percent of research papers, according to Evident.
SOURCE Consumer Watchdog