AI and Algorithms Increasingly Control Our Financial Fates
Last updated March 22, 2025
Companies are increasingly using algorithms and artificial intelligence (AI) to analyze the massive amount of data they scoop up about us to decide what financial products we qualify for and how much we’ll pay for them. These sophisticated tools are used by banks when making lending decisions, by employers screening job applications, and, in most states, by auto and home insurance companies when setting premiums.
If trained to ensure fairness and accuracy, AI can expand access to credit and reduce discrimination caused by human bias. If not, it can do enormous financial harm.
For example, if AI denies your loan application, there’s no way for you to know why that decision was made or what data were used.
“With many of the AI and machine learning models, they’re vacuuming up data from social media, from use of digital apps on your phone, and you have no idea what’s in that database that they have,” said Chuck Bell, financial policy advocate at Consumer Reports. “It might even be for some other person who has a similar name, and not you at all.”
Most applicants have no idea when decisions are made by AI. And even if they did, that decision-making process is opaque, so they’ll never know what factors were considered. The AI tool might scrape information from the internet about the applicant that’s inaccurate or totally false.
“It’s the black-box problem,” said Susan Weinstock, CEO of the non-profit Consumer Federation of America. “If there’s bad data going in, you’re going to get garbage data coming out,” Weinstock said on a recent episode of Checkbook’s Consumerpedia podcast. “And then the consumer is completely at the mercy of that bad data. The regulators may not even know that the algorithm is biased.”
Last year, Consumer Reports and the Consumer Federation of America sent a letter to the Consumer Financial Protection Bureau warning that “algorithmic discrimination can arise from many sources,” such as “unrepresentative, incorrect, or incomplete training data, as well as data that reflects historical biases.” This could prevent communities of color and low-income consumers from accessing affordable credit, they wrote.
Biases can be embedded in AI models during the design process, for example when “protected characteristics” such as race are improperly used, either directly or through proxies.
For example, a database created for a lending tool might include ZIP codes, a factor that seems neutral to the developer but can serve as a proxy for race, income, gender, or religion.
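To see how that can play out, here is a minimal sketch in Python using entirely synthetic data; every number and variable in it is an illustrative assumption, not a description of any real lender’s model. A classifier trained only on income and ZIP code, with the protected attribute withheld, still approves the two groups at sharply different rates, because ZIP code encodes group membership.

```python
# Illustrative sketch with synthetic data: a "neutral" ZIP-code feature
# acts as a proxy for a protected attribute the model never sees.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute; it is withheld from the model.
protected = rng.integers(0, 2, n)

# Residential segregation: group 1 is far more likely to live in ZIP group 1.
zip_group = (rng.random(n) < np.where(protected == 1, 0.8, 0.2)).astype(int)

# In this synthetic world, historical inequity gives ZIP group 1 lower
# average income, and income drives actual repayment.
income = rng.normal(50 - 15 * zip_group, 10, n)
repaid = income + rng.normal(0, 5, n) > 40  # ground-truth label

# Train on income and ZIP code only.
X = np.column_stack([income, zip_group])
approved = LogisticRegression().fit(X, repaid).predict(X)

for g in (0, 1):
    rate = approved[protected == g].mean()
    print(f"approval rate, protected group {g}: {rate:.1%}")
```

The printed approval rates diverge even though the model never sees the protected attribute itself, which is exactly the proxy effect Bell describes.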
“When you have proxy discrimination,” Bell explained, “you’re discriminating against a protected class of people that deserve fair treatment, and you’re not even really paying attention to the fact that they’re being hurt by your model. And that’s essentially the situation that we have.”
Checkbook asked the AI Association, an industry trade group, to comment on this, but it did not respond to our requests.
Consumer Concerns
A 2024 survey by Consumer Reports found that most Americans are “somewhat uncomfortable” or “very uncomfortable” with AI making high-stakes decisions about their lives, such as analyzing video job interviews (72 percent), screening potential rental tenants (69 percent), and making lending decisions (66 percent).
When CR asked about applying for a job with a company that used AI to make hiring decisions, 83 percent said they would want to know what personal information the program used to make its decision, and 93 percent said they wanted the opportunity to correct any inaccurate personal information the AI hiring tool relied on.
The overwhelming discomfort with allowing machines to make important financial decisions may be well-founded. Research shows that AI can produce false results and amplify harmful biases.
Generative AI tools “carry the potential for otherwise misleading outputs,” a report from the Massachusetts Institute of Technology cautions. They’ve also been found to provide users with “fabricated data that appears authentic.” These inaccuracies “are so common,” the report noted, “they’ve earned their own moniker; we refer to them as ‘hallucinations.’”
Guardrails Needed
Consumer advocates say new laws and regulations are needed to protect consumers from the ever-expanding use of AI decision-making.
“We need to have a conversation about how AI can be fair and accountable, and used in a way that helps consumers rather than holds them back,” CR’s Bell told Checkbook. “[There should be] clear disclosure of when algorithmic tools are being used, so that the consumer is aware that AI is being used. And we’d like to see people be able to appeal to a human being” if an algorithm rejects their applications.
Consumer Reports wants state and federal rules that would regulate what companies must do when AI is used to make “consequential decisions” about a consumer, such as whether they qualify for a loan, are selected for an apartment rental, get a promotion, or see their insurance rates go up.
Here are some of the key proposals in CR’s AI policy recommendations:
- Require clear disclosure when an algorithmic tool is being used to help make a consequential decision about a consumer.
- Require companies to explain why a consumer received an adverse decision. Explanations should be clear enough that, at a minimum, the applicant could tell if the decision was based on inaccurate information. Explanations should include actionable steps consumers can take to improve their outcomes.
- If a tool is so complex “that the company using it cannot provide specific, accurate, clear, and actionable explanations for the outputs it generates,” it should not be used to make consequential decisions.
- Prohibit “algorithmic discrimination,” and require AI tools to undergo independent, third-party testing for bias and accuracy before deployment, and regularly afterward (a sketch of one such check follows this list).
- Require companies to limit data collection, use, retention, and sharing to what is “reasonably necessary” to provide the service or conduct the activity a consumer has requested, with limited additional permitted uses.
- Prohibit the sale and sharing with third parties of personal data collected by generative AI tools.
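To make the bias-testing proposal concrete, here is a minimal sketch of one common pre-deployment check, the “four-fifths rule,” which compares each group’s approval rate to the highest group’s. The 0.8 threshold is a widely used rule of thumb, not a fixed legal standard, and the function name and data here are illustrative assumptions, not a prescribed audit method.

```python
# A minimal sketch of one possible bias check: the "four-fifths rule"
# compares each group's approval rate to the highest group's rate.
# The 0.8 threshold is a rule of thumb, not a legal standard; the
# function name and data are illustrative assumptions.
from collections import defaultdict

def adverse_impact_ratios(decisions, groups):
    """decisions: parallel bools (approved?); groups: group labels."""
    approved, total = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        total[g] += 1
        approved[g] += int(d)
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Usage: flag any group whose ratio falls below 0.8.
ratios = adverse_impact_ratios(
    decisions=[True, True, False, True, False, False, True, False],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(ratios, "flagged:", [g for g, r in ratios.items() if r < 0.8])
```

A real audit would go further, examining accuracy, calibration, and error rates by group, but even a simple ratio like this can flag a model for closer scrutiny before deployment.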
In 2024, the European Union adopted the AI Act, a set of rules intended to ensure that AI systems used in the EU are “safe, transparent, traceable, non-discriminatory and environmentally friendly.”
“There is no reason why the U.S. can’t do the same and shouldn’t do the same,” Weinstock said. “And why are we behind the Europeans? It’s really incredibly important that these products are used responsibly and with the consumer in mind.”
Contributing editor Herb Weisbaum (“The ConsumerMan”) is an Emmy award-winning broadcaster and one of America's top consumer experts. He has been protecting consumers for more than 40 years, having covered the consumer beat for CBS News, The Today Show, and NBCNews.com. You can also find him on Facebook, Twitter, and at ConsumerMan.com.