Madison mortgage company agrees to settle with feds over alleged Birmingham redlining

25 October 2024 at 17:58

A row of framed houses under construction in a file photo. The Consumer Financial Protection Bureau and the U.S. Department of Justice reached a settlement with a Wisconsin-based mortgage company over allegations of redlining in Birmingham. The company said it feels "justice did not prevail" in the situation. (Getty Images)

A mortgage company has agreed to pay almost $9 million to settle a lawsuit filed by the Consumer Financial Protection Bureau (CFPB) and the U.S. Department of Justice (DOJ) alleging that the company discriminated against people living in predominantly Black neighborhoods in Birmingham.

The two agencies alleged that Fairway Independent Mortgage Corporation, based in Madison, Wisconsin, had engaged in redlining by failing to provide credit services, especially home loans, to individuals living in communities of color.

“The CFPB and DOJ are holding Fairway accountable for redlining Black neighborhoods,” said CFPB Director Rohit Chopra in a statement. “Fairway’s unlawful redlining discouraged families from seeking loans for homes in Birmingham’s Black neighborhoods.”

Fairway said in a statement on its website that the lawsuit mischaracterized the matter.

“For one, the complaint characterizes Fairway’s actions as willful and reckless, a claim that was mutually rejected by the parties prior to settlement,” according to the statement. “In addition, the complaint characterizes Fairway’s actions as willful and intentional, despite the government agencies’ failure to identify any evidence to support such a claim.”

The company also stated that it has made more loans in majority-Black census tracts than any other non-bank lender with a physical presence in Birmingham, and said it moved to a settlement “to resolve the matter and curb the further expenditure of resources.”

“Fairway is disappointed in this outcome,” the company said in its statement. “We are a company that serves people and have always strived to help everyone achieve the dream of homeownership. Our numbers, our reputation, and our client testimonials prove such. We are equally disappointed in the regulatory and judicial systems over these actions. We feel justice did not prevail in this situation.”

Under the proposed settlement, which requires approval from a federal judge, Fairway would pay a $1.9 million civil penalty to the CFPB’s victims relief fund and provide $7 million to a loan subsidy program that offers people the chance to purchase, refinance and improve homes in majority-Black neighborhoods.

The lawsuit alleged that Fairway placed all of its physical Birmingham retail locations and loan officers in majority-white areas, with none in majority-Black areas or other communities of color.

“Fairway predominantly directed its marketing and advertising to majority-white areas from 2018 through 2022, while failing to conduct effective marketing and advertising to majority-Black areas in the Birmingham MSA until at least late 2022,” the complaint from the CFPB states.

The complaint alleged that through those practices, Fairway discouraged minorities from obtaining loans, and that the data showed the company generated a disproportionately low number of loan applications from majority-Black areas within the Birmingham metropolitan statistical area compared with other, similarly situated lenders.

The complaint states that from 2018 to 2022, the company generated slightly more than 10,000 applications from the Birmingham area, but that only about 4% of the loan applications were from properties in majority-Black areas. Similarly situated companies, it alleged, had more than 12% of their loan applications from the same majority-Black areas.

“Fairway’s peer lenders generated applications for properties in majority-Black areas at over three times the rate of Fairway,” the complaint states.

Fair housing advocates praised the actions of both the DOJ and the Consumer Financial Protection Bureau.

“As a general matter, an argument that says that we can make an individual judgment about any borrower based on where they actually live, or based on their race, or based on anything that is not related to that person’s individual ability to repay and evaluate their collateral, is irrational and banks should not be doing it,” said Nestor M. Davidson, faculty director of the Urban Law Center at Fordham Law School.

Alabama Reflector is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Alabama Reflector maintains editorial independence. Contact Editor Brian Lyman for questions: info@alabamareflector.com. Follow Alabama Reflector on Facebook and X.

As AI takes the helm of decision making, signs of perpetuating historic biases emerge

14 October 2024 at 10:15

Studies show that AI systems used to make important decisions such as approval of loan and mortgage applications can perpetuate historical bias and discrimination if not carefully constructed and monitored. (Seksan Mongkhonkhamsao/Getty Images)

In a recent study evaluating how chatbots make loan suggestions for mortgage applications, researchers at Pennsylvania’s Lehigh University found something stark: there was clear racial bias at play.

Using 6,000 sample loan applications based on data from the 2022 Home Mortgage Disclosure Act, the researchers found that the chatbots recommended denials for more Black applicants than for identical white counterparts. The chatbots also recommended that Black applicants be given higher interest rates, and labeled Black and Hispanic borrowers as “riskier.”

White applicants were 8.5% more likely to be approved than Black applicants with the same financial profile. Among applicants with “low” credit scores of 640, the gap widened: white applicants were approved 95% of the time, while Black applicants were approved less than 80% of the time.

The experiment aimed to simulate how financial institutions are using AI algorithms, machine learning and large language models to speed up processes like lending and the underwriting of loans and mortgages. These “black box” systems, in which the algorithm’s inner workings aren’t transparent to users, have the potential to lower operating costs for financial firms and any other industry employing them, said Donald Bowen, an assistant professor of fintech at Lehigh and one of the authors of the study.

But there’s also large potential for flawed training data, programming errors, and historically biased information to affect the outcomes, sometimes in detrimental, life-changing ways.

“There’s a potential for these systems to know a lot about the people they’re interacting with,” Bowen said. “If there’s a baked-in bias, that could propagate across a bunch of different interactions between customers and a bank.”

How does AI discriminate in finance?

Decision-making AI tools and large language models, like the ones in the Lehigh University experiment, are being used across a variety of industries, including healthcare, education and finance, and even in the judicial system.

Most machine learning algorithms of this kind are classification models: you formally define a problem or question, then feed the algorithm a set of inputs such as a loan applicant’s age, income, education and credit history, explained Michael Wellman, a computer science professor at the University of Michigan.

The algorithm spits out a result: approved or not approved. More complex algorithms can weigh these factors and deliver more nuanced answers, like a loan approval with a recommended interest rate.
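As a rough illustration of that kind of classification model (the feature set, training data and scikit-learn pipeline below are illustrative assumptions, not any lender’s actual system), a minimal loan-approval classifier might look like this:

```python
# Minimal sketch of a loan-approval classification model.
# The features, training data and model choice are illustrative
# assumptions, not a description of any real lender's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, annual income, years of education, credit score]
X_train = np.array([
    [35, 85_000, 16, 720],
    [52, 40_000, 12, 630],
    [29, 120_000, 18, 780],
    [44, 55_000, 14, 600],
])
y_train = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

applicant = np.array([[38, 70_000, 16, 640]])
decision = model.predict(applicant)[0]
probability = model.predict_proba(applicant)[0, 1]
print("approve" if decision else "deny", f"(approval probability {probability:.2f})")
```

A production system would use far more features and data, but the shape is the same: inputs in, a yes/no decision or a score out.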

Machine learning advances in recent years have allowed for what’s called deep learning, or the construction of big neural networks that can learn from large amounts of data. But if AI’s builders don’t keep objectivity in mind, or if they rely on data sets that reflect deep-rooted and systemic racism, the results will reflect that.

“If it turns out that you are systematically more often making decisions to deny credit to certain groups of people more than you make those wrong decisions about others, that would be a time that there’s a problem with the algorithm,” Wellman said. “And especially when those groups are groups that are historically disadvantaged.”

Bowen was initially inspired to pursue the Lehigh University study after a smaller-scale assignment with his students revealed the racial discrimination by the chatbots.

“We wanted to understand if these models are biased, and if they’re biased in settings where they’re not supposed to be,” Bowen said, since underwriting is a regulated industry that’s not allowed to consider race in decision-making.

For the official study, Bowen and a research team spent several months running thousands of sample loan applications through different commercial large language models, including OpenAI’s GPT-3.5 Turbo and GPT-4, Anthropic’s Claude 3 Sonnet and Opus, and Meta’s Llama 3 8B and 70B.

In one experiment, they included race information on applications and saw the discrepancies in loan approvals and mortgage rates. In another, they instructed the chatbots to “use no bias in making these decisions.” That experiment saw virtually no discrepancies between loan applicants.
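A hedged sketch of how such an experiment could be wired up against a chat-model API follows. The prompt wording, application fields, model name and response handling are assumptions for illustration, not the study’s actual protocol.

```python
# Sketch of querying a chat model for a loan recommendation, with and
# without an explicit debiasing instruction. Prompts, fields and parsing
# are illustrative assumptions, not the Lehigh study's exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recommend(application: dict, debias: bool) -> str:
    system = "You are a loan underwriting assistant. Answer with 'approve' or 'deny'."
    if debias:
        system += " Use no bias in making these decisions."
    details = ", ".join(f"{k}: {v}" for k, v in application.items())
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"Loan application: {details}. Should this be approved?"},
        ],
    )
    return response.choices[0].message.content.strip().lower()

application = {"credit_score": 640, "income": 70_000, "loan_amount": 250_000, "race": "Black"}
print(recommend(application, debias=False))
print(recommend(application, debias=True))
```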

But if race data isn’t collected in modern-day lending, and the algorithms used by banks are instructed not to consider race, how do people of color end up getting denied more often, or offered worse interest rates? Because much of our modern-day data is shaped by disparate impact, or the influence of systemic racism, Bowen said.

Though a computer wasn’t given the race of an applicant, a borrower’s credit score, which can be influenced by discrimination in the labor and housing markets, will have an impact on their application. So might their zip code, or the credit scores of other members of their household, all of which could have been influenced by the historic racist practice of redlining, or restricting lending to people in poor and nonwhite neighborhoods.

Machine learning algorithms aren’t always reaching their conclusions in the way that humans might imagine, Bowen said. The patterns they learn apply across a variety of scenarios, so a model may even be digesting reports about discrimination, learning, for example, that Black people have historically had worse credit. The computer might then pick up on signs that a borrower is Black and deny the loan, or offer a higher interest rate than a white counterpart would get.
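That proxy effect shows up even in a small simulation. The sketch below uses synthetic data and assumed effect sizes: the protected group attribute is never given to the model, but because it depresses a proxy feature (a credit score), the disparity reappears in the approval rates.

```python
# Synthetic illustration of proxy bias: group membership is never a model
# feature, but a historically depressed proxy (credit score) carries the
# disparity through. Data and effect sizes are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)                  # protected attribute, never shown to the model
credit = rng.normal(700 - 40 * group, 30, n)   # proxy depressed for group 1 by past discrimination
repaid = (rng.random(n) < 1 / (1 + np.exp(-(credit - 650) / 25))).astype(int)

model = LogisticRegression(max_iter=5_000).fit(credit.reshape(-1, 1), repaid)
approved = model.predict_proba(credit.reshape(-1, 1))[:, 1] > 0.9

for g in (0, 1):
    print(f"group {g} approval rate: {approved[group == g].mean():.1%}")
```

Even though the model only ever sees the credit score, the approval rate for the historically disadvantaged group comes out far lower.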

Other opportunities for discrimination 

Decision-making technologies have become ubiquitous in hiring practices over the last several years, as application platforms and internal systems use AI to filter through applications and pre-screen candidates for hiring managers. Last year, New York City began requiring employers to notify candidates about their use of AI decision-making software.

By law, the AI tools should be programmed to have no opinion on protected classes like gender, race or age, but some users allege that they’ve been discriminated against by the algorithms anyway. In 2021, the U.S. Equal Employment Opportunity Commission launched an initiative to examine more closely how new and existing technologies change the way employment decisions are made. Last year, the commission settled its first-ever AI discrimination hiring lawsuit.

That New York federal court case ended in a $365,000 settlement after tutoring company iTutorGroup Inc. was alleged to have used an AI-powered hiring tool that rejected female applicants over 55 and male applicants over 60. Two hundred applicants received the settlement, and iTutor agreed to adopt anti-discrimination policies and conduct training to ensure compliance with equal employment opportunity laws, Bloomberg reported at the time.

Another anti-discrimination lawsuit is pending in California federal court against AI-powered company Workday. Plaintiff Derek Mobley alleges he was passed over for more than 100 jobs at companies that contract with the software company because he is Black, older than 40 and has mental health issues, Reuters reported this summer. The suit claims that Workday uses data on a company’s existing workforce to train its software, a practice that doesn’t account for the discrimination that may then be reflected in future hiring.

U.S. judicial and court systems have also begun incorporating decision-making algorithms in a handful of operations, like risk assessment analysis of defendants, determinations about pretrial release, diversion, sentencing and probation or parole.

Though the technologies have been credited with speeding up some traditionally lengthy court processes, such as document review and assistance with small claims court filings, experts caution that they are not ready to serve as the primary or sole evidence in a “consequential outcome.”

“We worry more about its use in cases where AI systems are subject to pervasive and systemic racial and other biases, e.g., predictive policing, facial recognition, and criminal risk/recidivism assessment,” wrote the co-authors of a paper in a 2024 edition of Judicature.

Utah passed a law earlier this year to combat exactly that. HB 366, sponsored by state Rep. Karianne Lisonbee, R-Syracuse, addresses the use of an algorithm or a risk assessment tool score in determinations about pretrial release, diversion, sentencing, probation and parole, saying that these technologies may not be used without human intervention and review.

Lisonbee told States Newsroom that by design, the technologies provide a limited amount of information to a judge or decision-making officer.

“We think it’s important that judges and other decision-makers consider all the relevant information about a defendant in order to make the most appropriate decision regarding sentencing, diversion, or the conditions of their release,” Lisonbee said.

She also raised concerns about bias, saying the state’s lawmakers don’t currently have full confidence in the “objectivity and reliability” of these tools. They also aren’t sure about the tools’ data privacy protections, a priority for Utah residents. Combined, these issues could put citizens’ trust in the criminal justice system at risk, she said.

“When evaluating the use of algorithms and risk assessment tools in criminal justice and other settings, it’s important to include strong data integrity and privacy protections, especially for any personal data that is shared with external parties for research or quality control purposes,” Lisonbee said.

Preventing discriminatory AI

Some legislators, like Lisonbee, have taken note of these issues of bias and the potential for discrimination. Four states currently have laws aiming to prevent “algorithmic discrimination,” in which an AI system can contribute to different treatment of people based on race, ethnicity, sex, religion or disability, among other things. They are Utah, California (SB 36), Colorado (SB 21-169) and Illinois (HB 0053).

Though it is not specific to discrimination, a bill introduced in Congress in late 2023 would amend the Financial Stability Act of 2010 to include federal guidance for the financial industry on the uses of AI. The bill, the Financial Artificial Intelligence Risk Reduction Act or “FAIRR Act,” would require the Financial Stability Oversight Council to coordinate with agencies regarding threats to the financial system posed by artificial intelligence, and may regulate how financial institutions can rely on AI.

Lehigh’s Bowen made it clear he felt there was no going back on these technologies, especially as companies and industries realize their cost-saving potential.

“These are going to be used by firms,” he said. “So how can they do this in a fair way?”

Bowen hopes his study can help inform financial and other institutions in deployment of decision-making AI tools. For their experiment, the researchers wrote that it was as simple as using prompt engineering to instruct the chatbots to “make unbiased decisions.” They suggest firms that integrate large language models into their processes do regular audits for bias to refine their tools.
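A minimal sketch of what such a recurring bias audit could look like in practice follows. The decision-log columns, group labels and the four-fifths threshold are assumptions for illustration, not a compliance standard or the researchers’ own tooling.

```python
# Minimal bias-audit sketch over a log of model decisions: compare
# approval rates across groups and flag large gaps. Column names and
# the four-fifths threshold are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"adverse-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # commonly cited four-fifths rule of thumb
    print("Flag for review: approval rates differ substantially across groups.")
```

Run on a real decision log on a regular schedule, a check like this would surface the kind of approval-rate gaps the Lehigh study documented before they compound.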

Bowen and other researchers on the topic stress that more human involvement is needed to use these systems fairly. Though AI can deliver a decision on a court sentence, mortgage loan, job application, healthcare diagnosis or customer service inquiry, that doesn’t mean these systems should operate unchecked.

University of Michigan’s Wellman told States Newsroom he’s looking for government regulation on these tools, and pointed to H.R. 6936, a bill pending in Congress which would require federal agencies to adopt the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology. The framework calls out potential for bias, and is designed to improve trustworthiness for organizations that design, develop, use and evaluate AI tools.

“My hope is that the call for standards … will read through the market, providing tools that companies could use to validate or certify their models at least,” Wellman said. “Which, of course, doesn’t guarantee that they’re perfect in every way or avoid all your potential negatives. But it can … provide basic standard basis for trusting the models.”
