BankThink

Workforce diversity can help banks mitigate AI bias

Financial institutions are learning that even algorithms can be biased.

Algorithms used to review loan applications, trade securities, predict financial markets, identify prospective employees and assess potential customers raise concerns about fairness and bias. The risk of algorithmic bias is foreseeable in the lending context, where reliance on certain data inputs — such as the decades-old credit scoring model, which does not take into account consumer data on rent, utility and cellphone bill payments — has already proven to have discriminatory effects. In particular, as the lending industry digitizes and moves toward “alternative lending,” i.e., considering nontraditional creditworthiness factors including behavioral data, financial institutions must balance the innovation of artificial intelligence against the substantial risk that machine learning could have a disparate impact on minority populations.

Of course, algorithms are not inherently dangerous. Artificial intelligence is rapidly revolutionizing the financial services industry, making it easier than ever to improve the delivery of products to all consumers. But the industry must balance the innovation of AI with the foreseeable risk of discrimination.

As the technology continues to grow, so too does artificial intelligence’s so-called “white-guy problem”: the risk that an innocuous algorithm may inadvertently reach discriminatory conclusions about communities less powerful than its Silicon Valley creators.

Biases in AI are not mere rounding errors — they are costly concerns for the financial services industry. The potential consequences of an innocuous-but-discriminatory algorithmic defect include crippling fair-lending lawsuits, heightened regulatory scrutiny and substantial reputational harm. The Supreme Court confirmed several years ago that business practices that discriminate against protected classes of people may violate federal anti-discrimination laws, even if the discrimination is unintentional. Technology and compliance managers in financial services, including fintech companies, must be mindful of how AI affects vulnerable consumers.

To begin, financial companies need to foster awareness of, and active engagement in, identifying and reducing the associated discrimination risks. This means countering biased or incomplete results, improving the transparency of decision-making and addressing a general lack of consumer awareness and understanding.

But improving fairness in AI is easier said than done. The biggest challenges for AI engineers are contending with the accuracy and integrity of the data inputs and determining what data can and should be used in developing or operating AI. Financial institutions must evaluate what data is considered relevant, whether there are gaps or inconsistencies in the available data, how to clean the data and whether the data is truly representative.

The algorithm may predict that I am a bad credit risk because I live in a certain part of Michigan, or because my parents were born in another country. But is where I live or where my parents are from a fair proxy for access to credit? Lenders, who increasingly rely on machine learning to predict creditworthiness based on alternative data, must evaluate every data point closely. For instance, creditors must determine how and why cellphone location data might be relevant to predicting an applicant’s likelihood of repaying a loan. If location is somehow predictive, are there gaps or inconsistencies in the location data collected for an individual or across a lender’s portfolio? Inconsistent location data could be the result of any number of factors, including varying degrees of cellphone service, the affordability of cellphone data by market, individual behavioral tendencies, use of proxy services or workplace policies governing cellphone use.
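One way to operationalize that scrutiny is to screen each alternative data point before it ever reaches a model. The sketch below, written in Python with pandas, checks a single hypothetical feature (“location_signal”) for coverage gaps and group-level differences across a protected attribute; the column names, the data and the helper function are illustrative assumptions, not a description of any particular lender’s system.

```python
# Minimal sketch: screening one alternative-data feature for coverage gaps
# and group-level differences. The column names ("location_signal", "group")
# and the sample records are hypothetical.
import pandas as pd

def screen_feature(df: pd.DataFrame, feature: str, protected: str) -> pd.DataFrame:
    """Report group size, missing rate and mean feature value per protected group."""
    return df.groupby(protected)[feature].agg(
        n="size",
        missing_rate=lambda s: s.isna().mean(),
        mean_value="mean",
    )

# Example usage with made-up applicant records.
applicants = pd.DataFrame({
    "group":           ["A", "A", "A", "B", "B", "B"],
    "location_signal": [0.9, 0.8, None, 0.4, None, None],
})
print(screen_feature(applicants, "location_signal", "group"))
```

Large coverage gaps or sharp group-level differences do not prove that a feature is an unfair proxy, but they flag the data points that warrant the closer evaluation described above.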

Generally, the algorithm itself is not the origin of the bias. The problem is the data being analyzed.

Another important consideration is the decision outcomes that AI generates in the lending context. For instance, what deductions might an algorithm make about an applicant’s lifestyle data — whether she uses a mobile payment platform for groceries, whether he uses a dating app or whether she speaks certain languages? Do those algorithmic deductions have the effect of predicting race or gender? AI engineers must ensure that machine learning tools used for lending fairly predict the creditworthiness of applicants, and that the results do not have the effect of excluding groups of people from equal access to credit. Regardless of the expense and time involved, everyone, from large established companies to small startups, should audit AI results.
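An outcome audit of this kind can start simply. The sketch below compares approval rates across protected groups and computes each group’s ratio to the most-favored group, in the spirit of the informal “four-fifths rule” sometimes used as a screening heuristic; the column names, the data and the 0.8 threshold are illustrative assumptions, not a legal standard.

```python
# Minimal sketch of an approval-rate audit across protected groups.
# The column names, the sample data and the 0.8 screening threshold are illustrative.
import pandas as pd

def adverse_impact_ratios(decisions: pd.DataFrame,
                          outcome: str = "approved",
                          group: str = "group") -> pd.Series:
    """Each group's approval rate divided by the highest group's rate."""
    rates = decisions.groupby(group)[outcome].mean()
    return rates / rates.max()

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})
ratios = adverse_impact_ratios(decisions)
print(ratios)
print("flag for review:", (ratios < 0.8).any())  # informal four-fifths screen
```

A low ratio does not by itself establish a violation, but it identifies results that merit the closer, human review of data points and model behavior described above.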

AI bias is arguably the result of human bias. Thus, financial institutions must hire a diverse cross-section of employees to diversify the perspectives baked into the design of machine-learning systems. Inclusivity matters not only from a best-practices and global business perspective, but also because an AI system will reflect the values of its designers. Without diversity and inclusivity, a financial institution risks constructing AI models that are exposed to antiquated, prejudicial and stereotypical ideas that violate anti-discrimination and fair-lending laws — and embarrass the institution — when released on a wide scale.

Since the perspectives of those who create AI systems shape them, companies should not only hire diverse teams, they should also train everyone who touches the development of AI on diversity and inclusion as well as on fair-lending and anti-discrimination laws. Many AI developers are white, male and from similar backgrounds in terms of education and experience. There are countless examples of how this causes problems, from facial recognition that doesn’t recognize people with darker skin tones, to voice recognition that doesn’t hear women.

Financial regulators, such as the Consumer Financial Protection Bureau, have examination and compliance manuals that speak to an institution’s obligations under federal fair-lending laws. Financial institutions whose AI algorithms negatively impact consumer access to financial services without a legitimate justification will face consequences from consumer watchdogs, which will investigate suspect algorithms, training sets and the underlying data. Companies can no longer reserve fair-lending training for lawyers and compliance professionals — AI engineers must also understand how each data point used for machine learning amounts to a lever in their companies’ fair-banking compliance apparatus. AI teams must also be trained on anti-discrimination laws and implicit bias, with the emphasis that negative impacts on protected classes of people can often be just as costly as ill-intended acts. If AI is to be safe and far-reaching, efforts must be made not only to commit to fairness and due process, but also to ensure that the culture in which AI is designed is welcoming to all people, including women and minorities.

Financial services companies should also consider implementing a voluntary code of conduct to review and evaluate internal practices. This could include ensuring that AI decisions are checked by humans before they are released and allowed to have real-life impacts. For example, this kind of review and evaluation process is exactly what Article 22 of the European Union’s General Data Protection Regulation, or GDPR, is trying to accomplish. The GDPR protects individuals from being subjected to purely automated decision-making, including profiling, without their consent, especially if that decision-making produces legal effects. Financial institutions, even those not subject to the GDPR, can stay ahead of the curve by testing and conducting risk assessments on AI-influenced products before they are launched, to anticipate negative outcomes such as discrimination.
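In practice, a human-review check can be as simple as routing certain automated decisions to a person before they take effect. The sketch below is a minimal illustration of that idea, loosely inspired by the Article 22 principle described above; the score field, the review band and the rule that denials get a human look are hypothetical choices a compliance team might make, not anything prescribed by the regulation.

```python
# Minimal sketch of a human-review gate for automated credit decisions.
# The score scale, the 0.4-0.6 review band and the routing rule are
# illustrative assumptions, not regulatory requirements.
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    score: float        # model output in [0, 1]; higher means lower predicted risk
    auto_approved: bool

def route(decision: Decision, review_band=(0.4, 0.6)) -> str:
    """Send denials and borderline scores to a human reviewer before release."""
    low, high = review_band
    if not decision.auto_approved or low <= decision.score <= high:
        return "human_review"
    return "auto_release"

# Example: a borderline applicant is held for review rather than auto-released.
print(route(Decision(applicant_id="12345", score=0.55, auto_approved=True)))
```

The same gate can sit alongside the pre-launch disparity testing described above, so that both individual decisions and the model as a whole get a human look before they affect consumers.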

Discrimination and bias in the financial services industry perpetuate inequality by sidelining vulnerable communities and populations. AI has the potential to help level the playing field by making financial services faster, simpler and more accessible to protected classes. But without intentional care from stakeholders in the financial services industry, AI may generate very unintentional, costly and unfair results.
