- Key insight: If we allow algorithms to inherit yesterday’s incentives — maximizing return, minimizing empathy — then tomorrow’s system will be flawlessly efficient at reproducing inequality.
- What’s at stake: The code we write now — technical and moral — will shape how capital flows for decades, and whether AI doubles down on inequality or powers a more inclusive future.
- Supporting data: In 2023, the global financial sector spent roughly $35 billion on AI, a figure expected to approach $126 billion by 2028.
When banks first embraced automation, it was about scale. When they went digital, it was about convenience. Now, as they adopt AI, it must be about values.
We are standing at a critical juncture. AI is no longer just a future trend; it is already a live operating force, making credit decisions, underwriting loans, drafting policies and shaping how billions experience money. In 2023, the global financial sector spent roughly $35 billion on AI, a figure expected to approach $126 billion by 2028.
If we allow algorithms to inherit yesterday’s incentives — maximizing return, minimizing empathy — then tomorrow’s system will be flawlessly efficient at reproducing inequality. But if we teach AI to reflect human and community values, it can become a powerful amplifier of financial inclusion.
AI is already learning from the financial system we have, not the one we want.
What we need instead is a moral architecture for financial AI: a framework that governs how algorithms are designed, trained, and deployed, grounded in the idea that finance is a social contract as well as a business.
At a minimum, this framework should ensure that AI in finance is purpose-driven, with every major use case tied to clear customer and community outcomes, not just cost savings or revenue lift. It should also be trained on fair data: inputs stress-tested for historical bias, enriched through partnerships with mission-driven lenders and communities, and monitored through routine disparate-impact testing, with less-discriminatory alternative models adopted where available. And because AI is still unfamiliar to most customers, its use in finance must be easily explainable, so that customers can understand key decisions and employees are trained and empowered to challenge algorithmic outcomes.
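To make "routine disparate-impact testing" concrete, here is a minimal sketch of what such a check could look like in code. It is an illustration, not a compliance tool: the column names, the toy decision log, and the 0.8 trigger (the familiar "four-fifths" rule of thumb) are all assumptions for the example.

```python
# A minimal sketch of a routine disparate-impact check on lending
# decisions. Column names, the toy decision log, and the 0.8 trigger
# (the common "four-fifths" rule of thumb) are illustrative
# assumptions, not a legal or regulatory standard.

from typing import Optional

import pandas as pd

def adverse_impact_ratios(decisions: pd.DataFrame,
                          group_col: str = "group",
                          approved_col: str = "approved",
                          reference_group: Optional[str] = None) -> pd.Series:
    """Each group's approval rate divided by the reference group's rate."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    ref_rate = rates.max() if reference_group is None else rates[reference_group]
    return rates / ref_rate

# Hypothetical decision log; in production this would be the live log
# of automated approvals and declines.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

for group, ratio in adverse_impact_ratios(decisions).items():
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({status})")
```

In practice, a check like this would run on production decision logs on a fixed cadence, with flagged gaps triggering the search for the less-discriminatory alternative models described above.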
Additionally, AI in finance must operate under accountable human oversight, with boards and senior management treating AI as a core element of risk and conduct, setting measurable fairness metrics, and tying leadership incentives to meeting them. In the same vein, it should be transparent and auditable, allowing regulators, investors, and communities to see how automated systems affect different groups over time.
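Auditability "over time" can be made similarly concrete. The sketch below, again with assumed column names and an assumed monthly cadence, shows how a decision log could be rolled up into the kind of recurring fairness report a board or regulator might review.

```python
# Sketch of a recurring fairness audit over a decision log, producing
# a group-level trend report a board or regulator could review.
# Column names and the monthly cadence are assumptions.

import pandas as pd

def monthly_fairness_report(log: pd.DataFrame) -> pd.DataFrame:
    """Approval rate per group per month, plus the gap between groups."""
    log = log.assign(month=log["decided_at"].dt.to_period("M"))
    rates = (log.groupby(["month", "group"])["approved"]
                .mean()
                .unstack("group"))
    rates["gap"] = rates.max(axis=1) - rates.min(axis=1)
    return rates

log = pd.DataFrame({
    "decided_at": pd.to_datetime(
        ["2024-01-05", "2024-01-20", "2024-02-03", "2024-02-18",
         "2024-01-09", "2024-01-27", "2024-02-11", "2024-02-25"]),
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   1,   0],
})

print(monthly_fairness_report(log))
```

Publishing reports like this on a fixed schedule is one concrete way to make "auditable over time" more than a slogan.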
Too many institutions still ask, “What can AI do for us?” when the better question is, “What do we want AI to do for society?” The technology’s greatest promise in finance lies in its ability to personalize services and extend them to people and places traditional models have ignored or misjudged.
We already see glimmers of this future. Take Verity Credit Union in Washington State, a community development financial institution, or CDFI. Partnering with Zest AI, it used machine learning to reassess credit applications that traditional models had declined. The result: approvals for members aged 62 and over increased by 270%, with lower delinquencies.
Similarly, Tennessee-based CDFI BetterFi used Stratyfy’s AI-powered platform to customize its credit decision management and adopt “fast track” approval flags. These changes enabled BetterFi’s expansion from rural to urban service areas, a 21% increase in approvals to borrowers of color, and a 20% increase in approvals to moderate-income borrowers.
Scaled thoughtfully, that kind of inclusive AI could help small farmers access microloans via mobile data, women entrepreneurs gain credit through alternative information instead of outdated scores, and first-time homebuyers be judged on potential, not just history. But it will only happen if we are deliberate about how AI learns.
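As one hedged illustration of how "alternative information" could enter a credit decision, the sketch below scores a thin-file applicant on hypothetical features such as mobile-money payment regularity. The features, the toy data, and the model are illustrative assumptions, not any lender's production system.

```python
# Sketch of a thin-file applicant scored on alternative data instead of
# a traditional bureau score. The features (on-time mobile-money payment
# rate, months of observable income history), the toy data, and the
# model are hypothetical illustrations, not a production system.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: [on-time payment rate, months of income history]
# with observed repayment outcomes (1 = repaid).
X = np.array([[0.95, 24], [0.90, 18], [0.40, 6], [0.55, 9],
              [0.85, 30], [0.30, 4], [0.75, 12], [0.20, 3]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# An applicant with no bureau file but a strong mobile payment record.
applicant = np.array([[0.88, 10]])
prob = model.predict_proba(applicant)[0, 1]
print(f"estimated repayment probability: {prob:.2f}")
```

Even in a sketch this simple, the design choice matters: the model learns from repayment behaviour the applicant actually demonstrated, not from a history the system never recorded.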
Networks such as the Global Alliance for Banking on Values have begun to turn these principles into shared practice.
That responsibility now rests with every financial institution’s leadership team and board, not just those that identify as “values-based.” Leaders across banking, insurance, asset management, and fintech should treat AI strategy as a values strategy: setting inclusion and fairness goals for AI, requiring evidence that models meet these goals before deployment, investing in workforce upskilling in digital ethics, and collaborating to establish shared standards for responsible AI.
We have a brief window to act. The code we write now — technical and moral — will shape how capital flows for decades, and whether AI doubles down on inequality or powers a more inclusive future.