Safe, Observable and Auditable: A Framework for Responsible AI

Articles
October 12, 2023

Before you jump in: Read “How AI is Transforming Bill Payments” for a detailed primer on PayNearMe’s thoughts on artificial intelligence and machine learning.

Widespread awareness of AI in financial services has caused a new dichotomy to form. On one hand, the technology opens new opportunities for efficiency, innovation and convenience. Those who embrace AI will benefit from its ability to automate processes, enhance the customer experience and significantly reduce costs, creating a more accessible financial services market for all stakeholders.

On the other hand, large-scale change comes with risks. Released haphazardly and without sufficient guardrails, AI can do more harm than good, leading to risk management nightmares and harsh regulatory actions.

This is why everyone in the AI supply chain needs to incorporate a framework for Responsible AI that can amplify the benefits while reducing potential harm.

At PayNearMe, we’re embracing responsible use of AI and ML to drive better payment experiences and outcomes for our clients. We believe taking the time to do things the right way will mitigate risk and reduce unpredictability, without neutering the benefits of AI.

“As a foundational approach, we’re focused on using AI responsibly. We want to innovate as fast as we can, yet do it in ways that are safe, observable, and auditable for our clients and our business.”

Roger Portela, Senior Director, Product Management

What is Responsible AI?

Think of responsible AI as a set of guardrails. The goal is to allow models to accelerate your existing goals (such as improving payment acceptance rates or driving down support volume) without going off the rails. By putting in place specific techniques and tactics, companies using data-driven technologies like AI and ML can help safeguard against ethical, legal and compliance risks.

AI risk comes in many forms. For starters, it can raise issues around data privacy, ethical use of data, discriminatory bias and lack of transparency. Anyone in the financial services industry knows those are serious red flags, each of which carries potential compliance, risk, regulatory and brand reputation costs.

In addition to maintaining fairness and trust, organizations using AI need to be able to clearly explain how they arrived at decisions that impact customers. For example, responsible AI can help lenders avoid backlash over credit declines driven by AI risk decisioning models, a topic now under close scrutiny by the Consumer Financial Protection Bureau (CFPB).

The CFPB has issued new guidelines for lenders using AI, emphasizing: “Even for adverse decisions made by complex algorithms, creditors must provide accurate and specific reasons. There is no special exemption for artificial intelligence.”

That leads us to building AI and ML models that have three defining characteristics:

  • Safe: Models should not introduce new harm to consumers or the business
  • Observable: Outputs should track towards a specific goal, and practitioners should be able to understand and explain the output
  • Auditable: AI should leave a “paper trail” and stand up to compliance, regulatory and scientific rigor (a minimal sketch of such a paper trail follows this list)
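
To make “auditable” concrete, here is a minimal sketch of what a decision paper trail might look like: every prediction is logged with its inputs, output and model version, plus a checksum so tampering is detectable later. The function, file name and field values are hypothetical illustrations, not part of any PayNearMe system.

```python
# Illustrative only: a minimal "paper trail" for model decisions.
# log_prediction and AUDIT_LOG are hypothetical names, not a real API.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "model_audit.jsonl"  # append-only log, one JSON record per line

def log_prediction(model_version: str, features: dict, output: float) -> None:
    """Record everything needed to reproduce and explain a decision later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "output": output,
    }
    # Hash the record contents so tampering is detectable during an audit.
    record["checksum"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

# Hypothetical usage: log a risk score for a $120 ACH payment.
log_prediction("risk-model-1.4.2", {"amount": 120.0, "channel": "ach"}, 0.07)
```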

Building Guardrails That Fit Your Goals

Clear goals are imperative when building a responsible AI framework, and they will often depend on the specific business, industry and use case. For financial services firms (such as auto lenders or banks), these guidelines often go beyond the scope of general consumer software products.

For example, here are some of the goals PayNearMe has in mind for our responsible AI framework:

  • Protect consumer privacy. Our data ecosystem is built with privacy and security in mind to safeguard consumer financial information and payments data in a secure and compliant environment. We place this front and center as an innovator in the payments industry.
  • Reduce data bias. Unintended bias in AI or ML models can lead to misapplied data and costly outcomes for clients and their customers. PayNearMe is focused on mitigating AI bias through techniques such as fairness and bias detection, monitoring models regularly and adjusting them as needed (one such check is sketched after this list).
  • Enable auditable decisions. Companies making critical business decisions based on AI/ML recommendations or predictions will often need explainable audit trails. Particularly for billers and lenders using AI to assess credit risk, it’s essential to provide specific reasons for credit declines (as noted earlier), not just check a box on a list of sample reasons. As a best practice, PayNearMe models are built using explainable AI techniques that enable clear, contextual descriptions of how decisions are derived (a second sketch after this list shows one way decline reasons can be surfaced).
  • Avoid AI hallucinations. With all the buzz around ChatGPT and other generative AI tools, this issue is becoming a major concern. Hallucinations occur when outputs aren’t grounded in the model’s training data; instead, the generative AI fills gaps with plausible-sounding but potentially false content. Companies eager to tap into this technology will need to exercise extreme caution and manual oversight, especially for customer-facing applications where reliability of information is essential to avoid risk.
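
As one concrete illustration of the bias-detection bullet above, the sketch below computes a demographic parity gap, the difference in approval rates between groups, and flags the model for review when the gap exceeds a tolerance. The group labels, field names and 10% threshold are assumptions invented for the example, not PayNearMe’s production rules.

```python
# Illustrative only: one common bias-detection check (demographic parity).
from collections import defaultdict

def approval_rates(decisions: list) -> dict:
    """Approval rate per group; each decision has 'group' and 'approved' (0/1)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += d["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(decisions: list) -> float:
    """Largest difference in approval rates between any two groups (0 = parity)."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates)

# Hypothetical decision log with two groups.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
]
if parity_gap(decisions) > 0.10:  # tolerance is an assumed policy choice
    print("Potential bias detected; route the model for review and adjustment.")
```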
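
And as an illustration of the auditable-decisions bullet, here is one simple way specific decline reasons can be derived rather than checked off a sample list: rank each feature by how much it pulled a linear risk score below a baseline, then map the largest negative contributors to human-readable reasons. The weights, baseline and reason text are invented for the example; production systems typically use richer explainability tooling such as SHAP.

```python
# Illustrative only: turning feature contributions into specific,
# human-readable decline reasons for a simple linear risk score.
WEIGHTS = {"utilization": -2.0, "missed_payments": -1.5, "tenure_years": 0.8}
BASELINE = {"utilization": 0.30, "missed_payments": 0.0, "tenure_years": 4.0}
REASONS = {
    "utilization": "Credit utilization is higher than typical approved applicants.",
    "missed_payments": "Recent missed payments lowered the score.",
    "tenure_years": "Shorter account history lowered the score.",
}

def decline_reasons(applicant: dict, top_n: int = 2) -> list:
    """Rank features by how far they pulled the score below the baseline."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [REASONS[f] for f in worst if contributions[f] < 0]

# Hypothetical applicant; prints the two most impactful decline reasons.
print(decline_reasons({"utilization": 0.85, "missed_payments": 2, "tenure_years": 1}))
```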

At the end of the day, it’s important to remember that AI should aid in your existing processes and follow many of the guidelines that have been established over time to protect the integrity of the financial services industry and the safety of consumers.

We’re excited to see how the industry embraces advancements in AI, and we’re proud to be taking a responsible approach.

To learn more, contact us here or email sales@paynearme.com.
