

Contextualizing Responsible AI Practices for Fraud Detection


By Mary Scott Sanders and Melisa Bardhi

May 16, 2022

Developing and sharing Responsible Artificial Intelligence (AI) practices is a key element in protecting the interests and safety of all citizens and organizations. While practitioners and governance bodies have created many helpful documents and guidelines that inventory Responsible AI best practices, implementing Responsible AI involves more than checking off items on a list. Instead, it’s a complex task that includes determining which practices are best for achieving security, privacy, fairness, explainability, and sustainability for a given task, and honing them for a given context.

At Excella, we work to ensure that our solutions are secure, protect the privacy and data rights of stakeholders, and are accountable, fair, explainable, and sustainable. We use Responsible AI best practices as reference materials to guide our teams on this journey. Using examples from several fraud detection projects, we share how our teams adopted and adapted Responsible AI best practices to fit the context of various clients and meet their project goals.

Best Practices We Adopted

Consequence Scanning 

Consequence scanning is an invaluable exercise performed in the early design phases of a project and regularly revisited during implementation. Teams speculatively identify the full range of both risks and benefits of the proposed solution, while remaining cognizant of unintended consequences and proactively developing mitigation strategies for all stakeholders. Consequence scanning is crucial to help the development team and stakeholders think through the solution, its implications, and the optimal way to apply practices to mitigate potential harm. We begin each new project and AI/ML feature with this exercise and return to it periodically to ensure that we remain aligned in our goals as our work evolves.  

Human-in-the-loop 

The purpose of “human-in-the-loop” design is to keep humans (aka “people”) ultimately in control of the results an AI solution produces, whether directly, by having veto power over each AI decision, or indirectly, by providing feedback in the model training process. Since our teams operate in a variety of high-stakes contexts that can have a significant impact on users and society, we partner with our clients to create custom solutions that achieve shared goals. We apply only as much automation as is needed to meet objectives desirable to both the client and end users. In one case, we limited our product to operating as a research aid for investigators rather than a tool that makes decisions autonomously. Doing so allowed humans with domain expertise to retain full decision-making power and accountability for the determinations affecting users. Human oversight also provides an additional buffer to protect users from erroneous findings. By using AI to surface and prioritize findings for expert review, people remain accountable for the outcomes of the process our system is embedded in.
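To make this concrete, here is a minimal sketch of that human-in-the-loop pattern. The class, function names, and threshold are illustrative assumptions rather than the project’s actual code: the model only surfaces and ranks findings, and the final determination is recorded by a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    case_id: str
    fraud_score: float   # model output in [0, 1]
    explanation: str     # short, human-readable rationale

def build_review_queue(findings, review_threshold=0.5):
    """Surface and rank findings for human review; never auto-decide."""
    flagged = [f for f in findings if f.fraud_score >= review_threshold]
    # Highest-risk cases first, so reviewer time goes where it matters most.
    return sorted(flagged, key=lambda f: f.fraud_score, reverse=True)

def record_decision(finding, reviewer_id, decision, rationale):
    """The human reviewer, not the model, owns the final determination."""
    return {
        "case_id": finding.case_id,
        "model_score": finding.fraud_score,   # kept for audit, not as the verdict
        "reviewer": reviewer_id,
        "decision": decision,                 # e.g., "no_action" or "escalate"
        "rationale": rationale,
    }
```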

Using Compute Resources Efficiently 

Using compute resources efficiently is a rare instance where ethical and monetary incentives align; using fewer resources reduces both environmental impact and cloud provider costs. We apply several such practices on our projects (one generic illustration is sketched below).

In general, these practices are good for the environment, good for building client relationships, and relatively low-lift to implement technically.
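As one generic illustration (an assumption on our part, not necessarily a practice used on these projects), early stopping avoids paying, in both dollars and energy, for training iterations that no longer improve the model:

```python
# Illustrative only: stop training once additional rounds stop helping,
# rather than always running the full, budgeted number of iterations.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

model = GradientBoostingClassifier(
    n_estimators=500,          # upper bound on boosting rounds
    validation_fraction=0.1,   # hold out data to watch for diminishing returns
    n_iter_no_change=10,       # stop after 10 rounds with no improvement
    random_state=0,
)
model.fit(X, y)
print(f"Stopped after {model.n_estimators_} of 500 possible rounds")
```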

Security Best Practices 

Ensuring your AI model is secure is an essential part of protecting citizens and organizations from unethical use. Of all the Responsible AI principles, security has the most fully matured set of corresponding practices, which helps make the process of implementing these practices relatively straightforward. On projects, we implement safeguards such as zero trust architecture, encrypting data and models at rest and in transit, and using substitute data (mocked, anonymized, or synthesized) instead of personally identifiable information when possible. 
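For instance, here is a minimal sketch of the substitute-data idea: PII columns are replaced with stable, non-reversible tokens before the data ever reaches model training. The column names, salt handling, and helper at the end are illustrative assumptions, not the actual pipeline.

```python
# A minimal sketch of pseudonymizing PII before data reaches model training.
import hashlib
import pandas as pd

PII_COLUMNS = ["name", "ssn", "email"]  # illustrative column names

def pseudonymize(value: str, salt: str) -> str:
    """Replace a PII value with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def strip_pii(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    safe = df.copy()
    for col in PII_COLUMNS:
        if col in safe.columns:
            safe[col] = safe[col].astype(str).map(lambda v: pseudonymize(v, salt))
    return safe

# The model pipeline only ever sees the pseudonymized frame, e.g.:
# training_frame = strip_pii(raw_frame, salt=load_salt_from_secret_store())  # hypothetical helper
```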

 

Best Practices We Adapted

During fraud detection projects, we modify the following Responsible AI best practices to fit each client’s context: 

Giving Users Clarity and Control over Data Use 

Providing users clarity and control over data use is of paramount importance, but it is inherently complicated in the fraud detection context. When building a fraud detection solution, you are assuming that some subpopulation of your users will be fraudulent. Therefore, you do not want to broadcast how you are detecting fraud and risk alerting the fraudulent minority, who could use that information to avoid detection. Unfortunately, this disadvantages non-fraudulent users, because they are unaware of exactly how their data is being used and therefore how to spot concerns should they arise. In one case, we gave users visibility into the data we stored about their case while not exposing the process by which the AI system reached a determination. This let us strike the right balance between maintaining information transparency and protecting the integrity of the fraud detection application.
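A minimal sketch of that balance might look like the following: the stored case data a user may see is separated from the detection internals that stay hidden. The field names here are illustrative assumptions, not the actual system’s schema.

```python
# Users can see what data is stored about their case; model internals are never
# included in the response they receive.
INTERNAL_FIELDS = {"fraud_score", "model_version", "triggered_rules", "feature_vector"}

def user_visible_view(case_record: dict) -> dict:
    """Return the stored case data a user may see, omitting detection internals."""
    return {k: v for k, v in case_record.items() if k not in INTERNAL_FIELDS}

case = {
    "case_id": "A-1024",
    "submitted_documents": ["w2.pdf", "lease.pdf"],
    "status": "under_review",
    "fraud_score": 0.82,          # never exposed to the end user
    "triggered_rules": ["R-17"],  # never exposed to the end user
}
print(user_visible_view(case))
```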

Setting a Requirement for Fairness 

One best practice is to set a requirement that AI is fair, just as you would set a requirement for a level of model accuracy. Within the Responsible AI conversation, there are multiple definitions of fairness, one of which is equal treatment across all groups.

Because absolute fairness is difficult to define and may be impossible to achieve, the best way to make progress on projects is to make the system fairer step by step. For one client, our team incorporated that definition of fairness, equal treatment across all groups, via an automated user evaluation system. The system evaluates individual users against all other users to highlight points of concern or anomalies that could indicate a potential case of fraud to a reviewer. It does this consistently by performing the evaluation automatically, across all applications, at a regular cadence and with automatic monitoring. This consistency transforms the client’s manual review process, which is limited by capacity and reviewer experience, into one where the designated decision-maker is empowered to make decisions both fairly and efficiently. The system is helping reviewers identify more fraud than was previously possible given human capacity constraints. AI is also helping redistribute resources from fraudulent users to non-fraudulent users by enabling reviewers to spend more time on core business operations rather than on identifying fraud itself. We actively work out the best strategies for each organization’s mission and its legal, technical, and cultural environment, continuing to evolve toward greater fairness.
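A rough sketch of this kind of consistent, automated evaluation is shown below. It is not the client’s actual system; the anomaly detector, contamination rate, and cadence are illustrative assumptions. The key point is that every application is scored under identical criteria, and flagged cases are routed to a human reviewer rather than auto-denied.

```python
# Every application is scored by the same model under the same criteria,
# so no group is singled out by an ad hoc manual process.
import numpy as np
from sklearn.ensemble import IsolationForest

def evaluate_all_applications(feature_matrix: np.ndarray, contamination=0.02):
    """Score every application consistently; return indices to route to reviewers."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    labels = detector.fit_predict(feature_matrix)   # -1 marks anomalies
    return np.flatnonzero(labels == -1)

# Intended to run on a fixed cadence (e.g., nightly) over *all* applications,
# with the flagged indices queued for expert review rather than auto-denied.
```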

Explainable Models over Black Box Models 

Before identifying the optimal model for any solution, you want to align on the level of explainability and interpretability required. While an interpretable model can associate a cause with an effect, one with high explainability can justify its results using specific, concrete parameters. In AI applications that can have significant implications for users, building explainable AI that can directly support its conclusions with justifiable logic is crucial. At Excella, we’ve developed solutions that are both interpretable and explainable. On one project, we found that the models best suited to our use case were neural networks, a kind typically deemed “black box.” Our client became comfortable deploying a neural network after we invested in educating them on how the model worked, interpreting its results across a large number of examples. Over time, our client has become familiar with the model and built an intuition for how effectively it will handle a new case. With another client, we opted for more explainable models, because simpler models addressed the business need successfully and there was no advantage in moving toward more opaque, black-box models. It is important to weigh model efficacy against explainability. By aligning with our clients on the level of interpretability and explainability required for our users, we ensure that the decisions we make are consistently backed by the level of justification expected.
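As one illustration of how a “black box” model’s behavior can be examined across many examples, the sketch below uses permutation importance with a small neural network. This is a generic technique chosen for illustration, not necessarily the approach used on the project.

```python
# Build intuition about an opaque model by measuring how much held-out
# performance drops when each feature is shuffled.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and see how much accuracy suffers on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance={result.importances_mean[i]:.3f}")
```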

 

Achieving Success with Responsible AI

Achieving success with Responsible AI begins with culture and context. While research into Responsible AI best practices can help provide guidance, there is no universal success formula for implementing Responsible AI. The best approach in any AI project will be one that’s determined after having tough conversations regarding tradeoffs, risks, and resource investment into building responsible tech that fits your context. Learning to adopt and adapt various best practices will guide you to the most applicable AI solution for your use case. 

Responsible AI is a field that continues to evolve. If you are passionate about building responsible, high-impact technology solutions for clients and learning within a growing organization, take a look at our open positions. 

