When an information services company discovered its model monitoring process was fraught with inefficiencies and roadblocks, it needed to redesign the platform to support automation, provide confidence in results, and facilitate widespread adoption throughout the organization. The solution reduced the cost and improved the efficiency of model performance monitoring, eliminated single-person dependencies in operating the process, reduced the number of analysts needed to monitor the models, and dramatically improved the coverage and speed of the execution cycle. Within months of implementation, a top-to-bottom audit of the new model performance monitoring process resulted in no findings, giving leadership comfort in the rigor and reliability of the system and confidence to act on modeling insights that drive the business.
While analyzing vast amounts of data on consumer and business credit history worldwide, an information services company discovered it needed help monitoring the performance of 200-plus models. The performance of these models was core to the company's mission and generated a sizeable portion of its $4 billion in annual revenue. The models being monitored were used predominantly to assess creditworthiness for consumer and business clients in industries including financial services, telecommunications, automotive, and insurance.
The company's model monitoring process was fraught with operational inefficiencies and roadblocks. For starters, the code base for monitoring the models and the data platform sat on different servers, adding to execution time. In addition, the complexity of the software and the level of manual work required to manage the process demanded a fully dedicated team: each dedicated data scientist could execute monitoring for only a few models. To make matters worse, the entire process suffered from a lack of clear ownership as job mobility moved people from one team to another.
Constantly battling these obstacles hampered the company’s ability to derive insights from data analytics, and those they did derive were often unreliable given the issues in model monitoring. Ultimately, the overall process often failed due to vacillating business requirements, software changes, and system limitations. This situation drove the need to improve and standardize the model performance monitoring process across the organization.
Redesign the model performance monitoring process to support automation, provide confidence in results, and facilitate widespread adoption throughout the organization.
“Our solution needed to pull data, calculate key statistical metrics to analyze over time, and then provide timely and consumable reporting to stakeholders.”
From the start, communication and collaboration with participants and stakeholders were essential to ensuring an effective solution. To prioritize which models required enhanced monitoring, we built a clear understanding of the problem that needed to be solved and the needs of leadership and key stakeholders when reviewing model monitoring reports. We gathered insight into the impact and materiality of model output downstream in the organization.
We started by defining the objectives required by leadership. The goal was a fully automated solution that gave stakeholders a consumable report, providing both the confidence to act on model results and a solid audit trail to defend to regulators and compliance groups. Our solution needed to pull data, calculate key statistical metrics to analyze over time, and then provide timely and consumable reporting to stakeholders.
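The source does not name the statistical metrics used, but a common choice for tracking model score drift over time is the population stability index (PSI), which compares a current score distribution against a baseline. A minimal sketch, with hypothetical function and variable names:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare a baseline (expected) and current (actual) score
    distribution. Illustrative only; not the company's actual code."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))

    def bin_fractions(scores):
        counts = [0] * bins
        for s in scores:
            # Map each score to a bin over the combined range [lo, hi]
            i = min(int((s - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) when a bin is empty
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ai, ei in zip(a, e))
```

A rule of thumb in credit-risk practice treats PSI below 0.1 as stable and above 0.25 as a significant shift warranting investigation, which is the kind of thresholded signal a monitoring report can surface automatically.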
With objectives defined, we assessed the existing architecture and identified where it was inefficient. As the graphic below illustrates, the process we developed paired existing enterprise tools with open-source tools to establish a scalable system that could handle the high volume of processing and automation. We partnered with IT to build a centralized solution that performed the aggregation calculations and generated key metrics within the central Hadoop environment before transferring data to the enterprise. This standardized framework ensured consistency across monitoring processes and users while supporting strong data governance and security around individual PII and PCI data.
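The aggregation step described above, computing summary metrics centrally so that only de-identified aggregates leave the environment holding PII and PCI data, can be sketched as follows. The record layout and field names here are hypothetical; in practice this logic would run at scale inside the Hadoop environment rather than in plain Python:

```python
from collections import defaultdict

def aggregate_model_metrics(records):
    """Roll raw scoring records up into per-model summary metrics so only
    aggregates, not individual-level data, are transferred downstream.
    Each record is assumed to be (model_id, score, actual_outcome)."""
    acc = defaultdict(lambda: {"n": 0, "score_sum": 0.0, "positives": 0})
    for model_id, score, outcome in records:
        a = acc[model_id]
        a["n"] += 1
        a["score_sum"] += score
        a["positives"] += int(outcome)
    # Emit only volumes and rates; no record-level fields survive
    return {
        m: {"volume": a["n"],
            "mean_score": a["score_sum"] / a["n"],
            "event_rate": a["positives"] / a["n"]}
        for m, a in acc.items()
    }
```

Keeping this roll-up in one central place is what makes the framework consistent across monitoring processes and users: every model's metrics are computed by the same code path before reporting.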
With the platform's newfound speed and power, we were able to add more insightful views and segmentation to the reporting. These enhancements improved the platform's value and usability for stakeholders, as well as the ease with which monitoring reports could be interpreted. With reports that were easy to interpret, analysts were able to dive deeper into the results and make business decisions with confidence. To ensure maximum visibility of the performance reports, a summarized PDF report was proactively delivered by email to each key stakeholder based on the execution cycle they selected: weekly, monthly, quarterly, or annually.
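The scheduled-delivery logic can be illustrated with a small routine that decides which subscribers are due a report on a given date. The subscription data, cycle rules (Mondays for weekly, the first of the month for monthly, and so on), and email addresses below are all hypothetical assumptions, not details from the engagement:

```python
import datetime as dt

# Hypothetical stakeholder subscriptions; cycles mirror the options above
SUBSCRIPTIONS = [
    {"email": "risk-lead@example.com", "cycle": "monthly"},
    {"email": "cro@example.com", "cycle": "quarterly"},
    {"email": "ops@example.com", "cycle": "weekly"},
]

def due_today(sub, today):
    """Return True if this subscriber should receive a report on `today`."""
    cycle = sub["cycle"]
    if cycle == "weekly":
        return today.weekday() == 0                          # Mondays
    if cycle == "monthly":
        return today.day == 1                                # 1st of month
    if cycle == "quarterly":
        return today.day == 1 and today.month in (1, 4, 7, 10)
    if cycle == "annually":
        return today.day == 1 and today.month == 1
    return False

def recipients_for(today):
    """List the email addresses due a summarized PDF report today."""
    return [s["email"] for s in SUBSCRIPTIONS if due_today(s, today)]
```

In a production setup this check would typically run from a daily scheduler, with the generated PDF attached to the outgoing email for each recipient returned.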
Within months of implementation, a top-to-bottom audit of the model performance monitoring process resulted in no findings, giving leadership comfort in the rigor and reliability of the system and confidence to act on modeling insights that drive the business.