WHITE PAPER
Decision Management | Analytics
Best Practices for Operationalizing Complex Analytic Models at Scale
Published 2019-10-03
© 2019 Fair Isaac Corporation. All rights reserved.




The adoption of predictive and prescriptive analytics across multiple industries is accelerating rapidly, fueled by the competitive need to drive business agility. The rationale for adoption is clear enough: Companies are sitting on mountains of relevant data that could help inform their business processes and decisions. To unlock the value of that data, they’ve hired data scientists and invested in data modeling and analytics tools. They have given the green light to advanced analytics projects, and the financial and operations stakeholders are looking for evidence that they are getting value from their investments. Many are still asking themselves whether they ultimately will derive the hoped-for business outcomes. Practical ROI metrics include business growth, increased automation and agility, and success in optimizing complex decisions.

Even stakeholders who are confident that analytics can deliver those outcomes may wonder whether their own organizations are ready for these initiatives, or whether they have rational roadmaps for success. The biggest hurdle to maximizing the business value of data is developing the capacity to operationalize analytics throughout the organization. This isn’t a technology problem—it’s an execution problem. Organizations must get the people, processes, and technology aligned and moving together to achieve results.

As in any complex initiative, success in analytics adoption is not inevitable. But it is within the capacity of virtually any well-run organization, with the right guidance, the right strategy, the right technology platform, and the right tools.

This white paper is intended to assure adopters that operationalizing advanced analytics, while a complex undertaking, is achievable and is likely to provide substantial value to their businesses.


How FICO can help you succeed

FICO is committed to enabling success in advanced analytics adoption, as a strategic partner, adviser, and provider of analytics technology and services.

FICO’s core business is in helping organizations make decisions by assessing and managing the risk in a decision. Predictive analytics and artificial intelligence were natural places for FICO to invest, and have grown into core competencies. The company has a large and growing patent portfolio in AI and machine learning, and a long history of operationalizing analytic models. These technologies have taken FICO and its clients beyond risk-focused applications for financial services and into the areas of cybersecurity, mobile, retail marketing, and the Internet of Things (IoT).

FICO provides:

• Technology—A complete platform (on-premises or in-cloud) supporting the end-to-end processes to develop, deploy, monitor, and improve analytic predictions and decisions.

• Intellectual property—Best practices (e.g., to evaluate your data and structure your model), pre-built models, and model development methodologies. FICO has a large and growing patent portfolio in artificial intelligence, machine learning, and decision science.

• A commitment to openness—Supporting the use of our customers’ preferred technologies within our platform.

• Consulting services—Bringing your modeling team up to speed on the most advanced and effective technologies and techniques.

• Model governance—Including model validation and ongoing performance assessment.

FICO has led the way in helping companies turn their analytic investments into business value for 60+ years. This paper addresses the issues that most often impede their efforts to operationalize analytics in this modern era of machine learning and artificial intelligence.


Is your organization ready for automated, analytically driven decisioning?

Any company that is in the early stages of analytics adoption should assess its objectives first. Virtually every industry has undergone, or is in the process of undergoing, a global digital transformation and has access to volumes of data unimaginable in the 1990s from diverse sources, many of which did not exist when senior executive stakeholders began their careers. Many companies have transitioned to data-driven commerce, but others are still struggling.

Even in organizations that have established data science capacity and have begun deploying analytics into niche applications, executives frequently express concern that they do not see an overall strategy. Their companies have succeeded tactically in specific analytics projects, but have not established a strategic core competency.

Fortunately, there are basic principles virtually any organization can follow to put its analytics adoption initiative on a firm footing.

Define the problem

Analytics are tools; their value is not inherent. Rather, value is realized by modeling the business problem and understanding where analytics will add value. This should be happening at the micro level within every business function. There should be an easy-to-use, explainable strategy that empowers everyone in the enterprise to draw on analytics to solve decisioning problems. Executive responsibility for this capacity should belong to a business stakeholder.

That stakeholder should determine how analytics can improve something already done today—e.g., how analytics can be incorporated into a business process to make a better, more informed decision, in many instances in automated fashion. Defining the problem in this way draws critical input and buy-in from business stakeholders, rather than relying solely on data scientists. That balance of priorities can help avoid the risk of investing in analytics that no one actually applies and which therefore provide no value to the organization.

Grow by iteration

Creating a collaborative, scalable, transparent process for getting advanced analytics out of the lab and into the everyday business is the only way to truly operationalize and get maximum business value from analytic investments.

Any system will need to incorporate a feedback loop and accommodate changes to improve decisions and outcomes. This can be challenging for those unaccustomed to an iterative learning process, but is critical in any large-scale initiative. Analytics is no exception. Models should be expected to continually improve, based on assessment of actual business results and adaptation to changing conditions, to yield better predictions. Setting this expectation in advance should satisfy concerns that the analytic model is a mysterious black box, and enable the business user to feel in control of the outcomes.


Scale the vision up

Once analytics are effectively deployed, the next challenge is to scale them up for widespread operational use. Analytics typically gain their first footholds in functions such as risk management, marketing, account originations, and the like. The most effective organizations migrate advanced analytics into production control systems, supply chain management, logistics, ERP, and more. They understand that the more analytics becomes embedded into their operational systems, the more they can automate decisions and improve business outcomes.

Break down the walls

Most organizations clearly can benefit from analytics adoption. But the companies best positioned for success are those that have set a strategic objective to break down the silo walls that keep individual functional teams from benefitting from each other’s analytic learnings. They gain value most immediately by sharing models among teams that serve as touchpoints for the same products or customers.

There is further value in multiple teams sharing development resources, so that there is capacity to develop the analytics needed across business functions, get them fully implemented and refresh them as needed. A highly valuable objective in organizations adopting analytics is the development of centralized capacity, not just for model deployment but for decisioning as well. Companies that have accomplished this centralization evolve the sense that decisions are connected, and their various teams achieve synergy by fully leveraging each other’s decisioning.

Many organizations can build centralized analytics expertise and deployment capacity on their own, organically. For those who need a boost, FICO offers strategic consulting through its Fair Isaac® Advisors consulting arm. Starting with a Current State Assessment, an on-site review of processes, data, analysis, and reporting capabilities, FICO consultants work with the client to develop a customized roadmap of prioritized actions that will help the organization venture beyond risk mitigation and build core competency.


Sources of risk

While virtually any organization can achieve some level of success with advanced analytics, clearly there are risk factors governing that success. These risks fall into at least two categories:

Strategic design

• Understanding the decision to be supported by the model, and the value to be gained by automating or improving that decision

• Determining the data required and available at the time of the decision

• Appreciating the degree to which the model’s outcomes must be explained, in the context of every single decision, to the decision operators (e.g., a cashier or customer swiping a payment card)

• Evaluating and contrasting the benefits of a well-made decision, versus the consequences of a poor decision, which may further influence the transparency and explainability required of the model

Technical

• Form of model—What mathematical family does the model belong to? Is it a regression, a decision tree, a neural network, or an ensemble of such types?

• Supervised or unsupervised—Is the model learned from data that contains a future business outcome (i.e., is it tagged, labeled, or otherwise bearing a known outcome, to yield a supervised model), or is the model learned from data that lacks these known outcomes, and thus an unsupervised model?

• Speed of the model—Informed by the business and decision context, how quickly must the model process its input data and return an estimate or decision, in order to be useful? Does this impede the depth of analysis that can be accomplished?

• Size of model—How much stateful memory does the model take up? What resources does it consume in training or at runtime? How much energy does it demand?

• Language of model—What statistical or programming language was used to develop the model? Common examples include R, Python, and Scala.

• Model framework—What data science framework or libraries were used to develop the model? And what specific versions of those libraries?

• Fixed or continuously learning—At runtime, is the model returning results from a static formula, containing patterns of pre-learned relationships, or is it continually learning from new inputs and updating its formula?

Attributes like these underscore the complexity of advanced analytics, but as the discussion below will make clear, all of these issues are controllable factors in model design and deployment. FICO has developed models for diverse decisioning scenarios, at many different scales of operation, on premises or in the cloud, on its FICO® Decision Management Platform and has assisted hundreds of organizations in operationalizing analytics on their own on-premises platforms.


What do we mean by “operationalizing”?

When we say “operationalize” a model, we mean the act of moving the model out of the analytic laboratory in which it was conceived, and making it available as robust, tested, runtime software that accurately computes the model—in real-time, batch, or streaming contexts—as needed to inform a timely decision.

Naturally, to achieve accurate computations, we must faithfully implement all of the model’s internal logic and parameters, such as the coefficients of a regression function, the connections and weights of a neural network, and the branching logic and leaf-node estimates of a decision tree. Getting all these details precisely correct is critical to an accurate production-time application (“scoring”) of the model.
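To make the fidelity requirement concrete, here is a minimal sketch (with made-up coefficients and feature names, not a FICO model) of a logistic regression carried into production as explicit parameters. Any discrepancy between these values and the lab-trained model changes every production score:

```python
import math

# Hypothetical coefficients: these must match the lab model exactly,
# or production-time scores will drift from the validated ones.
COEFFICIENTS = {"intercept": -1.5, "utilization": 2.0, "delinquencies": 0.8}

def score(record):
    """Compute the model's probability estimate for one input record."""
    z = COEFFICIENTS["intercept"]
    z += COEFFICIENTS["utilization"] * record["utilization"]
    z += COEFFICIENTS["delinquencies"] * record["delinquencies"]
    return 1.0 / (1.0 + math.exp(-z))  # logistic link function
```

The same care applies to a neural network's weights or a tree's split thresholds: the runtime artifact must carry the model's parameters bit-for-bit.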

And of course, we must also connect that model scoring software to the production-time data feeds (database query, event stream, messaging queue, etc.), and wire together the available data supply to the required inputs for the model’s scoring interface. We also must accommodate any necessary pre-processing of the data.

Most real-world models also contain feature generation steps, to translate raw inbound data (such as a date of birth) into modeling-ready values (such as an age, calculated in a whole number of years). And these transformations can range from almost trivial (e.g., capping a revenue value at some upper bound to eliminate the undue influence of outliers) to remarkably complex (e.g., computing a time-decayed moving average of purchase amounts at a specific merchant type, over the last 90 days, and comparing it to the amount of the pending transaction). Many models also include post-scoring transformations, such as computing the average or selecting the most confident prediction from among a committee of models.
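At production time, these transformations are ordinary code. A small sketch of the examples above (the half-life and bounds are illustrative assumptions, not FICO defaults):

```python
from datetime import date

def age_in_years(dob, as_of):
    """Translate a raw date of birth into a modeling-ready whole-number age."""
    had_birthday = (as_of.month, as_of.day) >= (dob.month, dob.day)
    return as_of.year - dob.year - (0 if had_birthday else 1)

def cap(value, upper):
    """Cap a raw value at an upper bound to blunt the influence of outliers."""
    return min(value, upper)

def decayed_average(events, half_life_days=30.0):
    """Time-decayed moving average over (days_ago, amount) pairs:
    each amount's weight halves every half_life_days."""
    weighted = [(0.5 ** (days / half_life_days), amt) for days, amt in events]
    total_weight = sum(w for w, _ in weighted)
    return sum(w * amt for w, amt in weighted) / total_weight
```

Every such transformation must be reproduced exactly at scoring time, since the model was trained against the transformed values, not the raw data.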

To operationalize a model means, at the very least, to enable the calculation of model features, of the model itself, and any other needed data transformations, within the production-time environment, as needed to inform and perform the real-world decisions.

Decisioning may be fully automated for this specific use case, or it may be embedded in a “human in the loop” application (e.g., for account originations or debt collection) to support intensive agent interaction.

You need to compartmentalize the steps in the model and orchestrate the whole in the context of the decision you’re supporting. Operationalizing isn’t really a job for modelers. It’s a job for an analytics team, working closely with IT, that includes specialists in software implementation, testing, deployment, and governance. While analytics requires specific types of expertise, the process increasingly mirrors the more conventional cycle of development, QA, deployment, and refinement used in software delivery generally.

FICO takes a technology-agnostic, requirements-first approach to analytics. No matter what commitments the organization has already made—whatever choice you have made with respect to the model form (e.g., regression, decision tree, random forest, neural network, time series), development language or framework, scale or speed requirement—FICO experts can help devise a practical solution, without recoding, to fit any performance or infrastructure requirements.

Figure 1: Model development flow—a feedback loop to improve the model: combine recent predictions with the latest business outcomes; evaluate data for emerging trends and new features; train many new models against the champion; select the best newly trained model; operationalize the updated model.
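The "train many new models vs. champion" and "select best" steps of Figure 1 reduce to a champion–challenger comparison against a business-aligned metric. A minimal sketch (the metric and model representation are assumptions for illustration):

```python
def select_model(champion, challengers, evaluate):
    """Keep the incumbent champion unless a newly trained challenger
    scores strictly better on the chosen business metric."""
    best = champion
    for model in challengers:
        if evaluate(model) > evaluate(best):
            best = model
    return best
```

Only the winner of this comparison proceeds to the "operationalize updated model" step, which keeps degraded candidates out of production.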


Technical risks: Is our infrastructure ready?

While the training of some large, complex machine learning models can be very time-, compute-, and energy-intensive, most models will run efficiently in the types of IT infrastructures typically found in mid- to large-size enterprises—or in cloud infrastructures. There are risks, however, that must be addressed to develop and deploy decisioning models effectively.

General classes of risk you will need to manage include issues of:

• Data supply: At the moment of the decision, do we actually have access to all the data needed by the model? Regardless of how the model performs in the lab, does the model require variables that simply are not available at runtime? We may have identified a data source that clearly is predictive, but which is expensive to obtain. Does the value justify the cost?

• Form of the model: Can we operationalize what the data scientist is handing us without modifying it? Some decisions can be automated using a relatively simple set of business rules. More complex decisions may require more sophisticated models. There are many such models, which fall into several mathematical families or forms: decision trees; scorecards; neural networks; support vector machines; and hybrid or ensemble models, including tree-based ensembles such as random forest or gradient-boosted tree models.

• Environment: Can we produce a model that can run on the business user’s platform—e.g., as an app for a cell phone?

• Speed: Your requirements for the decisioning application will be defined by the business—not the data scientist. A typical requirement might be to score 5,000 transactions per second in a request/response mode, or 2 million accounts in a batch window of 15 minutes, nightly. Those requirements have design implications. And they are not the typical requirements given to a data scientist, whose principal responsibility is to build the most powerful and predictive model possible, within the constraints of the relevant regulations. If the resulting model is so computationally complex that it violates the nonfunctional requirements defined by the business, then it fails as a project.

• Scale: What percentage of the business is going to rely on the model? How do we manage the risk of it being unavailable? Is there a failover in place? How big is the model itself, in terms of the volume of code, data storage requirements, and memory consumption at runtime? How can we parallelize calculations for greater scale, shorter computation times, and higher reliability?
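The batch requirement in the Speed bullet above implies a concrete per-record budget. A back-of-envelope helper (a sketch, not FICO tooling) makes the design implication explicit: 2 million accounts in a 15-minute (900-second) window leaves only 0.45 ms per record on a single worker.

```python
def per_record_budget_ms(records, window_seconds, parallel_workers=1):
    """Milliseconds available per record if the whole batch must
    finish inside the window, given the degree of parallelism."""
    return window_seconds * 1000.0 * parallel_workers / records
```

If the model's scoring time exceeds that budget, the design must either simplify the model or parallelize the calculation, exactly the trade-off the Scale bullet raises.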

FICO can provide consulting assistance to evaluate and overcome infrastructure issues such as these, based on its experience over decades, deploying analytics in a wide variety of infrastructures. And it has adopted deployment solutions that enable models to run in their native form, as written, without expensive recoding, in virtually any enterprise infrastructure.


Languages and frameworks

If your organization has invested time and resources to develop analytics, it is likely that you are committed to specific technologies and skills. Few organizations are deep in all the open source data science languages, frameworks, and libraries, and instead rely on a preferred subset of those technologies.

Today, R and Python represent the dominant languages and tools of choice for data scientists, even while SQL, Scala, and SAS have a strong presence. Each of these languages has its own syntax, its own set of operators, nouns, classes, libraries, and modules. Effectively, each language becomes a world unto itself, with few intersections with others. R has always been a programming language for statistics and data science, with a rich ecosystem of contributed modules. In contrast, Python serves many broad needs, but now possesses a prodigious and active collection of data science modules.

Typically, a data scientist’s career will reflect his or her expertise in either R or Python. Few scientists speak both languages with equal fluency, and teams will often focus on one language and its data science framework (e.g., scikit-learn for Python, or caret for R).

In turn, these technologies define data in their own unique terms, down to the supported scalar data types (i.e., integers, floats, doubles, strings, and Booleans), the collection types (lists, arrays, factors, data frames, and dictionaries), and the core operations one can perform on such data. It becomes nearly impossible to disentangle a model’s abstract computational content from the syntax and functions of the language. Further, it’s almost unthinkably difficult to translate (recode) a complex ML/AI model from one language to another with perfect fidelity.

In light of these considerations, our general strategy for operationalizing models developed with R and Python is to run them as written, natively on their respective R and Python interpreters. But this alone provides no guarantee of accurate or efficient execution.

Execution contexts

Another challenge in operationalizing a complex analytic model is to consider the execution contexts that differ between development time and production time.

In the laboratory, you will typically train your models from static batches of data, repeatedly, to refine and tune, in search of the best model available from that development dataset. When you ultimately operationalize that model, the goal of course will be to compute against new data records, not the same familiar records you reviewed thousands of times in the lab. And this is where execution context can vary widely between the laboratory and operations.

The operational data might also be a batch of finite-sized data (e.g., a CSV file, a database query result, a single directory full of documents). But it could also be a flow of streaming data, such as a series of events arriving at a network router, with no obvious beginning and no obvious end. In terms of execution, the batch job is simple: start crunching the file from its beginning and process up until the end. But the stream is different: we need a software service that’s up all the time, ready to respond to each event, and with no obvious moment to initialize the model, nor terminate the scoring service.

Whether a model’s job is to operate on a fixed batch or an endless stream, our wish would be to still have a single means to faithfully represent the model’s logic, and invoke it accurately in either context. Furthermore, different technical requirements will spell out the time and resources afforded to process the batch or score the stream.

Here again, we seek to execute the model natively, within the same language and framework, to avoid costly and error-prone recoding of the model.
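One way to keep a single, faithful representation of the model across both contexts is to isolate the scoring function and wire it to either a batch driver or a long-lived stream handler. A minimal Python sketch (illustrative, not FICO's implementation):

```python
def run_batch(score, records):
    """Finite batch: crunch from beginning to end, return all scores."""
    return [score(record) for record in records]

class StreamScorer:
    """Long-lived service: the model is initialized once, then invoked
    per event, with no natural end to the stream."""

    def __init__(self, score):
        self._score = score  # the same scoring function the batch job uses

    def on_event(self, event):
        return self._score(event)
```

Because both drivers call the identical `score` function, the model's logic is written (and validated) exactly once, whatever the execution context.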


Native execution via containerization

How can we faithfully operationalize the model’s computational logic, without any risk of recoding error, in various execution contexts, with finite computational resources, but with highly available and scalable architectures, using precisely the same software library versions, and without risking cross-contamination of multiple models running side by side?

These are certainly some stiff challenges. You might conclude the situation is nearly hopeless, with only one feasible approach: constrain our data scientists to use exactly and only one language, one narrow set of modules, and one centrally managed collection of software versions. Fortunately, there is a far better approach, which lets us effectively cope with all these obstacles in a consistent and manageable way, without stifling the scientist’s need to innovate.

FICO can help you operationalize these models and run them with perfect fidelity to their laboratory-born counterparts, regardless of which language, framework, or modules you used, and in the computational context needed for your business application.

We accomplish this with a container-based strategy, backed by proven management services. In short, each model—along with its underlying core language, required data science modules, and libraries—is frozen into a binary image, and cataloged in an image repository. Then, we deploy virtual machines to faithfully execute that code, using precisely the same libraries from its authoring time, as a carefully quarantined run-time service with high availability.
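The freezing step can be illustrated in miniature. The sketch below is hypothetical, using `pickle` files in place of a real image repository, but it captures the idea: serialize the model together with a manifest of the exact interpreter and library versions from authoring time, and refuse to score if the runtime environment does not match.

```python
import pickle
import sys

def freeze_model(model, library_versions, path):
    """Serialize a model plus a manifest of the environment it was
    authored in (stand-in for baking it into a container image)."""
    bundle = {
        "model": model,
        "manifest": {
            "python": sys.version.split()[0],
            "libraries": dict(library_versions),  # e.g. {"scikit-learn": "0.21.3"}
        },
    }
    with open(path, "wb") as f:
        pickle.dump(bundle, f)
    return bundle["manifest"]

def load_model(path, expected_manifest):
    """Refuse to score if the runtime manifest differs from authoring time."""
    with open(path, "rb") as f:
        bundle = pickle.load(f)
    if bundle["manifest"] != expected_manifest:
        raise RuntimeError("environment does not match the frozen manifest")
    return bundle["model"]
```

A container image does this more thoroughly, freezing the interpreter and libraries themselves rather than just recording their versions, but the contract is the same: the model runs only against the environment it was authored with.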

This ability to run the model in its native form eliminates any need to redevelop a model in an alternative framework, or recode the model in a new language, giving your model a fast path to production, and a shortest-possible time to value. The capacity to easily construct, independently execute, and effortlessly manage containers is a specific function of the FICO® Decision Management Platform known as Model Execution Services.

These model execution services provide a native environment for your model, running on-premises or in the cloud, with the availability and scale to meet your operational requirements. It is simply the fastest, most accurate, and most reliable means to operationalize your analytic models.


Strategic risks: Can we design for success?

Too often, companies that invest in developing data science teams to generate high-performance machine learning and AI underinvest in the tracking and ongoing maintenance of models used in the real world. Or, they have difficulty achieving the internal alignment that is required to push ML/AI and analytics into production.

These issues often turn out to be driven by design deficiencies. For example, the model may be based on an effective understanding of the business problem to be solved, but delivered in a form that makes sense to the data scientists but not the business users. This is characteristic of organizations with strong data science capabilities, but where no one is effectively “translating” requirements between the model developers, whose goal is “finding the lift,” and the end users, whose goal is to “actualize the lift.” That is an essential (and non-trivial) skill set characteristic of people whose careers have straddled the technical and business worlds. Those individuals are difficult to recruit and retain.

A best practice and guiding principle is to “design with the end in mind.” The most powerful of machine learning models will likely go unused and deliver minimal value to the business if it is designed without reference to the business context and the execution environment in which it will be implemented. If the end users are people whose core competency is in the line of business, such as account origination or debt collection, and the results are incomprehensible to a user with those skills, they will not use the model. Or, if the application requires the model to generate a recommendation within 50 milliseconds, but the infrastructure cannot support that type of response time, the model will not fit into the context of that business process.

FICO frequently assists clients faced with exactly those kinds of problems. Often the operational constraints require a trade-off to be made—e.g., an architecturally simpler model that produces a result close enough to the precision required within the infrastructure the organization has in place, but with dramatically faster runtimes and far less mystery within the model itself. FICO technology helps clients achieve this, typically on their own.

There are, of course, times when there isn’t a trade-off to be found, and complexity is required for the best solution. FICO® Decision Management Suite supports the creation of simple or complex models, and provides tools to objectively measure their predictive strength.

Explainability

When businesses operationalize analytics, decisions increasingly are automated. This requires sustained buy-in from executives and stakeholders from across the enterprise. Trusting the models to make good decisions is challenging, and many projects have failed because they didn’t have the support of operational stakeholders close to the decision. Understanding how the models arrive at their results and having data and accuracy checks is critical to securing that support.

This is where explainable AI (“xAI”), which provides a business user with an explanation of why the model generated the result it did, is needed. This is particularly true in regulated industries, where the need to clearly understand the behaviors and impacts of ML/AI starts with the data scientists constructing the models, extends to the business leaders overseeing usage of the models and the internal auditors who will carefully scrutinize the design, composition, and behaviors of the models, and rests ultimately with external auditors seeking to ensure safety, soundness, and transparency in the decisions. Compliance will also become increasingly important, as corporations will need to demonstrate accountability and transparency in their business decisions, and that certainly extends to the ML/AI models used within those decisions.

What’s the opportunity in a better decision? What’s at risk in a bad decision? The higher the consequence, the higher the level of scrutiny.

Often, more transparent forms of analytics are required to meet regulatory compliance mandates. Companies must eliminate any chance that their actions might discriminate against a
protected class, such as in their product offerings or pricing. When regulators examine a business, what they look for is clarity. They need to understand how a decision is made. They are much more comfortable if they can see a decision tree, a scorecard, or a set of pricing matrices: graphical evidence of a transparent decision model that explains how offers are provided.

There is a strongly held perception that machine learning models supporting these decision processes are impenetrable "black boxes." The recommendations that drive those decisions come out of the ML/AI model, and there is no way to tell the actual basis for the decision. Some of those models have had that character, but FICO has made major investments in making ML/AI models more transparent and explainable.

FICO has engineered transparency into its analytics methodology.

With FICO’s analytic technology, data scientists build efficient models that are inherently transparent (so-called “white box” models such as scorecards and decision trees), or build complex ML/AI models but apply FICO’s groundbreaking xAI techniques to deeply understand, explain, and document the behaviors of those otherwise “black box” models. In either case, at implementation time, all of these models are capable of explaining their individual predictions to the decision operators and other stakeholders. This transparency makes it easier to fully understand how decisions are being made and to document and explain decision processes to executives and regulators.

Model governance

Model governance focuses on how models are developed and validated, how they perform in the real world, and the steps taken to monitor and manage the models over their lifetime. For those responsible for the governance of analytics, ML/AI introduces new challenges. The models are more complex and more opaque than conventional predictive analytics. As a result of their fine tuning to the development data, their performance in the real world may degrade more rapidly, and it may be more difficult for business users to understand the behavior of the model. Stakeholders may also be concerned that a machine learning model will surreptitiously encode complex behaviors that are present in historical data but that run counter to business goals, ethical guidelines, or regulatory requirements (e.g., disparate impact on members of protected classes). Ensuring that the models conform to requirements, and continue to conform over time, is a core concern of governance. Successful governance of ML/AI models requires vigorous monitoring.

FICO technology enhances model governance at both the development and monitoring stages. In FICO® Analytics Workbench™, the built-in xAI Toolkit enables developers to capture, explore, document, and share the behaviors of the ML model and its features well before it is deployed. Further, to grow confidence in the ML/AI recommendations, individual predictions can be explained in simple, plain language for the decision operators. Once the model is in production, the performance and stability monitoring functionality in FICO® Decision Central™ will alert the governance team if a model’s performance degrades as the environment changes. Thus, stakeholders can know immediately when a feature’s or a model’s behavior strays outside acceptable boundaries, take action to correct for those impacts, and ensure decision performance.
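Stability monitoring of the kind described above is commonly implemented with a population stability index (PSI), which compares the score distribution seen at development time with the distribution arriving in production. The sketch below is a generic illustration of that technique, not FICO Decision Central's implementation; the bin proportions are made up, and the 0.10/0.25 cutoffs are the conventional industry rules of thumb.

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned score distributions,
    given as lists of proportions that each sum to 1."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Score distribution at model development time vs. recent production traffic.
dev  = [0.10, 0.20, 0.40, 0.20, 0.10]
prod = [0.05, 0.15, 0.35, 0.25, 0.20]

drift = psi(dev, prod)
# Common rule of thumb: < 0.10 stable, 0.10-0.25 watch, > 0.25 alert.
status = "alert" if drift > 0.25 else "watch" if drift > 0.10 else "stable"
```

Here the drift comes out around 0.14, which would put the model on a watch list: the population has shifted enough to warrant investigation, though not yet enough to force a redevelopment. The same index is often computed per feature to localize which input is drifting.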


FOR MORE INFORMATION

NORTH AMERICA: +1 888 342 6336 [email protected]

LATIN AMERICA & CARIBBEAN: +55 11 5189 8267 [email protected]

EUROPE, MIDDLE EAST, & AFRICA: +44 (0) 207 940 8718 [email protected]

ASIA PACIFIC: +65 6422 7700 [email protected]

www.fico.com www.fico.com/blogs

More Precise Decisions

FICO, Fair Isaac, Analytics Workbench and Decision Central are trademarks or registered trademarks of Fair Isaac Corporation in the United States and in other countries. Other product and company names herein may be trademarks of their respective owners. © 2019 Fair Isaac Corporation. All rights reserved.

4761WP 09/19 PDF

Analytics adoption with confidence

Risk management, fraud management, marketing, collections and recovery, and many other business functions across a diverse range of industries are undergoing digital transformation, and analytics are playing a crucial role. Having a strategic technology partner such as FICO will enable even a newly established analytics team to:

• Work from a common strategic methodology. Address the needs of individual stakeholders, while evolving a centralized analytics capacity, both for model development and for operationalization/deployment into diverse decisioning strategies and settings.

• Design with the decision in mind. Articulate the decisioning problem and context to be addressed before developing the model. Start from a clear understanding of the business requirements and the constraints of the infrastructure. What is the scope of the decision? What can be predicted that can create a better decision? What critical data elements will and will not be available at the time of the decision?

• Stake out and commit to a core set of tools and competencies. Whatever classes of models you trust, whatever development languages or frameworks you prefer, the FICO® Decision Management Platform will empower your team to build effective models, draw in the data you need to source, and manage models across their entire lifecycles. FICO’s containerization strategy ensures that models will run effectively in their native environment, as originally written, with no need to recode anything. FICO’s broad platform of capabilities is available for on-premises or cloud deployment, and can be delivered as a managed service via the FICO® Analytic Cloud (via AWS).

• Uphold the highest standards of model governance. “Explainability” is engineered into FICO’s development platform, ensuring that models satisfy the most exacting regulatory compliance requirements. Documentation of how models arrive at their results is generated quickly and is clear and intelligible to business stakeholders, internal legal and audit, and external regulatory agencies.

• Leverage FICO’s decades of experience. FICO has industry-leading depth in fraud detection, credit risk assessment, empirically optimized decisioning, artificial intelligence/machine learning, and the practical deployment of analytics.

Given the velocity of change in today’s business environment, speed to decision and process optimization are no longer nice-to-haves. This is the year for businesses to make real inroads into operationalizing analytics throughout the organization, or risk being left behind.