Global Fintech Interview with Scott Zoldi, Chief Analytics Officer at FICO

Hi Scott, please tell us about your role and the team and technology you handle at FICO. How did you arrive at FICO?

My analytics career began in 1999 at HNC Software, which was acquired by FICO in 2002. FICO has provided a steady stream of professional and intellectual opportunities ever since. I’ve been Chief Analytics Officer at FICO for over five years now, driving the company’s innovation in artificial intelligence and incorporating it into FICO solutions, including the FICO® Falcon® Fraud Platform, which protects about two-thirds of the world’s payment card transactions from fraud. While at FICO I have authored over 110 analytic patents with 62 patents granted and 48 in process. In addition, I am strongly engaged with industry leaders on how best to develop practical applications and standards for Explainable AI, Ethical AI and Responsible AI.

I oversee an analytics research and development team of over 100 scientists at FICO, but the reach of analytics in the organization is much broader; we are a leading software platform organization focused on analytics that help businesses develop and use digital decisions. Because AI and analytics are the foundation of every aspect of FICO’s business, my team delivers to a drumbeat of analytic innovation. My team and I are responsible for the continuous adaptation of FICO’s industry-leading products for fraud detection, compliance and credit risk, among numerous focus areas. For example, threat landscapes are in constant flux due to opportunistic criminals and shifting economic crises; in the past few years we have delivered production models to combat evolving financial crime in money laundering and cyber threats.

How has your role evolved through the pandemic crisis? How did you stay on top of your game?

My role shifted in a few ways during the pandemic. For my own staff, I have focused on helping the team work as effectively as possible, remotely, to meet our customer deliverables; responding to customer concerns connected with changing data patterns; and enabling remote innovation cycles.

With FICO clients, my focus has been education about Responsible AI. This topic has a strong real-world impact; models built responsibly are inherently robust and can be leveraged even when strategies need to change. Poorly built models have to be thrown away or, worse, organizations may back away from using AI and machine learning in applications that can directly impact clients.

One lesson you learned by working with technology and people during the pandemic?

I’ve observed the paramount importance of customers understanding the AI technology they are using. Organizations that saw models and decisions gyrating in response to changing customer behavior panicked. Those without models built around a Responsible AI framework didn’t have the assets to audit their AI, and were tremendously uncomfortable during the pandemic. This lack of understanding was amplified by massive shifts to digital channels, a new behavior for certain customers. New behaviors further emphasize an organization’s need to know what drives their machine learning, and whether it will remain stable and responsible enough to continue using.
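As a concrete illustration of the kind of stability check this implies, below is a minimal, hypothetical sketch of the population stability index (PSI), a widely used measure that compares a feature’s distribution at model development time with production traffic. It is not FICO’s monitoring tooling; the feature, data and thresholds are assumptions for illustration.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's distribution at model development time
    (expected) against production traffic (actual).

    PSI below roughly 0.1 is usually read as stable; above roughly
    0.25 as a shift large enough to warrant a model review.
    """
    # Bin edges come from the development sample so both periods are
    # scored on the same scale.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    expected_counts, _ = np.histogram(expected, edges)
    actual_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)

    # Guard against empty bins before taking the log ratio.
    expected_pct = np.clip(expected_counts / len(expected), 1e-6, None)
    actual_pct = np.clip(actual_counts / len(actual), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical example: a behavioral feature whose distribution shifted
# as customers moved to digital channels during the pandemic.
rng = np.random.default_rng(0)
development_sample = rng.normal(0.0, 1.0, 50_000)   # behavior at build time
production_sample = rng.normal(0.6, 1.3, 50_000)    # shifted pandemic behavior
print(round(population_stability_index(development_sample, production_sample), 3))
```

A rising PSI on key inputs is exactly the kind of early warning that lets an organization decide, with evidence, whether a model remains safe to keep using.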

Tell us more about your recent report on Responsible AI. How could companies make better, more ethical use of AI and ML?

We recently released our second annual State of Responsible AI report in partnership with market intelligence firm Corinium, which examined how global organizations are deploying AI and the progress being made to ensure AI is used ethically, transparently, securely and in customers’ best interests. Overall, the report, which surveyed over 100 C-level analytics and data executives, uncovered a lack of urgency around Responsible AI use that is ultimately putting many organizations at risk. For example, despite the increased demand for and use of AI tools, almost two-thirds (65%) of respondents’ companies can’t explain how specific AI model decisions or predictions are made.

While many businesses are investing in AI tools, especially over the last year as the pandemic sped up digital transformation, the importance of AI governance and Responsible AI use has not risen alongside mass AI adoption. These standards should be elevated to the boardroom level as organizations increasingly leverage AI to automate key processes that in some cases make life-altering decisions for their customers and stakeholders. The study also found a concerning lack of awareness of how AI is being used and whether it is being used responsibly: 39% of board members and 33% of executive teams have an incomplete understanding of AI ethics.

How would you define ‘Responsible AI’ through the lens of a business intelligence executive? Who is accountable for the ethical use of AI applications?

At a high level, I view Responsible AI as an enforceable and auditable standard for how a company’s board of directors approves and supports the use of machine learning and AI, to ensure that an organization’s AI implementations are safe, trustworthy and unbiased. In large companies, the chief analytics officer is ideally positioned to set Responsible AI standards for the entire analytics function. If an organization doesn’t have a CAO, it should be another, analytically capable C-level executive.

AI bias affects all levels of an organization, from data scientists to consumers, so Responsible AI needs to be supported by strong AI model development governance standards that are auditable, rather than a comparatively ambiguous code of conduct or oath, and strictly enforced by the organization’s CEO.

Do you think every company now needs a Chief AI Leader to mentor AI and data teams on ethical applications? Could you tell us about FICO’s governance on this development?

Ethical application of AI should be top-of-mind for everyone within a company, as it often affects everyone from the data scientists to the end users of a product or service. That said, if the organization does not have a specific Chief AI Officer or Chief Analytics Officer, setting Responsible AI standards should be a priority for another capable C-level executive, or for an expert on the company’s board of advisors. As I mentioned, though, while one analytically minded executive can set the Responsible AI standards, AI model development governance must be implemented, immutable and enforced, and needs to be a priority for the broader C-suite and board of directors.

FICO has in place a mature, blockchain-based process for artificial intelligence and machine learning model development governance, which functions as the linchpin in the production of Responsible AI models. This blockchain persists the outcomes of critical and mandatory tasks such as defining data extent, target assignment, data ethics and bias detection, permissible algorithm mandates, use of synthetic data, extraction of the latent features driving the model, stability testing, bias testing, and monitoring requirements for production, to name a few. These blockchains identify all individuals involved, their results, approvals and sign-offs, which is critical because when these models are in production, the individuals involved may no longer be at the firm to ask, or proper records may not have been kept. We focus on building models that are highly performant within Responsible AI constraints, ensuring fair decisioning, clear understandability and compliance from an ethics perspective. The model development governance process ensures that all models go through a centrally designed and audited model development process.
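To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of hash-chained, append-only ledger such a governance process relies on: each mandatory checkpoint, its outcome, the responsible individual and the sign-off are sealed into a chain that can be re-verified later. This is an illustration only, not FICO’s blockchain implementation; the class names, tasks and roles are assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class GovernanceEntry:
    """One immutable record in the model development governance chain."""
    model_id: str
    task: str          # e.g. "bias_testing", "stability_testing", "sign_off"
    outcome: str       # result summary recorded for auditors
    recorded_by: str   # the scientist or approver responsible
    timestamp: str
    prev_hash: str
    entry_hash: str = ""

    def seal(self) -> None:
        # Hash every field except the hash itself; prev_hash links the chain.
        payload = {k: v for k, v in asdict(self).items() if k != "entry_hash"}
        self.entry_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()

class GovernanceLedger:
    """Append-only, hash-chained audit trail for mandatory governance tasks."""

    def __init__(self) -> None:
        self.entries: list[GovernanceEntry] = []

    def record(self, model_id: str, task: str, outcome: str, recorded_by: str) -> GovernanceEntry:
        prev_hash = self.entries[-1].entry_hash if self.entries else "genesis"
        entry = GovernanceEntry(
            model_id=model_id, task=task, outcome=outcome, recorded_by=recorded_by,
            timestamp=datetime.now(timezone.utc).isoformat(), prev_hash=prev_hash,
        )
        entry.seal()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-hash every entry and walk the chain; any tampering breaks it."""
        prev = "genesis"
        for recorded in self.entries:
            check = GovernanceEntry(**{**asdict(recorded), "entry_hash": ""})
            check.seal()
            if recorded.entry_hash != check.entry_hash or recorded.prev_hash != prev:
                return False
            prev = recorded.entry_hash
        return True

# Hypothetical checkpoints for a model moving toward production.
ledger = GovernanceLedger()
ledger.record("fraud-model-v7", "bias_testing", "passed: no disparate impact detected", "data.scientist")
ledger.record("fraud-model-v7", "stability_testing", "passed on out-of-time holdout", "model.validator")
ledger.record("fraud-model-v7", "sign_off", "approved for production", "chief.analytics.officer")
assert ledger.verify()
```

The design point is the one Zoldi makes: because each record is hashed into its successor, results and sign-offs cannot be quietly altered after the fact, so auditors can reconstruct who did what even years after the model was deployed.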

Hear it from the pro: What are the biggest trends in the fintech industry, which is being heavily disrupted by RPA, AI/ML and Big Data?

Responsible AI is a huge trend! One need only look at the newly announced and anticipated regulation around the use of AI, given the far too many examples of AI deployed “in the wild” that have been unsafe, unethical, or both. AI is growing up and, as such, responsible use is a very hot topic. Companies are recognizing they must ensure that standards are defined at the corporate level and meet minimum industry requirements for robust model development. As part of this topic, Explainable AI, Ethical AI and model monitoring share the top spot on conversation agendas.

Another hot topic is adversarial learning and AI attacks. When I’m not developing new innovations in Responsible AI, I am filing patents in the adversarial AI area. Much like we worry about cyberattacks, ransomware and data compromise impacting services we depend on, very adept criminals are using machine learning to learn how to expertly attack AI. The potential attack surface is large: criminals can go after the model development process by injecting errant data, or attack the models that are making decisions. Developing models that detect adversarial AI activity while we may be under attack is a very hot area.
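As a simplified, hypothetical illustration of the defensive side of this area, the sketch below screens incoming scoring traffic with an off-the-shelf anomaly detector and flags records that sit far outside the training distribution, one basic signal of probing or crafted inputs. It is not FICO’s patented adversarial-AI technique; the features, data and threshold are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical feature vectors the model was developed on.
training_features = rng.normal(0.0, 1.0, size=(10_000, 8))

# Fit a detector on development data so production traffic can be screened
# for inputs far outside the learned feature space -- one simple signal of
# probing or crafted adversarial examples.
detector = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
detector.fit(training_features)

# Incoming scoring traffic: mostly ordinary records plus a small block of
# crafted inputs pushed away from the training distribution.
ordinary_batch = rng.normal(0.0, 1.0, size=(500, 8))
crafted_batch = rng.normal(4.0, 0.5, size=(20, 8))
incoming = np.vstack([ordinary_batch, crafted_batch])

# score_samples: lower means more anomalous. Route the most anomalous
# records for review before their model decisions are trusted downstream.
scores = detector.score_samples(incoming)
flagged = np.where(scores < np.quantile(scores, 0.05))[0]
print(f"flagged {flagged.size} of {len(incoming)} incoming records for review")
```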

Tell us more about the hiring challenges for data science teams at leading fintech technology companies like FICO. What advice do you have for industry leaders in this regard?

Hiring properly trained data scientists is essential to having a successful data science function within an organization, and demand for data science talent is high. When organizations are looking to build their data science teams, I suggest that before assembling a team, you first stop, take a hard look at your organization and ask questions. What are you trying to accomplish with this team? What resources and strengths do you already have in place — technology, expertise and executive sponsorship — to support this team? What are your company’s data analytic strengths and weaknesses, and how do new hires impact those areas?  What is the most important weakness to strengthen?

There’s no template or magic formula for getting it right. I firmly believe in the power of diversity; my best advice is to hire people who are different from the staff and strengths you already have. This will allow new vantage points and technologies to be explored, and will create useful debate to drive innovation and advance a data science organization. Not hiring where you already have strength makes sense because you can always train people on what you are already good at. Data science leaders need to determine which areas the experts on staff can’t address today, and focus on building a diverse set of data science talent and voices.

Thank you, Scott! That was fun and we hope to see you back on globalfintechseries.com soon.

Scott Zoldi is Chief Analytics Officer at FICO, driving the company’s innovation in artificial intelligence and incorporating it into FICO solutions including the FICO® Falcon® Fraud Platform, which protects about two-thirds of the world’s payment card transactions from fraud. While at FICO, Scott has been responsible for authoring 110 analytic patents, with 56 patents granted and 54 in process. He is an industry leader in developing practical applications and standards for AI, Explainable AI, Ethical AI and Responsible AI, and was named one of Corinium’s 2020 Global Top 100 Innovators in Data & Analytics. Scott is most recently focused on operationalizing artificial intelligence in FICO’s solutions for fraud management, compliance and cybersecurity. Scott serves on the Boards of Directors of Tech San Diego and the Cyber Center of Excellence, and on the Cybersecurity Advisory of the California Technology Council. Scott received his Ph.D. in theoretical physics from Duke University.

FICO powers decisions that help people and businesses around the world prosper. Founded in 1956 and based in Silicon Valley, the company is a pioneer in the use of predictive analytics and data science to improve operational decisions. FICO holds more than 195 US and foreign patents on technologies that increase profitability, customer satisfaction and growth for businesses in financial services, telecommunications, health care, retail and many other industries. Using FICO solutions, businesses in more than 120 countries do everything from protecting 2.6 billion payment cards from fraud, to helping people get credit, to ensuring that millions of airplanes and rental cars are in the right place at the right time.

FICO is a registered trademark of Fair Isaac Corporation in the US and other countries.
