Machine Learning at Banks: 5 Lessons

Banks have been using machine learning for several years already. Other industries, such as medical device manufacturers, can learn from their experiences. This also applies to testing organizations, such as regulatory authorities and notified bodies.

Based on the parallels between the two industries, we have drawn up five best practices and recommendations that can save considerable costs and avoid unnecessary trouble with auditors.

A contribution and video by and with Dr. Daniel Lohner and Dr. Christian Johner

Daniel Lohner (d-fine) explains how banks are using machine learning and discusses with Dr. Christian Johner how medical device manufacturers can learn from it.

1. Common features shared by banks and medical device manufacturers

a) Importance of risk management

Risk management plays a decisive role in both industries. Banks have to manage financial risks, such as credit and market risks, and medical device manufacturers have to manage risks caused by unsafe medical devices.

In other words, both industries have to manage enormous risks. In one case, the stability of banks and, therefore, national economies must be guaranteed, and in the other, the physical integrity of patients, users and third parties.

In addition, both are required to analyze and minimize regulatory risks.

b) Strong and complex regulation

This is because both industries are heavily regulated. The regulatory requirements are constantly increasing.

Banking regulations

For banks, this regulatory tightening began in the early 2000s (Basel II), and the rules were tightened further in the course of coming to grips with the banking crisis of 2008/2009, which was partly caused by drastic misjudgments of credit risks.

New developments as well as modifications are now subject to a strict authorization process and have to be validated regularly.

This regulatory structure is complex and varies even within the EU. However, harmonization of these requirements began in 2014, at least for the larger banks.

Bank regulators have even created financial incentives for banks to develop their own internal risk models and to look at the content of their investments in greater detail.

Regulation of medical device manufacturers

In the medical device industry, it was the breast implant scandal that led to the adoption of the EU regulations MDR and IVDR in 2017.

For both banks and medical device manufacturers, the focus of audits and inspections is on transparency, the management of risks and risk management measures.

c) Digitalization

Both industries are part of the current technology hype (fintech and medtech/biotech, respectively), and both are making increasing use of machine learning.

Customers and patients can look forward to innovative products. However, in addition to the positive expectations, algorithms, in particular machine learning, also represent major risks from the regulators’ perspective.

2. Where banking regulation is already one step ahead of medical technology

a) Early digitalization in the banking industry

The banking industry was an “early adopter” of computer technology. It started using paperless transaction processing as early as the 1970s and banks have been offering digital products for decades.

Algorithms have also played an important role for many years. Examples of such algorithms include credit risk models for estimating the probability of customers defaulting on loans. Many of these models use machine learning methods.

b) Stricter regulation of algorithms

It is precisely because banks started using algorithms so early and because inadequate algorithms have already had catastrophic consequences (banking crisis) that banking regulators started drawing up and requiring rules on the use of these algorithms earlier than regulators of other industries.

These requirements relate to:

  • Evaluation criteria for the quality of models
  • Selection of models
  • Validation of models
  • Transparency of models, model development and data processing
  • Quality of the data for learning and verifying the models
  • Governance of data and algorithms
  • Continuous verification and improvement of the models

 Further information

The Johner Institute’s AI Guideline contains an example of these requirements for the use of machine learning in medical devices.

As a result of these regulations, banks use only a limited range of machine learning methods. When it comes to machine learning, banks have become accustomed to:

  • Regulatory audits
  • Ad hoc requests
  • Mandatory annual model validation

3. Five lessons: what medical device manufacturers should learn from banks

This section looks at five aspects where medical device manufacturers should learn from banks if they want to use machine learning.

For each aspect, we describe:

  • Bad practice: what a lot of banks have done but you shouldn’t
  • The consequences: what happened to the banks as a result
  • Better practices: how medical device manufacturers could do it better
  • Recommendations: specific tips

Lesson 1: view model development not as a project, but as a continuous process

Bad practice

A lot of banks initially expected that algorithms would be developed, approved and put into operation once. This meant they often saw model development as a one-off project activity.

As a result, they set up these projects within their organizations and only documented the final project result.

The consequences

In retrospect, however, it is clear that this was not a good approach from either a technical or regulatory perspective.

Banks have to continuously validate algorithms, e.g., machine learning algorithms, and adapt them to changing requirements, new data and new findings.

A lot of banks were not prepared for this, either organizationally or technically.

As a result, audits, ad hoc requests and the mandatory annual model validation represented massive challenges for a lot of financial institutions, leading to capacity bottlenecks. This is because they often had to rebuild their data preparation systems and manually check consistency with old reports, which was a time-consuming process.

Better practice

The banks that were able to avoid costs and problems with the regulator were those that had set up clear organizational responsibilities and control processes, including data management, at an early stage and had automated standardized processes as much as possible.

The following has proven to be a sustainable solution:

  1. The replacement of project-specific “monolithic” SAS or R files and “single-use” Excel files with modular, versioned scripts and database solutions (see the sketch after this list)
  2. Dashboards for recurring standard evaluations
  3. Process governance (e.g., actionable validation concepts) that is regularly reviewed and continuously developed
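To make the first point more concrete, here is a minimal sketch in Python of what a modular, versioned evaluation script could look like. The file layout, the "default" flag in the data and the `run_validation` entry point are illustrative assumptions, not a prescribed structure:

```python
# Minimal sketch of a modular validation pipeline (illustrative only).
from pathlib import Path
import datetime
import json

PIPELINE_VERSION = "1.3.0"  # bumped with every change and tracked in version control

def load_data(path: Path) -> list[dict]:
    """Data access isolated in one function instead of copied into each analysis."""
    with path.open() as f:
        return [json.loads(line) for line in f]

def compute_standard_metrics(records: list[dict]) -> dict:
    """A recurring standard evaluation, reusable by dashboards and audits alike."""
    n = len(records)
    defaults = sum(r["default"] for r in records)  # the "default" flag is hypothetical
    return {"n_records": n, "default_rate": defaults / n if n else None}

def run_validation(data_path: Path, out_dir: Path) -> Path:
    """A single entry point that always produces a versioned, timestamped report."""
    metrics = compute_standard_metrics(load_data(data_path))
    report = {
        "pipeline_version": PIPELINE_VERSION,
        "run_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input_file": str(data_path),
        "metrics": metrics,
    }
    out_file = out_dir / f"validation_{report['run_at'][:10]}.json"
    out_file.write_text(json.dumps(report, indent=2))
    return out_file
```

Because the same entry point serves the annual validation, the dashboards and ad hoc requests alike, the results remain consistent and reproducible.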

Recommendation

Processes and infrastructure should be designed for iterative and incremental development right from the start.

An efficient technological and organizational infrastructure for a validation and development environment, including data management, represents a high initial investment, but it pays for itself relatively quickly. In the banking sector, these aspects were ultimately even anchored in the regulatory framework following negative experiences.

Therefore, it is to be expected that medical technology companies that take the initiative and tackle these issues at an early stage will also be able to benefit from cost savings and better audit results in the medium term.

Lesson 2: submit data, code and models to configuration management

Bad practice

A lot of banks only saved the “finished end results”. This became a problem because the banking regulators repeatedly demanded comparative analyses against the development sample, both for regular validations in productive environments and, in particular, in the event of changes to models and procedures.

Therefore, banks had to justify, time and again, why they had used a particular machine learning method and not a different one or a classical computing method.

The consequences

The financial institutions were unable to demonstrate that the results with current data were consistent with the results with the training data. They were also unable to compare the different models with one another and, therefore, could not provide proof of improvements. This led to regulatory problems.

Better practice

The origin of the data and the data preparation should be fully documented and reproducible. This applies in particular to the data sample used for model development.

This is because the models have to be continuously developed for several reasons:

  1. Data quality improves continuously over time, and erroneous data are corrected.
  2. The scope of the model or the underlying risk drivers may change.
  3. The evaluation criteria for models have changed over recent years, particularly as a result of the European harmonization of regulatory requirements. Regulators therefore demand different analyses today than they did 10 years ago, and these analyses must also be calculated based on the historical development data.

Recommendation

Although completely historicized data models would mean a higher initial investment, they have proven to be a sustainable solution in practice. Historicization includes version and configuration controls for the following (see the sketch after this list):

  • Raw data (data corrections)
  • Scripts and programs for data preparation
  • Libraries
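As a minimal sketch of such historicization, assuming Python and a Git repository purely for illustration, the following snippet records a fingerprint of the raw data, the code revision and the installed library versions alongside each model artifact. The helper names and the `.provenance.json` convention are our own assumptions:

```python
# Sketch: record data, code and library versions with every model run so that
# results can later be reproduced and compared (names are illustrative).
import hashlib
import importlib.metadata
import json
import subprocess
import sys
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Fingerprint of the raw data; any later data correction changes the hash."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot_configuration(data_path: Path) -> dict:
    """Everything needed to reproduce a run: data hash, code revision, libraries."""
    git_rev = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True
    ).stdout.strip()
    return {
        "raw_data_sha256": sha256_of_file(data_path),
        "code_revision": git_rev,
        "python_version": sys.version,
        "libraries": {d.metadata["Name"]: d.version
                      for d in importlib.metadata.distributions()},
    }

def save_provenance(model_path: Path, data_path: Path) -> None:
    """Store the snapshot next to the model artifact."""
    snapshot = snapshot_configuration(data_path)
    model_path.with_suffix(".provenance.json").write_text(
        json.dumps(snapshot, indent=2)
    )
```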

Further Information

Read more on the validation of machine learning libraries here.

Lesson 3: justify the selection of models/algorithms

The selection of models, particularly with regard to model alternatives (e.g., other machine learning methods, other hyperparameters), should be documented and reproducible.

When approving new models and modifying models, banking regulators ask about alternative or historical model designs.

Bad practices

In the past, such requests were often regarded as special cases and handled in separate, one-off analyses alongside the actual model development.

The consequences

Experience shows that such demands are made again and again and that, under time pressure, the risk of errors and inconsistencies increases significantly.

Better practice

For this reason, and from a technical point of view as well, it is sensible to compare models with alternative models and techniques regularly during validation. Companies can significantly reduce time and costs by integrating alternative models and approaches into the model development and validation environment directly.

Recommendation

Companies should develop a model portfolio with “challenger” models that are run in parallel to the productive model using the same data. This makes it possible to always produce consistent results and it also helps demonstrate later why you selected the models you did and discarded others.
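A minimal sketch of such a challenger portfolio, assuming scikit-learn and synthetic data purely for illustration, could look like this:

```python
# Sketch: "challenger" models are evaluated on exactly the same data split as
# the productive ("champion") model so that comparisons stay consistent.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42  # fixed split: all models see the same data
)

models = {
    "champion_logistic_regression": LogisticRegression(max_iter=1000),
    "challenger_decision_tree": DecisionTreeClassifier(max_depth=4, random_state=42),
    "challenger_random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

# One comparable metric per model; in practice, this table is versioned and archived.
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name:35s} AUC = {auc:.3f}")
```

Because every model sees exactly the same split, the resulting metrics can be archived as consistent evidence of why the champion was retained or replaced.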

Tip

In the “Artificial Intelligence” seminar, manufacturers learn how to conduct and document these model comparisons without overheads and, thus, meet the regulatory requirements.

Lesson 4: the models and model results should be easy to explain

Bad practice

A major lesson of the financial crisis was that a lack of transparency about the risks and blind trust in “black box predictions” had led banks to do too much business with products that were supposedly safe but which were actually high-risk.

This was partly due to the rating agencies, whose forecasts are still not transparent today.

The consequences

Today, regulators pay special attention to the transparency and economic plausibility of model predictions.

In practice, this also means restrictions on the choice of machine learning model.

This was ultimately also a consequence of too much uncritical use of external model results before the financial crisis.

Since then, regulators have performed targeted individual assessments in which they scrutinize individual model results and their internal acceptance (known as a “use test”).

In addition, banks were unable to comply with their customers’ right of access to their data, a right that was not first established by the GDPR but had already been required by the German Federal Data Protection Act.

Better practice

Companies should select algorithms and models on the basis of their transparency and traceability as well as their performance (e.g., specificity and sensitivity). This is important for three reasons.

  1. It avoids subjective perceptions
    Due to their controlling function, model results should be transparent to decision-makers, particularly if the model results differ from intuitive assessments. Such cases often result from a selective perception of individual cases or errors in data collection.
  2. Better data quality
    Transparency also helps create a more effective human-machine interaction. Transparency with regard to how the data entered affects the model result also helps improve model acceptance and data quality.
  3. Behavior control
    A transparent algorithm can also be used for behavior control: Credit risk assessments also provide feedback to a bank's customers on how financially “healthy” their income-expenditure behavior is.
    This can also be transferred to personalized medicine: An algorithm should not only calculate a personalized suggestion regarding the dose of a drug a patient should take. It should also give the patient feedback on how their behavior can positively affect the recovery process (see the sketch after this list).
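As a minimal sketch of such feedback from a transparent model, assuming a logistic regression and purely illustrative feature names and data, each input’s contribution to an individual score can be reported back directly:

```python
# Sketch: per-case feedback from a transparent (linear) model. For a logistic
# regression, coefficient * feature value is that input's contribution to the
# log-odds, which can be translated into feedback for the customer or patient.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_expense_ratio", "late_payments", "account_age_years"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                                # illustrative data
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_case(x: np.ndarray) -> None:
    """Show how each input pushed this individual result up or down."""
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        direction = "raises" if c > 0 else "lowers"
        print(f"{name:25s} {direction} the score by {abs(c):.2f} (log-odds)")

explain_case(X[0])
```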

Recommendation

“Interpretability” should already be a factor in the choice of model. Users must be able to understand how the input data (features) affect the predictions made by a machine learning model. Regulators should be able to see that a model does not merely provide correct predictions by chance or only within narrow limits.

This pushes manufacturers to prefer well-documented and stable methods, such as decision trees, logistic regression and scenario simulation models, over unnecessarily complex models.

In other words, “keep it simple” when it comes to selecting a model: companies should only use less comprehensible models, such as neural networks, if they provide demonstrable benefits. Banks are currently barely able to have such machine learning methods “approved”.

In addition, companies should use techniques to improve the interpretability of the models, in other words, to show auditors and users how the models work. Such techniques include, for example, LRP (layer-wise relevance propagation) and spectral relevance analysis (SpRAy).
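LRP and SpRAy are aimed at neural networks. As a simpler, model-agnostic illustration of the same idea (our choice of technique, not one prescribed by any regulator), permutation importance shows which inputs a model actually relies on:

```python
# Sketch: model-agnostic interpretability via permutation importance.
# Shuffling one feature at a time and measuring the drop in performance
# reveals the inputs the model actually depends on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=1)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance = {result.importances_mean[i]:.3f} "
          f"± {result.importances_std[i]:.3f}")
```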

Lesson 5: stay one step ahead of the regulator

Bad practice

A lot of banks waited until the last moment to adapt to regulatory changes.

The consequences

This wait-and-see attitude certainly reduced time and effort in individual cases: for example, when regulatory requirements were changed after industry consultations shortly before coming into effect, or when it became apparent that certain aspects were not interpreted as strictly in practice.

But in a lot of cases this strategy led to considerable additional costs: Firstly, as a result of “last minute” implementation projects and, secondly, due to the missed opportunity to save costs and avoid trouble with the regulator in the long term through early implementation of a technologically and organizationally sustainable infrastructure.

Better practice

Companies should do what is right and proper and not wait until an authority or regulator forces them to do something. They themselves benefit in several ways from such an approach:

  1. They save money, as described above.
  2. Companies reduce the probability of getting into regulatory trouble.
  3. They are better able to anticipate and even control regulatory requirements.
  4. They are better able to live up to their claims of “acting properly” and their own values.

Recommendation

Companies should actively, and independently of regulatory requirements, continuously improve their best practices regarding the use of algorithms, particularly machine learning algorithms. They should act as trendsetters rather than being among the companies that merely follow and are surprised by changes.

4. Conclusion

The medical technology industry is currently going through a change that started 10 years ago in the banking sector. Therefore, medical device manufacturers should take a close look at what they can learn from developments at banks.

The companies that follow the recommendations above will, like the “proactive” banks, generally come out of audits well and benefit from a sustainable infrastructure for model maintenance.

Over-engineering, particularly in the form of overly complex machine learning models, often leads to higher, avoidable follow-up costs and, in practice, often provides little added value compared to simpler models.

For the banks, the key to maximum efficiency is a critical cost-benefit analysis of model complexity while at the same time complying with the framework conditions such as transparency, reproducibility and clear responsibilities.

For medical device manufacturers, safety and benefits for the patients must be the top priorities. A more complex algorithm may increase the benefits for a lot of patients but compromise the safety of some patients.

Author:

Prof. Dr. Christian Johner
