The Bank of England and the Financial Conduct Authority (FCA) have today launched the Artificial Intelligence Public-Private Forum, with a warning that the benefits of AI come hand in hand with risk.
Dave Ramsden, Deputy Governor for Markets & Banking, launched the new forum, saying that while the adoption of AI is growing in both the financial services sector and the wider community, the level of regulation remains fragmented, and as such the Forum would seek to deliver greater consistency:
“Over the past few years, we’ve seen a huge expansion in the use of digital technologies across society and across the economy,” he said. “And this has affected all of us in some way. Whether we’re shopping online or listening to music, streaming a film or engaging with friends and colleagues on social-media platforms, our lives have been transformed by remarkable innovation and technological advances. And Artificial Intelligence is increasingly at the heart of those innovative technologies.”
With AI and Machine Learning growing very rapidly in complexity, sophistication, and importance for society and the economy as a whole – and financial services in particular – regulators needed to ensure they were aware of how it was being implemented, added Mr Ramsden.
“AI is already embedded in many of the things we use every day, from mobile phones and tablets, to online shopping platforms. It forms the core of many of the technologies that are becoming increasingly familiar, such as biometric devices or autonomous vehicles,” he explained.
“This is likely to accelerate as smaller and smarter devices become more widely used and embedded in homes and workplaces, in retail and consumer products, across the so-called Internet-of-Things and as the global context changes rapidly due to the Covid crisis. And those same AI systems and processes are being used today across the financial sector to bring benefits to consumers, to firms, and to the UK financial system as a whole.
“AI-driven online platforms may, for example, help customers manage their money and savings more effectively, as well as make the processing of loan applications or insurance claims easier, quicker, and more transparent.”
Mr Ramsden added: “Financial firms may, in turn, benefit from AI in making efficiency and productivity gains, in tailoring products to customers’ needs, and in more effective ways of combating financial crime. AI may also boost the overall efficient functioning and resilience of financial markets and the wider economy.
“Clearly, the world is a very different place today from the one we started this year with; and while the Covid crisis has had, and will continue to have, a profound impact on the economy, and on the households and businesses that drive it, it has also increased and focussed interest in the potential uses of AI in tackling some of the many immediate problems and challenges precipitated by the crisis.
“For example, many businesses, including financial firms, are looking to use AI, sometimes combined with alternative data sources, for enhancing customer engagement, for driving more automation of internal processes, and for improving virtual working environments.”
He revealed that in a recent survey of regulated banks and insurers, conducted as part of the ongoing monitoring of the impact of Covid-19 by the Bank’s Fintech Hub, around 45% of the firms that participated reported that the crisis had increased the importance of AI and data science applications for their future operations, with around 55% reporting no change.
However, Mr Ramsden said there were downsides to the advance of technology.
“While the use of AI has clear benefits in an increasingly data-driven economy, there are risks and challenges,” he warned. “The impact of those benefits and risks will be felt at different paces and different depths at technological, social, corporate, and systemic levels. It’s useful to think of the risks and challenges in a hierarchical way, starting with those that may become apparent at the data level and building up to the level of models trained on those data; then, widening to the level of the firm, and finally, up to the level of the financial system as a whole. Overlaying this are the regulatory and legal challenges presented by AI.”
He said data form the core of all AI models, and that good data management and data governance are essential in controlling issues such as biases that may be embedded in the data. At the model level, regulators need to consider issues such as ensuring that models continue to perform under a wide range of conditions, and being able to explain the outputs of complex AI systems.
Some of the key issues for firms will revolve around risk management and governance structures, around accountability and appropriate controls. For the financial system as a whole, AI may amplify network effects such as unexpected changes in the scale and direction of market moves.
“In terms of the regulatory challenges, it’s clear that policy thinking in this arena is also evolving rapidly. But the existing regulatory landscape is somewhat fragmented when it comes to AI, with different pieces of regulation applying to different aspects of the AI pipeline, from data through model risk to governance,” said Mr Ramsden. “Policy must strike a balance between high-level principles and a more rules-based approach.”
The Forum will draw on the knowledge of 21 leading AI experts from across the financial and technology sectors as well as academia.
The specific aims of the Forum are to share information and understand the practical challenges of using AI in financial services, identify existing or potential barriers to deployment, and consider any potential risks or trade-offs. It will also gather views on areas where principles, guidance, or regulation could support safe adoption of these technologies.