
Are we any closer to regulating AI?

  • To be able to explain some of the regulatory issues around AI
  • To identify the role of GDPR
  • To be able to summarise the business process steps to take
CPD: approx 30 min

The regulatory challenge around artificial intelligence is as broad as its potential applications. 

In response to the scale of the task ahead, the government’s white paper "A pro-innovation approach to AI regulation", published last year, deliberately laid emphasis on the themes of fostering and encouraging innovation.

The ambition is clear: not only to create an environment in which businesses feel free to develop and invest in AI, but also to protect businesses and consumers from the potential harms and some of the worst excesses of the technology.


Regulation today

What we have now is the indirect regulation of AI, which means there are no specific UK laws in place designed to address AI. Instead, there exists a range of legal frameworks that are indirectly relevant to the development and use of AI.

AI systems are fundamentally creatures of data, initially relying on large amounts of information to train the models that underpin these systems.

Personal data is often used both to develop and to operate AI systems, and individuals whose personal data is used will retain all of the normal rights they benefit from under existing laws such as the General Data Protection Regulation (GDPR).

These typically include rights to transparency, rights of data access and, perhaps most importantly, the right under the GDPR not to be subject to automated decision-making in relation to significant decisions, except in certain circumstances.

Impact on insurance

The current approach to the regulation of AI in the UK, as outlined in the government's white paper, is to rely to the greatest extent possible on existing regulatory frameworks.

Particularly relevant for the insurance industry are financial services regulation and the roles to be played by the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA).

The FCA and PRA will use their existing powers to monitor and enforce against the misuse of AI, applying established principles such as consumer protection and treating customers fairly, and considering how these might be affected if an insurer relies on an AI system and predictive models to make pricing decisions.

Because there is a risk that some customers might be discriminated against, or even priced out of insurance markets, through the increased use of AI, the FCA is carefully considering how existing principles translate when regulating firms' use, and potential misuse, of AI.

Embracing the future

Most companies accept that it will not be possible to hold back the tide of AI. As such, many leading businesses are already focusing on how to integrate AI into their operations and apply it to their existing business models, recognising the need to embrace rather than resist change.

In the insurance industry, not all of this is new. For many years, insurers have used algorithms and machine learning principles for risk assessment and pricing purposes.