AI and Biometrics: ICO launches new strategy


The Information Commissioner’s Office (ICO) has launched a new AI and biometrics strategy, setting out how it intends to ensure that organisations are developing and deploying new technologies in compliance with data protection law.

The ICO cites research suggesting that the public are particularly concerned about over-reliance on AI in certain areas (for example, in employment contexts or to determine welfare eligibility), as well as the risks that facial recognition technology poses to civil liberties.

To address these concerns, and to ensure that organisations are developing new technologies safely and lawfully, the ICO has developed a strategy under which it will take action in a number of areas, including:

  • Consulting on its automated decision-making (ADM) and profiling guidance;
  • Developing a statutory code of practice on AI and ADM which will provide guidance on matters such as transparency, explainability, bias, rights, and redress;
  • Setting out regulatory expectations for the use of ADM in government departments to ensure that they are using it responsibly and with appropriate safeguards;
  • Setting “clear expectations for the responsible use of automated decision-making in recruitment”;
  • Securing assurances from developers of AI foundation models that personal information used in model training is sufficiently safeguarded, setting clear regulatory expectations, and taking action if unlawful model training creates risks;
  • Publishing guidance on how police forces can govern and use facial recognition technology in line with data protection law; and
  • Engaging with industry to “assess the data protection implications of agentic AI” and consulting on emerging data protection challenges.

Commenting on the strategy, John Edwards, the UK Information Commissioner, said: “Our personal information powers the economy, bringing new opportunities for organisations to innovate with AI and biometric technologies. But to confidently engage with AI-powered products and services, people need to trust their personal information is in safe hands. It is our job as the regulator to scrutinise emerging technologies – agentic AI, for example – so we can make sure effective protections are in place, and personal information is used in ways that both drive innovation and earn people’s trust.”