
Welcome to Onfido’s Policy Corner, your regular briefing on key global policy updates from the world of digital identity, regulatory compliance, AI, and data privacy.

2024 will be a year of elections and, as a result, a year of policy and legislative consolidation. Deepfakes are of special concern as a threat to democracy, and at Onfido we have been sharing insights from our Fraud Lab to lead the global conversation and support legislators. We've also taken advantage of a pre-electoral, slower-paced policy environment to take deep dives into the EU compliance landscape, eIDAS and KYC requirements, and how Onfido is managing data protection challenges.

EU AI Act Passes

The EU AI Act has finally passed its vote in the European Parliament, with 75% support. There is now a clear democratic mandate for the legislation, which will enter into force in May this year. The Act will set the global standard for AI compliance, and we will be monitoring its impact closely. There is still much to do to facilitate implementation, and Onfido looks forward to partnering with key figures in Brussels to provide insights and support, ensuring the Act is fit for purpose when it takes full effect in May 2026.

UK Government Responds to AI White Paper Consultation

In February, HMG published its response to the AI White Paper Consultation.

The response confirmed the government's pro-innovation approach to AI regulation: a context-based framework built on five principles (safety, transparency, fairness, accountability, and contestability). It will establish a steering committee among regulators, recognizing the importance of greater collaboration and joined-up approaches across the AI regulatory ecosystem.

Ultimately, the response remained non-committal on future legislation. It is telling that the only proposed measure, a voluntary code of practice between copyright holders and AI developers, has been abandoned.

However, HMG also published an Introduction to AI Assurance. Its proposed steps for building AI assurance are helpful in the absence of regulation, such as ensuring adherence to existing legislation under the UK GDPR and the Equality Act 2010 in lieu of explicit rules on bias mitigation. Further advice to upskill organizations and review internal governance demonstrates that the government understands that the route to trustworthy, reliable, and compliant AI goes beyond technology alone to encompass people and process.

The strongest feature of the UK's guidance is its push for AI deployers to participate in AI standardization, directing them to the AI Standards Hub (run by the Alan Turing Institute and the British Standards Institution). In the absence of legislation, developing responsible AI to global standards will build confidence in the quality, reliability, ethics, and interoperability of UK-developed AI solutions.

While there is reason to be confident that the UK Government understands the issues around AI, its reluctance to regulate leaves the UK a rule-taker in AI. Statements promising divergence from the EU AI Act offer little substance on the potential benefits, and the drive to push NIST standards over ETSI suggests that SMEs may face unnecessary regulatory barriers to growth and expansion into the EU, their nearest and most sophisticated export market.