The joint statement from the Information Commissioner’s Office (ICO) and Ofcom marks a significant moment in the UK’s evolving approach to digital regulation. Much of the immediate reaction has focused on regulatory cooperation between the privacy and communications regulators. However, the more pressing issue for organisations is the way the statement frames age assurance as both a child safety requirement and a potential data privacy risk.
Online services are under growing pressure to use reliable age checks to keep children away from harmful content. At the same time, those same services must ensure that any personal data collected in the process is limited, proportionate and justified. For technology companies, platforms and app providers, the challenge is no longer just about whether to check a user's age, but how to do it in a way that is both legal and trustworthy.
The privacy paradox in age assurance
What is age assurance? In simple terms, it refers to the mechanisms used by a digital service to determine whether a user is above or below a relevant age threshold before allowing access to certain content or features. This can range from age verification (checking a formal identity document such as a passport) to age estimation (using AI to estimate a user's age from their facial features).
The ICO and Ofcom have been clear that age assurance measures must be proportionate, necessary and privacy-preserving. This is an important regulatory message. It means that the most intrusive solution will not automatically be the most compliant. Instead, organisations must find a balance: a system needs to be strong enough to protect children but lean enough to avoid collecting more data than is reasonably needed.
This is where the practical difficulty arises. High-security methods often require sensitive biometric data, which carries higher privacy risks. On the other hand, "light" checks that are easier for users to complete may be rejected by regulators if they are too easy to bypass. In other words, organisations must now demonstrate that their chosen age assurance model is not just privacy-conscious, but also genuinely effective.
A risk-based approach to compliance
The joint statement centres on a "risk-based" approach. This means companies must choose their age checks based on the specific risks of their platform. A site hosting high-risk, user-generated content will need tougher checks than a low-risk app with limited interaction or restricted content categories.
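The risk-based selection described above can be illustrated with a minimal sketch. The risk tiers, method names, and `select_age_check` function are hypothetical illustrations, not terms drawn from the joint statement:

```python
from enum import Enum

class Risk(Enum):
    """Illustrative risk tiers for a platform's content and features."""
    LOW = 1     # e.g. limited interaction, restricted content categories
    MEDIUM = 2
    HIGH = 3    # e.g. high-risk, user-generated content

def select_age_check(risk: Risk) -> str:
    """Map a platform's risk tier to an illustrative age assurance method.

    Higher-risk services warrant stronger checks; lower-risk services can
    rely on lighter methods that collect less personal data.
    """
    if risk is Risk.HIGH:
        return "document verification"  # e.g. passport or driving licence
    if risk is Risk.MEDIUM:
        return "facial age estimation"  # AI estimate, no identity document
    return "self-declaration"           # lightest check for low-risk services

print(select_age_check(Risk.HIGH))  # document verification
```

The point of the sketch is simply that the strength (and data intensity) of the check scales with the platform's risk profile, rather than defaulting to the most intrusive option everywhere.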
However, the risk-based approach does not operate solely in favour of stronger checks; it is also about privacy and data protection. The statement indicates that organisations should not only ask whether a particular method works, but whether it is the least intrusive means of achieving that outcome. This is likely to become the most important compliance question in the industry.
For businesses, this means age assurance can no longer be an add-on tool. It must be embedded within wider design processes. The goal is to justify why a specific method was chosen and to show that it minimises data risk at every step.
Why biometric and sensitive data remain a flashpoint
The debate becomes more nuanced where biometric data is involved. Facial age estimation tools may be attractive because they are fast, scalable, and less disruptive than requiring every user to upload formal identification. But these tools also raise questions about accuracy, bias, and the lawfulness of processing sensitive biometric data.
The ICO and Ofcom have established that "privacy-by-design" is not an optional extra; it is central to whether age assurance will be seen as legitimate. Organisations using biometric or quasi-biometric tools must be able to explain what data is used, how long it is retained, whether it is shared, and why a less intrusive option would not be sufficient.
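The four questions listed above lend themselves to a simple structured record. A minimal sketch, with hypothetical field names and example values (not a prescribed regulatory format):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgeAssuranceJustification:
    """Illustrative record of the explanations organisations should be
    able to produce for a biometric or quasi-biometric age check."""
    data_used: str                 # what personal data the check processes
    retention: str                 # how long that data is kept
    shared_with: list[str] = field(default_factory=list)  # third-party recipients
    less_intrusive_rationale: str = ""  # why a lighter method would not suffice

record = AgeAssuranceJustification(
    data_used="single video selfie frame",
    retention="deleted immediately after estimation",
    shared_with=[],
    less_intrusive_rationale="self-declaration is trivially bypassed",
)
print(record.retention)  # deleted immediately after estimation
```

Holding these answers in one place makes it easier to demonstrate, on request, that a less intrusive option was genuinely considered.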
Early market responses
This evolving framework is already shaping how the industry approaches age-based access. Apple, for instance, has introduced age-related safety features and child account protections that use on-device processing to uphold user privacy. The significance of this approach is that age can be assessed, and protections applied, without sensitive personal data being shared across multiple services.
Instagram has adopted a similar privacy-first stance through its partnership with Yoti, a digital identity provider. When users attempt to edit their age from under-18 to over-18, the platform offers a video selfie option. Yoti’s AI estimates the user's age based on facial features and, crucially, deletes the image immediately after the check is complete. This model allows Instagram to fulfil its "highly effective" safety duties without permanently storing biometric identifiers or creating a long-term surveillance record of the user's face.
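The estimate-then-delete pattern described above can be sketched in a few lines. The `estimate_age_from_image` function is a hypothetical stand-in for a vendor's estimation model, not Yoti's actual API; the essential point is that only a boolean leaves the check, and the image is removed whatever happens:

```python
import os
import tempfile

def estimate_age_from_image(path: str) -> int:
    """Hypothetical stand-in for a vendor's facial age estimation model."""
    return 21  # placeholder result for illustration

def check_is_adult(image_path: str, threshold: int = 18) -> bool:
    """Run the age estimate, then delete the image regardless of outcome.

    Only a yes/no result crosses the boundary: the service learns whether
    the user meets the threshold without retaining the biometric input.
    """
    try:
        return estimate_age_from_image(image_path) >= threshold
    finally:
        os.remove(image_path)  # delete immediately, even if estimation fails

# Usage: the selfie frame exists only for the duration of the check.
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
    selfie = f.name
print(check_is_adult(selfie))   # True
print(os.path.exists(selfie))   # False: the image is gone
```

Deleting in a `finally` block is the design choice that matters here: no code path, including a failed estimation, leaves a stored biometric identifier behind.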
These examples show where the market is moving. Privacy-friendly tools, processing data on the device, and collecting as little information as possible are becoming the new standard. However, as technology evolves, so too will the rules that govern it.
The future of UK age assurance regulation
The joint position from the ICO and Ofcom signals a clear shift in UK policy: age assurance and data privacy must now co-exist, rather than compete. For digital services, the lesson is that how a system is built matters just as much as its ability to protect underage users. By treating age checks as a design challenge rather than a barrier, companies can show that protecting children and protecting data are two sides of the same coin.
How To Get In Contact
If you require assistance with any aspect of data protection and privacy law, or have questions about your legal obligations, please contact our Data Protection and Privacy team on 020 4600 9907 or email info@culbertellis.com.
Accurate at the time of writing. This information is provided for general information purposes only and should not be relied upon as legal advice.