January 20, 2026
OpenAI rolls out age verification for ChatGPT teen safety
Behavioral signals flag under-18 users for teen-specific safety protections
OpenAI launched an AI-powered age prediction model for ChatGPT consumer plans on January 20, 2026. The system analyzes account-level and behavioral signals to estimate whether a user is under 18. It does not rely on any single data point but uses patterns across user interactions to make its prediction.
When the age prediction model flags a user as potentially under 18, it triggers teen-specific protections automatically. These protections include content restrictions that limit exposure to inappropriate material and additional safety features designed for younger users. The specific protections are calibrated to balance safety with usability.
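The flow described above, in which no single data point decides the outcome and a flag automatically enables protections, can be sketched roughly as follows. This is a hypothetical illustration only: the signal names, weights, threshold, and protection labels are invented for the example and are not OpenAI's actual model or API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    stated_age: Optional[int]   # self-reported age; may be absent or inaccurate
    usage_pattern_score: float  # 0..1 summary of behavioral features (hypothetical)

def predict_under_18(signals: AccountSignals) -> bool:
    """Combine multiple signals; no single data point decides the outcome."""
    score = 0.0
    if signals.stated_age is not None and signals.stated_age < 18:
        score += 0.5                          # account-level signal
    score += 0.5 * signals.usage_pattern_score  # behavioral signal
    return score >= 0.5                       # hypothetical decision threshold

def active_protections(signals: AccountSignals) -> list:
    """A positive prediction automatically enables teen-specific protections."""
    if predict_under_18(signals):
        return ["content_restrictions", "teen_safety_features"]
    return []
```

The key design property the article describes is that the protections switch on from the model's prediction alone, with a separate appeal path for users who believe the prediction is wrong.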
Users who believe they have been incorrectly flagged as underage can appeal through Persona, a third-party identity verification service. The appeal process uses either a live selfie or government-issued identification. Persona deletes the verification data within seven days of the appeal, and OpenAI states it never receives the identity documents directly.
Regulatory pressure from three major jurisdictions drove the rollout
The UK Online Safety Act requires platforms to implement age-appropriate protections.
The EU General Data Protection Regulation includes specific provisions for children's data (GDPR-K).
The US Children's Online Privacy Protection Act (COPPA) restricts data collection from children under 13.
An 'adult mode' for verified adults is also in development, expected to launch later in 2026. This mode would provide access to less restricted AI capabilities once a user's adult status is confirmed, suggesting the company plans to tier access based on verified age.
Deployment began in the United States, with EU rollout planned in subsequent weeks. This phased approach allows the company to adjust the system based on initial results before expanding to jurisdictions with different regulatory requirements.
Because an AI model is making judgments about users based on how they interact with ChatGPT, the behavioral signals approach raises questions about accuracy, bias, and what patterns the system associates with being under 18. False positives restrict adult users unnecessarily, while false negatives leave minors exposed to unrestricted content.
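The false positive vs. false negative tension is a standard classifier threshold tradeoff, and a toy example makes it concrete. The scores and labels below are fabricated for illustration; they say nothing about the real model's error rates.

```python
# Illustrative only: any single decision threshold trades false positives
# (adults wrongly restricted) against false negatives (minors left unflagged).

def confusion(scored_users, threshold):
    """Return (false_positives, false_negatives) at a given threshold.

    scored_users: list of (predicted_under_18_score, actually_a_minor) pairs.
    """
    fp = sum(1 for score, is_minor in scored_users
             if score >= threshold and not is_minor)
    fn = sum(1 for score, is_minor in scored_users
             if score < threshold and is_minor)
    return fp, fn

# (model score, ground truth: is the user actually a minor?) -- made-up data
sample = [(0.9, True), (0.7, False), (0.6, True), (0.4, True), (0.2, False)]

aggressive = confusion(sample, 0.3)    # low threshold: flags more adults
conservative = confusion(sample, 0.8)  # high threshold: misses more minors
```

Lowering the threshold restricts more adults unnecessarily; raising it leaves more minors unprotected. An appeal channel like the Persona flow mitigates the first failure mode but not the second.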
OpenAI: AI company deploying age verification; developer of ChatGPT
Persona: Third-party identity verification service
EU body overseeing GDPR enforcement