OpenAI Moves from Creator to Curator with New Age-Based ChatGPT Rules

With the introduction of its new age-based safety system, OpenAI is undergoing a fundamental identity shift—moving from being purely a creator of powerful technology to also being a strict curator of the user experience. This new, hands-on role is a direct result of the legal and ethical fallout from a lawsuit over a teen’s death.
Historically, OpenAI’s focus has been on building and improving its large language models. The user experience was largely uniform. However, the tragedy involving Adam Raine has forced the company to accept that a one-size-fits-all approach is no longer tenable.
The new role of curator will be most evident in the “under-18 experience.” Here, OpenAI will not just provide access to AI; it will actively shape and limit the conversations that can be had. By blocking topics like self-harm, explicit content, and flirting, the company is curating a specific, sanitized version of its technology deemed safe for minors.
This curatorial duty also extends to deciding who gets to experience the full-power version of ChatGPT. By implementing potential ID checks, OpenAI is taking on the role of a gatekeeper, actively deciding which users are mature enough for unrestricted access. This is a far cry from the open-access ethos of the early internet.
This shift from creator to curator is a sign of maturation for both OpenAI and the AI industry. It reflects an understanding that building world-changing technology is only half the job; the other, perhaps more difficult, half is managing how that technology is experienced and ensuring it doesn’t cause irreparable harm.