AI Against Humanity
Privacy · February 11, 2026

Concerns Over ChatGPT Ads and User Safety

Zoë Hitzig resigns from OpenAI over concerns about the ethical implications of ChatGPT's advertising strategy, warning that the company risks repeating Facebook's trajectory. The resignation raises important questions about user privacy and trust in AI.

Former OpenAI researcher Zoë Hitzig resigned in protest over the company's new advertising strategy for ChatGPT, which she fears could lead to ethical pitfalls similar to those Facebook experienced. Hitzig raised concerns about the sensitive personal data users share with ChatGPT, describing that data as an unprecedented archive of human candor, and warned that the push for ad revenue could erode user trust and encourage manipulative practices that prioritize profit over user welfare.

She drew parallels to Facebook's erosion of its privacy promises, suggesting that OpenAI might follow a similar trajectory as it seeks to monetize its AI platform. As ads are tested in ChatGPT, Hitzig highlighted a potential conflict between user safety and corporate interests, pointing to adverse effects such as "chatbot psychosis" and growing dependence on AI for emotional support. The article underscores the broader implications of deploying AI in society, especially where personal data and user well-being are concerned, and calls for structural changes to ensure accountability and user control.

Why This Matters

The story highlights crucial risks associated with AI systems, particularly around user privacy and mental health. As AI technologies become more deeply integrated into daily life, understanding their potential harms is essential for safeguarding users and ensuring ethical practices. Recognizing these risks can inform public discourse and shape regulation of AI deployment.

Original Source

OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path

Read the original source at arstechnica.com

Type of Company