AI Against Humanity
Safety · February 11, 2026

UpScrolled Faces Hate Speech Moderation Crisis

UpScrolled is struggling to control hate speech amid rapid user growth, raising significant concerns about online safety.

UpScrolled, a social networking platform that gained popularity after TikTok's ownership change in the U.S., is facing significant challenges with content moderation. With over 2.5 million users in January and more than 4 million downloads by June 2025, the platform is struggling to control hate speech and racial slurs that have proliferated in usernames, hashtags, and posts. User reports and a TechCrunch investigation found that slurs and hate speech, including antisemitic content, were rampant, with offending accounts remaining active even after being reported.

UpScrolled has responded by expanding its moderation team and upgrading its technology, but the effectiveness of these measures remains uncertain. The Anti-Defamation League (ADL) has also noted the rise of extremist content on the platform, pointing to a broader concern: rapid user growth can outpace a platform's ability to enforce its community standards.

The situation raises critical questions about how social networks manage harmful content during periods of rapid expansion, as seen with UpScrolled and other platforms such as Bluesky. It underscores the need for effective moderation strategies and the risk that automated moderation systems, if they fail to keep pace, can inadvertently allow harmful behavior to flourish.

Why This Matters

The rapid growth of social media platforms carries real risks for managing and moderating harmful content. As hate speech and extremist content spread, the societal implications are profound: toxic online environments harm both individuals and communities. Understanding these risks is essential for building better AI moderation systems and safer digital spaces. As AI technologies evolve, the capacity of social networks to handle content responsibly remains a significant concern for users and stakeholders alike.

Original Source

UpScrolled’s social network is struggling to moderate hate speech after fast growth

Read the original source at techcrunch.com ↗
