Safety · February 4, 2026

OpenClaw's AI Skills: Security Risks Unveiled

Security researchers have found malware hiding among OpenClaw's AI skills, exposing users to credential theft and data loss and raising broader questions about trust in AI agents.

OpenClaw, an AI agent that has gained rapid popularity, is facing serious security problems in its skill marketplace, ClawHub. Security researchers identified 28 malicious add-ons there within a short span. These skills pose as legitimate tools, such as cryptocurrency trading automation, but actually deliver information-stealing malware that targets sensitive user data: exchange API keys, wallet private keys, and browser passwords.

The danger is amplified by OpenClaw's permission model. Users routinely grant the agent broad access to their devices, letting it read and write files and execute scripts, so a single malicious skill inherits that same reach.

OpenClaw's creator, Peter Steinberger, has begun rolling out mitigations, such as requiring a GitHub account to publish skills, but malware continues to slip through, underscoring the vulnerabilities inherent in open-source plugin ecosystems. The implications of such flaws extend beyond individual users to the trustworthiness of AI technologies in general, and they raise hard questions about the oversight and regulation of rapidly developing AI systems.
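To make the attack pattern concrete, here is a minimal sketch of the kind of static pre-install audit a cautious user could run over a downloaded skill directory before granting an agent file and script access. It is not based on OpenClaw's actual skill format or tooling; the indicator patterns, file extensions, and `audit_skill` helper are all hypothetical, chosen to illustrate the combination researchers described (credential-store reads paired with outbound network calls).

```python
# Hypothetical pre-install audit for a downloaded skill directory.
# Nothing here reflects OpenClaw's real APIs or skill layout; the
# indicator list and file types are illustrative assumptions only.
import re
import sys
from pathlib import Path

# Patterns that often co-occur in info-stealers: reads of wallet or
# credential stores alongside outbound network calls or decoded payloads.
SUSPICIOUS_PATTERNS = {
    "wallet/credential file access": re.compile(
        r"(\.ssh/|wallet\.dat|keychain|Login Data|id_rsa)", re.IGNORECASE
    ),
    "API key harvesting": re.compile(
        r"(API_KEY|SECRET|PRIVATE_KEY)\s*=", re.IGNORECASE
    ),
    "outbound exfiltration": re.compile(
        r"(requests\.post|curl\s+-d|fetch\(|urlopen)", re.IGNORECASE
    ),
    "encoded payload": re.compile(r"(base64\s+-d|b64decode|eval\()"),
}

def audit_skill(skill_dir: Path) -> list[tuple[Path, str]]:
    """Return (file, indicator) pairs worth a human look before installing."""
    findings = []
    for path in skill_dir.rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".sh", ".js", ".md", ".json"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash the audit
        for label, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(text):
                findings.append((path, label))
    return findings

if __name__ == "__main__":
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for path, label in audit_skill(target):
        print(f"[!] {label}: {path}")
```

A string scan like this is easily evaded by obfuscation, so it should be read as a triage aid rather than a defense; the sturdier fix is the architectural one the article points toward, namely not handing an extensible agent blanket file and script permissions in the first place.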

Why This Matters

This story matters because it shows what happens when AI systems are extended without adequate security measures. Malicious add-ons in agent marketplaces can cause real financial losses and breaches of personal data, eroding user trust in AI technologies. Understanding these vulnerabilities is essential for building safer AI applications and deploying them responsibly.

Original Source

OpenClaw’s AI ‘skill’ extensions are a security nightmare

Read the original source at theverge.com ↗
