AI Against Humanity
IP & Copyright 📅 February 12, 2026

Cloning Risks of AI Models Exposed

Google reveals that attackers prompted its Gemini chatbot over 100,000 times to clone its capabilities, raising serious concerns about AI model theft.

Google reported that attackers have prompted its Gemini AI chatbot more than 100,000 times in attempts to clone its capabilities. The practice, termed "model extraction," is widely regarded as a form of intellectual property theft, though Google has itself faced similar accusations over its own data-sourcing practices. Distillation lets competitors build cheaper imitations of sophisticated models by training on their outputs rather than their weights. According to Google, these attacks are driven primarily by private companies and researchers seeking a competitive edge, which raises questions about the ethics and legality of AI cloning.

The episode underscores how vulnerable AI models are to unauthorized replication, how difficult intellectual property is to protect in a fast-moving field, and how blurred the line between legitimate innovation and theft has become. The absence of legal precedent further complicates the distinction between acceptable distillation and infringement, posing real risk to companies that have invested heavily in AI development.
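The article does not describe the mechanics of distillation, but the general idea is well established: a "student" model is trained to match the softened output probabilities of a "teacher" model, so only the teacher's responses to queries are needed, never its weights. A minimal sketch of the core loss (all function names and example values here are illustrative, not from the article):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; a higher temperature
    produces a 'softer', more informative distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's distribution to the student's.

    Minimizing this pulls the student's outputs toward the teacher's.
    Note the attacker only needs the teacher's *outputs* per query,
    which is why repeated prompting can suffice for extraction."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when the student already reproduces the teacher's distribution and grows as they diverge; in practice it would be averaged over many harvested query/response pairs and minimized by gradient descent.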

Why This Matters

Model cloning threatens both innovation and intellectual property rights. The risk extends beyond individual companies to the broader AI industry, where unchecked extraction could drive a race to the bottom in ethical standards, slow technical progress, and erode consumer trust. Addressing these challenges is essential for the sustainable development of AI systems.

Original Source

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

Read the original source at arstechnica.com ↗