AI Against Humanity

IP & Copyright

19 articles found

Microsoft deletes blog telling users to train AI on pirated Harry Potter books

February 20, 2026

Microsoft faced significant backlash after publishing a now-deleted blog post that suggested developers train AI models on pirated Harry Potter books. Authored by senior product manager Pooja Kamath, the post promoted a new feature for integrating generative AI into applications and linked to a Kaggle dataset that incorrectly labeled the books as public domain. After criticism on platforms like Hacker News, Microsoft removed the post, an episode that illustrates the risks of using copyrighted material without proper rights and the potential for AI workflows to perpetuate intellectual property violations. Legal experts raised concerns about Microsoft's liability for encouraging such practices, emphasizing the blurred lines between AI development and copyright law. The incident highlights the urgent need for ethical guidelines in AI development, particularly around data sourcing, to protect authors and creators from exploitation. As AI systems increasingly rely on vast datasets, understanding copyright law and establishing clear ethical standards is crucial to preventing legal repercussions and ensuring responsible innovation in the tech industry.

Read Article

Over 1,000 Kenyans enlisted to fight in Russia-Ukraine war, report says

February 19, 2026

A recent report from Kenya's National Intelligence Service (NIS) reveals that over 1,000 Kenyans have been recruited to fight for Russia in the ongoing Russia-Ukraine war, with 89 confirmed to be on the front lines as of February. The report describes a disturbing network of rogue officials and human trafficking syndicates that have allegedly colluded to facilitate this recruitment. Many recruits, primarily ex-military personnel and unemployed individuals, are lured by promises of lucrative salaries, only to find themselves deployed to combat roles after minimal training. The Kenyan government is under pressure to act, having shut down over 600 recruitment agencies suspected of duping citizens with false job offers. The Russian embassy in Nairobi has denied involvement in illegal enlistment, while Kenyan officials are investigating the situation and working to rescue those still caught in the conflict. This alarming trend raises concerns about the exploitation of vulnerable populations and the risks of illegal recruitment practices, as well as the broader implications for Kenyan society and international relations.

Read Article

The Chinese AI app sending Hollywood into a panic

February 19, 2026

The emergence of Seedance 2.0, an AI model developed by the Chinese tech company ByteDance, has caused significant concern in Hollywood due to its ability to generate high-quality videos from simple text prompts. The technology has raised alarms not only for its potential to infringe on copyrights—prompting major studios like Disney and Paramount to issue cease-and-desist letters—but also for its broader implications for the creative industry. Experts warn that AI companies are prioritizing technological advancement over ethical considerations, risking the exploitation of copyrighted content without proper compensation. The rapid development of Seedance highlights the ongoing challenges of copyright in the age of AI, as well as the need for robust systems to manage licensing and protect intellectual property. As AI continues to evolve, its impact on creative sectors could lead to significant shifts in production practices and economic structures, particularly for smaller firms that stand to benefit from the technology even as they face ethical dilemmas in using it.

Read Article

Risks of AI-Generated Music Expansion

February 18, 2026

Google has introduced a music-generation feature in its Gemini app, powered by DeepMind's Lyria 3 model. Users can create original songs by describing their desired track, with the app generating music and lyrics accordingly. While this innovation aims to enhance creative expression, it raises significant concerns about copyright infringement and the potential devaluation of human artistry. The music industry is already grappling with lawsuits against AI companies over the use of copyrighted material to train AI models, and platforms like YouTube and Spotify are monetizing AI-generated music, which could cause economic harm to traditional artists. Google has implemented measures like SynthID watermarks to identify AI-generated content, but the long-term implications for artists, listeners, and the broader music industry remain uncertain as the landscape shifts toward AI-generated material.

Read Article

Iran security official appears to fire on crowd at cemetery

February 18, 2026

In a tragic incident in Abdanan, Iran, a security official reportedly opened fire on a crowd of mourners commemorating victims of recent government crackdowns. The gathering was part of a traditional ceremony held 40 days after a death, in this case honoring those killed during protests against the Iranian government. Footage captured by witnesses, since verified, shows security personnel firing into the crowd, causing chaos as people screamed and fled the scene. The incident reflects ongoing tension in Iran, where anti-government protests have resulted in thousands of deaths and arrests since late December. State media, however, claimed that the event was peaceful, contradicting reports of violence. The protests, initially sparked by economic grievances, escalated into widespread calls for political change, further highlighting the volatile situation in the country. The Iranian government, led by Supreme Leader Ayatollah Ali Khamenei, has faced increasing criticism for its handling of dissent and the brutal measures used to suppress it, having acknowledged the high death toll during the protests while blaming external forces for the unrest.

Read Article

Record scratch—Google's Lyria 3 AI music model is coming to Gemini today

February 18, 2026

Google's Lyria 3 AI music model, now integrated into the Gemini app, allows users to generate music using simple prompts, significantly broadening access to AI-generated music. Developed by Google DeepMind, Lyria 3 enhances previous models by enabling users to create tracks without needing lyrics or detailed instructions, even allowing image uploads to influence the music's vibe. However, this innovation raises concerns about the authenticity and emotional depth of AI-generated music, which may lack the qualities associated with human artistry. The technology's ability to mimic human creativity risks homogenizing music and could undermine the livelihoods of human artists by commodifying creative work. While Lyria 3 aims to respect copyright by drawing on broad creative inspiration, it may inadvertently replicate an artist's style too closely, leading to potential copyright infringement. Furthermore, the rise of AI-generated music could mislead listeners unaware that they are consuming algorithmically produced content, ultimately diminishing the value of original artistry and altering the music industry's landscape. As Google expands its AI capabilities, the ethical implications of such technologies require careful examination, particularly regarding their impact on creativity and artistic expression.

Read Article

ByteDance backpedals after Seedance 2.0 turned Hollywood icons into AI “clip art”

February 16, 2026

ByteDance is facing significant backlash from Hollywood following the launch of its AI video tool, Seedance 2.0, which has been criticized for generating unauthorized content featuring iconic characters from major franchises and the likenesses of celebrities. Major studios, including Disney and Paramount Skydance, have sent cease-and-desist letters claiming the tool's outputs infringe on intellectual property rights and treat these characters as if they were public domain. The Motion Picture Association (MPA) and SAG-AFTRA have also condemned the model for undermining the livelihoods of human talent and raising ethical concerns about consent and personal autonomy. In response to the legal threats, ByteDance announced plans to implement safeguards against unauthorized use of copyrighted material. However, investigations into the copyright violations, including scrutiny from Japan's AI minister, highlight the urgent need for responsible AI development and legal frameworks to protect creators' rights. The incident underscores the broader implications of AI in creative industries, emphasizing the potential for misuse and the necessity of ethical guidelines in AI deployment.

Read Article

ByteDance to curb AI video app after Disney legal threat

February 16, 2026

ByteDance, the Chinese tech giant, is facing legal challenges regarding its AI video-making tool, Seedance, which has been accused of copyright infringement by Disney and other Hollywood studios. Disney's cease-and-desist letter claims that Seedance utilizes a 'pirated library' of its characters, including those from popular franchises like Marvel and Star Wars. The Motion Picture Association and the actors' union SAG-AFTRA have also voiced concerns, demanding an immediate halt to Seedance's operations. In response to these allegations, ByteDance has stated its commitment to respecting intellectual property rights and is taking steps to enhance safeguards against unauthorized use of copyrighted material. The controversy highlights the broader implications of AI technologies in creative industries, raising questions about copyright infringement and the ethical use of AI-generated content. Additionally, the Japanese government has initiated an investigation into ByteDance over potential copyright violations involving anime characters. This situation underscores the ongoing tensions between technological innovation and intellectual property rights, as AI tools increasingly blur the lines of ownership and creativity in the entertainment sector.

Read Article

Hollywood's Copyright Concerns Over Seedance 2.0

February 15, 2026

Hollywood is expressing significant concern over ByteDance's new AI video model, Seedance 2.0, which is accused of facilitating widespread copyright infringement. The model allows users to generate videos by inputting simple text prompts, similar to OpenAI’s Sora, but lacks adequate safeguards against the unauthorized use of copyrighted material and the likenesses of real individuals. Prominent figures in the entertainment industry, including the Motion Picture Association (MPA) and various unions, have condemned the tool as a threat to creators' rights and livelihoods. Disney and Paramount have already taken legal action against ByteDance, claiming that Seedance 2.0 has unlawfully reproduced characters and content from their franchises, further amplifying concerns about the implications of AI in creative fields. The backlash highlights the urgent need for regulatory frameworks to address the intersection of AI technology and intellectual property rights, as the rapid deployment of such tools poses risks to established industries and the rights of creators.

Read Article

David Greene's Lawsuit Against Google Over AI Voice

February 15, 2026

David Greene, a longtime NPR host, has filed a lawsuit against Google, claiming that the voice used in the company's NotebookLM tool closely resembles his own. Greene asserts that the AI-generated voice mimics his unique cadence, intonation, and use of filler words, leading to concerns about identity and personal representation. Google, however, contends that the voice is based on a professional actor and not Greene himself. This case highlights ongoing issues surrounding AI voice replication, raising questions about consent, intellectual property, and the ethical implications of using AI to imitate real individuals. Previous instances, such as OpenAI's removal of a voice after actress Scarlett Johansson's complaint, suggest a growing tension between AI technology and personal rights. The implications of such cases extend beyond individual grievances, as they point to broader societal concerns regarding the authenticity and ownership of one's voice and likeness in an increasingly AI-driven world.

Read Article

Hollywood's Backlash Against AI Video Tool

February 14, 2026

The launch of ByteDance's Seedance 2.0, an AI video generation tool, has sparked outrage in Hollywood due to concerns over copyright infringement. This tool allows users to create short videos by entering text prompts, similar to OpenAI's Sora, but lacks sufficient safeguards against the unauthorized use of copyrighted material and the likenesses of real people. The Motion Picture Association (MPA) has called for an immediate halt to Seedance 2.0’s operations, citing significant violations of U.S. copyright law that threaten the livelihoods of creators and the integrity of intellectual property. Major organizations, including the Human Artistry Campaign and SAG-AFTRA, have condemned the tool, labeling it a direct attack on the rights of creators worldwide. The situation escalated when Disney issued a cease-and-desist letter against ByteDance for allegedly reproducing and distributing its characters, highlighting the potential for widespread legal ramifications. The controversy underscores the growing tension between technological advancements in AI and the need for robust legal frameworks to protect intellectual property rights in the entertainment industry.

Read Article

Cloning Risks of AI Models Exposed

February 12, 2026

Google reported that attackers have prompted its Gemini AI chatbot over 100,000 times in an attempt to clone its capabilities. This practice, termed 'model extraction,' is seen as a form of intellectual property theft, although Google itself has faced similar accusations regarding its data sourcing practices. The technique of distillation allows competitors to create cheaper imitations of sophisticated AI models by analyzing their outputs. Google indicated that these attacks are primarily driven by private companies and researchers seeking a competitive advantage, raising questions about the ethics and legality of AI cloning. The issue highlights the vulnerability of AI models to unauthorized replication and the ongoing challenges in protecting intellectual property in the rapidly evolving AI landscape, emphasizing the blurred lines between legitimate innovation and theft. Furthermore, the lack of legal precedents complicates the distinction between acceptable AI distillation and intellectual property violations, posing risks to companies heavily invested in AI development.
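The article names distillation but does not describe the attackers' tooling. As general background, distillation in the machine-learning literature typically means training a smaller "student" model to match a "teacher" model's output distribution, which is why merely querying a model's outputs at scale can be enough to imitate it. The sketch below, in plain Python with hypothetical names and values, shows the core quantity such a process minimizes:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T flattens the distribution,
    exposing more of the teacher's 'dark knowledge' about near-miss classes."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against the
    teacher's: the loss a distillation run would drive down."""
    p = softmax(teacher_logits, temperature)  # targets, derived from queried outputs
    q = softmax(student_logits, temperature)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly attains the minimum loss
# (the entropy of the teacher's distribution); a mismatched one scores higher.
teacher = [3.0, 1.0, 0.2]
matched = distillation_loss(teacher, [3.0, 1.0, 0.2])
mismatched = distillation_loss(teacher, [0.2, 1.0, 3.0])
print(matched < mismatched)  # True
```

In an extraction attack the teacher's distribution would come from the target model's API responses rather than known logits, which is why large query volumes, like the 100,000 prompts Google reports, are valuable to an imitator.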

Read Article

Amazon Explores AI Content Licensing Marketplace

February 10, 2026

The article highlights the ongoing challenges in the AI industry regarding the use of copyrighted material for training data. Amazon is reportedly considering launching a content marketplace to enable publishers to license their content directly to AI companies, a move that follows Microsoft's establishment of a similar marketplace. The AI sector is facing a multitude of lawsuits concerning copyright infringement, as companies like OpenAI have struck deals with major media organizations, yet the legal landscape remains fraught with uncertainty. Media publishers are increasingly concerned that AI-generated summaries are negatively impacting web traffic, potentially harming their business models. As AI systems continue to evolve and proliferate, the implications for copyright, revenue generation, and the sustainability of media outlets are significant and complex, raising questions about the balance between innovation and intellectual property rights.

Read Article

Adobe's Animate Software: User Trust at Risk

February 4, 2026

Adobe recently reversed its decision to discontinue Animate, a 2D animation program that has been in use for nearly 30 years. The company faced significant backlash from users who felt that discontinuing the software would cut them off from years of creative work and negatively impact their businesses. The initial announcement indicated that users would lose access to their projects and files, which caused anxiety among animators, educators, and studios relying on the software. The backlash was intensified by concerns over Adobe's increasing focus on artificial intelligence tools, which many users see as undermining the artistry and creativity of traditional animation. Although Adobe has committed to keeping Animate accessible and providing technical support, the prior uncertainty has led some users to begin searching for alternative solutions, indicating a loss of trust in the company. The situation highlights the tension between user needs and corporate strategies, especially as technology evolves and companies pivot towards AI-driven solutions.

Read Article

Ethical Concerns of AI Book Scanning

February 3, 2026

The article examines the controversial practices of Anthropic, particularly its 'Project Panama', which involved scanning millions of books to train its AI model, Claude. The initiative raised significant ethical and legal concerns, as it relied on methods including destroying physical books and sourcing content from piracy websites. While Anthropic argues that it operates within fair use law, the broader implications of its actions reflect a growing trend among tech companies of prioritizing rapid AI development over ethical considerations. The situation underscores a critical risk in AI deployment: the potential for significant harm to creative industries, particularly authors and publishers, whose intellectual property rights may be undermined. The trend may also have a chilling effect on creativity and innovation, as creators might hesitate to produce new works for fear of unauthorized use. The article serves as a cautionary tale about the need to balance technological advancement with the preservation of intellectual property rights.

Read Article

Local AI Video Generation: Risks and Benefits

January 6, 2026

Lightricks has introduced a new AI video model, Lightricks-2, in collaboration with Nvidia, which can run locally on devices rather than relying on cloud services. This model is designed for professional creators, offering high-quality AI-generated video clips up to 20 seconds long at 50 frames per second, with native audio and 4K capabilities. The on-device functionality is a significant advancement, as it allows creators to maintain control over their data and intellectual property, which is crucial for the entertainment industry. Unlike traditional AI video models that require extensive cloud computing resources, Lightricks-2 leverages Nvidia's RTX chips to deliver high-quality results directly on personal devices. This shift towards local processing not only enhances data security but also improves efficiency, reducing the time and costs associated with video generation. The model is open-weight, meaning its weights are publicly available, though it is not fully open-source. This development highlights the growing trend of AI tools becoming more accessible and secure for creators, while also raising questions about the implications of AI technology in creative fields and the potential risks associated with data privacy and intellectual property.

Read Article

Facebook's AI Content Dilemma and User Impact

October 7, 2025

Facebook is updating its algorithm to prioritize newer content in users' feeds, aiming to enhance user engagement by showing 50% more Reels posted on the same day. This update includes AI-powered search suggestions and treats AI-generated content similarly to human-generated content. Facebook's vice president of product, Jagjit Chawla, emphasized that the algorithm will adapt based on user interactions, either promoting or demoting AI content based on user preferences. However, the integration of AI-generated content raises concerns about misinformation and copyright infringement, as platforms like Meta struggle with effective AI detection. Users are encouraged to actively provide feedback to the algorithm to influence the type of content they see, particularly if they wish to avoid AI-generated material. As AI technology continues to evolve, it blurs the lines between different content types, leading to a landscape where authentic, human-driven content may be overshadowed by AI-generated alternatives. This shift in content dynamics poses risks for creators and users alike, as the reliance on AI could lead to a homogenization of content and potential misinformation issues.

Read Article

Risks of AI Deployment in Society

September 29, 2025

Anthropic's release of the Claude Sonnet 4.5 AI model introduces significant advancements in coding capabilities, including checkpoints for saving progress and executing complex tasks. While the model is praised for its efficiency and alignment improvements, it raises concerns about the potential for misuse and ethical implications. The model's enhancements, such as better handling of prompt injection attacks and reduced tendencies for deception and delusional thinking, highlight the ongoing challenges in ensuring AI safety. The competitive landscape of AI is intensifying, with companies like OpenAI and Google also vying for dominance, leading to ethical dilemmas regarding data usage and copyright infringement. As AI systems become more integrated into various sectors, the risks associated with their deployment, including economic harm and safety risks, become increasingly significant, affecting developers, businesses, and society at large.

Read Article

Concerns Rise as OpenAI Prepares GPT-5

August 7, 2025

The anticipation surrounding OpenAI's upcoming release of GPT-5 highlights the potential risks associated with rapidly advancing AI technologies. OpenAI, known for its flagship large language models, has faced scrutiny over issues such as copyright infringement, illustrated by a lawsuit from Ziff Davis alleging that OpenAI's AI systems violated copyrights during their training. The ongoing development of AI models like GPT-5 raises concerns about their implications for employment, privacy, and societal dynamics. As AI systems become more integrated into daily life, their capacity to outperform humans in various tasks, including interpreting complex communications, may lead to feelings of inadequacy and dependency among users. Additionally, OpenAI's past experiences with model updates, such as needing to retract an overly accommodating version of GPT-4o, underscore the unpredictable nature of AI behavior. The implications of these advancements extend beyond technical achievements, pointing to a need for careful consideration of ethical guidelines and regulations to mitigate negative societal impacts.

Read Article