AI Against Humanity

Legal/Consulting

6 articles found

As AI data centers hit power limits, Peak XV backs Indian startup C2i to fix the bottleneck

February 16, 2026

As the demand for AI data centers surges, energy consumption has become a critical limiting factor, prompting investments in solutions that improve efficiency. C2i Semiconductors, an Indian startup, has secured $15 million in funding from Peak XV Partners, Yali Deeptech, and TDK Ventures to develop advanced power solutions aimed at reducing energy losses in data centers. Current estimates suggest that electricity consumption from data centers could nearly triple by 2035, and inefficient power-conversion stages account for a meaningful share of that demand. C2i's technology aims to minimize energy waste by integrating power conversion and control into a single system, potentially saving substantial amounts of energy and reducing operational costs for data centers. This investment highlights the growing importance of energy efficiency in AI infrastructure, as companies seek to balance the high cost of energy against the need for scalable AI. The implications extend beyond economics: the environmental impact of rising energy demand raises concerns about sustainability and the carbon footprint of AI technologies.


India's AI Regulations and Content Moderation Risks

February 10, 2026

India's recent amendments to its IT Rules require social media platforms to step up their policing of deepfakes and other AI-generated impersonations. The changes impose stringent compliance deadlines: platforms must act on takedown requests within three hours and respond to urgent user complaints within two hours. The new regulations aim to provide a formal framework for managing synthetic content, mandating labeling and traceability of such materials. The implications are significant, particularly for major tech companies like Meta and YouTube, which must adapt quickly to these requirements in one of the world's largest internet markets. While the intent is to combat harmful content, such as deceptive impersonations and non-consensual imagery, the reliance on automated systems raises concerns about censorship and the erosion of free speech, as platforms may resort to over-removal under the compressed timelines. Stakeholders, including digital rights groups, warn that the rules could undermine due process and leave little room for human oversight in content moderation. The situation highlights the challenge of balancing regulation with the protection of individual freedoms, and underscores that AI-driven moderation is never a neutral actor in these decisions.


Varaha Secures Funding for Carbon Removal

February 3, 2026

Varaha, an Indian climate tech startup, has secured $20 million in funding to enhance its carbon removal projects across Asia and Africa. The company aims to be a cost-effective supplier of verified emissions reductions, capitalizing on lower operational costs and a robust agricultural supply chain in India. Varaha focuses on regenerative agriculture, agroforestry, biochar, and enhanced rock weathering to produce carbon credits, which are increasingly in demand from corporations like Google and Microsoft that face rising energy usage from data centers and AI workloads. The startup's strategy emphasizes execution over proprietary technology, enabling it to meet international verification standards while keeping costs low. Varaha has already removed over 2 million tons of CO2 and plans to expand its operations in South and Southeast Asia, collaborating with thousands of farmers and industrial partners to scale its carbon removal efforts. This funding marks a significant step in Varaha's growth as it addresses global climate challenges by providing sustainable solutions for carbon offsetting.


AI's Role in Immigration Surveillance Concerns

January 30, 2026

The US Department of Homeland Security (DHS) is using AI video generators from Google and Adobe to create content for public dissemination, sharpening its communications around immigration policy tied to President Trump's mass deportation agenda. The strategy raises concerns about the transparency and ethical implications of AI in government communications, particularly amid increased scrutiny of immigration agencies. As DHS leverages these tools, tech-sector workers are calling on their employers to reconsider partnerships with agencies like ICE, highlighting the moral dilemmas of deploying AI in sensitive areas. The article also notes that Capgemini, a French company, has ceased working with ICE following governmental inquiries, reflecting growing resistance to the use of AI in surveillance and immigration tracking. The implications are profound: these developments signal a troubling intersection of technology, ethics, and human rights, prompting urgent discussion of AI's role in state functions and its potential to perpetuate harm. Those affected include immigrant communities, technology workers, and society at large, as the normalization of AI in government actions could lead to increased surveillance and the erosion of civil liberties.


Is AI Putting Jobs at Risk? A Recent Survey Found an Important Distinction

October 8, 2025

The article examines the impact of AI on employment, particularly through generative AI and automation. A survey by SHRM involving over 20,000 US workers found that while many jobs contain tasks that can be automated, only a small percentage are at significant risk of displacement. Specifically, 15.1% of jobs are at least 50% automatable, but only about 6% are genuinely vulnerable to displacement, because nontechnical barriers such as client preferences and regulatory requirements shield the rest. This suggests a more gradual labor-market transition than the alarming predictions from some AI industry leaders. High-risk sectors include computer and mathematical work, while jobs requiring substantial human interaction, such as those in healthcare, are less likely to be automated. The healthcare industry continues to grow, underscoring the value of skills generative AI cannot replicate, particularly interpersonal and problem-solving abilities. This trend points to a shift in workforce needs toward employees who can handle complex, human-centric challenges, and to the necessity of a balanced approach to AI integration that preserves the value of human skills in less automatable sectors.


Founder of Viral Call-Recording App Neon Says Service Will Come Back, With a Bonus

October 1, 2025

The Neon app, which pays users to record their phone calls, has been temporarily disabled after a significant security flaw exposed sensitive user data. Founder Alex Kiam reassured users that their earnings remain intact and promised a bonus upon the app's return. The app nonetheless raises serious privacy and legality concerns, particularly in states with strict consent laws for recording calls. Legal expert Hoppe warns that users could face substantial legal liability if they record calls without the consent of all parties, especially in states like California, where violations can lead to criminal charges and civil lawsuits. Although the app claims to anonymize data for training AI voice assistants, experts caution that anonymization does not guarantee privacy, as the risks of sharing voice data remain significant. The situation underscores the ethical dilemmas and regulatory challenges surrounding AI data usage, and the importance of understanding consent laws to avoid privacy violations and legal complications.
