AI Against Humanity

Government Contractors


AI Super PACs Clash Over Congressional Candidate

February 20, 2026

The article covers the political battle surrounding New York Assembly member Alex Bores, a candidate for Congress who is facing opposition from a pro-AI super PAC called Leading the Future, which has significant financial backing from prominent players in the AI industry, including the venture firm Andreessen Horowitz and OpenAI President Greg Brockman. In response, a rival PAC, Public First Action, supported by a $20 million donation from Anthropic, is backing Bores with a focus on transparency and safety standards in AI development. The conflict stems partly from Bores's sponsorship of the RAISE Act, legislation that would require AI developers to disclose safety protocols and report misuse of their systems. The contrasting visions of these PACs reflect broader concerns about the implications of AI deployment in society, particularly regarding accountability and ethical standards, and underscore the growing influence of AI companies in political discourse and the risks of their unchecked power in shaping policy and public perception.


AI, Surveillance, and Ethical Dilemmas

February 12, 2026

The article delves into the implications of AI in government surveillance and the ethical dilemmas facing tech companies. It highlights a WIRED report revealing that U.S. Immigration and Customs Enforcement (ICE) plans to expand its operations into nearly every state, raising concerns about increased surveillance and potential civil rights violations. The discussion also turns to Palantir Technologies, a data analytics company whose employees have expressed ethical concerns about their work with ICE, particularly the use of AI to facilitate surveillance and deportation efforts. The article also features an experiment with an AI assistant, OpenClaw, which illustrates the limitations and challenges of AI in everyday life. This convergence of AI technology with governmental authority raises critical questions about privacy, ethics, and the societal impact of AI systems, underscoring that AI is not a neutral tool but a reflection of human biases and intentions. These developments carry profound consequences for marginalized communities and raise alarms about the potential abuse of power through AI-enabled surveillance.


Concerns Over AI Ethics Spark Controversy at OpenAI

February 11, 2026

Ryan Beiermeister, former vice president of product policy at OpenAI, was reportedly fired after a male colleague accused her of sex discrimination. Her termination came shortly after she raised concerns about a controversial new ChatGPT feature known as 'adult mode,' which would incorporate erotic content into the chatbot's interactions and has sparked internal debate over its potential impact on users, particularly vulnerable populations. Although OpenAI states that Beiermeister's firing was unrelated to her concerns, the incident raises significant questions about workplace dynamics, ethical considerations in AI deployment, and how dissenting voices are treated in tech environments. The situation highlights the complex interplay between product development, employee rights, and the societal implications of AI technologies, particularly around sensitive content and user safety.


Hacking Tools Sold to Russian Broker Threaten Security

February 11, 2026

The article details the case of Peter Williams, a former executive at Trenchant, a U.S. company specializing in hacking and surveillance tools. Williams has admitted to stealing eight hacking tools capable of breaching millions of computers worldwide and selling them to a Russian company that serves the Russian government, an act deemed harmful to the U.S. intelligence community because the exploits could facilitate widespread surveillance and cybercrime. Williams made more than $1.3 million from the sales between 2022 and 2025, even as the FBI was investigating his activities. The Justice Department is recommending a nine-year prison sentence, underscoring the severity of such security breaches at both the national and global level. Williams expressed regret for his actions and acknowledged violating trust and values, though his defense maintains that he did not intend to harm the U.S. or Australia and did not know the tools would reach adversarial governments. The case raises critical concerns about vulnerabilities within the cybersecurity industry and the potential for misuse of powerful technologies.


Tech Industry's Complicity in Immigration Violence

February 3, 2026

The article highlights the alarming intersection of technology and immigration enforcement under the Trump administration, noting the violence perpetrated by federal immigration agents. In 2026, intensified immigration enforcement resulted in the deaths of at least eight people, including U.S. citizens. The tech industry, closely tied to government policy, has been criticized for supporting agencies like U.S. Immigration and Customs Enforcement (ICE) through contracts with companies such as Palantir and Clearview AI. As tech leaders draw closer to political power, pressure is growing for them to take a stand against violent enforcement actions; figures like Reid Hoffman and Sam Altman have voiced concerns about the sector's complicity and called for more proactive opposition to ICE's practices. The implications extend beyond politics: these companies' actions directly affect vulnerable communities, underscoring the urgent need for accountability and ethics in how AI and technology are deployed, and serving as a reminder that AI systems shaped by human biases and political agendas can exacerbate social injustice rather than provide neutral solutions.


China Takes Stand on Car Door Safety Standards

February 2, 2026

China's new safety regulations mandate that all vehicles sold in the country include mechanical door handles, effectively banning the hidden, electronically actuated designs popularized by Tesla. The decision follows multiple fatal incidents in which occupants were trapped in vehicles after electronic door locks failed, raising significant safety concerns among regulators. The U.S. National Highway Traffic Safety Administration has also opened investigations into Tesla's door handle designs, citing difficulty accessing manual releases, especially for children. China's regulatory process, begun in 2025 with input from more than 40 manufacturers including BYD and Xiaomi, underscores the urgent need for safety standards in the fast-evolving electric vehicle market. Tesla, notably absent from the drafting of these standards, faces scrutiny not only for its technology but also for its lack of compliance with emerging safety norms. As incidents involving electric vehicles continue to draw attention, the regulation highlights the critical intersection of technology and user safety and raises broader questions about automakers' responsibility to safeguard consumers.


AI Tools Targeting DEI and Gender Ideology

February 2, 2026

The article highlights how the U.S. Department of Health and Human Services (HHS), under the Trump administration, has deployed AI tools from Palantir and Credal AI to scrutinize grants and job descriptions for compliance with directives against 'gender ideology' and diversity, equity, and inclusion (DEI) initiatives. This marks a significant shift in how federal funds are allocated, potentially marginalizing social programs that promote inclusivity and support underrepresented communities. The tools filter out applications and organizations deemed noncompliant with the administration's policies, raising ethical concerns about using such technologies in social welfare programs. Targeting DEI and gender-related initiatives not only cuts funding for vital services but also reflects a broader trend toward exclusionary practices, facilitated by the deployment of biased AI systems. Communities that depend on inclusive programs are at risk, as these AI-driven audits can reduce support for essential services aimed at promoting equality and diversity, underscoring the need for vigilance in AI deployment in sensitive areas where bias can have profound consequences for vulnerable populations.
