AI Against Humanity

Automotive

14 articles found

Tesla Avoids Suspension by Changing Marketing Terms

February 18, 2026

The California Department of Motor Vehicles (DMV) has decided not to impose a 30-day suspension of Tesla's sales and manufacturing licenses after the company stopped using the term 'Autopilot' in its marketing. The decision follows DMV accusations that Tesla misled customers about the capabilities of its advanced driver assistance systems, particularly Autopilot and Full Self-Driving (FSD). The DMV argued that these terms created a false impression of the technology's capabilities, which could encourage unsafe driving practices. In response, Tesla modified its marketing language to clarify that the FSD system requires driver supervision. The DMV's initial ruling to suspend Tesla's licenses was based on the company's failure to comply with state regulations, but Tesla's corrective actions allowed it to avoid the penalty. The situation highlights the risks associated with AI-driven technologies in the automotive industry, particularly concerning consumer safety and regulatory compliance. Misleading marketing can lead to dangerous assumptions by drivers, potentially resulting in accidents and undermining public trust in autonomous vehicle technology. As Tesla continues to navigate these challenges, the implications for the broader industry and regulatory landscape remain significant.

Read Article

Heron Power raises $140M to ramp production of grid-altering tech

February 18, 2026

Heron Power, a startup founded by former Tesla executive Drew Baglino, has raised $140 million to accelerate the production of solid-state transformers aimed at revolutionizing the electrical grid and data centers. This funding round, led by Andreessen Horowitz’s American Dynamism Fund and Breakthrough Energy Ventures, highlights the increasing demand for efficient power delivery systems in data-intensive environments. Solid-state transformers are smaller and more efficient than traditional iron-core models, capable of intelligently managing power from various sources, including renewable energy. Heron Power's Link transformers can handle substantial power loads and are designed for quick maintenance, addressing challenges faced by data center operators. The company aims to produce 40 gigawatts of transformers annually, potentially meeting a significant portion of global demand as many existing transformers approach the end of their operational lifespan. While this technological advancement promises to enhance energy efficiency and reliability, it raises concerns about environmental impacts and energy consumption in the rapidly growing data center industry, as well as the competitive landscape as other companies innovate in this space.

Read Article

What happens to a car when the company behind its software goes under?

February 17, 2026

The growing reliance on software in modern vehicles poses significant risks, particularly when the companies behind this software face financial difficulties. As cars evolve into software-defined platforms, their functionality increasingly hinges on the survival of software providers. This dependency can lead to dire consequences for consumers, as seen in the cases of Fisker and Better Place. Fisker's bankruptcy left owners with inoperable vehicles due to software glitches, while Better Place's collapse rendered many cars unusable when its servers shut down. Such scenarios underscore the potential economic harm and safety risks that arise when automotive software companies fail, raising concerns about the long-term viability of this model in the industry. Established manufacturers may have contingency plans, but the used car market is especially vulnerable, with older models lacking ongoing software support and exposing owners to cybersecurity threats. Initiatives like Catena-X aim to create a more resilient supply chain by standardizing software components, ensuring vehicles can remain operational even if a software partner becomes insolvent. This shift necessitates a reevaluation of ownership and maintenance practices, emphasizing the importance of software longevity for consumer safety and investment value.

Read Article

Elon Musk's Lunar Ambitions Raise Concerns

February 11, 2026

Elon Musk's recent all-hands meeting at xAI revealed ambitious plans for lunar manufacturing to enhance AI capabilities, including building a factory on the moon. Musk suggested that this move would let xAI harness computational power beyond that of any current rival. However, the meeting also highlighted instability within xAI, as six of its twelve founding members have departed, raising concerns about the company's future viability. Musk's focus on lunar ambitions comes amid speculation about a SpaceX IPO, indicating a shift from Mars to the moon as a strategic target for development. The legal implications of lunar resource extraction remain uncertain, especially given international treaties that restrict sovereign claims over celestial bodies. The article underscores the potential risks of unchecked AI ambitions in the context of space exploration, hinting at the ethical and legal challenges that could arise from Musk's grand vision.

Read Article

Combatting Counterfeits with Advanced Technology

February 10, 2026

The luxury goods market suffers significantly from counterfeiting, costing brands over $30 billion annually while creating uncertainty for buyers in the $210 billion second-hand market. Veritas, a startup founded by Luci Holland, aims to tackle this issue by developing a 'hack-proof' chip that can authenticate products through digital certificates. This chip is designed to be minimally invasive and can be embedded into products, allowing for easy verification via smartphone using Near Field Communication (NFC) technology. Holland's experience as both a technologist and an artist informs her commitment to protecting iconic brands from the growing sophistication of counterfeiters, who have become adept at producing high-quality replicas known as 'superfakes.' Despite the promising technology, Holland emphasizes the need for increased education on the importance of robust tech solutions to combat counterfeiting effectively. The article highlights the intersection of technology and luxury branding, illustrating how AI and advanced hardware can address significant market challenges, yet also underscores the ongoing risks posed by counterfeit products to consumers and brands alike.
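
The summary does not describe Veritas's actual protocol, but the general pattern of certificate-based authentication can be sketched: the brand signs a product identifier at manufacture, the chip stores the identifier and signature, and a smartphone app reads them over NFC and checks the signature against the brand's public key. The Python sketch below is a minimal illustration of that pattern using the cryptography library; the field names and the choice of Ed25519 are assumptions for the example, not details from the article.

```python
# Hypothetical sketch of certificate-style product authentication.
# Field names (product_id, signature) and the Ed25519 scheme are illustrative,
# not Veritas's actual design.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def verify_tag_payload(product_id: bytes, signature: bytes,
                       brand_public_key: Ed25519PublicKey) -> bool:
    """Return True if the signature over product_id checks out."""
    try:
        brand_public_key.verify(signature, product_id)
        return True
    except InvalidSignature:
        return False


# The brand signs a product identifier when the chip is provisioned...
brand_key = Ed25519PrivateKey.generate()
product_id = b"LUX-HANDBAG-000123"
signature = brand_key.sign(product_id)

# ...and a smartphone app later verifies what it reads from the chip via NFC.
print(verify_tag_payload(product_id, signature, brand_key.public_key()))  # True
print(verify_tag_payload(b"LUX-HANDBAG-999999", signature,
                         brand_key.public_key()))                         # False
```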

Read Article

Risks of Advanced Digital Key Technology

February 8, 2026

The rising sophistication of digital car keys marks a significant shift in automotive technology, as demonstrated during the recent Plugfest hosted by the Car Connectivity Consortium (CCC). This annual event brought together automobile and smartphone manufacturers to address interoperability issues among various digital key systems. The integration of digital keys into vehicles allows users to lock, unlock, and start their cars via smartphones, but it comes with complexities due to the fragmented nature of device hardware and software. Companies like Rivian emphasize the need for deep integration across vehicle systems to ensure seamless connectivity, especially as vehicles evolve into software-defined platforms that receive over-the-air updates. The role of major phone manufacturers, such as Apple, is crucial, as they enforce strict data security and privacy standards that auto brands must adhere to. The CCC, along with the FiRa Consortium, is pivotal in advancing industry standards and facilitating cooperation among competitors. With the rapid increase in digital key certifications—from two in 2024 to 115 in 2025—this technology's adoption is accelerating, highlighting both the potential for innovation and the risks associated with fragmented systems and security vulnerabilities in the automotive sector.
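
The article stays at the standards level, but the core unlock flow behind most phone-as-key systems is a challenge-response exchange: the vehicle issues a fresh random nonce, the phone answers with a value derived from a secret provisioned at pairing, and the vehicle checks the answer before unlocking. The Python sketch below illustrates that generic pattern with an HMAC; it is a simplified stand-in, not the CCC Digital Key specification.

```python
# Generic challenge-response sketch of a phone-as-key unlock flow.
# This shows the pattern only; real systems (e.g. CCC Digital Key) rely on
# certificate hierarchies and secure elements rather than a bare shared secret.
import hashlib
import hmac
import secrets

PAIRING_SECRET = secrets.token_bytes(32)  # provisioned when the phone is paired


def vehicle_issue_challenge() -> bytes:
    """Vehicle generates a fresh random nonce for each unlock attempt."""
    return secrets.token_bytes(16)


def phone_respond(challenge: bytes, secret: bytes) -> bytes:
    """Phone proves possession of the pairing secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()


def vehicle_verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


challenge = vehicle_issue_challenge()
response = phone_respond(challenge, PAIRING_SECRET)
print("unlock granted:", vehicle_verify(challenge, response, PAIRING_SECRET))
```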

Read Article

Waymo's AI Training Risks in Self-Driving Cars

February 6, 2026

Waymo, a Google spinoff, is expanding its self-driving car fleet using its new Waymo World Model, developed with Google DeepMind's Genie 3. This model enables the creation of hyper-realistic simulated driving environments, allowing for the training of AI systems on rare or dangerous driving conditions that are often underrepresented in real-world data. While Waymo claims the technology can enhance the safety and adaptability of self-driving cars, significant risks persist, including the accuracy of the simulations and the potential for unforeseen consequences during deployment. The reliance on a virtual training model raises concerns over the AI's ability to handle real-world unpredictability, especially in challenging environments that differ from the initial testing conditions. As Waymo prepares to introduce its technology in more complex urban settings, the potential ramifications for urban safety, regulatory scrutiny, and public trust in AI systems remain critical issues that need addressing. The implications of inadequately trained AI could lead to accidents and erode public confidence in autonomous driving technologies, emphasizing the need for careful oversight and transparency in the development of AI systems for public use.
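
The article does not detail how the Waymo World Model feeds into training, but the underlying motivation can be shown with a much simpler idea: rare or dangerous scenarios that almost never appear in real driving logs can be deliberately overrepresented when batches are assembled from simulated data. The sketch below is a toy illustration of that mixing step; the scenario labels and the 30% simulation share are invented for the example and do not reflect Waymo's pipeline.

```python
# Toy sketch of mixing simulated rare-event scenarios into training batches.
# Scenario labels and the simulation share are hypothetical, not Waymo's data.
import random

# Real-world logs: overwhelmingly routine driving, rare events almost absent.
real_world_logs = ["clear_highway"] * 90 + ["light_rain_city"] * 10

# Simulated scenarios generated specifically to cover dangerous edge cases.
simulated_rare = ["pedestrian_dart_out", "wrong_way_driver", "debris_on_freeway"]


def build_training_batch(batch_size: int, sim_fraction: float = 0.3) -> list[str]:
    """Reserve a fixed share of each batch for simulated rare events so the
    model sees them far more often than their real-world frequency."""
    n_sim = int(batch_size * sim_fraction)
    batch = random.choices(simulated_rare, k=n_sim)
    batch += random.choices(real_world_logs, k=batch_size - n_sim)
    random.shuffle(batch)
    return batch


batch = build_training_batch(32)
print(sum(label in simulated_rare for label in batch),
      "of", len(batch), "samples in this batch are simulated rare events")
```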

Read Article

Congress Faces Challenges in Regulating Autonomous Vehicles

February 4, 2026

During a recent Senate hearing, executives from Waymo and Tesla faced intense scrutiny over the safety and regulatory challenges associated with autonomous vehicles. Lawmakers expressed concerns about specific incidents involving these companies, including Waymo's use of a Chinese-made vehicle and Tesla's decision to eliminate radar from its cars. The hearing highlighted the absence of a coherent regulatory framework for autonomous vehicles in the U.S., with senators divided on the potential benefits versus risks of driverless technology. Safety emerged as a critical theme, with discussions centering on Tesla's marketing practices related to its Autopilot feature, which some senators labeled as misleading. The lack of federal regulations has left gaps in accountability, raising questions about the safety of self-driving cars and the U.S.'s competitive stance against China in the autonomous vehicle market.

Read Article

China Bans Hidden Door Handles for EVs

February 3, 2026

China is set to implement a ban on concealed electric door handles in electric vehicles (EVs) effective January 1, 2027, due to safety concerns. This decision follows multiple incidents where individuals faced difficulties opening vehicles with electronic door handles during emergencies, most notably a tragic incident involving a Xiaomi SU7 Ultra that resulted in a fatality when the vehicle's handles malfunctioned after a collision. The ban specifically targets the hidden handles that retract to sit flush with the car doors, a design popularized by Tesla and adopted by other EV manufacturers. In the U.S., Tesla's electronic door handles are currently under investigation for similar safety issues, with over 140 reports of doors getting stuck noted since 2018. The regulatory measures indicate a growing recognition of the potential dangers posed by advanced vehicle designs that prioritize aesthetics and functionality over user safety. Consequently, these changes highlight the urgent need for manufacturers to balance innovation with practical safety considerations to prevent incidents that could result in loss of life or injury.

Read Article

Musk's Space Data Centers: Risks and Concerns

February 3, 2026

Elon Musk's recent announcement of merging SpaceX with his AI company xAI has raised significant concerns regarding the environmental and societal impacts of deploying AI technologies. Musk argues that moving data centers to space is a solution to the growing opposition against terrestrial data centers, which consume vast amounts of energy and face local community resistance due to their environmental footprint. However, this proposed solution overlooks the inherent challenges of space-based data centers, such as power consumption and the feasibility of operating GPUs in a space environment. Additionally, while SpaceX is currently profitable, xAI is reportedly burning through $1 billion monthly as it competes with established players like Google and OpenAI, raising questions about the financial motivations behind the merger. The merger also highlights potential conflicts of interest, as xAI's chatbot Grok is under scrutiny for generating inappropriate content and is integrated into Tesla vehicles. The implications of this merger extend beyond corporate strategy, affecting local communities, environmental sustainability, and the ethical use of AI in military applications. This situation underscores the urgent need for a critical examination of how AI technologies are developed and deployed, reminding us that AI, like any technology, is influenced by human biases and interests.

Read Article

China Takes Stand on Car Door Safety Standards

February 2, 2026

China's new safety regulations mandate that all vehicles sold in the country must have mechanical door handles, effectively banning the hidden, electronically actuated designs popularized by Tesla. This decision follows multiple fatal incidents where occupants were trapped in vehicles due to electronic door locks failing, raising significant safety concerns among regulators. The U.S. National Highway Traffic Safety Administration has also launched investigations into Tesla's door handle designs, citing difficulties in accessing manual releases, especially for children. The move by China, which began its regulatory process in 2025 with input from over 40 manufacturers including BYD and Xiaomi, emphasizes the urgent need for safety standards in the evolving electric vehicle market. Tesla, notably absent from the drafting of these standards, faces scrutiny not only for its technology but also for its lack of compliance with emerging safety norms. As incidents involving electric vehicles continue to draw attention, this regulation highlights the critical intersection of technology and user safety, raising broader questions about the responsibility of automakers in safeguarding consumers.

Read Article

SpaceX and xAI Merger Raises Ethical Concerns

February 2, 2026

SpaceX's acquisition of Elon Musk's artificial intelligence startup, xAI, aims to create space-based data centers to address the energy demands of AI. Musk highlights the environmental strain caused by terrestrial data centers, which have been criticized for negatively impacting local communities, particularly in Memphis, Tennessee, where xAI has faced backlash for its energy consumption. The merger, which values the combined entity at $1.25 trillion, is expected to strengthen SpaceX's revenue stream through satellite launches necessary for these data centers. However, the merger raises concerns about the implications of Musk's relaxed restrictions on xAI’s chatbot Grok, which has been used to create nonconsensual sexual imagery. This situation exemplifies the ethical challenges and risks associated with AI deployment, particularly regarding exploitation and community impact. As both companies pursue divergent objectives in the space and AI sectors, the merger highlights the urgent need for ethical oversight in AI development and deployment, especially when tied to powerful entities like SpaceX.

Read Article

Tesla 'Full Self-Drive' Subscription, Starlink Access in Iran, and Should You Be 'Rude' to Chatbots? | Tech Today

January 15, 2026

The article highlights several significant developments in the tech sector, particularly focusing on Tesla's decision to make its 'Full Self-Drive' feature subscription-based, which raises concerns about accessibility and affordability for consumers. This shift could lead to a divide between those who can afford the subscription and those who cannot, potentially exacerbating inequalities in transportation access. Additionally, the article discusses Starlink's provision of free internet access in Iran amidst political unrest, showcasing the dual-edged nature of technology as a tool for empowerment and control. Lastly, a study revealing that 'rude' prompts can yield more accurate responses from AI chatbots raises ethical questions about user interaction with AI, suggesting that the design of AI systems can influence user behavior and societal norms. These issues collectively underscore the complex implications of AI and technology in society, emphasizing that advancements are not neutral and can have far-reaching negative impacts on communities and individuals.

Read Article

What Is Vibe Coding? Everything to Know About AI That Builds Apps for You

December 15, 2025

Vibe coding, a term coined by Andrej Karpathy, is revolutionizing software development by enabling users to create applications through natural language prompts instead of traditional coding. This approach allows individuals with minimal programming experience to generate code by simply describing their ideas, making app development more accessible. However, while platforms like ChatGPT and GitHub Copilot facilitate this process, they do not eliminate the need for basic computer literacy and understanding of the tools involved. New users may still struggle with procedural tasks, and the reliance on AI-generated code raises concerns about security, maintainability, and the potential for errors or 'hallucinations' that inexperienced users may overlook. Despite the democratization of coding, the quality and accountability of software remain critical, necessitating knowledgeable oversight to ensure that applications meet production standards. As AI technologies evolve, the importance of skilled developers persists, highlighting the need for human expertise to navigate the complexities of software development and maintain the integrity of the coding process.
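
At its core, the workflow the article describes is a prompt-to-code loop: the user states what the app should do in plain language and a model returns source code. A minimal sketch of that loop is shown below, assuming the OpenAI Python SDK, an API key in the OPENAI_API_KEY environment variable, and a placeholder model name; as the summary notes, the output still needs human review before it is trusted or deployed.

```python
# Minimal "vibe coding" sketch: describe the app, get code back from a model.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; the model name is a
# placeholder. Generated code can contain errors and must be reviewed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a small Python script that watches a folder and prints the name "
    "of every new file that appears."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system",
         "content": "You are a coding assistant. Reply with Python code only."},
        {"role": "user", "content": prompt},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # review before running, as the article cautions
```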

Read Article