TubeReads

AI Whistleblower: We Are Being Gaslit By The AI Companies! They're Hiding The Truth About AI!

The AI revolution promised progress and prosperity for all. Instead, it has birthed a new form of empire—one that extracts value from artists, writers, and workers while concentrating power in the hands of a few tech billionaires. After interviewing over 250 people, including more than 90 current or former OpenAI employees and executives, journalist Karen Hao has uncovered internal documents revealing how AI companies engineer public mythology to maintain control. From mass layoffs to environmental devastation, from exploited data annotators to communities fighting supercomputer facilities poisoning their air—the gap between Silicon Valley's utopian promises and ground-level reality has never been wider. The question is no longer whether AI will transform society, but who will control that transformation and at what cost.

Video length: 2:09:13 · Published Mar 26, 2026 · Video language: English
8–9 min read · 23,485 spoken words condensed to 1,727 words (14x)

Key Takeaways

1. AI companies purposely cultivate dual narratives of utopian abundance and existential catastrophe to convince the public that only they should control AI development, while internal documents show this mythmaking is a deliberate strategy to maintain power and access resources.

2. The jobs being created by AI are overwhelmingly worse than those being displaced: college-educated professionals laid off from stable careers now compete for data annotation work that mechanizes their labor, devalues their expertise, and pays a fraction of what they previously earned.

3. AI supercomputer facilities disproportionately harm vulnerable communities by consuming over 20% of a city's power, competing for freshwater during droughts, and pumping toxins into the air of working-class neighborhoods that were never consulted about hosting these facilities.

4. Nearly every major AI company co-founder has left to start a competitor after clashing with leadership, revealing that these billionaires want to create AI in their own image and will fracture partnerships rather than compromise on their vision of the technology.

5. Breaking up the AI empires and building "bicycle" alternatives (smaller, specialized AI systems like AlphaFold that use curated datasets and dramatically less compute) can deliver enormous benefit without the environmental and social costs of current "rocket" AI models.

In Brief

The AI industry operates as a modern empire—extracting resources, exploiting labor, and monopolizing knowledge while mythologizing existential threats to justify consolidating power in the hands of a few companies that profit enormously from maintaining control over a technology that affects billions.



The Mythology Machine: How AI Companies Engineer Public Opinion

Internal documents reveal AI companies deliberately cultivate dual threat narratives to consolidate power.

AI companies have mastered the art of mythmaking, wielding dual narratives of utopian promise and existential catastrophe with surgical precision. Internal documents obtained by Hao reveal that OpenAI and its competitors purposely engineer public sentiment, showing audiences "dazzling demonstrations" while crafting missions designed to generate "leniency" for their empire-building agenda. The pattern is consistent: when seeking capital, AGI becomes "a system that will generate a hundred billion in revenue." When lobbying Congress, it transforms into a cure for cancer and climate change. When recruiting consumers, it's the ultimate digital assistant.

This mythmaking serves a crucial function. By elevating both the promise and the peril to civilizational stakes, these companies convince the public that only they possess the expertise to navigate such treacherous waters. As Hao notes, "80% of Americans think that the AI industry needs to be regulated," yet these same companies spend hundreds of millions to kill every piece of inconvenient legislation. The executives themselves become lost in their own mythology—like Paul Atreides stepping into the Messiah role in Dune, they begin as cynical myth-makers but eventually blur the line between performance and belief. When Dario Amodei speaks of a "10% to 25%" chance of catastrophic outcomes, he is simultaneously engaging in strategic narrative-building and genuinely believing his own prediction, because cognitive dissonance demands resolution.



The Cost of Ambition

AI's unprecedented resource demands create environmental crises in vulnerable communities worldwide.

Power Consumption of OpenAI's Abilene Facility: over 1 gigawatt. More than 20% of New York City's power demand, with 1 million computer chips covering an area the size of Central Park.
Meta's Louisiana Supercomputer Power Demand: half of NYC's average power. The facility is 1/5 the size of Manhattan, four times larger than OpenAI's Texas project.
Musk's Colossus Facility Power Source: 35 methane gas turbines. Built in Memphis, Tennessee, pumping thousands of tons of toxins into a predominantly Black and brown working-class community.
Klarna Workforce Reduction via AI: from 6,000 to under 3,000 employees. Staff cut in half over two years through natural attrition while revenue doubled; AI now handles 70% of customer service.
Public Support for AI Regulation: 80% of Americans, a rare bipartisan consensus on the need for industry oversight.


"I've become a monster"

Displaced professionals are absorbed into dehumanizing data annotation work that trains their own replacements.

I have so much anxiety about when the project is going to come and when it's going to leave. When the project came, it was right when my kid was getting out of school, and I just started tasking furiously, because I don't know when it's going to go and I need to earn as much money as possible in this window of opportunity. So when my kid came home and tried to talk to me, I screamed at my child for distracting me. And then she was like, I've become a monster. I'm not even allowed to go to the bathroom or take care of my kids, let alone myself, because this industry that is absorbing more and more of the workers being laid off is mechanizing my life, atomizing my work, devaluing my expertise, and then harvesting it for the perpetuation of this machine.

Laid-off professional now working in data annotation, quoted in New York Magazine



The Fracturing of the AI Founder Class

Personal conflicts and competing visions have splintered every major AI company's original team.

1. The Rosewood Dinner (2015): Sam Altman assembles the founding team at a San Francisco hotel, recruiting Ilya Sutskever, Greg Brockman, and Dario Amodei with the promise of meeting Elon Musk and building safe AGI together.

2. Musk's Exit (circa 2018): After Altman convinces Greg Brockman that Musk would be too "unpredictable" as CEO of the for-profit entity, Musk is muscled out; he later launches xAI, feeling manipulated by Altman's language mirroring.

3. Dario's Departure: The former VP of Research leaves OpenAI feeling that Altman used his "intelligence, capabilities, skills" to build a vision he fundamentally disagreed with, then founds Anthropic and builds Claude as a competitor.

4. The Board Coup (November 2023): Ilya Sutskever and Mira Murati approach independent board members with concerns that Altman is creating "chaos" and "pitting teams against each other," leading to Altman's firing, which is reversed days later after a stakeholder revolt.

5. The Final Exodus: Ilya, who never returns to OpenAI after the coup, launches Safe Superintelligence. Mira departs shortly after to start Thinking Machines Lab. Each founder now controls their own AI vision, competing against the empire they helped build.



The Labor Crisis Hidden in Plain Sight

AI eliminates entry-level jobs while creating exploitative annotation work, breaking the career ladder.

The economic restructuring underway is not a simple story of automation replacing humans. Instead, it follows a brutal pattern: entry-level and mid-tier jobs are hollowed out, creating a barbell economy of highly paid experts and a growing underclass of data annotators earning poverty wages. Anthropic's internal analysis shows 40% of entry-level roles already disrupted, with finance, law, office administration, and even creative fields in the immediate blast radius. Yet the jobs being created tell a darker story.

Data annotation—the process of teaching AI systems by manually labeling images, correcting text, and demonstrating desired outputs—has become one of LinkedIn's top ten fastest-growing job categories. Award-winning directors, lawyers with advanced degrees, and laid-off marketers now compete in Slack channels for micro-tasks that pay pennies per annotation. They refresh their screens obsessively, terrified of missing the narrow window when a project opens, unable to care for children or use the bathroom during work bursts. As one worker described it: "This industry is mechanizing my life, atomizing my work, devaluing my expertise, and then harvesting it for the perpetuation of this machine." The cruelest irony? These workers are training the very models that will eliminate the next wave of jobs, including potentially their own.



OpenAI's Internal Turmoil

Co-founders weaponized the board structure to remove Altman, citing chaos and manipulation.


When Ilya Sutskever and Mira Murati approached OpenAI's independent board members in late 2023, they presented documentation showing Altman was "creating too much instability" by pitting teams against each other and making contradictory promises. The board concluded that while such behavior might not warrant firing at "an Instacart," the stakes were existentially different for a company claiming to build technology that could "make or break the world." The coup failed within days because stakeholders like Microsoft were blindsided, triggering a campaign to reinstate Altman. The episode reveals that even those closest to the technology—its co-creators—believed the CEO should not have "the finger on the button for AGI."



Bicycles vs. Rockets: The Path Not Taken

Specialized AI systems can deliver transformative benefits without empire-scale resource extraction.

ROCKET AI
Large Language Models & General-Purpose Systems
These systems demand supercomputers the size of Central Park, consume more power than entire cities, require continuous retraining on scraped global data, and employ armies of data annotators. They promise to do "everything for everyone" but extract enormous environmental and social costs. OpenAI, Anthropic, and Google pursue this path because it justifies massive capital raises and imperial control over the technology.
BICYCLE AI
Specialized Systems Like DeepMind's AlphaFold
These systems use small, curated datasets for specific high-value problems—like predicting protein folding from amino acid sequences. AlphaFold earned its creators the 2024 Nobel Prize in Chemistry, dramatically accelerates drug discovery, and requires a fraction of the computational resources of general-purpose models. This approach delivers "enormous benefit to people" without the environmental destruction, labor exploitation, or mythological narratives required to justify empire-building.


Breaking the Empire

🚫
Data Withholding
Artists, writers, and creators are suing to prevent their intellectual property from being scraped. Individuals can opt out of training datasets and support legislation requiring explicit consent for data use.
🏭
Data Center Resistance
Communities worldwide are protesting supercomputer facilities, successfully stalling projects and banning construction in dozens of localities by asserting their right to clean air, fresh water, and affordable energy.
🏢
Workplace Policy Debates
Schools, companies, and institutions are deciding AI adoption policies right now. Employees and stakeholders have leverage to demand ethical implementation that doesn't automate away livelihoods or extract value unfairly.
🔬
Building Bicycle Alternatives
Support and fund development of specialized AI systems that solve specific high-value problems with minimal resource consumption, proving that the current empire model is a political choice, not a technical necessity.


Securities Mentioned

MSFT: Microsoft Corporation
META: Meta Platforms Inc.
GOOGL: Alphabet Inc. (Google)
TSLA: Tesla Inc.


People

Karen Hao
Author & Journalist, Former MIT Technology Review AI Reporter
guest
Steven Bartlett
Host & CEO
host
Sam Altman
CEO of OpenAI
mentioned
Elon Musk
Co-founder of OpenAI (departed), Founder of xAI
mentioned
Ilya Sutskever
Co-founder & Former Chief Scientist of OpenAI
mentioned
Dario Amodei
Former OpenAI VP of Research, CEO of Anthropic
mentioned
Mira Murati
Former CTO of OpenAI
mentioned
Geoffrey Hinton
AI Researcher, Mentor to Ilya Sutskever
mentioned
Timnit Gebru
Former Co-lead of Google's Ethical AI Team
mentioned
Sebastian Siemiatkowski
CEO of Klarna
mentioned

Glossary
AGI (Artificial General Intelligence): A hypothetical AI system that matches or exceeds human intelligence across all domains; the term has no scientific consensus and is constantly redefined by companies to suit their narrative needs.
Data Annotation: The labor-intensive process of humans manually labeling data (drawing boxes around objects, correcting text, demonstrating desired outputs) to teach AI models specific capabilities.
Scaling Laws: The hypothesis that AI capabilities improve predictably as models grow larger (more parameters) and consume more data and compute, though this remains scientifically contested.
Statistical Engine: A system that learns patterns and correlations from data to make probabilistic predictions, which some researchers believe mirrors how human brains function—a hypothesis not universally accepted in neuroscience.
Reinforcement Learning from Human Feedback (RLHF): A training method where human contractors rate AI outputs to teach models which responses are preferred, enabling systems like ChatGPT to engage in dialogue.
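The RLHF entry above can be made concrete with a minimal sketch of the pairwise preference objective commonly used to train reward models (a Bradley-Terry style loss; the function name and numbers here are illustrative, not taken from the video):

```python
import math

def pairwise_preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss: the reward model is pushed to score
    the human-preferred (chosen) response above the rejected one."""
    margin = reward_chosen - reward_rejected
    # Negative log-sigmoid of the margin: small when the model already
    # ranks the preferred response clearly higher, large when it does not.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A wider margin between preferred and rejected scores means a lower loss.
confident = pairwise_preference_loss(2.0, 0.0)
uncertain = pairwise_preference_loss(0.1, 0.0)
print(confident < uncertain)  # True
```

In full RLHF pipelines this loss trains a separate reward model from contractor rankings, and that model's scores then steer the language model during reinforcement learning.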

Disclaimer: This is an AI-generated summary of a YouTube video, prepared for educational and reference purposes. It is not investment, financial, or legal advice. Always verify information against primary sources before making decisions. TubeReads is not affiliated with the content creator.