TubeReads

Your Favorite Influencer Might Be AI

The internet has reached a slop tipping point: since November 2024, AI-generated articles have outnumbered those written by humans. Now, the synthetic takeover is coming for influencers. Real-looking avatars — often attractive women selling supplements — are fooling hundreds of thousands of followers. Some accounts rack up brand deals and tens of thousands in monthly revenue without disclosing they're computer-generated. As platforms reward inhuman volumes of content and entrepreneurs outsource influence to algorithms, a deeper question emerges: do audiences even care whether the person they're watching is real?

Video duration: 44:07 · Published Apr 10, 2026 · Video language: English
5–6 min read · 7,610 spoken words, summarized to 1,121 words (7x)

1

Key Points

1

AI avatars like "Malanski" — an Amish wellness influencer with over 300,000 followers — are sophisticated enough to fool most viewers and rarely disclose they're not human.

2

Creating AI influencers is trivially easy and highly profitable: entrepreneurs can spawn entire networks without paying for talent, studios, or product samples, then outsource avatar creation to freelancers.

3

New York's December 2024 disclosure law is the nation's first, but enforcement is nearly impossible against anonymous, often overseas creators operating at inhuman scale.

4

Human influencers are in "panic" mode, but some audiences explicitly say they don't care if content is AI-generated — what matters is whether it makes them feel something.

5

The prevalence of AI has created a "liar's dividend": real footage (like Netanyahu's proof-of-life video) is now routinely dismissed as fake, eroding societal trust in all media.

In Summary

AI influencers are not a passing fad — they're a cost-effective solution to the inhuman demands of algorithmic content creation, and legislation won't stop them. The real crisis is not that fake people are selling products, but that audiences may be too exhausted to care whether what they see is real.


2

The Uncanny Amish Wellness Guru

Malanski, an AI Amish influencer, has fooled over 300,000 followers into buying supplements.

Malanski appears to be an Amish mother posting clean-living advice and disparaging supermarket rotisserie chicken. She has hundreds of thousands of followers, and she is not real: she's a generative AI avatar created to sell supplements, and she never discloses her synthetic origins. Tiffany Hsu's colleague at The New York Times was stunned by the technical sophistication: Costco aisles rendered down to the product labels, lighting that mimics golden hour, gestures natural enough to pass at scroll speed.

The creator, Jose Maria Silvestrini, runs a network of AI avatars promoting his supplement brands. He outsources avatar creation to freelancers and treats the whole operation as a cost-effective marketing play. When contacted by reporters, he was cheerful and unapologetic, treating the Times story as "earned media." To him and others in the space, AI influencers are simply a more efficient way to market products — no talent fees, no studio costs, no humans required.


3

The Industrial Logic of Synthetic Influence

📊
The Volume Game
Brands like Wimbledon post thousands of clips in a two-week tournament. Influencers are told to create one video four different ways — at the bus stop, in the bathroom, at the store — to see what the algorithm rewards that day.
🤖
The Bot Farm
Venture capital has backed companies like Double Speed that promise "bulk content creation" and "automating attention." The pitch: "Never pay a human again." AI avatars can be copy-pasted, customized, and deployed at scale.
💰
The Revenue Model
AI influencer Aitana Lopez earns up to €10,000/month with brand deals from Alo Yoga and posts from Paris Fashion Week. Online tutorials promise $30,000/month teaching others how to create AI beauty influencers — no skin required.

4

"People have realized that AI avatars is a great and easy way to make money. And now the scammers are like, hey, let's hop in on that."

Tim Caulfield, Professor of Health Science, on why the unregulated supplement industry is a magnet for synthetic scams.



5

How to Spot an AI Influencer (While You Still Can)

Common tells include unnatural lighting, identical poses across posts, and suspiciously perfect dripping chicken.

1

Check the Grid: AI avatars often appear in nearly identical poses across posts, lit with the same golden-hour glow. Human influencers vary their angles and settings more naturally.

2

Examine the Lighting: Synthetic characters are frequently lit from all sides rather than one natural direction. Look for shadows that don't match the environment.

3

Inspect the Hairline and Eyes: Blurring along the hairline is common. Zoom into the irises: if reflections differ between eyes, it's a strong tell.

4

Listen for Breath and Filler: AI voices rarely include natural pauses, breaths, or filler words like "um." Overly smooth speech can be a giveaway.

5

Trust Your Gut: Many viewers report a vague sense that something is "off" even when they can't articulate why. That instinct may be the last line of defense.


6

Why Regulation Can't Keep Up

New York's disclosure law is too narrow, too late, and unenforceable at scale.

⚠️

New York's December 2024 law requiring disclosure of "synthetic performers" in ads is the nation's first, but it's nearly meaningless. Creators are often anonymous and operate overseas. Platforms have no incentive to police avatar creation — there's nothing inherently illegal about making a fake person. Even when laws exist, enforcement is a game of turbo whack-a-mole: one account gets banned, another spawns the next day. Legislation always lags the technology, and in this case, the gap is unbridgeable.


7

The Liar's Dividend: When Real Becomes Suspect

Netanyahu's proof-of-life video was dismissed as fake — AI has made all media dubious.

In late 2024, a video of Israeli Prime Minister Benjamin Netanyahu appeared to show him with six fingers — a classic AI tell. Conspiracy theorists declared him dead. Days later, Netanyahu posted a verified proof-of-life video from a Jerusalem cafe, clearly displaying five fingers. Deepfake analysts confirmed it was real. The cafe posted corroborating photos. It didn't matter. Millions of users insisted the proof-of-life video was also AI-generated.

This is the "liar's dividend": the ubiquity of synthetic media allows people to dismiss any inconvenient footage as fake. A North Carolina official posted an AI-generated image of hurricane devastation and, when called out, replied: "I don't really care where this image came from. It hurts my heart." The feeling is real, even if the image isn't. We have entered a world where evidence no longer persuades, because audiences are too fatigued to parse real from fake — or they simply don't care.


8

Do Audiences Even Want Humans Anymore?

Some influencers pivot to authenticity; others worry audiences prefer the fantasy of AI.

THE RESISTANCE
Real Bodies as a Selling Point
Bra company Aerie pledged in October 2024 to use only real people and never AI bodies. The post became their most popular of the year. Some brands are betting that transparency and humanity will win back exhausted audiences who crave authenticity over algorithmic perfection.
THE APATHY
Influence Without Humanity
But hundreds of thousands follow avatars like Aitana Lopez and Malanski despite — or because of — their synthetic origins. High school students told Hsu they see AI avatars as "a fun thing" and compare the creation process to playing The Sims. For many, the influencer's humanity is irrelevant; what matters is the aspirational lifestyle they project, real or not.

9

People

Charlie Warzel, Host of Galaxy Brain (host)
Tiffany Hsu, Technology Reporter at The New York Times (guest)
Zachary Gallia, Social Media Content Strategist (mentioned)
Jose Maria Silvestrini, Entrepreneur and AI Influencer Network Operator (mentioned)
Ken Bensinger, Reporter at The New York Times (mentioned)
Tim Caulfield, Professor of Health Science, Canada (mentioned)

Glossary
Liar's Dividend: The phenomenon where the prevalence of AI-generated content allows people to dismiss authentic evidence as fake, eroding trust in all media.
AI Slop: Low-quality, mass-produced synthetic content generated by AI tools, now flooding the internet in volumes that exceed human-created material.
Proof-of-Life Video: A video recording meant to verify that a person is alive and well, traditionally used in hostage situations but now deployed to counter deepfake rumors.

Disclaimer: This is an AI-generated summary of a YouTube video, for educational and reference purposes. It does not constitute investment, financial, or legal advice. Always verify information against the original sources before making decisions. TubeReads is not affiliated with the content creator.