Your Favorite Influencer Might Be AI
The internet has reached a slop tipping point: since November 2024, AI-generated articles have outnumbered those written by humans. Now, the synthetic takeover is coming for influencers. Real-looking avatars — often attractive women selling supplements — are fooling hundreds of thousands of followers. Some accounts rack up brand deals and tens of thousands in monthly revenue without disclosing they're computer-generated. As platforms reward inhuman volumes of content and entrepreneurs outsource influence to algorithms, a deeper question emerges: do audiences even care whether the person they're watching is real?
Key Points
AI avatars like "Malanski", an Amish wellness influencer with over 300,000 followers, are sophisticated enough to fool most viewers and rarely disclose they're not human.
Creating AI influencers is trivially easy and highly profitable: entrepreneurs can spawn entire networks without paying for talent, studios, or product samples, then outsource avatar creation to freelancers.
New York's December 2024 disclosure law is the nation's first, but enforcement is nearly impossible against anonymous, often overseas creators operating at inhuman scale.
Human influencers are in "panic" mode, but some audiences explicitly say they don't care if content is AI-generated; what matters is whether it makes them feel something.
The prevalence of AI has created a "liar's dividend": real footage (like Netanyahu's proof-of-life video) is now routinely dismissed as fake, eroding societal trust in all media.
In Summary
AI influencers are not a passing fad — they're a cost-effective solution to the inhuman demands of algorithmic content creation, and legislation won't stop them. The real crisis is not that fake people are selling products, but that audiences may be too exhausted to care whether what they see is real.
The Uncanny Amish Wellness Guru
Malanski, an AI Amish influencer, has fooled over 300,000 followers into buying supplements.
Malanski appears to be an Amish mother posting clean-living advice and disparaging supermarket rotisserie chicken. She has hundreds of thousands of followers — and none of them are real. She's a generative AI avatar created to sell supplements, and she never discloses her synthetic origins. Tiffany Hsu's colleague at The New York Times was stunned by the technical sophistication: Costco aisles rendered down to product labels, lighting that mimics golden hour, gestures that feel natural enough to pass at scroll speed.
The creator, Jose Maria Silvestrini, runs a network of AI avatars promoting his supplement brands. He outsources avatar creation to freelancers and treats the whole operation as a cost-effective marketing play. When contacted by reporters, he was cheerful and unapologetic, treating the Times story as "earned media". To him and others in the space, AI influencers are simply a more efficient way to market products: no talent fees, no studio costs, no humans required.
The Industrial Logic of Synthetic Influence
"People have realized that AI avatars is a great and easy way to make money. And now the scammers are like, hey, let's hop in on that."
Wellness professor Tim Caulfield on why the unregulated supplement industry is a magnet for synthetic scams.
How to Spot an AI Influencer (While You Still Can)
Common tells include unnatural lighting, identical poses across posts, and suspiciously perfect dripping chicken.
Check the Grid: AI avatars often appear in nearly identical poses across posts, lit with the same golden-hour glow. Human influencers vary their angles and settings more naturally.
Examine the Lighting: Synthetic characters are frequently lit from all sides rather than from one natural direction. Look for shadows that don't match the environment.
Inspect the Hairline and Eyes: Blurring along the hairline is common. Zoom in on the irises: if the reflections differ between the two eyes, it's a strong tell.
Listen for Breath and Filler: AI voices rarely include natural pauses, breaths, or filler words like "um". Overly smooth speech can be a giveaway.
Trust Your Gut: Many viewers report a vague sense that something is "off" even when they can't articulate why. That instinct may be the last line of defense.
Why Regulation Can't Keep Up
New York's disclosure law is too narrow, too late, and unenforceable at scale.
New York's December 2024 law requiring disclosure of "synthetic performers" in ads is the nation's first, but it's nearly meaningless. Creators are often anonymous and operate overseas. Platforms have no incentive to police avatar creation; there's nothing inherently illegal about making a fake person. Even when laws exist, enforcement is whack-a-mole at turbo speed: one account gets banned, another spawns the next day. Legislation always lags the technology, and in this case, the gap is unbridgeable.
The Liar's Dividend: When Real Becomes Suspect
Netanyahu's proof-of-life video was dismissed as fake — AI has made all media dubious.
In late 2024, a video of Israeli Prime Minister Benjamin Netanyahu appeared to show him with six fingers — a classic AI tell. Conspiracy theorists declared him dead. Days later, Netanyahu posted a verified proof-of-life video from a Jerusalem cafe, clearly displaying five fingers. Deepfake analysts confirmed it was real. The cafe posted corroborating photos. It didn't matter. Millions of users insisted the proof-of-life video was also AI-generated.
This is the "liar's dividend": the ubiquity of synthetic media allows people to dismiss any inconvenient footage as fake. A North Carolina official posted an AI-generated image of hurricane devastation and, when called out, replied: "I don't really care where this image came from. It hurts my heart." The feeling is real, even if the image isn't. We have entered a world where evidence no longer persuades, because audiences are too fatigued to parse real from fake, or they simply don't care.
Do Audiences Even Want Humans Anymore?
Some influencers pivot to authenticity; others worry audiences prefer the fantasy of AI.
People
Glossary
Legal notice: This is an AI-generated summary of a YouTube video, intended for educational and reference purposes. It does not constitute investment, financial, or legal advice. Always verify information against the original sources before making decisions. TubeReads is not affiliated with the content creator.