YouTube Permanently Bans Screen Culture and KH Studio, Two Major AI Channels That Used Fake Movie Trailers to Rack Up Over 1 Billion Views, Highlighting the Tension Between Promoting New Technology and Combating Misinformation.
YouTube Permanently Bans Million-Subscriber AI Video Channels
According to an exclusive report by Deadline, YouTube has permanently banned two well-known channels that used artificial intelligence (AI) to produce fake movie trailers: Screen Culture and KH Studio.
Together, the two channels have more than 2 million subscribers and over 1 billion total views. Their channel pages now display only an error message saying the page is unavailable.
Fake Trailers That Look Real, Evading Platform Censorship Multiple Times
Screen Culture and KH Studio regularly released trailers that looked authentic but were entirely fictional, such as “GTA: San Andreas 2025” or “The Fantastic Four: First Steps,” and made extensive use of copyrighted Disney material.
Although these trailers are fake, their high-quality production causes them to frequently appear in YouTube’s recommended lists.
Image source: Deadline. Thumbnail of Screen Culture’s fake “The Fantastic Four: First Steps” trailer.
In early 2025, following protests from other YouTubers, YouTube temporarily suspended monetization for both channels and required them to label videos as “Parody” or “Concept Trailer.” After monetization resumed, however, the labels disappeared from popular videos, leaving viewers unable to tell real trailers from fakes.
YouTube Bans for Policy Violations, Google Faces AI Dilemma
YouTube ultimately banned the two channels outright for violating its policies against spam and misleading metadata.
The move also highlights the awkward position of YouTube’s parent company, Google: it actively develops and encourages creators to adopt generative AI tools such as Veo, while simultaneously having to fight the spam those same tools generate.
Balancing the promotion of new technology against the authenticity of platform content is a challenge Google must navigate carefully.
AI-Generated Content Floods, Medical Misinformation Misleads Elderly
The risks of AI videos are not limited to entertainment but have also penetrated the healthcare sector, posing potential dangers to middle-aged and elderly viewers.
As Crypto City previously reported, a YouTube channel called “Fountain of Wisdom” has been circulating in Taiwan. It appears to feature medical experts sharing health tips, but the “experts” cannot be verified and are entirely AI-generated; the channel has accumulated over 16 million views.
Some netizens noted that their elderly relatives love watching the channel and even take notes, raising concerns that it is shaping their health beliefs.
Image source: YouTube. Screenshot of the “Fountain of Wisdom” channel, whose AI-generated health experts have misled elderly viewers into believing false information.
As generative AI technology spreads, the barrier to producing fake content has dropped sharply: from fictional movie trailers to bogus medical advice, such material is widely amplified by recommendation algorithms.
In an environment where truth is ever harder to distinguish from fabrication, improving public media literacy has become as urgent as platform regulation.
Further reading:
43% of Taiwanese use AI! Confident in media literacy but only 10% frequently verify information