YouTube Secretly Used AI to Alter Creators' Videos - What Happened
YouTube was found applying AI-based enhancements to user-uploaded videos without creators' knowledge or consent, sparking concerns about authenticity, creator rights, and the broader implications for AI governance.

YouTube was found applying AI-based enhancements to user-uploaded videos without creators' knowledge or consent. These alterations included subtle but noticeable changes such as smoothing textures, altering lighting, and over-sharpening, creating an "oil painting" or artificially enhanced look.
Creators Raise the Alarm
- Rick Beato (5M+ subscribers) noticed his videos looked strange — hair textures altered, makeup-like effects.
- Rhett Shull (700k+ subscribers) documented what he called "non-consensual AI upscaling" that misrepresented his work. He argued it looked AI-generated and distorted his creative intent.
YouTube's Response
After speculation, YouTube confirmed the practice but framed it as a "limited experiment" applied to select Shorts.
According to spokesperson Rene Ritchie, the changes involved traditional machine learning (not generative AI) to denoise, unblur, and enhance clarity — akin to smartphone auto-processing.
YouTube did not clarify how many creators were affected or whether users would have the ability to disable the modifications.
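YouTube has not disclosed its actual pipeline, but the processing Ritchie described (classic denoising and sharpening rather than generative AI) can be illustrated with a minimal sketch. The function name, parameters, and settings below are illustrative assumptions, not YouTube's implementation; aggressive values for this kind of filter chain are one plausible source of the smoothed-yet-over-sharpened "oil painting" look creators reported.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def auto_enhance(frame: np.ndarray, denoise_sigma: float = 1.5,
                 sharpen_amount: float = 1.2) -> np.ndarray:
    """Illustrative denoise-then-sharpen chain (NOT YouTube's code).

    Step 1: Gaussian blur suppresses noise but also smooths fine
    texture (hair, skin detail).
    Step 2: An unsharp mask re-boosts edges; too high an amount
    over-sharpens, producing halo artifacts.
    """
    denoised = gaussian_filter(frame.astype(np.float64), sigma=denoise_sigma)
    blurred = gaussian_filter(denoised, sigma=denoise_sigma)
    # Unsharp mask: add back a scaled copy of the detail layer.
    sharpened = denoised + sharpen_amount * (denoised - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# A flat gray frame with one bright noisy pixel: denoising spreads
# the spike out, then sharpening exaggerates the edges around it.
frame = np.full((8, 8), 128, dtype=np.uint8)
frame[4, 4] = 255
out = auto_enhance(frame)
```

The point of the sketch is that even "traditional machine learning" or classical filtering alters pixel data irreversibly, which is why creators objected to it being applied without an opt-out.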
Industry Concerns
The revelations sparked broad backlash over issues of consent, authenticity, and creator rights.
Critics called the practice "theft" and "disrespectful," noting the irony of YouTube cracking down on AI-generated spam while itself applying AI modifications to legitimate content.
Experts warned that this may accelerate AI-driven alterations across platforms, further eroding trust and raising fears that AI could "alter reality forever" without user consent.
Analysis in Context of AI & Cybersecurity
This episode highlights three major fault lines:
1. Consent & Transparency Gap
AI was applied invisibly, eroding creator trust. This mirrors broader cybersecurity issues where systems deploy AI without user awareness, undermining accountability.
2. Authenticity & Integrity
The alterations show how AI can distort original data (in this case, video). In cybersecurity, similar risks exist when AI models reshape, sanitize, or redact data streams, leaving enterprises blind to true signals.
3. Asymmetry of Power
Just as enterprises depend on legacy security vendors, creators depend on platforms. When incumbents unilaterally apply AI, both groups face exposure to risks they cannot control.
Conclusion
This case underscores why AI defences must move from post-hoc overlays to built-in governance and control mechanisms — whether in creative ecosystems or cybersecurity infrastructures.