A makeup influencer I follow noticed that YouTube and Instagram are automatically applying filters to his face in his videos, without permission. If his content is about lip makeup, they make his lips enormous; if it's about eye makeup, the filters make his eyes gigantic. They appear to be using AI to detect the type of content and automatically apply filters.
https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...
The screenshots/videos of them doing it are pretty wild, and it's insane that they're editing creators' uploads without consent!
I can hear the ballpoint pens now…
This is going to be a huge legal fight, as the terms of service you agree to on these platforms amount to “they get to do whatever they want” (IANAL). Watch them try to spin this as a “user preference” that they just opted everyone into.
That’s the rude awakening creators get on these platforms. If you’re a writer or an artist or a musician, you own your work by default. But if you upload it to these platforms, they more or less own it. It’s there in the terms of service.
What if someone else uploads your work?
This is an experiment in data compression.
Totally. Unfortunately it's not lossless and instead of just getting pixelated it's changing the size of body parts lol
If any engineers think that's what they're doing they should be fired. More likely it's product managers who barely know what's going on in their departments except that there's a word "AI" pinging around that's good for their KPIs and keeps them from getting fired.
> If any engineers think that's what they're doing they should be fired.
Seriously?
Then why is nobody in this thread suggesting what they're actually doing?
Everyone is accusing YouTube of "AI"ing the content with "AI".
What does that even mean?
Look at these people making these (at face value, hilarious, almost "Kool-Aid" levels of conspiratorial) accusations. All because "AI" is "evil" and "big corp" is "evil".
Use Occam's razor. Videos are expensive to store. Google gets 20 million videos a day.
I'm frankly shocked Google hasn't started deleting old garbage. They probably should start culling YouTube of cruft nobody watches.
Videos are expensive to store, but generative AI is expensive to run. That would cost them more than the storage it allegedly saves.
What type of compression would change the relative scale of elements within an image? None that I'm aware of, and these platforms can't just make up new codecs on the spot since they rely on browser/hardware decoders.
Excessive smoothing can be explained by compression, sure, but that's not the issue being raised there.
This is ridiculous
Are these AI filters, or just applying high compression/recompressing with new algorithms (which look like smoothing out details)?
edit: here's the effect I'm talking about with lossy compression and adaptive quantization: https://cloudinary.com/blog/what_to_focus_on_in_image_compre...
The result is smoothing of skin, and applied heavily to video (as YouTube does; just look at any old video that was HD years ago) it would look this way
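For what it's worth, the "quantization smooths detail" effect is easy to demonstrate. Here's a toy numpy sketch of the 8x8 DCT transform-and-quantize step used by JPEG and most video codecs (the `keep` cutoff is a crude stand-in for a real quantization matrix): discarding high-frequency coefficients blurs texture, but it has no mechanism to move pixels around, i.e. it cannot enlarge an eye.

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis, as used (up to scaling) by JPEG and video codecs
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def compress_block(block: np.ndarray, keep: int = 3) -> np.ndarray:
    """Transform an 8x8 block, discard high-frequency coefficients, invert."""
    coef = C @ block @ C.T                      # forward 2D DCT
    k, l = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    coef[k + l >= keep] = 0.0                   # crude "quantization": drop high freqs
    return C.T @ coef @ C                       # inverse 2D DCT

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (N, N))             # noisy fine detail, e.g. skin texture
smoothed = compress_block(block)

# Fine detail (std) drops sharply, but the block's mean brightness survives,
# and no pixel has been relocated -- geometry is untouched.
print(round(block.std(), 1), "->", round(smoothed.std(), 1))
```

This is the counterpoint to the face-filter claims upthread: compression can make skin waxy and smeary, but it can't change the relative scale of features.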
It's filters, I posted an example of it below. Here is a link: https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...
It's very hard to tell in that Instagram video; it would be a lot clearer if someone overlaid the original unaltered video and the one viewers on YouTube are seeing.
That would presumably be an easy smoking gun for some content creator to produce.
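The smoking gun really would be cheap to produce. Assuming a creator can extract two same-resolution frames (say, with ffmpeg) from the original upload and from the served version, a few lines of numpy show where the alteration happened; the function name and the synthetic frames below are just illustrative.

```python
import numpy as np

def diff_heatmap(original: np.ndarray, processed: np.ndarray, thresh: int = 10) -> np.ndarray:
    """Boolean mask of pixels whose absolute difference exceeds `thresh`.

    A face filter that enlarges features shifts pixels geometrically, so the
    mask lights up in blobs around the altered regions; plain recompression
    instead produces small differences scattered uniformly across the frame.
    """
    if original.shape != processed.shape:
        raise ValueError("frames must be the same resolution")
    delta = np.abs(original.astype(np.int16) - processed.astype(np.int16))
    return delta.max(axis=-1) > thresh          # collapse the RGB channels

# Synthetic demo: a flat grey frame with one region altered by a "filter"
frame_a = np.full((120, 160, 3), 128, dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[40:80, 60:100] += 40                    # pretend a filter warped this block

mask = diff_heatmap(frame_a, frame_b)
print(mask.sum(), "pixels changed")
```

On real footage you'd threshold a little higher to ignore codec noise, but a concentrated blob over the eyes or lips versus uniform speckle would settle the filter-vs-compression argument immediately.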
There are heavy alterations in that link, but having not seen the original, and in this format it's not clear to me how they compare.
You can literally see the filters turn on and off, making his eyes and lips bigger as he moves his face. It's clearly a face filter.
The time of giving these corps the benefit of the doubt is over.
The examples shown in the links are not filters for aesthetics. These are clearly experiments in data compression.
These people are having a moral crusade against an unannounced Google data compression test thinking Google is using AI to "enhance their videos". (Did they ever stop to ask themselves why or to what end?)
This level of AI paranoia is getting annoying. This is clearly just Google trying to save money. Not undermine reality or whatever vague Orwellian thing they're being accused of.
Why would data compression make his eyes bigger?
Talking about AI, Google, and shady tactics, I wouldn't be surprised if we soon discover they are purposefully adding video glitches (deformed characters and so on) in the first handful of iterations when using Veo video generation, just so people get used to trying 3 or 4 times before getting a good one.
Well, the current models that charge per output token sure love wasting those tokens on telling me how I'm the greatest human being ever, who only asks questions that get to the very heart of $SUBJECT.
The AI filter applied server-side to YouTube Shorts (and only Shorts, not regular videos) is horrible, and it feels like it must be a case of deliberately boiling the frog. If everyone gets used to overly smooth skin, weirdly pronounced wrinkles, waxy hair, and strange ringing around moving objects, then AI-generated content will stand out less when they start injecting it into the feed. At first I thought this must be some client-side upscaling filter, but tragically it is not. There's no data savings at all, and there's no way for uploaders or viewers to turn it off. I guess I wasn't cynical enough.
Do you know how much data YouTube has to store at that scale?
Google isn't enhancing anything. They're compressing it.
Compressing videos could save Google an extremely large amount of money at YouTube scale.
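A back-of-envelope check, using the 20-million-videos-a-day figure from upthread; the per-video size and storage price are my own assumptions, not sourced numbers:

```python
# Rough storage math -- all inputs are assumptions except videos_per_day,
# which is the figure claimed elsewhere in this thread.
videos_per_day = 20_000_000
avg_size_gb = 0.5            # assumed average stored size per video
cost_per_gb_month = 0.02     # assumed rough cold-storage price, USD

daily_ingest_gb = videos_per_day * avg_size_gb
monthly_cost_one_day = daily_ingest_gb * cost_per_gb_month
print(f"{daily_ingest_gb / 1e6:.0f} PB/day; "
      f"${monthly_cost_one_day:,.0f}/month just to keep one day's uploads")
```

Even under these conservative assumptions the ingest is petabytes per day, and the cost compounds because every day's uploads must be stored forever, so even a modest percentage saving from better compression is real money.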
Most of these shorts aren't even viewed more than a few times.
I’ve been saying for a while that the end game for addictive short form chum feeds like TikTok and YouTube Shorts is to drop human creators entirely. They’ll be AI generated slop feeds that people will scroll, and scroll, and scroll. Basically just a never ending feed of brain rot and ads.
Yes, but what happens when the AIs themselves begin to brainrot (as happens when they are not fed their usual sustenance of information from humans and the real world)?
They're heating the garbage slightly before serving it? Oh no.
There are entire fake-persona videos these days. Leading scientists, economists, politicians, and tech figures are being impersonated wholesale on YouTube.
I learned to ignore the AI summaries after the first time I saw one that described the exact OPPOSITE conclusion/stance of the video it purported to summarize.
What’s the point of doing this?
I don't understand the justification for the expense or complexity.
Every YT short looks AI-ified and creepy now
What PM thought this was a good idea? This has to be the result of some braindead "we need more AI in the product" mandate.
I really hate all the AI filters in videos. It makes everyone look like fake humans. I find it hard to believe that anyone would actually prefer this.
The citation chain for these Mastodon reposts resolves to the Gamers Nexus piece on YouTube: https://www.youtube.com/watch?v=MrwJgDHJJoE
I’ve also noticed YouTube has unbanned many channels that were previously banned for overt supremacist and racist content. They get amplified a lot more now. Between that and AI slop, I feel like Google is speed running the changes X made over the last few years.
"Making AI edits to videos" strikes me as a bit of an exaggeration; it might lead you to think they're actually editing videos rather than simply... post-processing them[1].
That being said, I don't believe they should be doing anything like this without the creator's explicit consent. I do personally think there's probably a good use case for machine learning / neural network tech applied to the clean up of low-quality sources (for better transcoding that doesn't accumulate errors & therefore wastes bitrate), in the same way that RTX Video Super Resolution can do some impressive deblocking & upscaling magic[2] on Windows. But clearly they are completely missing the mark with whatever experiment they were running there.
[1] https://www.ynetnews.com/tech-and-digital/article/bj1qbwcklg
[2] compare https://i.imgur.com/U6vzssS.png & https://i.imgur.com/x63o8WQ.jpeg (upscaled 360p)
Please allow me to "post-process" your comment a bit. Let me know if I'm doing this right.
> "Making AI edits to videos" strikes me as something particularly egregious; it leads a viewer to see a reality that never existed, and that the creator never intended.
It's not post-processing, they are applying actual filters, here is an example they make his eyes and lips bigger: https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...
Sure, but that's not YouTube. That's Instagram. He says so at 1:30.
YouTube is not applying any "face filters" or anything of the sort. They did however experiment with AI upscaling the entire image which is giving the classic "bad upscale" smeary look.
Like I said, I think that's still bad and they should have never done it without the clear explicit consent of the creator. But that is, IMO, very different and considerably less bad than changing someone's face specifically.
His followers also added screenshots of YouTube Shorts doing it. He says he reached out to both platforms, will report back with an update from their customer service, and is doing some compare-and-contrast testing for his audience.
Here are some other creators also talking about it happening in YouTube Shorts: https://www.reddit.com/r/BeautyGuruChatter/comments/1notyzo/...
another example: https://www.youtube.com/watch?v=tjnQ-s7LW-g
https://www.reddit.com/r/youtube/comments/1mw0tuz/youtube_is...
https://www.bbc.com/future/article/20250822-youtube-is-using...
> Here's some other creators also talking about it happening in youtube shorts (...)
If you open the context of the comment, they are specifically talking about the bad, entire-image upscaling that gives the entire picture the oily smeary look. NOT face filters.
EDIT: same thing with the two other links you edited into your comment while I was typing my reply.
Again, I'm not defending YouTube for this. But I also don't think they should be accused of doing something they're not doing. Face filters without consent are a far, far worse offense than bad upscaling.
I would like to urge you to be more cautious, and to actually read what you brandish as proof.