by echelon 3 hours ago

This is an experiment in data compression.

jazzyjackson 2 hours ago

Totally. Unfortunately it's not lossless and instead of just getting pixelated it's changing the size of body parts lol

glitchc 2 hours ago

Probably compression followed by regeneration during decompression. There's a brilliant technique called "Seam Carving" [1], invented two decades ago, that enables content-aware resizing of photos and can be sequentially applied to the frames of a video stream. It's used everywhere nowadays. It wouldn't surprise me if arbitrary enlargements are artifacts produced by such techniques.

[1] https://github.com/vivianhylee/seam-carving
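For the curious, the core of seam carving is a small dynamic program: compute a per-pixel energy, find the connected vertical path of minimum total energy, and delete it. A minimal NumPy sketch (my own toy version, not the linked repo's code):

```python
import numpy as np

def energy(img):
    # Gradient-magnitude energy: a cheap stand-in for the dual-gradient
    # energy in the original seam carving paper.
    gray = img.mean(axis=2) if img.ndim == 3 else img
    dy, dx = np.gradient(gray)
    return np.abs(dx) + np.abs(dy)

def find_vertical_seam(e):
    # DP: M[i, j] = e[i, j] + min of the three reachable cells above.
    h, w = e.shape
    M = e.copy()
    for i in range(1, h):
        left  = np.r_[np.inf, M[i - 1, :-1]]
        right = np.r_[M[i - 1, 1:], np.inf]
        M[i] += np.minimum(np.minimum(left, M[i - 1]), right)
    # Backtrack from the cheapest bottom cell.
    seam = [int(np.argmin(M[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(M[i, lo:hi])))
    return seam[::-1]  # seam[i] = column to remove in row i

def remove_seam(img, seam):
    h, w = img.shape[:2]
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1, *img.shape[2:])

# Shrink a toy image by one column; "content-aware" because the seam
# threads through low-energy (flat) regions.
img = np.random.rand(32, 48, 3)
out = remove_seam(img, find_vertical_seam(energy(img)))
print(out.shape)  # (32, 47, 3)
```

Repeating this (plus a horizontal counterpart) shrinks an image while preserving high-energy content, which is exactly why the failure mode is warped or resized body parts rather than uniform pixelation.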

jsheard 2 hours ago

What type of compression would change the relative scale of elements within an image? None that I'm aware of, and these platforms can't really make up new video codecs on the spot since hardware accelerated decoding is so essential for performance.

Excessive smoothing can be explained by compression, sure, but that's not the issue being raised here.

Aurornis 2 hours ago

> What type of compression would change the relative scale of elements within an image?

Video compression operates on macroblocks and calculates motion vectors of those macroblocks between frames.

When you push it to the limit, the macroblocks can appear like they're swimming around on screen.

Some decoders attempt to smooth out the boundaries between macroblocks and restore sharpness.

The giveaway is that the entire video is extremely low quality. The compression ratio is extreme.
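A toy version of the motion-vector search described above, assuming plain exhaustive block matching with a sum-of-absolute-differences cost (real encoders use much smarter search strategies, but the idea is the same):

```python
import numpy as np

def motion_vector(prev, cur, y, x, block=16, search=8):
    # Exhaustive block matching: find where the macroblock at (y, x)
    # in `cur` came from in `prev`, within +/-`search` pixels, by
    # minimizing the sum of absolute differences (SAD).
    target = cur[y:y + block, x:x + block]
    best, best_sad = (0, 0), np.inf
    h, w = prev.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + block > h or px + block > w:
                continue
            sad = np.abs(prev[py:py + block, px:px + block] - target).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best  # the encoder stores this vector plus a small residual

# Toy example: shift a frame 3 px right; the block's content is found
# 3 px to the left in the previous frame, so the offset is (0, -3).
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
cur = np.roll(prev, 3, axis=1)
print(motion_vector(prev, cur, 16, 16))  # (0, -3)
```

When the bitrate budget is tiny, the encoder keeps the vectors but throws away most of the residual, which is what makes macroblocks look like they're swimming around.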

echelon 2 hours ago

AI models are a form of compression.

Neural compression wouldn't be like HEVC, operating on frames and pixels. Rather, these techniques can encode entire features and optical flow, which can explain the larger discrepancies. Larger fingers, slightly misplaced items, etc.

Neural compression techniques reshape the image itself.

If you've ever input an image into `gpt-image-1` and asked it to output it again, you'll notice that it's 95% similar, but entire features might move around or average out toward the concept of what those items are.
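Nobody outside YouTube knows what (if anything) they run internally, but the "features drift" failure mode of representation-level compression is easy to demo with a stand-in. This toy uses truncated SVD rather than a neural codec, purely to illustrate that compressing into a global low-dimensional representation degrades the whole image's structure rather than producing blocky per-pixel artifacts:

```python
import numpy as np

# Toy illustration (not a real neural codec): compress an image by
# keeping only its k strongest SVD components. Like a learned latent
# code, the representation is global, so reconstruction error shows
# up as smeared structure, not localized blockiness.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 8  # store ~(64*8*2 + 8) numbers instead of 64*64 pixels
approx = (U[:, :k] * s[:k]) @ Vt[:k]
err = np.abs(img - approx).mean()
print(f"mean abs error at {k} components: {err:.3f}")
```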

justinclift an hour ago

The resources required for putting AI <something> inline in the input (upload) or output (download) chain would likely dwarf the resources needed for the non-AI approaches.

jsheard 2 hours ago

Maybe such a thing could exist in the future, but I don't think the idea that YouTube is already serving a secret neural video codec to clients is very plausible. There would be much clearer signs - dramatically higher CPU usage, and tools like yt-dlp running into bizarre undocumented streams that nothing is able to play.

planckscnst 2 hours ago

If they were using this compression for storage on the cache layer, it could allow them to keep more videos closer to where they're served, but they'd decode them back to webm or whatever before sending them to the client.

I don't think that's actually what's up, but I don't think it's completely ruled out either.

jsheard 2 hours ago

That doesn't sound worth it. Storage is cheap and encoding videos is expensive; caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.

LoganDark an hour ago

Storage gets less cheap for short-form tiktoks where the average rate of consumption is extremely high and the number of niches is extremely large.

echelon 2 hours ago

A new client-facing encoding scheme would break hardware decoder support, which in turn slows down everyone's experience, chews through battery life, etc. They won't serve it that way - there's no support in the field for it.

It looks like they're compressing the data before it gets further processed with the traditional suite of video codecs. They're relying on the traditional codecs to serve, but running some internal first pass to further compress the data they have to store.

plorg 2 hours ago

If any engineers think that's what they're doing they should be fired. More likely it's product managers who barely know what's going on in their departments except that there's a word "AI" pinging around that's good for their KPIs and keeps them from getting fired.

echelon 2 hours ago

> If any engineers think that's what they're doing they should be fired.

Seriously?

Then why is nobody in this thread suggesting what they're actually doing?

Everyone is accusing YouTube of "AI"ing the content with "AI".

What does that even mean?

Look at these people making these (at face value - hilarious, almost "Kool-Aid" levels of conspiratorial) accusations. All because "AI" is "evil" and "big corp" is "evil".

Use Occam's razor. Videos are expensive to store. Google gets 20 million videos a day.

I'm frankly shocked Google hasn't started deleting old garbage. They probably should start culling YouTube of cruft nobody watches.

asveikau 2 hours ago

Videos are expensive to store, but generative AI is expensive to run. That will cost them more than the storage allegedly saved.

To avoid adding compute-heavy processing to serving videos, they would need to cache the output of the AI, which uses up the storage you say they are saving.

echelon 2 hours ago

https://c3-neural-compression.github.io/

Google has already matched H.266. And this was over a year ago.

They've probably developed some really good models for this and are silently testing how people perceive them.

hatmanstack 2 hours ago

If you want insight into why they haven't deleted "old garbage" you might try, The Age of Surveillance Capitalism by Zuboff. Pretty enlightening.

echelon 2 hours ago

I'm pretty sure those 12 year olds uploading 24 hour long Sonic YouTube poops aren't creating value.

theendisney 15 minutes ago

1000 years from now those will be very important. A bit like we are now wondering what horrible food average/poor people ate 1000 years ago.

Groxx 2 hours ago

I largely agree, I think that probably is all that it is. And it looks like shit.

Though there is a LOT of room to subtly train many kinds of lossy compression systems, which COULD still imply they're doing this intentionally. And it looks like shit.

JumpCrisscross 2 hours ago

> This is an experiment

A legal experiment for sure. Hope everyone involved can clear their schedules for hearings in multiple jurisdictions for a few years.

echelon an hour ago

As soon as people start paying Google for the 30,000 hours of video uploaded every hour (2022 figure), they can dictate what forms of compression and lossiness Google uses to save money.

That doesn't include all of the transcoding and alternate formats stored, either.

People signing up to YouTube agree to Google's ToS.

Google doesn't even say they'll keep your videos. They reserve the right to delete them, transcode them, degrade them, use them in AI training, etc.

It's a free service.

theendisney 12 minutes ago

It's not the same when you publish something on my platform as when I publish something and put your name on it.

It is bad enough that we can deepfake anyone. If we also pretend it was uploaded by you, the sky is the limit.

habinero an hour ago

"They're free to do whatever they want with their own service" != "You can't criticize them for doing dumb things"

j45 41 minutes ago

It could be, but if the compression were a new codec, it would usually get talked about on a blog.
