by webdevver 10 hours ago

i mean there's kind of no way around it. how else are you gonna get the training data you need? the only way to bootstrap AI is to tag the data with bio-AI first (humans).

different companies 'launder' it differently: with voice, it was done via "accidental" voice assistant activations. i guess with glasses, maybe there will be less window dressing this time. after all, the product is clearly pitched as seeing what you see, at all times of the day.

a similar controversy happened with the various Roomba products, although arguably that was a combination of data harvesting + lazy engineering.

medi8r 5 hours ago

Lol! The "no way around it" defense. I'll have to remember that one.

dangus 10 hours ago

There are lots of ways around it, like adding a transparent “training mode” that a user can enable with consent, legitimately purchasing training data, etc.

The root cause is that Meta didn't want to pay fair market value for those videos, so it just took them from its users by burying permission in the TOS.

If they were honest about their intentions most people would say no or demand payment for providing something of value.

medi8r 5 hours ago

That would be good. A YC company is paying people to do exactly this. Since you know the data is being uploaded, you can avoid e.g. your kids coming into frame.

Really, it should just be in the UI: click "Upload this" and get 10c/minute or whatever for the video. Choose what you upload. That'd be closer in effect to using social media.