by _fzslm 5 hours ago

Does publicly documenting and directly linking to vulnerable AI agents (that have goodness-knows-how-much access to sensitive user data) for anyone to exploit feel like responsible disclosure?

This could really ruin some people's day. A private message left on their agents to tip the owners off that those agents are vulnerable feels a lot less destructive.

monkpit 5 hours ago | [-2 more]

Be the change you want to see… it’s not like making this public changes much; anyone who wanted to exploit this could do it without this site.

duskdozer 4 hours ago | [-0 more]

Sure, someone could, if they thought to look and did look and compiled the same list. But this makes the work required to start a lot smaller.

JoBrad 3 hours ago | [-0 more]

But putting all of them in a tidy list definitely changes the value.

solid_fuel 4 hours ago | [-1 more]

Shodan has existed for at least a decade, and you can't create a cloud instance anywhere these days without it getting immediately crawled. Literally, I was setting up a VPS last week, and within 5 minutes of Caddy getting a cert from Let's Encrypt (which adds the hostname to the certificate transparency log), the access log lit up with dozens of requests per second, all requesting paths like `/wp-admin` and `/admin.cgi` and all sorts of other things, looking for vulnerable software.
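That scanning traffic is easy to spot in an access log. A minimal sketch of filtering for it (the log lines and the probe-path list are hypothetical illustrations, seeded with the `/wp-admin` and `/admin.cgi` paths mentioned above):

```python
import re

# Illustrative subset of paths that automated scanners commonly probe.
PROBE_PATTERNS = [
    r"/wp-admin",
    r"/admin\.cgi",
    r"/\.env",
    r"/phpmyadmin",
]
PROBE_RE = re.compile("|".join(PROBE_PATTERNS), re.IGNORECASE)

def probe_requests(log_lines):
    """Return the log lines whose request path matches a known probe pattern."""
    return [line for line in log_lines if PROBE_RE.search(line)]

# Hypothetical access-log lines in Common Log Format:
sample = [
    '203.0.113.7 - - [12/May/2025:10:01:02 +0000] "GET /wp-admin/setup.php HTTP/1.1" 404 0',
    '198.51.100.9 - - [12/May/2025:10:01:03 +0000] "GET /index.html HTTP/1.1" 200 512',
    '203.0.113.7 - - [12/May/2025:10:01:04 +0000] "GET /admin.cgi HTTP/1.1" 404 0',
]
print(len(probe_requests(sample)))  # 2 of the 3 lines are scanner probes
```

The point stands either way: the moment a hostname appears in a CT log, this kind of traffic starts, whether or not anyone publishes a list.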

I wouldn't call this _responsible_ disclosure, but setting up software that is known to be riddled with security holes and granting it both direct access to the internet and access to user data is, frankly, so irresponsible that it borders on negligence. If we had stronger standards for software engineering and IT, we would call it malpractice.
