by eleventyseven 17 hours ago

> Throughout this series, “we” refers to maderix (human) and Claude Opus 4.6 (by Anthropic) working as a pair. The reverse engineering, benchmarking, and training code were developed collaboratively

Sure, "collaboratively." Why would I ever trust a vibe-coded analysis? How do I, a non-expert in this niche, know that Opus isn't pulling a fast one on both of us? LLMs write convincing bullshit that fools even experts. Have you manually verified each fact in this piece? I doubt it. Thanks for the disclaimer, it saved me from having to read it.

brookst 3 hours ago

You’d feel better if it was two people you don’t know? Because obviously any random person is 100% accurate, never mistaken, never making shit up?

I don’t understand the mindset, I really don’t. Why are humans held to such a lower standard?

ezst 2 hours ago

Despite all the anthropomorphizing of LLMs, surely you've already come across how each has VERY DISTINCT failure modes?

michaelmrose 2 hours ago

Humans as a class are error-prone, but some humans in their respective fields are very, very good. It's often not terribly hard to figure out who those folks are from their resume and credentials, and as a shortcut we can look for markers like terminology, specifics, and confidence when the stakes are lower, say deciding what to read versus cancer care for your mom.

AI can trip all the right signals to fool these shortcuts while sometimes being entirely full of shit, and it has no resume or credentials to verify should we desire to check.

If you have such credentials and vouch for the work, I can consider your trustworthiness rather than its. If you admit you yourself are reliant on it, this no longer holds.

maderix 2 hours ago

Benchmarks are all in part 2; training progress is in part 3 (upcoming). Also, I think AI-human collaboration is important for goal management. Sure, LLMs bullshit all the time, but that's the role of the human: to create good goals and gating criteria for what constitutes good.

Anonbrit 17 hours ago

Humans also write endless amounts of convincing bullshit, and have done so since time immemorial. False papers and faked results were a growing scourge in academia long before LLMs were a thing, and that's just counting the intentional fraud; the reproducibility crisis in science, especially medical and psychological science, affects even the best-designed and well-intentioned of studies.

Humans also make mistakes and assumptions while reverse engineering, so the results will always need more engineers to go through them and test things.

withinboredom 17 hours ago

Claude likes to hide bad benchmarks from you and show you only the ones where you are clearly winning. You can even see some weird benchmarks in the article.

this-is-why 4 hours ago

Agreed. Now is our chance to start pushing back on this. Don't patronize it. I'm just glad the author admitted it. Next time they won't, tho.