by reducesuffering 8 hours ago

"AI could never replace the creativity of a human"

"Ok, I guess it could wipe out the economic demand for digital art, but it could never do all the autonomous tasks of a project manager"

"Ok, I guess it could automate most of that away but there will always be a need for a human engineer to steer it and deal with the nuances of code"

"Ok, well it could never automate blue collar work, how is it gonna wrench a pipe it doesn't have hands"

The goalposts will continue to move until we have no idea if the comments are real anymore.

Remember when the Turing test was a thing? No one seems to remember it was considered serious in 2020

blargey 5 hours ago | [-0 more]

> "the creativity of a human"

> "the economic demand for digital art"

You twisted one "goalpost" into a tangential thing in your first "example", and it still wasn't true, so idk what you're going for. "Using a wrench vs preliminary layout draft" is even worse.

If one attempted to make a productive observation of the past few years of AI Discourse, it might be that "AI" capabilities are shaped in a very odd way that does not cleanly overlap/occupy the conceptual spaces we normally think of as demonstrations of "human intelligence". Like taking a 2-dimensional cross-section of the overlap of two twisty pool tubes and trying to prove a Point with it. Yet people continue to do so, because such myopic snapshots are a goldmine of contradictory venn diagrams, and if Discourse in general for the past decade has proven anything, it's that nuance is for losers.

semi-extrinsic 7 hours ago | [-2 more]

> Remember when the Turing test was a thing? No one seems to remember it was considered serious in 2020

To be clear, it's only ever been a pop science belief that the Turing test was proposed as a literal benchmark. E.g. Chomsky in 1995 wrote:

  The question “Can machines think?” is not a question of fact but one of language, and Turing himself observed that the question is 'too meaningless to deserve discussion'.
throw310822 6 hours ago | [-1 more]

The Turing test is a literal benchmark. Its purpose was to replace an ill-posed question (what does it mean to ask if a machine could "think", when we don't know ourselves what this means- and given that the subjective experience of the machine is unknowable in any case) with a question about the product of this process we call "thinking". That is, if a machine can satisfactorily imitate the output of a human brain, then what it does is at least equivalent to thinking.

"I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

staticman2 5 hours ago | [-0 more]

Turing seems to be saying several things. He writes:

>If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd.

This anticipates the very modern social media discussion where someone has nothing substantive to say on the topic but delights in showing off their preferred definition of a word.

For example someone shows up in a discussion of LLMs to say:

"Humans and machines both use tokens".

This would be true as long as you choose a sufficiently broad definition of "token" but tells us nothing substantive about either Humans or LLMs.

Fraterkes 7 hours ago | [-1 more]

The turing test is still a thing. No llm could pass for a person for more than a couple minutes of chatting. That’s a world of difference compared to a decade ago, but I would emphatically not call that “passing the turing test”

Also, none of the other things you mentioned have actually happened. Don’t really know why I bother responding to this stuff

phainopepla2 6 hours ago | [-0 more]

> No llm could pass for a person for more than a couple minutes of chatting

I strongly doubt this. If you gave it an appropriate system prompt with instructions and examples on how to speak in a certain way (something different from typical slop, like the way a teenager chats on discord or something), I'm quite sure it could fool the majority of people

golem14 2 hours ago | [-0 more]
webdood90 8 hours ago | [-2 more]

> blue collar work

I don't think it's fair to qualify this as blue collar work

knollimar 8 hours ago | [-0 more]

I'm double replying to you since the replies are disparate subthreads. This is the necessary step so the robots who can turn wrenches know how to turn them. Those are near useless without perfect automated models.

Anything like this willl have trouble getting adopted since you'd need these to work with imperfect humans, which becomes way harder. You could bankroll a whole team of subcontractors (e.g. all trades) using that, but you would have one big liability.

The upper end of the complexity is similar to EDA in difficulty, imo. Complete with "use other layers for routing" problems.

I feel safer here than in programming. The senior guys won't be automated out any time soon, but I worry for Indian drafting firms without trade knowledge; the handholding I give them might go to an LLM soon.

knollimar 8 hours ago | [-0 more]

It is definitely not. Entry pay is 60k and the senior guys I know make about 200k in HCoL areas. A few wear white dress shirts every day.