This article was clearly written by a human (and AI) but still has a few "LLMisms" such as:
- The key insight - [CoreML] doesn't XXX. It YYY.
With that being said, this is a highly informative article that I enjoyed thoroughly! :)
The article links to their own GitHub repo: https://github.com/maderix/ANE
What’s the intent of pointing out the presumed provenance in writing, now that LLMs are ubiquitous?
Is it like one of those “Morning” nods, where two people cross paths and acknowledge that it is in fact morning? Or is there an unstated preference being communicated?
Is there any real concern behind LLMs writing a piece, or is the concern that the human didn’t actually guide it? In other words, is the spirit of such comments really about LLM writing, or is it about human diligence?
That raises another question: does LLM writing expose anything about the diligence of the human, outside of when it's plainly incorrect? If an LLM generates a boringly correct report, what does that tell us about the human behind that LLM?
We've got about a year before so many people are interacting with LLMs on a daily basis that their style starts to infect human speech and writing in reverse.
Great insight – Would you like to try and identify some specific "AI-isms" that you've noticed creeping into your own writing or your colleagues' emails lately?
People are okay with using "delve" now.
That said, there were people who talked like this before LLMs; it didn't develop out of whole cloth.
The article above doesn't read well, at all.
It's not my subject, but it reads as a list of things. There's little exposition.
Gawd Damn LISTICLES!!!! And all of those articles that list in bullet points at the top of the article the summary of the article. And all of those people saying they don't want to read exposition, just give me the bullet points.
Exactly. LLMs are mimics.
People seem to be going around pointing out that people talk like parrots, when in reality it's parrots that talk like people.
I mean, it's both.
Did you develop your own whole language at any point to describe the entire world? No, you, me, and society mimic what is around us.
Humans have the advantage, at least at this point, of being a continuous learning device so we adapt and change with the language use around us.
It's already happened to me. I've started to have dreams where, instead of some sort of interpersonal struggle, the entire dream is just a chatbot UI viewport and I'm arguing with an LLM streaming the responses in. Which is super trippy when I become aware it's a dream. In the old days I'd dream about playing chess against myself and losing, which was quite a bizarre feeling because my brain was running both players. But that's totally normal compared to having my brain pretend to be an LLM inside a dream.
My honest take? You're probably right
You are absolutely right.
Here is why you are correct:
- I see what you did there.
- You are always right.
Also the Prior Art section, which has telltale repetition of useless verbs like "documenting," "providing insight into," and "confirming" on each line. This was definitely AI-written, at least in part.
Below are the items from that section. How should they be written to not look like an AI?
> hollance/neural-engine — Matthijs Hollemans’ comprehensive community documentation of ANE behavior, performance characteristics, and supported operations. The single best existing resource on ANE.
> mdaiter/ane — Early reverse engineering with working Python and Objective-C samples, documenting the ANECompiler framework and IOKit dispatch.
> eiln/ane — A reverse-engineered Linux driver for ANE (Asahi Linux project), providing insight into the kernel-level interface.
> apple/ml-ane-transformers — Apple’s own reference implementation of transformers optimized for ANE, confirming design patterns like channel-first layout and 1×1 conv preference.
The grammatical structure in the middle two is identical, and they're all similar in that way.
- "- Name - {Noun with modifiers} {comma} {verb-ing with modifiers}."
- "- Name - {Noun with modifiers} {comma} {verb-ing with modifiers}."
The phrasing is the same, which I notice sometimes happens in my own notes, but it's most noticeable when an LLM is asked to summarize items. An LLM-written job description (without major prompting) for a resume comes out the same way, in my experience. It's the simplest full-sentence grammar for describing what something is, and then what it does.
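The template is regular enough that you could sketch a naive matcher for it. Here's a rough illustration in Python; the regex and the sample lines are my own (not from the article), and a single pattern obviously can't really detect LLM style, it just shows how mechanical the structure is:

```python
import re

# Naive sketch of the "- Name - {Noun with modifiers}, {verb-ing with modifiers}."
# template described above. The pattern and sample lines are illustrative
# assumptions, not taken from the article.
TEMPLATE = re.compile(
    r"^\S+/\S+\s+-\s+"   # "owner/repo - "
    r"[A-Z][^,]+,\s+"    # capitalized noun phrase up to the comma
    r"\w+ing\b.*\.$"     # gerund ("documenting", "providing", ...) to the period
)

lines = [
    "mdaiter/ane - Early reverse engineering with working samples, documenting the ANECompiler framework.",
    "eiln/ane - A reverse-engineered Linux driver for ANE, providing insight into the kernel-level interface.",
    "hollance/neural-engine - Everything we actually know about the Apple Neural Engine (ANE)",
]

# The first two fit the template; the human-written third one does not.
matches = [bool(TEMPLATE.match(line)) for line in lines]
print(matches)  # [True, True, False]
```

The point isn't that the regex is a detector; it's that the "noun phrase, gerund phrase" shape is simple enough to match mechanically, which is exactly why it reads as machine-generated.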
If we used the developer's descriptions (from the github repo) to populate the info, it would look like this:
- hollance/neural-engine - Everything we actually know about the Apple Neural Engine (ANE)
- mdaiter/ane - Reverse engineered the Apple Neural Engine, with working Python and Objective C samples
- eiln/ane - Reverse engineered Linux driver for the Apple Neural Engine (ANE).
- apple/ml-ane-transformers - Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE)
IMO, it may not be as information-packed as the LLM list, but it is more interesting to read. I can tell, or at least think I can tell, that different individuals wrote each description, and it's what they wanted me to know most about their project.
If I were making a list of software during research (that would eventually turn into a report), the particular details I write down in the moment would differ depending on the solution I'm looking for, or the features a project has or doesn't have, will add or won't add. I don't try to summarize "the Whole Project" in one clean bullet point; I (or my readers) can re-read the repo for that, or glean it from surrounding context (presuming enough surrounding context was written). But unless I made an effort later to normalize the list, the grammar, length, and subpoints would vary from the form-identifiable "LLM Concise Summary." It's more work for me to write to a standard, and even more work to consciously pick one.
EDIT: Upon re-reading the article, I noticed the "Prior Art" section is written in the past tense, as I would expect. But the list is in the present tense. It feels like it jumps from "narrative" to "technical details list" back to "narrative". And the list is 70% of the section! I wouldn't mind reading a whole paragraph describing each project, what worked, what didn't, what they could use and what they couldn't, in the past tense, if it were interestingly written. Something that tells me the author dove into the previous projects, experimented with them, or interacted with the developers. Or something interesting the author noticed while surveying the "prior art". But "interestingly written" isn't really the LLM's goal, nor within its ability. Its goal is maximal information transfer with minimal word count. So the result is a list that smells like the author merely read the repo readmes and wrote a summary for the masses in a technical report.
tl;dr The list is just "a list", and that makes it not interesting to read. If it wasn't interesting to read, it probably wasn't interesting to write, which I take as a sign that an LLM wrote it.