matt-bornstein's commits in that repo do often start off with ai-generated descriptions which he then edits down. there are notes on some commits that say things like "AI GENERATED NEED TO EDIT". the other contributors' changes don't have these tells.
while it should come as no surprise to have software written by llms, if these books are in fact just picked by llms then what's the point of this list?
Stephenson doesn't just write sci-fi, he writes operating manuals for the future. His books predicted cryptocurrency, the metaverse, and distributed computing before most of us knew what TCP/IP stood for. Warning: his endings are notoriously abrupt, like a segfault in the middle of your favorite function.
This really is a study in AI slop. At least they had the good sense to change it.
I wish I had as many positive experiences as it seems some other HNers have with LLMs. I'm not saying I've had zero positive experiences but the number of negative experiences is so high that it's just super scary.
Yesterday, Thanksgiving, there was a Google Doodle. Clicking the doodle led to a Gemini prompt for planning to have Thanksgiving dinner ready on time. It gave a schedule with lots of prep the day before and then a timeline for the day of. It had cooking the dinner rolls and then said something like "take them out and keep them warm", followed by cooking something else in the oven. I asked, "How do I keep them warm when something else is cooking in the oven?" It proceeded to give me a revised timeline that contradicted its original timeline and also made no sense in and of itself. I asked it about the contradiction and the error, and it apologized and gave a completely new third timeline that was different from the first two and also nonsense. This was Google's Gemini promotion!
All it really needed to do with my first query was say something like "put a towel over the rolls and leave them on top of the oven".... Maybe? But then, it had told me to spread butter over the rolls as soon as they came out of the oven, so I'd have asked, "won't the towel suck up all the butter?"
This is one example of the many times LLMs fail me (ChatGPT, Gemini). For direct code gen, my subjective experience is that it fails 5 times out of 6. For Stack Overflow-type questions it succeeds 5 times out of 6. For non-code questions it depends on the type of question. But when it fails, it fails so badly that I'm somewhat surprised it ever works.
And yeah, the whole world is running headfirst into massive LLM usage like this list, using it for short reviews of authors. Ugh!!!
It seems to me, most LLM fans are impressed by glancing at a result ("It works!") and never really think about the flaws of the answer or look at the code in detail.
It's truly remarkable that Google put an absurd Wrong Answers Only generator in front of their primary cash cow 18 months ago, and in that time their share price has nearly doubled.
It's wrong nearly every time I search for anything. Ironically, in writing this comment, I tried asking it for the GOOG share price the day before AI Overviews launched, and it got that wrong too.
> I'm not saying I've had zero positive experiences but the number of negative experiences is so high that it's just super scary.
Just for shits and giggles I decided to let Copilot (whatever the default in vscode is) write a Makefile for a simple avr-gcc project. I can't remember what the prompt I gave it was, but it was something along the lines of "given this makefile that is old but works, write a new makefile for this project that doesn't have one" and a link to a simple gist I wrote years ago.
Fuuuuuuuuck me.
It's 2500 lines long. It's not just bigger than the codebase it's supposed to build, it's just about bigger than all the C files in all the avr-gcc projects in that entire chunk of my ~/devel/ directory. I couldn't even begin to make sense of what it's trying to do.
It looks mostly like it's declaring variables over and over, concatenating more bits on as it goes. I don't know for sure though. I won't be using it.
Make is a great language, but very few people know it, or care about knowing it. The vast majority of makefiles are automatically written by garbage such as automake. They are exactly as you described - reams of repetitive nonsense. That's going to be the training data for the LLMs, so no wonder they write bad makefiles.
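For contrast, a hand-written Makefile for a small avr-gcc project like the one described doesn't need to be more than a couple dozen lines. A rough sketch (the MCU, clock speed, and avrdude programmer settings here are placeholders, not anything from the commenter's actual gist):

```make
# Minimal hand-written Makefile for a small avr-gcc project.
# MCU, F_CPU, and the programmer (-c arduino) are placeholders.
MCU    = atmega328p
F_CPU  = 16000000UL
TARGET = main
SRC    = $(wildcard *.c)
OBJ    = $(SRC:.c=.o)

CC     = avr-gcc
CFLAGS = -mmcu=$(MCU) -DF_CPU=$(F_CPU) -Os -Wall

all: $(TARGET).hex

# Link all objects into an ELF binary
$(TARGET).elf: $(OBJ)
	$(CC) $(CFLAGS) -o $@ $^

# Compile each .c into a .o via a pattern rule
%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Convert the ELF to an Intel HEX image for flashing
$(TARGET).hex: $(TARGET).elf
	avr-objcopy -O ihex -R .eeprom $< $@

flash: $(TARGET).hex
	avrdude -p $(MCU) -c arduino -U flash:w:$<

clean:
	rm -f $(OBJ) $(TARGET).elf $(TARGET).hex

.PHONY: all flash clean
```

The whole thing is pattern rules and automatic variables ($@, $<, $^) — exactly the idioms that rarely show up in autogenerated makefiles, which spell everything out verbatim instead.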
We know that being a billionaire surrounded by yes-men all day causes brain damage, and we know that being on social media living in a delusion bubble all day causes brain damage, so really they were already cooked even before signing what was left of their brains over to the LLMs.
It's such an interesting arc. I started university in Sept '94, super excited to try out Mosaic on a T1-class connection after suffering through my 14.4k home modem. And shortly after I arrived, Netscape dropped.
He was an absolute hero of that era, possibly the most admired 'geek' back then. Young, with hair, with no hints of his future Dr. Evil emergence.
Dear Neal Stephenson: thanks for actually ending your well-thunk writings with complete sentences/thoughts.
----
I just finished Dave Wallace's 520-page PhD thesis, his first novel, The Broom of the System, which literally ends with a liar proclaiming:
>I am a man of my
( "word" is presumed to follow, but another DFW book which just [abruptly!?!] ends )
Like his other two novels (Infinite Jest & Pale King), Broom is an ensemble of disconnected characters, with no clear destination nor moral lessons navigated in a few-hundred-too-many pages — just raw human condition. Very powerful writing style, but with no executive function.
Now that I've read 2000+ pages of David Foster Wallace, I will continue NOT recommending his novels to anybody (this is the same review I gave after IJ and PK). DFW was definitely a powerful thinker/writer, but he should have stuck to his shorter non-fiction meanderings.
----
After writing all of the above, I clanked around with the topic of incomplete sentences ending books:
https://www.perplexity.ai/search/broom-ends-with-an-incomple...
>Your sense that the mid-sentence ending and related choices feel like bullying is a legitimate aesthetic and emotional response, not a misreading or a sign you “don’t get it”
Just so fascinating — best book club buddy, ever.
About half-way through I had to resist the urge to skip to the end to see if he did that. An opportunity lost.
I'll admit, of the few books of his I've read, I always felt like they ended a couple of chapters too soon or a couple of chapters too late — which has put me off reading more of his books despite some interesting premises. I suspect some of the deeper themes are lost on me in my bedtime readings. Just not my cup of tea at the end of the day, literally.
Reminds me of Werner Herzog's autobiography. In the introduction, he muses on a life being cut short by a sniper's bullet. When he sees a bird flying past his window as he is writing the book, he imagines it is a bullet and thinks it would be a nice device to cut his final chapter short at that exact moment, so he gives fair warning that the book will end abruptly.
And so it does, but in a totally Herzog moment he then almost immediately intones afterwards "and that is the end of the book as I indicated in the foreword".
Of course the irony is that if a big corporation publishes a year-end reading list, it has the implicit message of "hey, we are not just a group of boring corporate robots - we're people, with real feelings, and hobbies like reading, and taste."
And now we realize that this is just a PR charade. They might not be people with hobbies like reading, and taste.
It's definitely written by an AI. The description of The Hitchhiker's Guide to the Galaxy ends with "[...]the meaning of life. Which turns out to be an integer." No one would bother writing that.
All of the descriptions on that reading list give me strong LLM vibes. Which, given the source, seems like it should be expected. This post could have stopped after hypothesis 1.
I agree it is not really controversial, I don't think any other explanation is credible. And it really calls into question their assertion that at least one person there has read every book on the list. They love these books, yet no one there cared enough to write a few sentences about them?
https://github.com/a16z-infra/reading-list/commit/93bc3abb04...
> opus descriptions in cursor, raw
> Warning: his endings are notoriously abrupt, like a segfault in the middle of your favorite function.
In commit e4d022[0], the wording changed to:
> Fair warning: most of these books famously don't have endings (they literally stop mid-sentence during a normal plot arc).
It's unclear what led to that change, as the commit message is just "stephenson".
It went through a few more minor edits to get to what's currently published.
https://github.com/a16z-infra/reading-list/commit/e4d022d592...
> [THIS IS AI GENERATED, NEED TO EDIT] The manga that asked [...]
They do at least have "NEED TO EDIT" in there, but this prose was openly generated by AI as a starting point.
Used Claude to fact-check and fix errors that were likely introduced by Cursor.
The circle is complete.
AI is eating the VCs.