The whole point of MyST is to provide a markdown-like alternative to rST. It literally has directives, roles, structural semantics, etc. It just doesn't have the unlearnable syntax of rST or the so-called governance of docutils (the de facto rST parser); see e.g. the discussion on https://github.com/sphinx-doc/sphinx/issues/8039 and linked issues.
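For the unfamiliar, here's roughly how the same admonition directive looks in each (a minimal sketch from memory; check the MyST docs for exact syntax and fence options):

rST:

    .. note::
       Same directive, different syntax.

MyST:

    ```{note}
    Same directive, different syntax.
    ```

Roles map over the same way, e.g. rST's :doc:`index` becomes {doc}`index` in MyST.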
The LessWrong vibes are strong. I'm still waiting for AI to escape and kill us (it will get stuck trying to import a library in Python).
I think people conflate thinking with sentience, consciousness, and a whole lot of other concerns.
If you call us "sicko freaks," then clearly this website is not for you; it shows a complete lack of curiosity.
If you are really curious, I invite you to read this cognitive science paper, "Modern Alchemy: Neurocognitive Reverse Engineering": https://philsci-archive.pitt.edu/25289/1/GuestEtAl2025.pdf
Note the quote at the top from Abeba Birhane:
> We can only presume to build machines like us once we see ourselves as machines first.
It reminds me of your comment that
> [LLMs] seem to think more than most people I know
and I have to say that I am really sad that you feel this way. I hope you can find better people to spend your time with.
You might find other recent papers from the first author interesting. Perhaps they will help you understand that there are a lot of deeply curious people in the world who are also really fucking sick of our entire culture being poisoned by intellectual e-waste from Silicon Valley.
Also, I ain’t gonna read your coffee table science book.
You can't even read posts clearly, so don't waste your time trying to finish your first book.
This is what happens when our entire culture revolves around the idea that computer programmers are the most special, smartest boys.
If you entertain, even for a second, the idea that a computer program that a human wrote is "thinking", then you don't understand basic facts about (1) computers, (2) humans, and (3) thinking. Our educational system has failed to inoculate you against this laughable idea.
A statistical model of language will always be a statistical model of language, and nothing more.
A computer will never think, because thinking is something that humans do because it helps them stay alive. Computers will never be alive. Unplug your computer, walk away for ten years, plug it back in: it will be fine. The only reason it wouldn't work is planned obsolescence.
No, I don't want to read your reply about the one time you wrote a prompt that got ChatGPT to whisper the secrets of the universe into your ear. We've known at least since Joseph Weizenbaum coded up ELIZA that humans will think a computer is alive if it talks to them. You are hard-wired to believe that anything that produces language is a human just like you. That seems like a bug, not a feature.
Stop commenting on Hacker News, turn off your phone, read this book, and tell all the other sicko freaks in your LessWrong cult to read it too: https://mitpress.mit.edu/9780262551328/a-drive-to-survive/ Then join a Buddhist monastery and spend a lifetime pondering how deeply wrong you were.
I hate to compare two left-ish people who yell for a living just because they're both British, but it kinda feels like Ed is going for a John Oliver type of delivery, which only really works when you have a whole team of writers behind you.
Good thread from the author on the paper (which @mindcrime shared when they posted this, but I had already read elsewhere): https://bsky.app/profile/pettertornberg.com/post/3lvpsdimbu2...