Readit News
kesor commented on Phoenix: A modern X server written from scratch in Zig   git.dec05eba.com/phoenix/... · Posted by u/snvzz
grim_io · 2 months ago
Linux on the desktop only took off because Ubuntu, with mixed results and a lot of controversy, decided to standardize and polish the experience for "normies".

The distribution sprawl I largely see as a detriment to the ecosystem.

kesor · 2 months ago
I would argue that Desktop Linux finally took off because of Steam Proton, and because of Windows 10/11 and macOS starting with version fartascular or whatever their versions are named.
kesor commented on OpenSCAD is kinda neat   nuxx.net/blog/2025/12/20/... · Posted by u/c0nsumer
jandrese · 2 months ago
> You can even load an existing 3D mesh and operate on it as an SDF. Great for hollowing, chopping, eroding/dilating, etc. existing models.

This has my instant interest. Multiple times I have wanted to take an existing .STL file and cut a hole in it or add another object to it, and I have never had success.

I've tried things like Meshlab, but while the interface has what appears to be a hundred different functions, attempting to use anything returns some error code that requires a PhD to understand and none of the "repair" functions seem to help.

I mean seriously: Mesh inputs must induce a piecewise constant winding number field.

How the hell am I supposed to accomplish that on an STL file?

kesor · 2 months ago
OpenSCAD can load STLs and cut holes in them.
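For example, a minimal sketch (the file name, hole position, and cutter size below are placeholders for your own part):

    // Cut a 5 mm hole through an imported STL by subtracting a cylinder.
    // "model.stl" and the coordinates are placeholders; adjust them to your model.
    difference() {
        import("model.stl");
        translate([10, 10, -1])                // position the cutter where the hole should go
            cylinder(h = 50, d = 5, $fn = 64); // tall enough to pass all the way through
    }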
kesor commented on If you're going to vibe code, why not do it in C?   stephenramsay.net/posts/v... · Posted by u/sramsay
didibus · 2 months ago
> Why would you think it's more complex?

Binary code takes more space, and both training and inference are highly capped by memory and context sizes.

Models tokenize to a limited set of tokens, and then learn relations between those. I can't say for sure, but I feel it would be more challenging to find tokenization schemes for binary code and to learn their relationships.

The model needs to first learn human language really well, because it has to understand the prompt and map it accurately to the binary code. That means the corpus will need to include a lot of human language alongside the binary code, and I wonder whether the fact that they differ so much would interfere with the learning.

I think coming up with a corpus that maps human language to binary code will be really challenging, unless we can include the original code's comments at appropriate places around the binary code and so on.

Binary code is machine dependent, so it would result in programs that aren't portable between architectures and operating systems. The model would need to learn more than one binary format and be able to accurately generate the same program for different target platforms and OSes.

> Who said that creating bits efficiently from English to be computed by CPUs or GPUs must be done with a transformer architecture?

We've never had any other method do as well, and the gap is an order of magnitude. We may invent a whole new way in the future, but as of now, it's the absolute best method we've ever figured out.

> The AI model architecture is not the focus of the discussion. It is the possibility of how it could look if we ask for some computation and that computation appears without all the middleman layers we have right now: English->Model->Computation, not English->Model->DSL->Compiler->Linker->Computation.

Each layer simplifies the task of the layer above. These aren't like business layers that take a cut of the value at each level; software layers remove complexity from the layers above.

I don't know why we wouldn't be talking about AI models. Isn't the topic that it may be more optimal for an AI model to be trained on binary code directly and to generate binary code directly? At least that's what I was talking about.

So, sticking to AI models: with LLMs and image/video diffusion and such, we've already observed that inference through smaller steps and chains of inference works way better. Based on that, I feel it's likely that going from human language to binary code in a single hop would also work worse.

kesor · 2 months ago
Diffusion models for images are already pretty much binary code generators. And we don't need to treat each bit individually; even in binary code there are whole segments that can be tokenized into a single token.
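As a rough illustration of that last point (a toy sketch, not a real tokenizer: greedy byte-pair merging over a few x86-64 instruction bytes, with the byte string and merge count chosen arbitrarily):

    # Toy sketch: greedy byte-pair merging over raw machine code, to show that
    # recurring multi-byte sequences in a binary can collapse into single tokens.
    from collections import Counter

    def merge_most_common_pair(tokens):
        """Replace every occurrence of the most frequent adjacent pair with one merged token."""
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            return tokens, None
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            return tokens, None
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)  # concatenate the two byte sequences into one token
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        return merged, (a, b)

    # Start from single bytes, then merge repeatedly (as BPE-style text tokenizers do).
    # The bytes are a few x86-64 function prologues (55 48 89 e5 ...) and a ret (c3).
    code = bytes.fromhex("554889e54883ec10554889e54883ec20554889e5c3")
    tokens = [bytes([b]) for b in code]
    for _ in range(8):
        tokens, pair = merge_most_common_pair(tokens)
        if pair is None:
            break
    print([t.hex() for t in tokens])  # repeated instruction sequences end up as single tokens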

Regarding training, we have many binaries all around us, and for many of them we also have the source code in whichever language. As a first step, we can use the original source code and ask a third-party model to explain what it does in English, then use this English to train the binary programmer model. Eventually the binary programmer model can understand binaries directly and translate them to English for its own use, so with time we might not even need binaries that have source code; we could narrate binaries directly.

kesor commented on If you're going to vibe code, why not do it in C?   stephenramsay.net/posts/v... · Posted by u/sramsay
didibus · 2 months ago
Because the learned function to generate binary code is likely more complex than that for Python.

I admit I can't say for sure until we try it. If someone were to train a model at the same scale, on the same amount of raw binary code as we train these models on raw language and code, would it perform better at generating working programs? Thing is, it would now fail to understand human language prompts.

From what I know and understand though, it seems like it would be more complex to achieve.

My meta point is: you shouldn't think of it as what a computer would most likely understand, because we're not talking about a CPU/GPU. You have to ask what a transformer-architecture deep neural net would better learn and infer: Python or binary code? And from that lens, I think it's more likely Python.

kesor · 2 months ago
Why would you think it's more complex? There are fewer permutations of transistor on/off states than there are different programming languages in use that result in the exact same bits.

Who said that creating bits efficiently from English to be computed by CPUs or GPUs must be done with a transformer architecture? Maybe it can be; maybe there are other ways of doing it that are better. The AI model architecture is not the focus of the discussion. It is the possibility of how it could look if we ask for some computation and that computation appears without all the middleman layers we have right now: English->Model->Computation, not English->Model->DSL->Compiler->Linker->Computation.

kesor commented on If you're going to vibe code, why not do it in C?   stephenramsay.net/posts/v... · Posted by u/sramsay
29athrowaway · 2 months ago
Do it in Ada, SPARK, Zig, Rust, Pascal, Crystal, etc.

Unless it's an existing project where migration is too costly, choosing C is just entering a time-wasting pact along with a lot of other people who like suffering for free.

kesor · 2 months ago
You are missing the whole point of the article.
kesor commented on If you're going to vibe code, why not do it in C?   stephenramsay.net/posts/v... · Posted by u/sramsay
barrister · 2 months ago
This author, like many others on this site, seems to imply that AI generates "good" code, but it absolutely does not, unless he's running some million-dollar subscription model I'm unaware of. I've tested every AI using simple JavaScript programs and they all produce erroneous spaghetti slop. I did discover that Claude produces sufficiently decent Haskell code. The point is that the iterative process requires you to know the language, because you're going to need to amend the code. Therefore, vibe in the language you know. Anyone who suggests that AI can produce a solid application on its own is a fraud.
kesor · 2 months ago
You, like many others, seem to imply that humans write "good" code, but they absolutely do not, unless they are running some million-dollar team with checks and cross-checks and years of evolving the code over failures. I've tested every junior developer using simple JavaScript leetcode quizzes and they all produce erroneous spaghetti slop.

The difference is, we forgive humans for needing iteration. We expect them to get it wrong first, improve with feedback, and learn through debugging. But when AI writes imperfect code, you declare the entire approach fraudulent?

We shouldn't care about flawless one-shot generations. The value is in collapsing the time between idea and execution. If a model can give you a working draft in 3 seconds - even if it's 80% right - that's already a 10x shift in how we build software.

Don't confuse the present with the limit. Eventually, in not that many years, you'll vibe in English, and your AI co-dev will do the rest.

kesor commented on If you're going to vibe code, why not do it in C?   stephenramsay.net/posts/v... · Posted by u/sramsay
gaigalas · 2 months ago
Fascinating.

I would appreciate a post with examples, not just prose. It helps to ground things in reality.

kesor · 2 months ago
Why do you hate on prose? This article has been a joy to read, unlike a lot of the other slop on the internet.
kesor commented on If you're going to vibe code, why not do it in C?   stephenramsay.net/posts/v... · Posted by u/sramsay
didibus · 2 months ago
This is treating the LLM like it is the computer or has some kind of way of thinking. But an LLM is a "language" model; I'm pretty sure the easier it is for a human to read, the easier it is for the LLM to learn and generate. Abstractions also benefit the model: it does not need to generate working two's-complement arithmetic, just a working call to addition on abstracted types.

And just in my experience, I feel everyone is slowly learning that all models are better at the common thing: they are better at bash, better at Python and JS, and so on. Everyone trying to invent at that layer has failed to beat that truth. That bootstrapping challenge is dismissed much too easily in the article, in my opinion.

kesor · 2 months ago
Binary bits are also a language. A structured language that transistor-based computers execute into some result we humans find valuable. Why wouldn't a model be able to write these binary instructions directly? Why do we need all these layers in between? We don't.
kesor commented on If you're going to vibe code, why not do it in C?   stephenramsay.net/posts/v... · Posted by u/sramsay
florilegiumson · 2 months ago
I like the idea of reimagining the whole stack so as to make AI more productive, but why stop at languages (x86 asm is still a language)? Why not the operating system? Why not the hardware layer? Why not LLM-optimized Verilog, or an AI-tuned HDL?
kesor · 2 months ago
Probably because then it wouldn't be software anymore: you would be required to carry out a physical process (printing an integrated circuit) in order to use the functionality you created. It can definitely be done, but it takes things too far away from the idea the author expressed.

But I don't see a reason why the LLM shouldn't be writing binary CPU instructions directly, or programming some FPGA directly. Why have the assembly language/compiler/linker in between? There is really no need.

We humans write some instructions in English. The LLM generates a working executable for us to use repeatedly in the future.

I also think it wouldn't be so hard to train such a model. We have plenty of executables with their source code in some other language available to us. We can annotate the original source code with a model that understands that language, get its descriptions in English, and train another model to use these descriptions for understanding the executable directly. With enough such samples we will be able to write executables by prompting.
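A hedged sketch of how such training pairs might be assembled (everything here is hypothetical: describe_source stands in for an existing code-explaining model, and the directory layout and file names are made up):

    # Hypothetical sketch: build (English description, raw binary) training pairs
    # from projects where both the source and the compiled executable are available.
    import json
    from pathlib import Path

    def describe_source(source_text: str) -> str:
        """Placeholder for a third-party model that explains source code in English."""
        raise NotImplementedError("call a code-explaining model here")

    def build_pairs(corpus_dir: str, out_path: str) -> None:
        pairs = []
        for project in Path(corpus_dir).iterdir():
            src = project / "main.c"   # original source (any language would do)
            exe = project / "a.out"    # the executable compiled from that source
            if not (src.exists() and exe.exists()):
                continue
            pairs.append({
                "prompt": describe_source(src.read_text()),  # English side of the pair
                "binary_hex": exe.read_bytes().hex(),        # raw machine code, hex-encoded
            })
        Path(out_path).write_text(json.dumps(pairs, indent=2))

    # build_pairs("corpus/", "english_to_binary_pairs.json")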

kesor commented on If you're going to vibe code, why not do it in C?   stephenramsay.net/posts/v... · Posted by u/sramsay
wavemode · 2 months ago
> if vibe coding is the future of software development (and it is), then why bother with languages that were designed for people who are not vibe coding? Shouldn’t there be such a thing as a “vibe-oriented programming language?” VOP.

A language designed for vibe coding could certainly be useful, but what that means is the opposite of what the author thinks it means.

The author thinks that such a language wouldn't need to have lots of high-level features and structure, since those are things that exist for human comprehension.

But actually, the opposite is true. If you're designing a language for LLMs, the language should be extremely strict and wordy and inconvenient and verbose. You should have to organize your code in a certain way, and be forced to check every condition, catch every error, consider every edge case, or the code won't compile.

Such a language would aggravate a human, but a machine wouldn't care. And LLMs would benefit from the rigidness, as it would help prevent any confusion or hallucination from causing bugs in the finished software.

kesor · 2 months ago
I don't think there is a need for an output language here at all; the LLM can read and write bits into executables directly to flip transistors on and off. The real question is what the input language (i.e. the prompts) looks like. There is still a need for humans to describe concepts for the machine to code into the executable, because humans are the consumers of these systems.

u/kesor

Karma: 1478 · Cake day: April 10, 2012
About
[ my public key: https://keybase.io/kesor; my proof: https://keybase.io/kesor/sigs/jmMGNf3t1txUSFFsyOAMk9Jyjk9UPNI-ZKb4DY49aGw ]