Readit News
chme commented on European Commission Trials Matrix to Replace Teams   euractiv.com/news/commiss... · Posted by u/Arathorn
uyzstvqs · a month ago
Threema is Swiss, which is a regional EFTA member. It's end-to-end encrypted and the clients are open-source.

Zulip has client-server encryption, which is fine if you control the server.

chme · a month ago
Threema is still vendor lock-in.
chme commented on European Commission Trials Matrix to Replace Teams   euractiv.com/news/commiss... · Posted by u/Arathorn
Arathorn · a month ago
When did you try it? Both Matrix the protocol and implementations like Element X have improved immeasurably over the last year or so.
chme · a month ago
Element X is in some cases still a downgrade from Element. For instance, there doesn't seem to be a way to create local key backups anymore. Also, since calls between Element and Element X are incompatible, both apps need to be installed in order to receive calls from all contacts.

Still, I love Matrix and hope that these issues will be resolved in time.

chme commented on A few CPU hardware bugs   taricorp.net/2026/a-few-c... · Posted by u/signa11
chme · a month ago
I had to deal with the Intel Quark SoC X1000 on a Galileo board years ago, where the LOCK prefix caused segfaults. Since the SoC is single-core, the LOCK prefix could simply be patched out of the resulting binaries until the compiler/build system was fixed.

https://en.wikipedia.org/wiki/Intel_Quark#Segfault_bug
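The workaround described above can be sketched at the byte level. This is a hypothetical illustration, not the actual tooling used: on the single-core Quark, atomicity isn't needed, so a LOCK prefix (0xF0) can be overwritten with a NOP (0x90). The instruction offsets are assumed to come from a disassembler such as objdump, since blindly rewriting every 0xF0 byte would also corrupt data that merely happens to share that value.

```python
# Hypothetical sketch: NOP out LOCK prefixes at known instruction offsets.
# Offsets must point at real LOCK-prefixed instructions (e.g. found via
# a disassembler); this function only validates and rewrites the bytes.
LOCK_PREFIX = 0xF0
NOP = 0x90

def patch_lock_prefixes(code: bytes, offsets) -> bytes:
    """Return a copy of `code` with the LOCK prefix at each offset replaced by NOP."""
    buf = bytearray(code)
    for off in offsets:
        if buf[off] != LOCK_PREFIX:
            raise ValueError(f"no LOCK prefix at offset {off:#x}")
        buf[off] = NOP  # single-core SoC: dropping atomicity is safe here
    return bytes(buf)

# Example: "lock xadd %eax, (%edx)" encodes as f0 0f c1 02.
patched = patch_lock_prefixes(bytes([0xF0, 0x0F, 0xC1, 0x02]), [0])
assert patched == bytes([0x90, 0x0F, 0xC1, 0x02])
```

A real patcher would additionally have to map each instruction's virtual address to its file offset in the ELF, which this sketch glosses over.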

chme commented on Install.md: A standard for LLM-executable installation   mintlify.com/blog/install... · Posted by u/npmipg
michaelmior · 2 months ago
> What you are saying is that we don't need 'install.md'

I think the point was that install.md is a good way to generate an install.sh.

> validate that, and put it into the repo

The problem being discussed is that the user of the script needs to validate it. It's great if it's validated by the author, but that's already the situation we're in.

chme · 2 months ago
> The problem being discussed is that the user of the script needs to validate it. It's great if it's validated by the author, but that's already the situation we're in.

The user is free to use an LLM to 'validate' the `install.sh` file: just ask it whether the script does anything 'bad'. That should be about as successful as having the LLM generate the script from a description. Maybe even more successful.

chme commented on Install.md: A standard for LLM-executable installation   mintlify.com/blog/install... · Posted by u/npmipg
imiric · 2 months ago
Here's a proposal: app.md. A structured text file with everything you want your app to do.

That way we can have entire projects with nothing but Markdown files. And we can run apps with just `claude run app.md`. Who needs silly code anyway?

chme · 2 months ago
Well... Maybe just have a BIOS on your system that fetches a markdown file and pushes it to an LLM to generate a new and exciting operating system for you on every boot.

Wouldn't that be nice?

chme commented on Install.md: A standard for LLM-executable installation   mintlify.com/blog/install... · Posted by u/npmipg
petekoomen · 2 months ago
It does, and possibly this launch is a little window into the future!

Install scripts are a simple example that current generation LLMs are more than capable of executing correctly with a reasonably descriptive prompt.

More generally, though, there's something fascinating about the idea that the way you describe a program can _be_ the program, which tbh I haven't fully wrapped my head around, but it's not crazy to think that in time more and more software will be exchanged by passing prompts around rather than compiled code.

chme · 2 months ago
TBH, I doubt that this will happen...

It is much easier to use LLMs to generate code, validate that code as a developer, fix it if necessary, and check it into the repo, than to have every user send prompts to LLMs in order to get the code they can actually execute, all while hoping it doesn't break their system and does what they wanted from it.

Also... that just doesn't scale. How much power would we need if everyday computing started with a BIOS sending prompts to LLMs in order to generate an operating system it can use?

Even if it is just about installing stuff... We have CI runners that install software on every build. How would they scale if they needed an LLM to generate install instructions every time?

chme commented on Install.md: A standard for LLM-executable installation   mintlify.com/blog/install... · Posted by u/npmipg
Szpadel · 2 months ago
imagine such support ticket:

"I used minimax M2 (context: it's very unreliable) for installation, it didn't work, and my document folder is missing. Help!"

How do you even debug this? Imagine some path or behaviour changes in a new OS release and the model thinks it knows better. If anything goes wrong, who is responsible?

chme · 2 months ago
Maybe that is a reason for this approach. It shifts the responsibility for errors from the person writing the code to the one executing it.

Pretty brilliant in a way.

chme commented on Install.md: A standard for LLM-executable installation   mintlify.com/blog/install... · Posted by u/npmipg
dang · 2 months ago
That's a chance to plump for Peter Naur's classic "Programming as Theory Building"!

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

What Naur meant by "theory" was the mental model of the original programmers who understood why they wrote it that way. He argued the real program is its theory, not the code. The translation of the theory into code is lossy: you can't reconstruct the former from the latter. Naur said that this explains why software teams don't do as well when they lose access to the original programmers, because they were the only ones with the theory.

If we take "a great description" to mean a writeup of the thinking behind the program, i.e. the theory, then your comment is in keeping with Naur: you can go one way (theory to code) but not the other (code to theory).

The big question is whether/how LLMs might change this equation.

chme · 2 months ago
Even writing the "theory" down on paper as prose will be lossy.

And natural languages are open to interpretation, and a lot of context remains unmentioned, while programming languages, together with their tested environment, contain the whole context.

Instructing LLMs will also mean doing a lot of prompt engineering, which on one hand might make the instructions clearer (for the human reader as well), but on the other will likely not transfer as much of the theory behind why each decision was made. Instead, it will likely focus on copy&paste guides that don't require much understanding of why something is done.

chme commented on Install.md: A standard for LLM-executable installation   mintlify.com/blog/install... · Posted by u/npmipg
nobodywillobsrv · 2 months ago
How about both worlds?

Instead of asking the agent to execute it for you, you ask the agent to write an install.sh based on the install.md?

Then you can both audit whatever you want before running or not.

chme · 2 months ago
So... What you are saying is that we don't need 'install.md', because a developer can just use an LLM to generate an 'install.sh', validate that, and put it into the repo?

Good idea. That seems sensible.

Bonus: the LLM is only used once, not every time someone wants to install the software, with some risk of having to regenerate when the output is nonsensical.

chme commented on Install.md: A standard for LLM-executable installation   mintlify.com/blog/install... · Posted by u/npmipg
catlifeonmars · 2 months ago
I’m not sure I agree with you that code is hard to read. I usually tend to go straight to the source code as it communicates precisely how something will behave. Well written code, like well written prose can also communicate intent effectively.
chme · 2 months ago
TBH, I've never read prose that couldn't in some way be misinterpreted or misunderstood, because much of it is context-sensitive.

That is why we have programming languages: coupled with a specific interpreter/compiler, they are pretty clear about what they do. If someone misunderstands a specific code segment, they can easily test their assumptions.

You cannot do that with written prose alone; you would need to ask the writer of that prose to clarify.

And with programming languages, the context is contained and clearly stated, otherwise the code couldn't be executed. Even undefined behavior is part of that, if you use the same interpreter/compiler.

Also humans often just read something wrong, or skip important parts. That is why we have computers.

Now, I wouldn't trust an LLM to execute prose any better than I trust a random human to read some how-to guide and follow it.

The whole idea that we now add more documentation to our source code projects so that dumb AI can make sense of it is interesting... Maybe generally useful for humans as well... But I would instead target humans, not LLMs. If an LLM finds it useful as well, great. But I wouldn't try to 'optimize' my instructions so that every LLM doesn't just fall flat on its face. That seems like a futile effort.

u/chme

Karma: 998 · Cake day: May 24, 2016