Readit News

koito17 commented on Last Week on My Mac: Losing confidence   eclecticlight.co/2025/11/... · Posted by u/frizlab
xp84 · 14 days ago
My least favorite thing on the Mac is when they put one of their infamous negative-number “error codes” in the alert box, and I get my hopes up that it at least represents a common thread of errors others have hit that may be solved in a forum or subreddit somewhere. Then, when you google it, you discover that every form of failure from disk full to malware results in the exact same oddly-specific error code, and that everyone is talking to each other about it and slowly realizing they have nothing in common but being cursed by Tim Cook.
koito17 · 14 days ago
If I recall correctly, many of those are "Carbon-related" errors and mostly represent legacy baggage of Mac OS.

Not defending the design, but this website is sometimes useful for disambiguating OSStatus error codes: https://www.osstatus.com/

koito17 commented on Shai-Hulud Returns: Over 300 NPM Packages Infected   helixguard.ai/blog/malici... · Posted by u/mrdosija
benjifri · 21 days ago
This is like saying "use MacOS and you won't get viruses" in the 2000s
koito17 · 21 days ago
Bun disables post-install scripts by default, and one can explicitly opt in to trusting dependencies in the package.json file. One can also delay installing updated dependencies through keys like `minimumReleaseAge`. Bun is a drop-in replacement for the npm CLI and, unlike pnpm, has goals beyond performance and storage efficiency.
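
For concreteness, a minimal sketch of both (the bunfig.toml key and its units are from memory, so double-check the Bun docs; "some-pkg" is just a placeholder name):

  # package.json: only packages listed here may run lifecycle scripts
  { "trustedDependencies": ["some-pkg"] }

  # bunfig.toml: refuse versions published too recently (value assumed to be in seconds)
  [install]
  minimumReleaseAge = 259200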

Not sure what your analogy is trying to imply.

koito17 commented on Cloudflare outage on November 18, 2025 post mortem   blog.cloudflare.com/18-no... · Posted by u/eastdakota
dist1ll · a month ago
If that's the case then hats off. What you're describing is definitely not what I've seen in practice. In fact, I don't think I've ever seen a crate or production codebase that documents infallibility of every single slice access. Even security-critical cryptography crates that passed audits don't do that. Personally, I found it quite hard to avoid indexing for graph-heavy code, so I'm always on the lookout for interesting ways to enforce access safety. If you have some code to share that would be very interesting.
koito17 · a month ago
> I don't think I've ever seen a crate or production codebase that documents infallibility of every single slice access.

The smoltcp crate typically uses runtime checks to ensure slice accesses made by the library do not cause a panic. It's not exactly equivalent to GP's assertion, since it doesn't cover "every single slice access", but it at least covers slice accesses triggered by the library's public API (i.e., none of the public API functions should panic, assuming the runtime validation after the most recent mutation succeeded).

Example: https://docs.rs/smoltcp/latest/src/smoltcp/wire/ipv4.rs.html...
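
From the consumer side, the pattern looks roughly like this (my own sketch, not the crate's internals): `new_checked` runs the length validation up front, so the accessors afterward don't index blindly.

  use smoltcp::wire::Ipv4Packet;

  fn summarize(buffer: &[u8]) -> Option<String> {
      // Returns an error instead of panicking if the slice is too short
      // for an IPv4 header (or the header's stated lengths are inconsistent).
      let packet = Ipv4Packet::new_checked(buffer).ok()?;
      // These accessors can now slice into the buffer without panicking.
      Some(format!("{} -> {}", packet.src_addr(), packet.dst_addr()))
  }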

koito17 commented on Pikaday: A friendly guide to front-end date pickers   pikaday.dbushell.com... · Posted by u/mnemonet
cubefox · a month ago
AM and PM is used in a few languages (mostly English) but many don't have it in their vocabulary at all, which probably includes Japanese.
koito17 · a month ago
In the case of Japanese, there is 午前・午後 ("A.M." / "P.M.") for 12-hour time, e.g. 午後9時に着く (arrive at 9 P.M.). If it's obvious from context, then only the hour is said; e.g. in 「明日3時にね」 ("tomorrow at 3, then"), the flow of the conversation disambiguates the hour (it's also unlikely the speaker means 3 A.M.).

There are also other ways to convey 12-hour time, e.g. 朝6時に起きる (wake up at 6 A.M. / wake up at 6 in the morning).

koito17 commented on XSLT RIP   xslt.rip/... · Posted by u/edent
koito17 · a month ago
I was hoping the site itself would be an XML document. Thankfully, it is an XML document.

  % curl https://xslt.rip/
  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet href="/index.xsl" type="text/xsl"?>
  <html>
    <head>
      <title>XSLT.RIP</title>
    </head>
    <body>
      <h1>If you're reading this, XSLT was killed by Google.</h1>
      <p>Thoughts and prayers.</p>
      <p>Rest in peace.</p>
    </body>
  </html>

koito17 commented on Cerebras Code now supports GLM 4.6 at 1000 tokens/sec   cerebras.ai/code... · Posted by u/nathabonfim59
RealityVoid · a month ago
> Also, for embedded, forget even thinking about testing harnesses (which at least exist in some form with UEFI, it's just difficult to automate the execution and output for an LLM).

I don't think it has to be this way, and we can do better here. If LLMs keep this up, good testing infrastructure might become more important.

koito17 · a month ago
One of my expectations for the future is the development of testing tools whose output is "optimized" in some way for LLM consumption. This is already occurring with Bun's test runner, for instance.[0] They are implementing a flag in the test runner so that the output is structured and optimized for token count.

Overall, I agree with your point. LLMs feel a lot more reliable when a codebase has thorough, easy-to-run tests. For a similar reason, I have been drifting towards strong, statically-typed languages. Both Rust and TypeScript have rich type systems that can express many kinds of runtime behavior with just types. When a compiler can make strong guarantees about a program's behavior, I assume that helps nudge the quality of LLM output a bit higher. Tests then help prevent silly regressions from occurring. I have no evidence for this besides my anecdotal experience using LLMs across several programming languages.
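
As a toy illustration of what I mean by expressing runtime behavior with types (my own example, not tied to any real codebase):

  // A session id only exists in the Connected state, so the compiler forces
  // every caller to handle Disconnected instead of trusting a runtime flag.
  enum Connection {
      Disconnected,
      Connected { session_id: u64 },
  }

  fn send(conn: &Connection, msg: &str) -> Result<(), &'static str> {
      match conn {
          Connection::Disconnected => Err("not connected"),
          Connection::Connected { session_id } => {
              println!("[session {session_id}] {msg}");
              Ok(())
          }
      }
  }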

In general, I've had the best experience with LLMs when there's plenty of static analysis (and tests) on the codebase. When a codebase can't be easily tested, I see much smaller productivity gains from LLMs. So yeah, I'm all for improving testing infrastructure.

[0] https://x.com/jarredsumner/status/1944948478184186366

koito17 commented on Cerebras Code now supports GLM 4.6 at 1000 tokens/sec   cerebras.ai/code... · Posted by u/nathabonfim59
dust42 · a month ago
It just depends on what you are doing. A greenfield React app in TypeScript with a CRUD API behind it? The LLMs are a mind-blowing assistant, and 1000t/s is crazy.

You are doing embedded development or anything else not as mainstream as web dev? LLMs are still useful but no longer mind-blowing, and they often produce hallucinations. You need to read every line of their output. 1000t/s is crazy, but no longer always in a good way.

You are doing stuff which the LLMs haven't seen yet? You are on your own. There is quite a bit of irony in the fact that the devs of llama.cpp barely use AI - just have a look at the development of support for Qwen3-Next-80B [1].

[1] https://github.com/ggml-org/llama.cpp/pull/16095

koito17 · a month ago
> You are doing embedded development or anything else not as mainstream as web dev? LLMs are still useful but no longer mind blowing and often produce hallucinations.

I experienced this with Claude 4 Sonnet and, to some extent, gpt-5-mini-high.

When able to run tests against its output, Claude produces pretty good Rust backend and TypeScript frontend code. However, Claude became borderline unproductive once I started experimenting with uefi-rs. Other LLMs, like gpt-5-mini-high, did not fare much better, but they were at least capable of admitting lack of knowledge. In particular, GPT-5 would provide output akin to "here is some pseudocode that you may be able to adapt to your choice of UEFI bindings".

Testing in a UEFI environment is quite difficult; the LLM can't just run `cargo test` and verify its output. Things get worse in embedded, because crates like embedded_hal made massive API changes between 0.2 and 1.0 (the latest version), and each LLM I've tried seems to only have knowledge of the 0.2 releases. Also, for embedded, forget even thinking about testing harnesses (which at least exist in some form with UEFI; it's just difficult to automate the execution and output for an LLM). In this case, you cannot really trust the output of the LLM. To minimize the risk of hallucination, I would try keeping data sheets and library code in context, but at that point it takes more time to prompt an LLM than to write the code by hand.
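
To give a flavor of the 0.2 → 1.0 breakage (trait paths are from memory, and the renamed 0.2 crate import is an assumption made for this example, so treat it as a rough sketch):

  // embedded-hal 0.2: traits lived under digital::v2 and blocking::delay.
  // (Assumes Cargo.toml renames the 0.2 crate to `embedded_hal_02`.)
  fn blink_02<P, D>(led: &mut P, delay: &mut D)
  where
      P: embedded_hal_02::digital::v2::OutputPin,
      D: embedded_hal_02::blocking::delay::DelayMs<u32>,
  {
      let _ = led.set_high();
      delay.delay_ms(500);
      let _ = led.set_low();
  }

  // embedded-hal 1.0: same idea, but the traits moved and were reworked
  // (digital::OutputPin, delay::DelayNs), so 0.2-era code no longer compiles.
  fn blink_10<P, D>(led: &mut P, delay: &mut D)
  where
      P: embedded_hal::digital::OutputPin,
      D: embedded_hal::delay::DelayNs,
  {
      let _ = led.set_high();
      delay.delay_ms(500);
      let _ = led.set_low();
  }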

I've been writing a lot of embedded Rust over the past two weeks, and my usage of LLMs in general decreased because of that. Currently planning to resume development on some of my "easier" projects, since I have about 300 Claude prompts remaining in my Zed subscription, and I don't want them to go to waste.
