Readit News
JoelMcCracken commented on Comptime.ts: compile-time expressions for TypeScript   comptime.js.org/... · Posted by u/excalo
apatheticonion · 24 days ago
I literally just want Rust style macros and proc macros in JavaScript. e.g. using

``` const MyComponent = () => jsx!(<div></div>) ```

rather than a .tsx file.

That or wasm to be usable so I can just write my web apps in Rust

JoelMcCracken · 24 days ago
Every once in a while I get a strong urge to hack on sweet.js to add typescript support
JoelMcCracken commented on A Rust shaped hole   mnvr.in/rust... · Posted by u/vishnumohandas
Expurple · a month ago
> frequently I could not _do_ a very straightforward thing, because, for example, some function is a fnOnce instead of fnMulti, or the other way around [..] It became clear to me eventually that some very minor changes in requirements could necessitate massive changes in how the whole data model is structured. Maybe eventually I'd get good enough at rust that this wouldn't be a huge issue, but I had no way of seeing how to get to that point from where I was.

I understand your frustration, and Rust does get too low-level sometimes (see https://without.boats/blog/notes-on-a-smaller-rust/). But the semantic difference between FnOnce and Fn is actually important. FnOnce consumes its environment and makes it unavailable afterwards. This is an important property. When you don't want that, "just" use Fn, wrap everything in an Arc, and clone as needed (I understand that this is more ceremony than in other languages, and that it can feel unjustified).
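A minimal Rust sketch of the difference (the function names here are made up): a closure that moves a captured value out can only be FnOnce, while one that merely reads its environment is Fn and can be called repeatedly.

```rust
fn call_once(f: impl FnOnce() -> String) -> String { f() }
fn call_twice(f: impl Fn() -> i32) -> i32 { f() + f() }

fn main() {
    let s = String::from("hello");
    // Moves `s` out of the environment, so this closure is FnOnce:
    // after one call, the environment is consumed and gone.
    let consume = move || s;
    assert_eq!(call_once(consume), "hello");
    // call_twice(consume); // error: an FnOnce closure isn't Fn

    let n = 21;
    // Only reads `n`, so this closure is Fn and can run any number of times.
    let read = move || n;
    assert_eq!(call_twice(read), 42);
}
```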

> I really don't get when rust folks claim "memory safety" like this; we've had garbage collection since 1959.

Agree 100%! What Rust actually gives you is predictability, reliability, and compile-time checks, while still letting you write relatively ergonomic imperative and "impure" code. And a sane ecosystem of tools that are designed to be reliable and helpful. I'm currently writing a post about this.

It also gives compile-time data race protection, which is still missing from some other memory-safe languages.

> I still wonder if I have really missed out on some benefit from learning to think more about data ownership in programs.

Yeah :) Affine types + RAII (ownership) allow you to express some really cool things, such as "Mutex<T> forces you to lock the mutex before accessing the T and automatically unlocks it when you're done", or "commits and rollbacks destroy the DatabaseTransaction and make it statically impossible to interact with", or "you'll never forget to run cleanup code for objects from external C libraries" (https://www.reddit.com/r/programming/comments/1l1nhwz/why_us...)
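A minimal sketch of the Mutex point in Rust: the data lives inside the Mutex, lock() is the only way to reach it, and dropping the returned guard unlocks automatically (RAII).

```rust
use std::sync::Mutex;

fn main() {
    // The i32 is *inside* the Mutex; there is no way to reach it
    // without going through lock().
    let counter = Mutex::new(0_i32);

    {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // `guard` dropped here: the mutex unlocks automatically

    assert_eq!(*counter.lock().unwrap(), 1);
}
```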

JoelMcCracken · a month ago
I asked a bad question; what I meant was specifically thinking about ownership in terms of being required everywhere and to manage memory. But there are for sure real benefits I can see for _other_ properties in certain situations, like avoiding data races that might occur in Haskell. GHC added a linear types extension, for that matter (though AFAIK it's still not very great to use). But some of that seems distinct from the question "do I benefit in some data modeling way from thinking about the ownership of memory in my program" as opposed to creating/sharing references whenever convenient.
JoelMcCracken commented on A Rust shaped hole   mnvr.in/rust... · Posted by u/vishnumohandas
tptacek · a month ago
Their rubric is literally just 3 items: "native compilation", "abstractions", and "manual memory management". Had they put that little table at the front of the article, there wouldn't even need to be an article: the table basically says "I'm going to use Rust". That's fine!
JoelMcCracken · a month ago
The author really doesn't want to do manual memory management. The table is there to summarize things discussed, but he never says he wants to do manual memory management. I just went back and checked. If you do find something that indicates that, I'd appreciate you pointing it out to me, because idgi
JoelMcCracken commented on A Rust shaped hole   mnvr.in/rust... · Posted by u/vishnumohandas
tines · a month ago
> Wow, a lot of stuff in here surprises me. C definitely can/does have spooky at a distance. Just share a pointer to a resource with something else and enjoy the spooky modifications. Changes are local as long as you program that way, but sometimes it can be a bit not-obvious that this is happening.

I think the author means that the language constructs themselves have well-defined meanings, not that the semantics don't allow surprising things to happen at runtime. Small changes don't affect the meaning of the entire program. (I'm not sure I agree that this isn't the case for e.g. Haskell as well, I'm just commenting on what I think the author means.)

> IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different.

Having written code in both, Rust is quite a lot easier than Haskell for a programmer familiar with the "normal" languages like C, C++, Python, whatever. Haskell's purity is quite a big deal and ends up contorting my programs into weird poses; e.g. once you run into the need to compose monads, the complexity ramps way up.

> The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?

Memory safety. And the fact that this is the go-to example of Rust complexity just goes to show how much higher Haskell's level of difficulty is.

JoelMcCracken · a month ago
Thanks for explaining what you think the author meant. I meant to reiterate that I was trying to actually understand rather than argue, but I forgot; I think sometimes "hey, I don't get what you're saying because of reasons XYZ..." comes across as "I think you're wrong".

Composing monads is another one of those painful parts of Haskell. I remember being so frustrated while learning Haskell that there was all of this "stuff" to learn to "use monads", but it seemed to not have anything to _do_ with `Monad`, and people told me what I needed to know was `Monad`. Someday I wanna write up all the advice I wish I had received when learning Haskell. A _lot_ of it will be about dealing with general monad "stuff".

The thing that frustrated me in Rust coming from something like Ruby was how frequently I could not _do_ a very straightforward thing, because, for example, some function is a fnOnce instead of fnMulti, or the other way around, or whatever. Here's some of the experience from that time https://joelmccracken.github.io/entries/a-simple-web-app-in-.... It became clear to me eventually that some very minor changes in requirements could necessitate massive changes in how the whole data model is structured. Maybe eventually I'd get good enough at rust that this wouldn't be a huge issue, but I had no way of seeing how to get to that point from where I was.

In contrast, I can generally predict when some requirement is going to necessitate a big change in Haskell: does it require a new side effect? If so, it may need a big change. If not, then it probably doesn't. And I've found it surprisingly easy to make big changes, thanks to the nice type system.

I really don't get when rust folks claim "memory safety" like this; we've had garbage collection since 1959. Rust gives you memory safety with tight control over resource usage; memory safety is an advantage that Rust has over C or C++, but not over basically every other language people still talk about.

If you just clone/copy every data structure left and right, then you're in a _worse_ spot than with garbage collection/reference counting when it comes to memory usage. I _guess_ you are getting the ability to avoid GC pauses, but why not use a reference-counted language if that's the problem? Copying/cloning data all of the time can't be faster than the overhead of reference counting, can it?
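For what it's worth, Rust's middle ground here is opting into reference counting per value with Rc (or Arc), where "clone" bumps a count instead of copying the data; a rough sketch:

```rust
use std::rc::Rc;

fn main() {
    let big = vec![0_u8; 1_000_000];

    // A deep clone really does copy the whole megabyte.
    let deep = big.clone();

    // Rc::clone just bumps a reference count; both handles share
    // one allocation, like a manually chosen bit of refcounting.
    let shared = Rc::new(big);
    let alias = Rc::clone(&shared);
    assert_eq!(Rc::strong_count(&shared), 2);
    assert_eq!(deep.len(), alias.len());
}
```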

In Haskell, I did find that once I understood the various pieces I needed to work with, actually solving problems (e.g. composing monads) is much easier. I don't generally have a hard time actually programming in Haskell. All that effort is front-loaded, though, and it can be hard to know exactly what you need to learn in order to understand some new unfamiliar thing.

Your preferring Rust over Haskell is totally fine BTW, I'm just trying to draw a distinction between something that's hard to _use_ vs something that's hard to _learn_. Many common languages are much harder to use IME; I feel like I have to think so hard all of the time about every line of code to make sure I'm not missing something, some important side effect that I don't know about that is happening at some function call. With Haskell, I can generally skim the code and find what's important quite quickly because of the type system.

I do plan to learn Rust at some point still whenever the planets align and I need to know something like it. Until then, there are so many other things that interest me, and not enough hours in the day. I still wonder if I have really missed out on some benefit from learning to think more about data ownership in programs.

JoelMcCracken commented on A Rust shaped hole   mnvr.in/rust... · Posted by u/vishnumohandas
JoelMcCracken · a month ago
Wow, a lot of stuff in here surprises me. C definitely can/does have spooky at a distance. Just share a pointer to a resource with something else and enjoy the spooky modifications. Changes are local as long as you program that way, but sometimes it can be a bit not-obvious that this is happening.

Regarding redefining functions, what could the author mean? Using global function pointers that get redefined? Otherwise, redefining a function wouldn't affect other modules that are compiled into separate object files. Confusing.

C is simple in that it does not have a lot of features to learn, but because of e.g. undefined behavior, I find it very hard to call it a simple language. When a simple bug can cause your entire function to be UB'd out of existence, C doesn't feel very simple.

In Haskell, side effects actually _happen_ when the pile of function applications evaluates to IO values, but you can reason about it very locally; that's what makes it so great. You could get those nice properties with a simpler model (i.e. don't make the language lazy, but still have explicit effects), but, yeah.

The main thing that makes Haskell not simple IMO is that it just has such a vast set of things to learn. Normal language feature stuff (types, typeclasses/etc, functions, libraries), but then you also have a ton of other special Haskell stuff: more advanced type system tomfoolery; various language extensions, some of which are deprecated now or have better alternatives nowadays (like type families vs functional dependencies); hierarchies of unfamiliar math terms that are essentially required to actually do anything; and then laziness/call-by-name/non-strict evaluation, which is its own set of problems (space leaks!). And yes, unfamiliar syntax is another stumbling block.

IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different. The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?

I wonder if the author considered OCaml or its kin. I haven't kept track of what's all available, but I've heard that better tooling and better/more familiar syntax are available now. OCaml is a good language and a good gateway into many other areas.

There are some other langs that might fit; Nim, for example, or Zig, or Swift. I'd still like to do more with Swift; the language is interesting.

JoelMcCracken commented on Binding Application in Idris   andrevidela.com/blog/2025... · Posted by u/matt_d
kmill · 2 months ago
In Lean's parsed `Syntax`, binders are plain identifiers. The way this works is that identifiers can be annotated with the module they were parsed in as well as a "macro scope", which is a number that's used to make identifiers created by macros be distinct from any previously created identifiers (the current macro scope is some global state that's incremented whenever a macro is being expanded) — an identifier with this annotation is called a hygienic identifier, and when identifiers are tested for equality the annotations are tested too. With this system in place, there's nothing special you need to do to elaborate binders (and it also lets you splice together syntaxes without any regard for hygiene!). For example, `fun x => b x` elaborates by (1) adding a variable `x` to the local scope, (2) elaborating `b x` in that scope, and then (3) abstracting `x` to make the lambda. The key here is that `x` is a hygienic identifier, so an `x` that's from a different module or macro scope won't be captured by the binder `x`.
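To make the hygiene point concrete, here's a small hypothetical sketch in Lean 4 (the macro name is made up): a macro that let-binds its own `x` cannot capture an `x` from the call site, because its `x` carries a macro-scope annotation.

```lean
-- Hypothetical example: `one_plus e` expands to a term that binds its own `x`.
macro "one_plus" e:term : term => `(let x := 1; x + $e)

-- The macro's `x` is hygienic (tagged with a macro scope), so the user's
-- `x` below still refers to the outer binding rather than being captured:
#eval let x := 10; one_plus x  -- 11 (1 + 10), not 2
```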

Yes, you can define the syntax that's in the article in Lean. A version of this is the Mathlib `notation3` command, but it's for defining notation rather than re-using the function name (e.g. using a union symbol for `Set.iUnion`), and also the syntax is a bit odd:

``` notation3 "⋃ "(...)", "r:60:(scoped f => iUnion f) => r ```

The ideas in the article are neat, and I'll have to think about whether it's something Lean could adopt in some way... Support for nested binders would be cool too. For example, I might be able to see something like `List.all (x in xs) (y in ys) => x + y < 10` for `List.all (fun x => List.all (fun y => x + y < 10) ys) xs`.

JoelMcCracken · 2 months ago
Ah, nice explanation. I've actually read (or tried to) the "macros as a set of scopes" paper that IIUC Lean 4's scoping is based upon; I did watch the talk on Lean 4's macro system. Does it not have some kind of "set of scopes" tracking?
JoelMcCracken commented on An interview question that will protect you from North Korean fake workers   theregister.com/2025/04/2... · Posted by u/dotcoma
solatic · 4 months ago
Historically, these kinds of questions were kept relatively simple, like how many bases are there, how many strikes, how many balls, how many innings, what's the name of the referee (answer: umpire), etc. They're also a product of a different time when baseball was much more popular in the US among US youth, with a much stronger youth monoculture, where the only way you didn't play baseball as a kid would be if you were a loner or in a wheelchair, neither of which were consistent with becoming an officer 80-90 years ago.
JoelMcCracken · 4 months ago
It would seem like a German who spoke perfect American English because they had grown up here would be able to answer these basic facts.
JoelMcCracken commented on Evertop: E-ink IBM XT clone with 100+ hours of battery life   github.com/ericjenott/Eve... · Posted by u/harryvederci
JoelMcCracken · 4 months ago
I use my Boox Max Lumi as a secondary display daily for working in emacs. The e-ink is great for text/terminal use; the only issue I have is when I sometimes need to do any kind of mouse work (which is basically never when I use it for what I said above).

What I really want is a low power linux laptop that is not entirely without CPU/memory power, so I can program some simple things on it. I don't mind if it has _less_ power, I can use ssh for anything that is overly cpu-hungry.

I've seen several devices that seem like they might suit my needs, but I look at them for long enough and just won't pull the trigger. Either it seems too much like a walled garden (like, I can program on the device, but it doesn't seem like a suitable spot to write blog posts in emacs for my blog or whatever), or it's just too underpowered and I'm sure that 99% of the tools I use already won't work on it.

I wish I had the EE knowledge/confidence to start hacking on this kind of thing. I think it's very doable; I was just looking at e.g. https://www.waveshare.com/product/displays/e-paper/epaper-1/...

which is just cheap enough that I could see myself risking buying it without being sure that it will work with my other choices.

Nowadays, I feel like I should be able to run most of what I want on an Android device that is built for power, and it should have a fairly long-lasting battery because of its design; attach a trackpad, keyboard, and e-ink display, and my perfect device is here. I don't care if it's not the thinnest device in the universe; a swappable battery (or just load the thing with extra batteries) plus perhaps a portable solar charger would be amazing.

JoelMcCracken commented on Any program can be a GitHub Actions shell   yossarian.net/til/post/an... · Posted by u/woodruffw
hnbad · 5 months ago
I agree with most of the points but I would condense #2 and #3 to "Move most things into scripts". Sometimes it's difficult to avoid complex workflows but generally it's a safer bet to have actual scripts you can re-use and use for other environments than GitHub. It's a bad idea to make yourself dependent entirely on one company's CI system, especially if it's free or an add-on feature.

However I'd balk at the suggestion to use Dhall (or any equally niche equivalent) based on a number of factors:

1) If you need this advice, you probably don't know Dhall nor does anyone else who has worked or will work on these files, so everyone has to learn a new language and they'll all be novices at using that language.

2) You're adding an additional dependency that needs to be installed, maintained and supported. You also need to teach everyone who might touch the YAML files about this dependency and how to use it and not to touch the output directly.

3) None of the advice on GitHub Workflows out there will apply directly to the code you have because it is written in YAML so even if Dhall will generate YAML for you, you will need to understand enough YAML to convert it to Dhall correctly. This also introduces a chance for errors because of the friction in translating from the language of the code you read to the language of the code you write.

4) You are relying on the Dhall code to correctly map to the YAML code you want to produce. Especially if you're inexperienced with the language (see above) this means you'll have to double check the output.

5) It's a niche language so it's neither clear that it's the right choice for the project/team nor that it will continue to be useful. This is an extremely high bar considering the effort involved in training everyone to use it and it's not clear at all that the trade-off is worth it outside niche scenarios (e.g. government software that will have to be maintained for decades). It's also likely not to be a transferable skill for most people involved.

The point about YAML being bad also becomes less of an issue if you don't have much code in your YAML because you've moved it into scripts.

JoelMcCracken · 5 months ago
Not too long ago, I went down a rabbit hole of specifying GHA YAML via Dhall, and quickly hit some problems; the specific thing I started with was the part that frustrated me most: the "expressions" evaluation stuff.

However, I quickly ran into the whole "no recursive data structures in Dhall" thing (at least, not how you would normally think about them), and of course, a standard representation of expressions is a recursively defined data type.

I do get why Dhall did this, but it meant that I quickly ran into super advanced stuff, and I realized that I couldn't in good conscience use it: my team of mixed engineers would need to read/maintain it in the future, without any knowledge of how to do recursive definitions in Dhall, and without the inclination to care either.

An intro to this: https://docs.dhall-lang.org/howtos/How-to-translate-recursiv...

An example in the standard lib is how it works with JSON itself: https://store.dhall-lang.org/Prelude-v23.1.0/JSON/Type.dhall...

Basically, to do recursive definitions, you have to lambda-encode your data types, work with them like that, and then finally "reify" them with, like, a concrete list type at the end, which means that all those lambdas evaluate away and you're just left with list data. This is neat and interesting and worth learning, but it would be wildly overcomplicated for most eng teams, I think.
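To illustrate the encoding with a rough analogy in Rust (not Dhall, and the names here are mine): a lambda-encoded list is represented as its own fold, and "reifying" it just means applying that fold to a concrete cons and nil.

```rust
// Rough analogy to Dhall's lambda encoding: the list [1, 2, 3] is not
// stored as data, but as a function that folds any (cons, nil) you give it.
fn encoded_list<T>(cons: impl Fn(i32, T) -> T, nil: T) -> T {
    cons(1, cons(2, cons(3, nil)))
}

fn main() {
    // "Reify" into a concrete Vec: the lambdas evaluate away,
    // leaving plain list data.
    let as_vec = encoded_list(
        |x, mut acc: Vec<i32>| { acc.insert(0, x); acc },
        Vec::new(),
    );
    assert_eq!(as_vec, vec![1, 2, 3]);

    // Or interpret the same encoded list directly as a sum.
    assert_eq!(encoded_list(|x, acc| x + acc, 0), 6);
}
```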

After hitting this point in the search, I decided to go another route: https://github.com/rhysd/actionlint

and this project solved my needs well enough that I couldn't justify spending any more time on the Dhall route.

JoelMcCracken commented on Any program can be a GitHub Actions shell   yossarian.net/til/post/an... · Posted by u/woodruffw
michaelmior · 5 months ago
> To the greatest extent possible, do not use any actions which install things.

Why not? I assume the concern is making sure development environments and production use the same configuration as CI. But that feels like somewhat of an orthogonal issue. For example, in Node.js, I can specify both the runtime and package manager versions using standard configuration. I think it's a bonus that how those specific versions get installed can be somewhat flexible.

JoelMcCracken · 5 months ago
To me, the issue comes when something weird is going on in CI that isn't happening locally, and you're stuck debugging it with that typical insanity.

Yeah, it may be that you'll get the exact same versions of things installed, but that doesn't help when some other weird thing is going on.

If you haven't experienced this, well, keep doing what you're doing if you want, but just file this reflection away for if/when you do hit this issue.
