jbreckmckye · a month ago
> If you figure out how to do this completely, please contact me—I must know!

I think you want to use a TypeScript compiler extension / ts-patch

This is a bit difficult as it's not very well documented, but take a look at the examples in https://github.com/nonara/ts-patch

Essentially, you add a preprocessing stage to the compiler that can either enforce rules or alter the code

It could quietly transform all object-like types into having read-only semantics. This would then make any mutation error out, with a message as if you were attempting to write to a read-only property.

You would need to decide what to do about Proxies though. Maybe you just tolerate that as an escape hatch (like eval or calling plain JS)

Could be a fun project!
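
Very rough sketch of what such a transformer could look like, using the standard TypeScript transformer API (the exact ts-patch plugin wiring is omitted and the names here are illustrative; note also that an emit-time transform alone won't change type checking - ts-patch's program-level transforms would be needed for that):

  import * as ts from "typescript";

  // Adds `readonly` to every property signature it visits (interfaces, type literals).
  const makeReadonly: ts.TransformerFactory<ts.SourceFile> = (context) => {
    const { factory } = context;
    const visit = (node: ts.Node): ts.Node => {
      if (ts.isPropertySignature(node)) {
        const alreadyReadonly = node.modifiers?.some(
          (m) => m.kind === ts.SyntaxKind.ReadonlyKeyword
        );
        if (!alreadyReadonly) {
          return factory.updatePropertySignature(
            node,
            [...(node.modifiers ?? []), factory.createModifier(ts.SyntaxKind.ReadonlyKeyword)],
            node.name,
            node.questionToken,
            node.type
          );
        }
      }
      return ts.visitEachChild(node, visit, context);
    };
    return (sourceFile) => ts.visitEachChild(sourceFile, visit, context);
  };

  export default makeReadonly;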

Cthulhu_ · a month ago
One "solution" is to use Object.freeze(), although I think this just makes any mutations fail silently, whereas the objective with this is to make it explicit and a type error.
giancarlostoro · a month ago
I used to have code somewhere that would recursively call Object.freeze on a given object and all its children, till it couldn't "freeze" anymore.
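
Something along these lines, perhaps (a sketch from memory, not the original code):

  // Recursively freezes an object and everything reachable from it.
  function deepFreeze<T>(value: T): T {
    if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
      Object.freeze(value);
      for (const child of Object.values(value)) {
        deepFreeze(child);
      }
    }
    return value;
  }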
dunham · a month ago
I thought Object.freeze threw an exception on mutation. Digging a little more, it looks like we're both right. Per MDN, it throws if it is in "use strict" mode and silently ignores the mutation otherwise.
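
A quick illustration of that difference (modules and classes are always strict, so in modern code you will usually see the throw):

  "use strict";

  const point = Object.freeze({ x: 1, y: 2 });

  // TypeScript already flags this at compile time (Object.freeze is typed as returning Readonly<T>);
  // at runtime, strict mode throws a TypeError and sloppy mode silently ignores the write.
  // @ts-expect-error -- demonstrating the runtime behaviour
  point.x = 3;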
drob518 · a month ago
It’s interesting to watch other languages discover the benefits of immutability. Once you’ve worked in an environment where it’s the norm, it’s difficult to move back. I’d note that Clojure delivered default immutability in 2009 and it’s one of the keys to its programming model.
bastawhiz · a month ago
I don't think the benefits of immutability have gone undiscovered in JS. Immutable.js has existed for over a decade, and JavaScript itself has built-in immutability features (seal, freeze). This is an effort to give vanilla TypeScript default immutable properties at compile time.
the_gipsy · a month ago
It doesn't make sense to say that. Other languages had it from the start, and it has been a success. Immutable.js is 10% as good as built-in immutability and 90% as painful. Seal/freeze and readonly are tiny local fixes that, again, are good, but nothing like "default" immutability.

It came too late, and you can't dismiss it as "been tried and didn't get traction".

iLemming · a month ago
JavaScript DOES NOT in fact have built-in immutability similar to Clojure's immutable structures - seal/freeze are shallow, runtime-enforced restrictions, while Clojure's immutable structures provide deep, structural immutability. They are based on structural sharing and are very memory- and performance-efficient.

Default immutability in Clojure is a pretty big idea. Rich Hickey spent around two years designing the language around those structures. They are not superficial runtime restrictions but an essential part of the language's data model.

agos · a month ago
one thing that's missing in JS to fully harness the benefits of immutability is some kind of value equality semantics, where two structurally identical objects are treated the same
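
For example:

  // Structurally identical objects are different by reference today:
  console.log({ a: 1 } === { a: 1 });   // false

  // Primitives compare by value:
  console.log("abc" === "abc");          // true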
iLemming · a month ago
Also, interestingly, the ClojureScript compiler in many cases emits safer JS code despite being dynamically typed. TypeScript removes all the type info from the emitted JS, while ClojureScript retains strong typing guarantees in the compiled code.
hden · a month ago
Mutability is overrated.
recursive · a month ago
Immutability is also overrated. I mostly blame react for that. It has done a lot to push the idea that all state and model objects should be immutable. Immutability does have advantages in some contexts. But it's one tool. If that's your only hammer, you are missing other advantages.
Random09 · a month ago
It's redundant in a single-threaded environment. Everyone moved to mobile while pages are getting slower and slower, using more and more memory. This is not the way. Immutability has its uses, but it's not good for most web pages.
iLemming · a month ago
You're just waving off the whole bag of benefits:

Yes, JS runs in a single-threaded environment for user code, but immutability still provides immense value: predictability, simpler debugging, time-travel debugging, react/framework optimizations.

Modern JS engines are optimized for short-lived objects, and creating new objects instead of mutating uses more memory only temporarily. The performance impact of immutability is absolutely negligible compared to so many other factors (large bundles, unoptimized images, excessive DOM manipulation).

You're blaming the wrong thing for bloated memory use. I don't know of a single website that is bloated and slow only because the makers decided to use immutable data structures. In fact, you might have it exactly backwards - maybe web pages are getting slower and slower because we're now putting more logic in them, building more sophisticated programs into them, and the problem is exactly that we're reaching the point where it's no longer simple to reason about them. Reasoning about code in an immutable-first language is so much simpler - you probably have no idea, otherwise you wouldn't be saying "this is not the way".

pjmlp · a month ago
If we are pointing at dates, ML did it in 1973 - or, if you prefer the first mature implementation, SML, in 1983.

The Purely Functional Data Structures book, that Clojure data structures are based on, is from 1996.

That's how far behind the times we are.

drob518 · a month ago
Cool. I didn’t realize ML had such a focus on immutability as well. I have never done any serious work in ML and it’s a hole in my knowledge. I have to go back and do a project of some sort using it (and probably one in Ocaml as well). What data structures does ML use under the hood to keep things efficient? Clojure uses Bagwell’s Hashed Array-Mapped Tries (HAMT), but Bagwell only wrote the first papers on that in about 2000. Okasaki’s book came out in 1998, and much of the work around persistent data structures was done in the late 1980s and 1990s. But ML predates most of that, right?
marcelr · a month ago
programming with immutability has been best practice in JS/TS for almost a decade

however, enforcing it is somewhat difficult & there's still quite a bit lacking when working with plain objects or Maps/Sets.

gspencley · a month ago
We shouldn't forget that there are trade-offs, however. And it depends on the language's runtime in question.

As we all know, TypeScript is a superset of JavaScript, so at the end of the day your code is running as an interpreted language in V8, JSCore or SpiderMonkey, depending on what browser the end user is using. It is also a loosely typed language with zero concept of immutability at the native runtime level.

And immutability in JavaScript, without native support that we could hopefully see in some hypothetical future version of EcmaScript, has the potential to impact runtime performance.

I work for a SaaS company that makes a B2B web application that has over 4 million lines of TypeScript code. It shouldn't surprise anyone to learn that we are pushing the browser to its limits and are learning a lot about scalability. One of my team-mates is a performance engineer who has code checked into Chrome and will often show us what our JavaScript code is doing in the V8 source code.

One expensive operation in JavaScript is cloning objects (which includes arrays). If you do that a lot - if, say, you're using something like Redux or ngrx where immutability is a design goal, so you're cloning your application's runtime state object with each and every state change - you are extremely de-optimized for performance, depending on how much state you are holding onto.

And, for better or worse, there is a push towards making web applications as stateful as native desktop applications. Gone are the days where your servers can own your state and your clients just be "dumb" presentation and views. Businesses want full "offline mode." The relationship is shifting to one where your backends are becoming leaner .. in some cases being reduced to storage engines, while the bulk of your application's implementation happens in the client. Not because we engineers want to, but because the business goal necessitates it.

Then consider the spread operator, and how much you might see it in TypeScript code:

  const foo = {
    ...bar, // clones bar, so the cost of this simple expression is pegged to how large that object is
    newPropertyValue,
  };

  // Same thing: clones the original array in order to push a single item,
  // because "immutability is good, because I was told it is"
  const baz = [...array, newItem];

And then consider all of the "immutable" Array functions like .reduce(), .map(), .filter().

They're nice, syntactically... I love them from a code-maintenance and readability point of view. But I'm coming across "intermediate" web developers who don't know how to write a classic for-loop and will turn an O(N) operation into an O(N^3) one because they're chaining and nesting these together with no consideration for the performance impact.
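
A hypothetical illustration of that kind of blow-up (the names are made up):

  type User = { id: string; name: string };
  type Order = { userId: string; total: number };

  declare const users: User[];
  declare const orders: Order[];

  // Accidentally O(N^2): for every order we scan the whole users array again.
  const enriched = orders.map((order) => ({
    ...order,
    user: users.find((u) => u.id === order.userId),
  }));

  // One extra pass to build a lookup keeps it O(N):
  const usersById = new Map(users.map((u) => [u.id, u] as const));
  const enrichedFast = orders.map((order) => ({
    ...order,
    user: usersById.get(order.userId),
  }));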

And of course you can write performant code or non-performant code in any language. And I am the first to preach that you should write clean, easy-to-maintain code and then profile to discover your bottlenecks and optimize accordingly. But that doesn't change the fact that JavaScript has no native immutability, and the way you write immutable JavaScript will put you in a position where performance is worse overall, because the tools you are forced to reach for as a matter of course are themselves inherently de-optimized.

iLemming · a month ago
> We shouldn't forget that there are trade-offs

Like @drob518 noted already - the only benefit of mutation is performance. That's all. That's the only, distinct, single, valid point for it. Everything else is nothing but problems. Mutable shared state is the root of many bugs, especially in concurrent programs.

"One of the most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about the state of immutable objects, on the other hand, is trivial." - Brian Goetz.

So, if immutable, persistent collections are so good, and the only problem is that they are slower, then we just need to make them faster, yes?

That's the only problem that needs to be solved in the runtime to gain countless benefits, almost for free, which you are acknowledging.

But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.

drob518 · a month ago
Yes, if your immutability is implemented via simple cloning of everything, it’s going to be slow. You need immutable, persistent data structures such as those in Clojure.
phplovesong · a month ago
Sounds easier to just use some other compile-to-JS language; it's not like there are no other options out there.
k__ · a month ago
I'm still mad about Reason/ReScript for fumbling the bag here.
phplovesong · a month ago
ReScript/ReasonML is still in development, and a more seasoned dev team can easily pick it as a better alternative to TypeScript.

It's a bummer Haxe did not promote itself more for the web, as it's an amazingly good piece of tech. The language shows its age, but has an awesome type system and metaprogramming capabilities.

That said, Haxe 5 is on the horizon.

petejodo · a month ago
Agreed. Gleam is a great one that targets JavaScript and outputs easy to read code
phplovesong · a month ago
Yup. Also ReScript if you're not a fan of the Elm architecture.
epolanski · a month ago
Not if you want to use typescript.
phplovesong · a month ago
TypeScript is the obvious choice if all you know/want to learn is JS. But the language is still garbage because of "valid JS is valid TS".

And yes, I know that is what made it popular.

Too · a month ago
Rust compiles to wasm right?
vips7L · a month ago
ScalaJs!
hackthemack · a month ago
I am a fan of immutability. I was toying around with JavaScript, making copies of arguments (even when they are complex arrays or objects). But, strangely, when I made a comment about it, it just got voted down.

https://news.ycombinator.com/item?id=45771794

I made a little function to do deep copies but am still experimenting with it.

  function deepCopy(value) {
    // Prefer structuredClone when available: it handles Dates, Maps, Sets and cycles.
    if (typeof structuredClone === 'function') {
      try { return structuredClone(value); } catch (_) {}
    }
    // JSON round-trip fallback: drops functions/undefined and turns Dates into strings.
    try {
      return JSON.parse(JSON.stringify(value));
    } catch (_) {
      // Last fallback: return original (shallow)
      return value;
    }
  }

bilekas · a month ago
There's something in here for sure; switch over to TS with strict typing and you've got generics to help you out more, at least for validation.

A deep clone isn't a bad approach, but given TS's typing, I don't know if they allow a pure 'eval' by default... Still playing with this in my free time though, and it's still tricky.

hackthemack · a month ago
One thought I recently had, since using deepCopy is going to slow things down, is whether the source code for QuickJS could be changed to just make copies. Then load up QuickJS as a replacement for the browser's JavaScript by invoking it as WASM.
bilekas · a month ago
This has really, irrationally, interested me now. I'm sure there is something there with the internal setters on TS, but damn, I need to test now. My thinking is to override the setter to evaluate whether it's mutable or not - the obvious approach.
WorldMaker · a month ago
Yeah, there's a lot you could do with property setter overrides in conditional types, but the tricky magic trick is somehow getting TypeScript to do it by default. I've got a feeling that `object` and `{}` are just too low-level in TypeScript's type system today to do those sorts of things. The `Object` in lib.d.ts is mostly for adding new prototype methods, not so much for changing underlying property behavior.
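
For the opt-in half of that, a mapped/conditional type sketch is already possible today (illustrative names); the unsolved part is getting the compiler to apply it by default:

  // Recursively marks properties readonly; opt-in per type, not a compiler default.
  type DeepReadonly<T> =
    T extends (...args: any[]) => any ? T :
    T extends ReadonlyArray<infer U> ? ReadonlyArray<DeepReadonly<U>> :
    T extends object ? { readonly [K in keyof T]: DeepReadonly<T[K]> } :
    T;

  interface State { user: { name: string }; tags: string[] }

  declare const state: DeepReadonly<State>;
  // state.user.name = "x";   // error: cannot assign to 'name'
  // state.tags.push("y");    // error: 'push' does not exist on ReadonlyArray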
hyperrail · a month ago
Aside: Why do we use the terms "mutable" and "immutable" to describe those concepts? I feel they are needlessly hard to say and too easily confused when reading and writing.

I say "read-write" or "writable" and "writability" for "mutable" and "mutability", and "read-only" and "read-only-ness" for "immutable" and "immutability". Typically, I make exceptions only when the language has multiple similar immutability-like concepts for which the precise terms are the only real option to avoid confusion.

forty · a month ago
Read only does not carry (to me) the fact that something cannot change, just that I cannot make it change. For example you could make a read only facade to a mutable object, that would not make it immutable.
iLemming · a month ago
> Why do we use the terms "mutable" and "immutable" to describe those concepts?

Mutable is from Latin 'mutabilis' - (changeable), which derives from 'mutare' (to change)

You can't call them read-only/writable/etc. without confusing them with access permissions. 'Read-only' typically means something is read-only from the local scope, but the underlying object might still be mutable and changed elsewhere - like a const pointer in C++, or a read-only DB view that prevents you from writing while the underlying data can still be changed by others. In contrast, an immutable string (in Java, C#) cannot be changed by anyone, ever.
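
A small TypeScript illustration of that distinction (illustrative names):

  interface ReadonlyPoint { readonly x: number }

  const p = { x: 1 };
  const view: ReadonlyPoint = p;   // a read-only *view* of a mutable object

  p.x = 2;                          // still allowed through the original reference
  console.log(view.x);              // 2 - read-only access, but not immutable data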

Computer science is a branch of mathematics; you can't just use whatever words feel more comfortable to you - names have implications, they are a form of theorem-stating. It's like not letting kids call multiplication "stick-piling". We don't do that for reasons.

Waterluvian · a month ago
Same reason doors say PUSH and PULL instead of PUSH and YANK. We enjoy watching people faceplant into doors... er... it's not a sufficiently real problem to compel people to start doing something differently.
afandian · a month ago
"read-only-ness" is much more of a mouthful than "immutable"!

Generally immutability is also a programming style that comes with language constructs and efficient data structures.

Whereas 'read-only' (to me) is just a way of describing a variable or object.

tyleo · a month ago
This is tangential but one thing that bothers me about C# is that you can declare a `readonly struct` but not a `readonly class`. You can also declare an `in` param to specify a passed-in `struct` can’t be mutated but again there’s nothing for `class`.

It may be beside the point. In my experience, the best developers in corporate environments care about things like this but for the masses it’s mutable code and global state all the way down. Delivering features quickly with poor practices is often easier to reward than late but robust projects.

WorldMaker · a month ago
`readonly class` exists in C# today and is called (just) `record`.

`in` already implies the reference cannot be mutated, which is the bit that actually passes to the function. (Also the only reason you would need `in` and not just a normal function parameter for a class.) If you want to assert the function is given only a `record` there's no type constraint for that today, but you'd mostly only need such a type constraint if you are doing Reflection and Reflection would already tell you there are no public setters on any `record` you pass it.

bilekas · a month ago
I'm not sure if it's what you mean, but can't you have all your properties without a setter, and only init them inside the constructor, for example?

Would your 'readonly' annotation dictate that at compile time?

e.g.

    class Test {
        private string _testString { get; }

        public Test(string tstString)
            => _testString = tstString;
    }

We may be going off topic though. As I understand it, objects in TypeScript/JS are explicitly mutable, as expected, via the interpreter. But I will try and play with it.

vips7L · a month ago
I think you would want to use an init-only property for your example

    class Test {
        public string TestString { get; init; }
    }

I'm not a C# expert though, and there seems to be many ways to do the same thing.

edem · a month ago
For immutability to be effective you'd also need persistent data structures (structural sharing). Otherwise you'll quickly grind to a halt.
epolanski · a month ago
Why would you quickly grind to a halt?
iLemming · a month ago
Without persistent data structures (structural sharing) - every change requires copying the entire data structure, memory usage explodes, time complexity suffers, GC pressure increases dramatically.

With persistent data structures - only the changed parts are new; unchanged parts are shared between versions; adding to a list might only create a few new nodes while reusing most of the structure; it's memory efficient, time efficient, multiple versions can coexist cheaply. And you get countless benefits - fearless concurrency, easier reasoning, elimination of whole class of bugs.
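
A tiny TypeScript sketch of the sharing idea (a persistent cons list; real persistent collections use trees/tries, but the principle is the same):

  type List<T> = { readonly head: T; readonly tail: List<T> } | null;

  const cons = <T>(head: T, tail: List<T>): List<T> => ({ head, tail });

  const base = cons(2, cons(1, null));   // [2, 1]
  const extended = cons(3, base);         // [3, 2, 1] - a "new" list...

  // ...that reuses `base` instead of copying it: that's structural sharing.
  console.log(extended !== null && extended.tail === base);   // true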