I don't really understand why people like signals again. I used a signals-like reactive programming model in VueJS a while ago and hated that you could never be sure exactly where things were being changed. Thankfully, it seems that the React creators and maintainers are not hopping on the signals train and are instead adamant about unidirectional data flow with explicit mapping of state to UI, which is the reason React was invented in the first place and which makes reasoning about the application much easier [0][1].
This is a good thread about their drawbacks [2]. Apparently, the two examples it shows actually do different things. Like the tweeter says, signals are mutable state. I'll stick with unidirectional data flow in React.
[0] https://twitter.com/jordwalke/status/1629663133039214593
[1] https://twitter.com/dan_abramov/status/1629539600489119744
[2] https://twitter.com/devongovett/status/1629540226589663233
That example makes a lot of sense once you realize how JSX works in SolidJS (and it's very different from React):
1. The function is called only once at component construction, and never again. Unlike React, it is _not_ called for every render.
2. Every expression in `{}` is compiled to a thunk function - not to a direct JS expression.
For example, the expression
<>{props.count * 2}</>
compiles to
memo(() => props.count * 2)
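A concrete consequence of those two points (an illustrative sketch only, with component names I made up, not the exact snippets from the linked tweet): reading the prop in the component body evaluates it once at construction, while reading it inside the JSX stays reactive.

    // Sketch only (Solid-style). The component body runs once, so this value
    // is computed once and the rendered text never updates afterwards.
    function Stale(props: { count: number }) {
      const doubled = props.count * 2; // read at construction time only
      return <div>Count: {doubled}</div>;
    }

    // Here the expression sits inside the JSX, so the compiler wraps it in a
    // thunk and re-evaluates it whenever props.count changes.
    function Live(props: { count: number }) {
      return <div>Count: {props.count * 2}</div>;
    }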
Another interesting bit: as the article above rightfully points out, `useState` and `useMemo` are signals. The main difference is that in React you have to track and specify your dependencies manually, whereas Solid auto-tracks dependencies based on what the computation actually reads while it runs.
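Roughly the contrast looks like this (illustration only; the helper names are mine, and you obviously wouldn't import React and Solid into the same real file):

    import { useMemo } from "react";
    import { createMemo, type Accessor } from "solid-js";

    // React: the dependency list is written by hand; forget an entry and
    // the memo silently goes stale.
    function useDoubled(count: number) {
      return useMemo(() => count * 2, [count]);
    }

    // Solid: createMemo re-runs whenever a signal it actually read changes;
    // there is no dependency array to maintain.
    function doubled(count: Accessor<number>) {
      return createMemo(() => count() * 2);
    }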
Yes, it's more of a design problem of some of these libraries though.
Ideally, signalling shouldn't be retrofitted onto normal variables.
Doing that means overloading variable access (assignments and reads), and then it's hard to know which variables are reactive and which are not. The problem stems from the overloading of semantics.
A UI is mutable, so it's not that mutable state is wrong; rather, the units of variability are not program variables but rendered props, which are basically DOM element properties.
In React it just happens that variables' values are injected into the DOM elements via JSX, but that's an issue (especially when dealing with exports, prop drilling, etc.).
So until a plain variable and a reactive variable are made to look different everywhere in code, it will remain confusing. And even then, it's not optimal.
Disclaimer: I'm currently implementing a cross-platform UI framework in Go, and that's why I've had to think about this, since traditional languages do not have reactive variables.
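For what it's worth, some libraries already make the distinction visible at every use site. Solid's createSignal, for example, hands back an explicit getter/setter pair, so a reactive read or write never looks like touching a plain variable (a minimal sketch):

    import { createRoot, createSignal, createEffect } from "solid-js";

    createRoot(() => {
      const [count, setCount] = createSignal(0);

      createEffect(() => {
        // count() is unmistakably a reactive read; a bare `count` would just
        // be the getter function itself, not the value.
        console.log("count is now", count());
      });

      setCount(1); // writes go through the setter, never plain assignment
    });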
If those two pieces of code do different things, it’s a bad library plain and simple lol.
I’ve implemented this pattern with mobx and those two pieces of code are equivalent.
In my opinion:
- Unidirectional data flow patterns prevent a class of problems junior developers tend to make but require much deeper discipline to keep the app understandable
- Conventional bi-directional data flow patterns are susceptible to junior developer mistakes but generally are hard to make unreadable
Controversial only because they were hard to learn, and those who invested in learning lost that investment. The new architecture is so much better and not hard to explain.
I'm not a JS/Vue/React programmer but I would definitely expect function One to return <div>Count: 4</div> if props.count == 2 when One is called. For function Two I would be more suspicious and would not be overly surprised if it returned <div>Count: {props.count * 2}</div> to be evaluated at a later stage.
Because it's nicer. The mental model for reading a function that uses signals is that the function body is the initialize/constructor/new step. Then the off-JSX stuff starts to make sense, since you won't re-initialize those things; they need to exist somewhere in that particular execution model. In SolidJS, that somewhere is effects (compiled from the JSX wherever signals are read). I think those who are familiar with statically typed languages naturally understand how signals need to be wrapped and unwrapped, a bit like a monad; not a big deal.
I haven't had much success using signals as a software-composition mechanism at scale, despite being familiar with Conal Elliott's functional reactive animation concepts and Racket's FrTime.
"Patch languages" like Pure Data, Max/MSP/Jitter, and Quartz Composer are much better interfaces for working with signals, or rather with signal processors.
Those two code examples are different because that's the tradeoff that Solid made. It allows Solid to track changes in a uniform way even if the data isn't inside a component. Because in Solid components are just a way to organize code, not a unit of rendering (like in React).
And on top of that, if you think React still somehow has unidirectional data flow, it doesn't. Hooks make it anything but unidirectional, and not that different from signals: https://res.cloudinary.com/practicaldev/image/fetch/s--upMC6... But often less predictable.
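As an aside, "data not inside a component" is meant literally in Solid; a signal can live at module scope and whatever reads it subscribes to it (minimal sketch, names are mine):

    import { createSignal } from "solid-js";

    // A module-level signal, created outside any component.
    export const [theme, setTheme] = createSignal<"light" | "dark">("light");

    // Any component or effect that calls theme() tracks it; the component
    // function itself still runs only once.
    function ThemeLabel() {
      return <span>Current theme: {theme()}</span>;
    }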
Dan Abramov replied to those as well. Personally, what appeals to me is the purity of the React model: every function brings in only local rather than global state. Even hooks are still local state; if you change a value in useState, something isn't going to suddenly change elsewhere without it being explicit where you changed it.
My impression is that JavaScript is just a horrible language for building reactive systems, and that default purity with explicit effects is extremely underrated. (But then, I was already biased.)
It is hard to even make sense of the discussion without imagining the details of how those frameworks are implemented, and this is a discussion about normal usage, not something advanced or framework-development oriented.
The terminology of signals (and slots) has been used in Qt [0] since the 1990s I believe.
Still, it is unfortunate that the article begins with “Let's start with an extremely broad definition” (of signals), then continues with what sounds like signals in general, mentioning React only as an example, but then turns out wanting to only talk about signals in the particular reactive UI sense common in web development, without properly introducing that narrowing of focus.
I guess that focus can be expected from a blog on a site about a TypeScript signals library, but from the way the first few paragraphs are written, it’s certainly confusing when reading the article without further context, coming from HN.
[0] https://en.wikipedia.org/wiki/Signals_and_slots
“Signal” has been used for a time-varying value in reactive systems for a very long time (I thought it went as far back as Fran in 1998, but I was remembering wrong—I was only able to find usage in Grapefruit in 2009; older references welcome). One way or another, in a functional reactive programming system, you need distinct names for:
(a) A thing that happens (or might happen but ultimately doesn’t) and is associated with a value of some sort, such as the mouse coordinates of first left-button click the program sees;
(b) A list of the above with monotonically increasing times, such as all the left-button click coordinates in order;
(c) The above plus an initial value, representing a piecewise constant function of time, such as the state of a toggle;
(d) A (conceptually) piecewise continuous function of time, such as the position of an animated object.
[To avoid “time leaks”, i.e. accidentally retaining all change history from the beginning of time, you also need
(e) A computation that reads the current time, such as the mouse coordinates of the next left-button click the program will see;
but that’s not supposed to be obvious—it took more than a decade to figure out.]
Possible names include: event occurrence [for (a)], event stream [for (b)], event [either (a) or (b)]; behaviour [(c) or (d)—the library may not distinguish them in the API or only provide (d)]; reactive [Conal Elliott’s name for (c) as opposed to (d)]; signal [(b), (c), or both]. All of these make sense in isolation. I don’t think you can win here.
[Perhaps the worst word to use is “stream”, because people start trying to fit streams with backpressure in there. FRP only makes sense as far as you can assume any recalculations happen instantaneously, meaning at the very least before the next input change happens. If you forget that, you get Rx instead of FRP, and that’s not amenable to human comprehension—or indeed sanity.]
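One way to keep (a) through (d) apart is to write them down as types. This is only a sketch of the concepts, in TypeScript, not any particular library's API:

    // (a) An event occurrence: a time-stamped value.
    type Occurrence<A> = { time: number; value: A };

    // (b) An event stream: occurrences with monotonically increasing times.
    type EventStream<A> = Occurrence<A>[];

    // (c) A piecewise-constant behaviour: an initial value plus its changes.
    type Stepper<A> = { initial: A; changes: EventStream<A> };

    // (d) A (conceptually) continuous behaviour: a function of time.
    type Behavior<A> = (time: number) => A;

    // (e) doesn't reduce to a simple alias: it is a computation that gets to
    // ask "what time is it now?", which is what lets you drop old history.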
The term "signal" was used because RxJS had already taken the term "Observable" for its primitives, which are actually more like streams.
Because of this, Ryan Carniato of SolidJS referred to his own observable primitives as "signals" and this has become the default terminology to talk about this type of pattern within the context of web development.
Really, signals should be thought of as a type of observable.
I felt similarly disappointed, as I was expecting a cool intro to DSP or something similar. But nope, another BS buzzword from the magical world of webdev.
Signal isn't overloaded. Its use here is the same as its use elsewhere. It just requires contextual information to know what it means. A Qt programmer might think of signals and slots, a UNIX programmer might think of UNIX signals, etc. Jargon, sure. Overloaded? I don't think so. The only thing here that bothers me is how vague the title is.
Signals represent continuous, time-varying quantities, like an electrical voltage, an audio signal, or the current mouse coordinates. Streams (event streams) represent a sequence of discrete events, like key presses, network packets, or financial transactions.
The key difference is in backpressure strategy: signals are canonically lazy, they don't compute or do work until sampled, and only the latest value is relevant (nobody cares where the mouse was a moment ago when nobody was looking). Streams are eager: you can't skip a keyboard event or a financial transaction, even if the pipes are backed up; instead you have to tell upstream to slow down so you can catch up. The benefit of event streams is the guarantee that you'll see every event, which means streams are suitable for driving sequences of side effects (keyboard event -> network request -> database transaction).
Signals are a good fit for rendering because you only want to render at up to, say, 60fps (even if the mouse updates faster, which it does), and you only want to render what's onscreen, and only when the tab is focused. Rendering (say, DOM effects) is indeed effectful, but not in the discrete way; the DOM is a resource, it has a mount/unmount object lifecycle, and due to this symmetry signals are a good fit for rendering. Isolated effects (without a corresponding undo operation) are a terrible fit for signals, because this kind of backpressure (keeping only the latest value) will drop events and corrupt the system state.
You can use streams for rendering too, but it's suboptimal, potentially by a lot. If your app chokes on a burst of events, you want to skip ahead and render the final state without bothering to render all the intermediate historical states. Signal laziness is what enables this "work skipping"; a stream would have to process each individual event in sequence.
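A toy version of that "work skipping" point, in no particular library (all names here are made up): the signal side only stores the latest value and does work when sampled, while the stream side hands every queued event to its consumer.

    // Toy "signal": writes only overwrite the latest value; nothing runs
    // until somebody samples it, and only the final value matters.
    function toySignal<A>(initial: A) {
      let latest = initial;
      return {
        set: (v: A) => { latest = v; },
        sample: () => latest,
      };
    }

    // Toy "stream": every event is queued and the consumer sees each one.
    function toyStream<A>() {
      const queue: A[] = [];
      return {
        push: (v: A) => { queue.push(v); },
        drain: (handle: (v: A) => void) => {
          while (queue.length) handle(queue.shift()!);
        },
      };
    }

    const mouseX = toySignal(0);
    for (let i = 0; i < 1000; i++) mouseX.set(i); // burst of updates
    console.log("render", mouseX.sample());        // one render, final value only

    // Placeholder consumer, standing in for a per-event side effect.
    const sendToServer = (n: number) => { console.log("sending", n); };
    const clicks = toyStream<number>();
    for (let i = 0; i < 1000; i++) clicks.push(i);
    clicks.drain(sendToServer);                    // 1000 side effects, none skippable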
I have no idea if JS projects get the backpressure right, can anyone confirm?
It's kinda funny to see "signals" coming back into fashion and popular discussion on front-end tech Twitter. MobX is a signals state-management library that has been around for 7 years. Notion used signals internally for state management, and often when engineers joined they'd ask why we use signals over Redux, can we switch to Redux because it's more modern, etc. Now it's 4 years later and for some reason this pattern is all the rage. I'm glad I didn't waste time on Redux just in time for trends to move on.
To discuss TLDraw's Signia library more specifically: I think the ability to do manual diff tracking and incremental updates for computed stores is an interesting "escape hatch". Docs here: https://signia.tldraw.dev/docs/incremental
Most signal libraries I've seen lean hard into magic auto-tracking of dependencies, which makes it really easy for the developer to correctly observe a lot of dependencies (great for correctness), but they then have a very limited set of tools to deal with the performance implications of some computation needing to re-run a whole bunch. The differential tracking here means that if you see such a hotspot, you can manually optimize the recomputation without needing to squeeze into the library's pre-packaged observable collection API.
The downside of this API is that it seems quite easy to get wrong.
Another thing I like about Signia is the use of logical time. I saw this first in Jotai internals, then in Starbeam. I haven't dug into the source of the library yet, but I think logical time is a good approach and makes the internals a bit easier to inspect than systems (like Notion's) that rely purely on update-notification listeners.
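I haven't read Signia's internals either, so purely as a sketch of the general "logical clock" idea (not Signia's API or implementation, and with dependencies listed by hand instead of auto-tracked): bump a global epoch on every write, and a computed value re-runs only when a dependency changed after the epoch at which it was last validated.

    // Global logical clock: advances on every write.
    let epoch = 0;

    class Atom<A> {
      lastChanged = 0;
      constructor(private value: A) {}
      get(): A { return this.value; }
      set(v: A): void { this.value = v; this.lastChanged = ++epoch; }
    }

    class Computed<A> {
      private cached!: A;
      private validatedAt = -1; // epoch at which the cache was last known good
      // Dependencies are explicit here; real libraries auto-track them.
      constructor(private deps: Atom<unknown>[], private compute: () => A) {}
      get(): A {
        const stale = this.validatedAt < 0 ||
          this.deps.some(d => d.lastChanged > this.validatedAt);
        if (stale) {
          this.cached = this.compute();
          this.validatedAt = epoch;
        }
        return this.cached;
      }
    }

    const a = new Atom(2);
    const doubledA = new Computed([a], () => a.get() * 2);
    doubledA.get(); // computes: 4
    doubledA.get(); // cached, no recompute
    a.set(3);       // epoch advances
    doubledA.get(); // recomputes: 6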
Similarly, Angular 1 allowed component-oriented development far before React, but people seem to think React invented it.
The problem with Angular 1 components (isolated-scope directives) was simply that not many understood them as a best practice until far too late, and the framework allowed all kinds of terrible designs like shared scope and multiple controllers per file, which was the more common way to develop. Performance was also poor.
I feel the same way about signals. Vue has been doing it for years (just not tied into render optimizations). I used a similar model in Angular 1 back in 2014 even, via watch chains, though it was much less performant. You can also do it via Redux by defining a special third argument that declares state dependencies (which we do here).
Nothing is really novel about a tree-like series of derived states. Interesting that social media is ablaze like it's something new.
Same here, I have been spreading MobX everywhere I have gone. There's usually some resistance, but once people get used to automatic dependency tracking, they never go back.
I've seen so many projects get bogged down in props hell, then gobs of context api and performance problems when too many things react at once.
Mobx has been solving these problems for a long time now!
I recently started ripping Mobx out of my app. Mobx solved exactly 1 problem for me, which was passing data horizontally (to siblings/cousins) and even to non-React components.
But now `useSyncExternalStore` solves that. No context needed. Can read and write state anywhere, even outside components. No "observe" or "track" shenanigans. Efficient updates (can listen to a subsection of the tree).
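For anyone who hasn't used it, the shape is small. A minimal sketch (store layout and names are mine, not from any particular codebase):

    import { useSyncExternalStore } from "react";

    // A tiny external store: plain state plus a set of change listeners.
    let state = { count: 0, user: { name: "anon" } };
    const listeners = new Set<() => void>();

    function setState(patch: Partial<typeof state>) {
      state = { ...state, ...patch };
      listeners.forEach(l => l());
    }

    function subscribe(listener: () => void) {
      listeners.add(listener);
      return () => { listeners.delete(listener); };
    }

    // A component subscribes to just the slice it reads; if the selected
    // snapshot is unchanged (e.g. only state.user moved), it won't re-render.
    function Counter() {
      const count = useSyncExternalStore(subscribe, () => state.count);
      return <button onClick={() => setState({ count: count + 1 })}>{count}</button>;
    }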
Building "signal graphs" because views subscribe to, err, signals is old hat for re-frame users, who have been benefiting from a clean and simple mechanism for reacting to data changes since forever.
Subscriptions grow from the leaf up to the root (which is a single app "db"), so computation is minimal. If something writes to the "db", only the relevant subscriptions are recomputed and only the views subscribing to those are "repainted". Works like magic.
https://elm-lang.org/news/farewell-to-frp
If you don't like what Solid is doing you may consider it a self-inflicted wound, nothing is forcing Solid to work that way.
I love signals in VueJS, and it doesn’t even preclude a redux-like/lite state-management library like Pinia.
https://vuejs.org/guide/extras/reactivity-in-depth.html#conn...
"patch languages" like PureData, Max/MSP/Jitter, QuartzComposer are much better interfaces to work with signals ... Or rather with signal processors.
Deleted Comment
- The Evolution of Signals in JavaScript, https://dev.to/this-is-learning/the-evolution-of-signals-in-...
- React vs Signals: 10 Years Later, https://dev.to/this-is-learning/react-vs-signals-10-years-la...
- Making the Case for Signals in JavaScript, https://dev.to/this-is-learning/making-the-case-for-signals-...
See also The Cost of Consistency in UI Frameworks, https://dev.to/this-is-learning/the-cost-of-consistency-in-u... which is slightly related to this.
Then I thought for a moment it was talking about a "signal" in the information-theoretic sense.
All I came away with was that they've invented a new name for event driven architectures and/or data flow programming.
I know naming is hard but we need to stop overloading terms.
Was confusing then and is still confusing now.
I'm not sure what you mean by "Rx" in this context.
My background is much more in systems (and indeed, in "signals" in the theoretical sense).
I still think the name "signal" here is quite bad, since a signal is the abstract concept of something that carries information.
I have stopped taking seriously people who rely on novelty to sell you their next training.
Preact's signal() as a concept is better - more pure. https://preactjs.com/guide/v10/signals/
Nope.