At least from my side, if I have to work on a laptop outside, I'll just find a good table, plug it into an outlet, and, you know, work on it with almost no change of body position.
Not to be dismissive, but I genuinely don't understand what the problem is with your laptop being plugged in when working from a coffee shop.
The only really annoying thing I've found with the P14s is the CrowdStrike junk killing battery life when it pins several cores at 100% for an hour. That never happened on macOS. These are corporate-managed devices I have no say in, and the Ubuntu flavor of the corporate malware is obviously far worse implemented in terms of efficiency and impact on battery life.
I recently built myself a 7970X Threadripper and it's quite good perf/$ even for a Threadripper. If you build a gaming-oriented 16c ryzen the perf/$ is ridiculously good.
No personal experience here with Frameworks, but I'm pretty sure Jon Blow had a modern Framework laptop he was ranting a bunch about on his coding live streams. I don't have the impression that Framework should be held as the optimal performing x86 laptop vendor.
Weirdly, the choice pool seems to have shrunk in the last few years.
It's made worse on the Strix Halo platform, because it's a performance-first design, so there are more resources for Chrome to take advantage of.
The closest browser to Safari that works on Linux is Falkon. Its compatibility is even worse than Safari's, so there are a lot of sites where you can't use it, but on the ones where you can, your battery usage can be an order of magnitude lower.
I recommend using Thorium instead of Chrome; it's better but it's still Chromium under the hood, so it doesn't save much power. I use it on pages that refuse to work on anything other than Chromium.
Chrome doesn't let you suspend tabs, and as far as I could find there aren't any plugins to do so; it just kills the process when there aren't enough resources and reloads the page when you return to it. Linux does have the ability to suspend processes, though, and you can save a lot of battery life if you suspend Chrome when you aren't using it.
I don't know of any GUI for it, although most window managers make it easy to assign a keyboard shortcut to a command. Whenever you aren't using Chrome but don't want to deal with closing it and re-opening it, run the following command (and ignore the name, it doesn't kill the process):
killall -STOP google-chrome
When you want to go back to using it, run:
killall -CONT google-chrome
This works for any application. RAM usage will remain the same while suspended, but the process won't draw power reading from or writing to RAM, and its CPU usage will drop to zero. The windows will remain open, and the window manager will handle them normally, but what's inside won't update, and clicks won't do anything until resumed.

Auto Tab Discard exists and works fine, but I'm not sure it's what people call "suspending tabs". Discarded tabs need to reload when you click them, and they genuinely free the memory they used (I watch my memory usage closely).
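The killall -STOP/-CONT trick above can also be sketched in Python. This is a toy helper, assuming Linux and a pgrep binary on PATH; the function and its name are made up for illustration:

```python
import os
import signal
import subprocess

def signal_all(name: str, sig: signal.Signals) -> list[int]:
    """Send sig to every process whose name matches exactly, like killall.
    Hypothetical helper; uses pgrep -x to resolve the name to PIDs."""
    out = subprocess.run(["pgrep", "-x", name], capture_output=True, text=True)
    pids = [int(p) for p in out.stdout.split()]
    for pid in pids:
        os.kill(pid, sig)
    return pids

def suspend(name: str) -> list[int]:
    return signal_all(name, signal.SIGSTOP)  # like: killall -STOP name

def resume(name: str) -> list[int]:
    return signal_all(name, signal.SIGCONT)  # like: killall -CONT name
```

Substitute "google-chrome" (or any process name) for the target; suspended processes show state "T" in /proc/PID/stat or in top.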
If it's tens of TB in a stream, then I've not personally stored that much data in a stream, but I don't see why it wouldn't handle it. Note that Jetstream has a maximum message size of 1MB (this is because Jetstream uses NATS for its client/server protocol, which has that limit), which was a real problem for one use case I had. Redpanda has essentially no upper limit.
Note that number of NATS servers isn't the same as the replication factor. You can have 3 servers and a replication factor of 2 if you want, which allows more flexibility. Both consumers and streams have their own replication factors.
The other option I have considered in the past is EMQX, which is a clustered MQTT system written in Erlang. It looks nice, but I've never used it in production, and it's one of those projects that nobody seems to be talking about, at least not in my part of the industry.
Do you have any other recommendations? The time is right for us and I'll soon start evaluating. I only have NATS Jetstream and MQTT on my radar so far.
Kafka I already used and rejected for the reasons above ("entire batch or nothing / later").
As for data, I meant tens of terabytes of traffic on busy days, sorry. Most of the time it's a few hundred gigs. (Our area is prone to spikes, and business hours matter a lot.) And again, that's total traffic. I don't think we'd ever have more than 10-30GB stored in our queue system. Our background workers aggressively work through the backlog and chew data into manageable (and much smaller) chunks 24/7.
And as one of the seniors I am extremely vigilant about payload sizes. I had to settle on JSON for now, but I push back, hard, on any and all extra data; anything and everything that can be loaded from the DB or even caches is delegated as such via various IDs. This also helps us with e.g. background jobs that are no longer relevant because a certain entity's state moved too far forward due to user interaction and the enriching job no longer needs to run; when you have only references in your message payload, this enables (and even forces) the background job to load data exactly at the time of its run and not assume a potentially outdated state.
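The reference-only payload idea can be sketched like this. All names are hypothetical, and the dict stands in for a real DB or cache lookup:

```python
import json

# Pretend database; in reality this would be a DB or cache query.
DB = {
    "order-1": {"status": "shipped"},
    "order-2": {"status": "pending"},
}

def enrich(payload: bytes) -> str:
    """Worker for a message that carries only a reference (an ID).
    State is loaded fresh at run time, so a message whose entity has
    already moved on is detected and the job is skipped."""
    msg = json.loads(payload)
    entity = DB[msg["order_id"]]     # loaded now, never trusted from the message
    if entity["status"] == "shipped":
        return "skipped"             # entity moved on; enrichment is pointless
    return "enriched"
```

The payload stays tiny (just an ID), and a stale message can never smuggle in an outdated copy of the entity.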
Anyhow, I got chatty. :)
Thank you. If you have other recommendations, I am willing to sacrifice a little weekend time to give them a cursory look. Again, the utmost priority is 100% durability (as much as that is even possible, of course), and mega ultra speed is not of the essence. We'll never have even 100 consumers per stream; I haven't ever seen more than 30 in our OTel tool dashboard.
EDIT: I should also say that our app does not have huge internal traffic; it's a lot (Python wrappers around AI / OCR / others is one group of examples) but not huge. As such, our priorities for a message queue are just "be super reliable and be able to handle an okay beating and never lose stuff" really. It's not like in e.g. finance where you might have dozens of Kafka clusters and workers that hand off data from one Kafka queue to another with a ton of processing in the meantime. We are very far from that.
NATS/Jetstream is amazing if it fits your use case and you don't need extreme scalability. As I said before, it offers a lot more flexibility. You can process a stream sequentially but also nack messages, so you get the best of both worlds. It has deduping (new messages for the same subject will mark older ones as deleted) and lots of other convenience goodies.
We've looked at Redis streams, but a few others and I are skeptical, as Redis is not known for good durability practices (talking about the past; I have no idea if they've pivoted well in recent years), and sadly none of us has any experience with MQTT -- though we've heard tons of praise for it.
But our story is: some tens of terabytes of data, no more than a few tens of millions of events / messages a day, aggressive folding of data in multiple relational DBs, and a very dynamic and DB-heavy UI (I will soon finish my Elixir<=>Rust SQLite3 wrapper, so we're likely going to start sharding the DB-intensive customer data to separate SQLite3 databases, and I'm looking forward to spearheading this effort; off-topic). For our needs NATS Jetstream sounds like exactly the right fit, though time will tell.
I still have the nagging feeling of missing out by still not having tried MQTT, though...
Core NATS is an ephemeral message broker. Clients tell the server which subjects they want messages about, producers publish, and NATS handles the routing. If nobody is listening, messages go nowhere. It's very nice for situations where lots of clients come and go. It's not reliable; it sheds messages when consumers get slow. No durability, so when a consumer disconnects, it will miss messages sent in its absence. But this means it's very lightweight. Subjects are just wildcard paths, so you can have billions of them, which means RPC is trivial: send out a message, tell the receiver to post its reply to a randomly generated subject, then listen on that subject for the answer.
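The subjects-as-wildcard-paths idea can be sketched with a toy matcher. This is just an illustration of the semantics (as I understand them), not the real client or server code:

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Toy NATS-style subject matching: subjects are dot-separated tokens,
    '*' matches exactly one token, '>' matches one or more trailing tokens."""
    ptoks, stoks = pattern.split("."), subject.split(".")
    for i, tok in enumerate(ptoks):
        if tok == ">":
            return len(stoks) > i        # '>' must cover at least one token
        if i >= len(stoks):
            return False                 # subject ran out of tokens
        if tok not in ("*", stoks[i]):
            return False                 # literal token mismatch
    return len(ptoks) == len(stoks)      # no trailing subject tokens left over
```

An RPC reply subject is then just another path the requester subscribes to before sending; no broker-side setup is needed.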
NATS organizes brokers into clusters, and clusters can form hub/spoke topologies where messages are routed between clusters by interest, so it's very scalable; if your cluster doesn't scale to the number of consumers, you can add another cluster that consumes the first cluster, and now you have two hubs/spokes. In short, NATS is a great "message router". You can build all sorts of semantics on top of it: RPC, cache invalidation channels, "actor" style processes, traditional pub/sub, logging, the sky is the limit.
Jetstream is a different technology that is built on NATS. With Jetstream, you can create streams, which are ordered sequences of messages. A stream is durable and can have settings like maximum retention by age and size. Streams are replicated, with each stream being a Raft group. Consumers follow from a position. In many ways it's like Kafka and Redpanda, but "on steroids", superficially similar but just a lot richer.
For example, Kafka is very strict about the topic being a sequence of messages that must be consumed exactly sequentially. If the client wants to subscribe to a subset of events, it must either filter client-side, or you have some intermediary that filters and writes to a topic that the consumer then consumes. With NATS, you can ask the server to filter.
Unlike Kafka, you can also nack messages; the server keeps track of what consumers have seen. Nacking means you lose ordering, as the nacked messages come back later. Jetstream also supports a Kafka-like strictly ordered mode. Unlike Kafka, clients can choose the routing behaviour, including worker style routing and deterministic partitioning.
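Why nacking loses ordering can be seen with a toy in-memory model (purely illustrative; real Jetstream redelivery is governed by ack waits and delivery policies, not a simple queue):

```python
from collections import deque

def consume_with_nack(messages, nack_once):
    """Toy model: messages are delivered in order, but a nacked message is
    redelivered later, after messages that were published after it."""
    pending = deque(messages)
    nacks = set(nack_once)
    delivered = []
    while pending:
        msg = pending.popleft()
        if msg in nacks:
            nacks.discard(msg)      # nack once, then accept on redelivery
            pending.append(msg)     # comes back behind newer messages
        else:
            delivered.append(msg)
    return delivered
```

Nacking message 2 of [1, 2, 3, 4] yields delivery order [1, 3, 4, 2]: the consumer sees everything, but no longer in stream order.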
Unlike Kafka's rigid networking model (consumers are assigned partitions, they consume the topic, and that's it), NATS lets you set up complex topologies where streams get gatewayed and replicated. For example, you can have streams in multiple regions, with replication, so that consumers only need to connect to the local region's hub.
While NATS/Jetstream has a lot of flexibility, I feel like they've compromised a bit on performance and scalability. Jetstream clusters don't scale to many servers (they recommend max 3, I think) and large numbers of consumers can make the server run really hot. I would also say that they made a mistake adopting nacking into the consuming model. The big simplification Kafka makes is that topics are strictly sequential, both for producing and consuming. This keeps the server simpler and forces the client to deal with unprocessable messages. Jetstream doesn't allow durable consumers to be strictly ordered; what the SDK calls an "ordered consumer" is just an ephemeral consumer. Furthermore, ephemeral consumers don't really exist. Every consumer will create server-side state. In our testing, we found that having more than a few thousand consumers is a really bad idea. (The newest SDK now offers a "direct fetch" API where you can consume a stream by position without registering a server-side consumer, but I've not yet tried it.)
Lastly, the mechanics of server replication and connectivity are rather mysterious, and it's hard to understand when something goes wrong. And with all the different concepts (leaf nodes, leaf clusters, replicas, mirrors, clusters, gateways, accounts, domains, and so on), it's not easy to work out the best way to design a topology. The Kafka network model, by comparison, is very simple and straightforward, even if it's a lot less flexible. With Kafka, you can still build hub/spoke topologies yourself by reading from topics and writing to other topics, and while it's something you need to set up yourself, it's less magical, and easier to control and understand.
Where I work, we have used NATS extensively with great success. We also adopted Jetstream for some applications, but we've soured on it a bit, for the above reasons, and now use Redpanda (which is Kafka-compatible) instead. I still think JS is a great fit for certain types of apps, but I would definitely evaluate the requirements carefully first. Jetstream is different enough that it's definitely not just a "better Kafka".
What are your impressions of Redpanda?
We're particularly interested in NATS' feature of working with individual messages and have been bitten by Kafka's "either process the entire batch or put it back for later processing", which doesn't work for our needs.
Interested if Redpanda is doing better than either.
1. The dm (device-mapper) layer already gives you CoW/snapshots for any filesystem you want, and has for more than a decade. Some implementations actually use it for clever trickery like updates, even. Anyone who has software requirements in this space (as distinct from "wants to yell on the internet about it") is very well served.
2. Compression seems silly in the modern world. Virtually everything is already compressed. To a first approximation, every byte in persistent storage anywhere in the world is in a lossy media format. And the ones that aren't are in some other cooked format. The only workloads where you see significant use of losslessly-compressible data are in situations (databases) where you have app-managed storage performance (and which see little value from filesystem choice), or ones (software building, data science, ML training) where there are lots of ephemeral intermediate files being produced. And again, those are usages where fancy filesystems are poorly deployed; you're going to throw it all away within hours to days anyway.
Filesystems are a solved problem. If ZFS disappeared from the world today... really who would even care? Only those of us still around trying to shout on the internet.
O_o
Apparently I've been living under a rock, can you please show us a link about this? I was just recently (casually) looking into bolting on ZFS/BTRFS-like partial snapshot features to simulate my own atomic distro, where I am able to freely roll back if an update goes bad. Think Linux's Timeshift with a little something extra.