The standard Unix regular expression implementations (grep, sed, ed, vi) are very tied to "lines" of text, and most of the time that is exactly what you want, but sometimes it gets in the way. When you start having trouble with your regex because it conflicts with line breaks, that is when the sam editor and its structural regular expressions get their time to shine. They would probably be great elsewhere too, but nobody else uses them.
That being said, I've recently been using comby (https://comby.dev/) in my workflow; it solves a similar problem, but understands the syntax of certain languages, which simplifies usage.
1) The Raspberry Pi 4 is THE cheapest 2 Gflops/W computer ever made, and probably the cheapest that will ever be made! The peak of energy/resources/lithography/architectures, with the velocity of money working against all of that, will most likely make it so.
2) You can scale the Raspberry cluster however you want: only power the nodes you need, it's modular, and if one breaks you still have a few left. The same goes for the SD cards, which BTW, while being so slow that a Raspberry 2 (2W!) can saturate them, are SURPRISINGLY sturdy (my original SanDisk cards (every other brand has been a complete scam) are in their 7th year of 99.999% uptime, the only downtime being when my power company cut the electricity for an hour).
3) The Raspberry cluster is smaller, cooler and silent (if passively cooled; it's the most powerful device that can run fully passively cooled at 100% CPU (7W) without getting too hot and wearing out early), and won't fail because of failing fans!
4) For battery backup there is nothing better, because beyond a total of 100W for 24 hours you start to hit the limits of what is practical to manage on an individual basis.
I post this picture every time: http://move.rupy.se/file/final_pi_2_4_hybrid.png (this is how you cool a Raspberry 2/4 hybrid cluster)
Can you expand on this one? I was curious why you think there probably will not be a cheaper one in the future with similar or better specs.
* A fanless Chromebook with decent screen for travel use
* An Intel NUC that is hooked to a big monitor, which is also the device I'm typing this on
* A beefy Ryzen desktop that sits in the corner of my balcony, which I usually connect to via ssh to perform all the heavy tasks
This way I get all the benefits of each computer, and the combined cost is still less than a so-called MacBook Pro :)
I adore Haskell, but my personal problems with it:
* Records! Lenses work as a fix, but they're a huge complexity increase. I love the recently accepted record syntax RFC though; that will make things a lot nicer.
* I feel like the plethora of (partially incompatible) extensions makes the language very complicated and messy. There is no single Haskell: each file can be GHC Haskell with OverloadedStrings, or GADTs, or ...
* Library ecosystem: often I didn't find libraries I needed. Or they were abandoned, or had no documentation whatsoever, or used some fancy dependency I didn't understand. Or all of the above...
* Complexity. I can deal with monads, but some parts of the ecosystem get much more type-theory heavy than that. Rust is close enough to common programming idioms that most of it can be understood fairly quickly
* Build system ( Cabal, Stackage, Nix packages, ... ? ), tooling (IDE support etc)
> F#
I admittedly haven't tried F# since .NET Core. I just remember it being very Windows-centric and closely tied to parts of the C# ecosystem, which raises concerns similar to the ones in my sibling comments about Java.
> if you need a c library, you can build a 'trusted' wrapper around it as easily in rust as with any other language.
Sure, but if that wrapper does not exist, you have to build it yourself. I can say from experience that writing an idiomatic, safe Rust wrapper for a C library is far from trivial, so you lose the "I don't have to worry about memory unsafety" property.
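To make that concrete, here's a minimal sketch of the pattern (wrapping libc's strlen purely as a stand-in for a real library; the `c_string_len` name is made up for the example): the safe function hides an `unsafe` call, and its soundness now rests on invariants the compiler can't check for you.

```rust
use std::ffi::{CString, NulError};
use std::os::raw::c_char;

extern "C" {
    // libc's strlen; libc is linked by default on most platforms
    fn strlen(s: *const c_char) -> usize;
}

/// "Safe" wrapper: the unsafety is contained here, but whether it is
/// actually sound depends on invariants (valid, NUL-terminated pointer
/// that outlives the call) that only exist in the C documentation.
pub fn c_string_len(s: &str) -> Result<usize, NulError> {
    let c = CString::new(s)?; // rejects interior NUL bytes
    // SAFETY: `c` is NUL-terminated and stays alive across the call.
    Ok(unsafe { strlen(c.as_ptr()) })
}

fn main() {
    assert_eq!(c_string_len("hello").unwrap(), 5);
}
```

And that's a single, well-behaved function; a real C API multiplies this reasoning by every ownership, threading, and lifetime rule its header only implies.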
Don't macros, especially proc macros these days in Rust, have the same effect? Personally I feel this is a tradeoff every language has to make: either you limit yourself to one special way of writing things, or you add some sort of ad-hoc system that enables rewriting syntax and, to a degree, even semantics.
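As a tiny illustration of what I mean (a declarative macro_rules! example rather than a proc macro, and the `kv!` name and its `key => value` syntax are made up for this sketch): the macro accepts syntax that isn't ordinary Rust and rewrites it into plain code before the type checker ever sees it; proc macros push this further by operating on arbitrary token streams.

```rust
use std::collections::HashMap;

// Hypothetical `kv!` macro: it accepts `key => value` pairs, which is not
// valid Rust expression syntax on its own, and expands them into ordinary
// HashMap inserts at compile time.
macro_rules! kv {
    ($($k:expr => $v:expr),* $(,)?) => {{
        let mut m = HashMap::new();
        $( m.insert($k, $v); )*
        m
    }};
}

fn main() {
    let m: HashMap<&str, i32> = kv! { "a" => 1, "b" => 2 };
    assert_eq!(m["b"], 2);
}
```

Handy, but every such macro is effectively a small private dialect the reader has to learn, which is exactly the tradeoff I'm talking about.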
> Idris 1 is implemented in Haskell, but that has little (if anything) to do with the difference.
But later they also go on to say:
> Idris 2 benefits from a robust, well-engineered and optimised run time system, by compiling to Chez Scheme.
I must say I'm slightly confused here. Yes, a rewrite also makes it possible to avoid legacy parts that might slow down the code, but it's equally possible that a new language and a new runtime enable optimizations that weren't possible before. The author did mention that Chez's profiling tools helped a lot during the rewrite. So I'm curious: is it really true that we can't attribute some part of the speedup to language differences?
I was also interested in the rationale for replacing Haskell with Scheme, but I couldn't find any reasoning behind it; can anyone shed some light on this?
There's something unique about the workload of ninja launching a bunch of clang processes that draws this out.
On my machine, a clean build of the llvm-project would consistently fail to complete, so that may be a reasonable workload to A/B test with if you're looking into this.
The user quoted above was running Gentoo builds pinned to specific P-cores to test various solutions, ultimately finding that the P-core limit was the only fix that yielded stability.