rowanG077 commented on Framework Raises DDR5 Memory Prices by 50% for DIY Laptops   phoronix.com/news/Framewo... · Posted by u/mikece
dijit · 2 days ago
perhaps the most hackernews take in this thread.

desalination isn’t just expensive, it’s existentially costly in terms of energy consumption, and I don’t see any Dyson spheres in production.

rowanG077 · 2 days ago
With modern desalination facilities it literally costs a fraction of a cent per liter. It's an inconvenience at worst in the modern world.

It costs approximately 3 kWh of energy to desalinate one cubic meter of water.
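(As a back-of-the-envelope check, a minimal Rust sketch of that arithmetic: the 3 kWh per cubic meter figure is the one quoted above, while the $0.10/kWh electricity price is an assumed illustrative value, not a sourced number, and covers energy cost only.)

    fn main() {
        let kwh_per_m3 = 3.0;    // energy to desalinate 1 m^3, figure quoted above
        let usd_per_kwh = 0.10;  // assumed electricity price, for illustration only
        let usd_per_m3 = kwh_per_m3 * usd_per_kwh;
        let cents_per_liter = usd_per_m3 * 100.0 / 1000.0; // 1 m^3 = 1000 liters
        // Prints roughly "$0.30 per m^3, 0.030 cents per liter" (energy cost only)
        println!("~${usd_per_m3:.2} per m^3, ~{cents_per_liter:.3} cents per liter");
    }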

rowanG077 commented on Framework Raises DDR5 Memory Prices by 50% for DIY Laptops   phoronix.com/news/Framewo... · Posted by u/mikece
iteria · 3 days ago
Fresh water is a finite resource. It replenishes extremely slowly in certain forms, like groundwater. Lakes and rivers can run dry if you pull too much from them; see Iran. AI data centers are making the problem of overuse worse. We were already pulling too much water in some areas, and with these data centers, some places that didn't have a problem are starting to.
rowanG077 · 3 days ago
Fresh water is not a finite resource. You can simply make more by taking sea water and pumping in energy. It's not cheap but it's doable.
rowanG077 commented on Framework Raises DDR5 Memory Prices by 50% for DIY Laptops   phoronix.com/news/Framewo... · Posted by u/mikece
walterbell · 3 days ago
Apple also raised iPad Pro prices when OLED screens were introduced.
rowanG077 · 3 days ago
That's a new product, not a price increase on the existing product, which is what's being discussed.
rowanG077 commented on Show HN: AI system 60x faster than ChatGPT – built by combat vet with no degree    · Posted by u/thebrokenway
rowanG077 · 4 days ago
Even if this is good stuff, I already see it as evil because they are trying to get patents on it.
rowanG077 commented on Rust in the kernel is no longer experimental   lwn.net/Articles/1049831/... · Posted by u/rascul
bayindirh · 6 days ago
Personally, I observe that Rust is being forced everywhere in the Linux ecosystem. One of my biggest concerns is uutils, mostly because of the permissive license it bears. In my opinion, the Linux kernel and its immediate userspace should be GPL-licensed to protect the OS.

I have a personal principle of not using LLVM-based languages (in equal parts because I don't like how the LLVM people behave toward GCC and because I support free software first and foremost), so I watch gccrs closely, and my personal ban on Rust will be lifted the day gccrs becomes an alternative compiler.

This brings me to the second biggest reservation about Rust. The language is moving too fast, without any apparent plan or maturing process. Things are labeled unstable, there's no spec, and apparently nobody is working on these very seriously, which you also noted.

I don't understand why people are hostile toward gccrs. Can you point me to some discussions?

> It feels somewhat like a corporate takeover, not something where good and benign technology is most important.

As I noted above, the whole Rust ecosystem feels like it's on a crusade, especially against C++. I write C++, I play with pointers a lot, and I understand the gotchas, as well as the team dynamics and how it's becoming harder to write good software with larger teams regardless of programming language. But the way Rust propels itself forward leaves a very bad taste in the mouth, and I personally don't like being forced into something. So, while gccrs will remove my personal ban, I'm not sure I'll take to the language enthusiastically. On the other hand, another language Rust people apparently hate, Go, ticks all the right boxes as a programming language. Yes, they made some mistakes and turned back from some of them at the last moment, but the whole ordeal looks tidier and better than Rust's.

In short, being able to borrow-check things is not a license to push people around like this, and they are building themselves a good countering force with all this enthusiasm they're pumping around.

Oh, I'll only thank them for making other programming languages improve much faster. Like how LLVM has stirred GCC devs into action and made GCC a much better compiler in no time.

rowanG077 · 6 days ago
The language is moving too fast? The language is moving extremely slowly imo, too slowly for a lot of features. C++ is moving faster at this point.
rowanG077 commented on Evidence from the One Laptop per Child program in rural Peru   nber.org/papers/w34495... · Posted by u/danso
rowanG077 · 8 days ago
I'm not sure why people here call it a failure. The children who got a laptop developed superior computer skills, seemingly without sacrificing any other capability. That seems like a great result; skilled computer use is a highly valuable skill.
rowanG077 commented on The state of Schleswig-Holstein is consistently relying on open source   heise.de/en/news/Goodbye-... · Posted by u/doener
CerryuDu · 9 days ago
The problem with this is that the decisionmakers fucked up 10-20 years ago, and now when those decisions are being righted, some poor public servant is paying the price.
rowanG077 · 9 days ago
And 10-20 years ago it would have also been a public servant paying the price. You are just salty it's now you. At least be happy your work is impacted for a noble cause.
rowanG077 commented on Bikeshedding, or why I want to build a laptop   geohot.github.io//blog/je... · Posted by u/cspags
chungy · 9 days ago
The blog is premised on the idea that Apple and MacBook are not doing fine.
rowanG077 · 9 days ago
The reason of course being the awful software, not the hardware options. He makes that abundantly clear in the text.
rowanG077 commented on Thoughts on Go vs. Rust vs. Zig   sinclairtarget.com/blog/2... · Posted by u/yurivish
kibwen · 11 days ago
> languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate

Sure. And in C and Zig, it's "trivial" to make a global mutable variable, it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.

Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs, and it's not even close (though obligatory shout out to Erlang).

rowanG077 · 11 days ago
This is really it to me. It's like saying, "look, people, it's so much easier to develop and build an airplane when you don't have to adhere to any rules." Which of course is true. But I don't want to fly in any of those airplanes, even if they are designed and built by the best and brightest on earth.
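(A minimal sketch of the guarantee being discussed: in Rust, a shared mutable global either has to go through a synchronization primitive such as a Mutex, or every access has to be marked unsafe, so the invariants that C or Zig leave the programmer to uphold manually are enforced by the compiler. The names below are illustrative only.)

    use std::sync::Mutex;
    use std::thread;

    // `static mut COUNTER: u64 = 0;` would compile, but every access would need
    // an `unsafe` block. Wrapping the global in a Mutex lets safe code mutate it
    // from many threads without data races.
    static COUNTER: Mutex<u64> = Mutex::new(0);

    fn main() {
        let handles: Vec<_> = (0..4)
            .map(|_| {
                thread::spawn(|| {
                    for _ in 0..1_000 {
                        *COUNTER.lock().unwrap() += 1;
                    }
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("total: {}", COUNTER.lock().unwrap()); // always 4000
    }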
rowanG077 commented on Zig's new plan for asynchronous programs   lwn.net/SubscriberLink/10... · Posted by u/messe
hansvm · 13 days ago
Making it dead simple to have different tokens is exactly the goal. A smattering of examples recently on my mind:

As a background, you might ask why you need different runtimes ever. Why not just make everything async and be done with it, especially if the language is able to hide that complexity?

1. In the context of a systems language that's not an option. You might be writing an OS, embedded code, a game with atypical performance demands requiring more care with the IO, some kernel-bypass shenanigan, etc. Even just selecting between a few builtin choices (like single-threaded async vs multi-threaded async vs single-threaded sync) doesn't provide enough flexibility for the range of programs you're trying to allow a user to write.

2. Similarly, even initializing a truly arbitrary IO effect once at compile-time doesn't always suffice. Maybe you normally want a multi-threaded solution but need more care with respect to concurrency in some critical section and need to swap in a different IO. Maybe you normally get to interact with the normal internet but have a mode/section/interface/etc where you need to send messages through stranger networking conditions (20s ping, 99% packet loss, 0.1kbps upload on the far side, custom hardware, etc). Maybe some part of your application needs bounded latency and is fine dropping packets but some other part needs high throughput and no dropped packets at any latency cost. Maybe your disk hardware is such that it makes sense for networking to be async and disk to be sync. And so on. You can potentially work around that in a world with a single IO implementation if you can hack around it with different compilation units or something, but it gets complicated.

Part of the answer then is that you need (or really want) something equivalent to different IO runtimes, hot-swappable for each function call. I gave some high-level ideas as to why that might be the case, but high-level observations often don't resonate, so let's look at a concrete case where `await` is less ergonomic:

1. Take something like TLS as an example (stdlib or 3rd-party, doesn't really matter). The handshake code is complicated, so a normal implementation calls into an IO abstraction layer and physically does reads and writes (as opposed to, e.g., a pure state-machine implementation which returns some metadata about which action to perform next -- I hacked together a terrible version of that at one point [0] if you want to see what I mean). What if you want to run it on an embedded device? If it were written with async it would likely have enough other baggage that it wouldn't fit or otherwise wouldn't work. What if you want to hide your transmission in other data to sneak it past prying eyes (steganography, nowadays that's relatively easy to do via LLMs interestingly enough, and you can embed arbitrary data in messages which are human-readable and purport to discuss completely other things without exposing hi/lo-bit patterns or other such things that normally break steganography)? Then the kernel socket abstraction doesn't work at all, and "just using await" doesn't fix the problem. Basically, any place you want to use that library (and, arguably, that's the sort of code where you should absolutely use a library rather than rolling it yourself), if the implementer had a "just use await" mentality then you're SOL if you need to use it in literally any other context.

I was going to write more concrete cases, but this comment is getting to be too long. The general observation is that "just use await" hinders code re-use. If you're writing code for your own consumption and also never need those other uses then it's a non-issue, but with a clever choice of abstraction it _might_ be possible (old Zig had a solution that didn't quite hit the mark IMO, and time will tell if this one is good enough, but I'm optimistic) to enable the IO code people naturally write to be appropriately generic by default and thus empower future developers via a more composable set of primitives.

They really nailed that with the allocator interface, and if this works then my only real concern is a generic "what next" -- it's pushing toward an effect system, but integrating those with a systems language is mostly an unsolved problem, and adding a 3rd, 4th, etc explicit parameter to nearly every function is going to get unwieldy in a hurry (back-of-the-envelope idea I've had stewing if I ever write a whole "major" language is to basically do what Zig currently does and pack all those "effects" into a single effect parameter that you pass into each function, still allowing you to customize each function call, still allowing you to inspect which functions require allocators or whatever, but making the experience more pleasant if you have a little syntactic sugar around sub-effects and if the parent type class is comptime-known).

[0] https://github.com/hmusgrave/rayloop/blob/d5e797967c42b9c891...
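(A rough sketch, in Rust rather than Zig, of the "write the protocol once, hot-swap the transport" idea described above: the handshake is written against an abstract reader/writer, and the caller decides whether the bytes travel over a real socket, an in-memory buffer for tests, or something stranger. The names and the toy protocol are made up for illustration; this is not the actual Zig Io API.)

    use std::io::{Cursor, Read, Write};

    // Protocol code written once against abstract IO; it neither knows nor cares
    // what the concrete transport is.
    fn handshake<T: Read + Write>(io: &mut T) -> std::io::Result<[u8; 4]> {
        io.write_all(b"HELO")?;
        let mut reply = [0u8; 4];
        io.read_exact(&mut reply)?;
        Ok(reply)
    }

    fn main() -> std::io::Result<()> {
        // Swap in an in-memory transport: the first 4 bytes get overwritten by
        // our outgoing "HELO", the next 4 are the canned reply we read back.
        // A TcpStream (or any other Read + Write type) would work unchanged.
        let mut fake_peer = Cursor::new(b"....ACK!".to_vec());
        let reply = handshake(&mut fake_peer)?;
        println!("peer said: {}", String::from_utf8_lossy(&reply));
        Ok(())
    }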

rowanG077 · 13 days ago
The case I'm making is not about whether different Io contexts are good. The point I'm making is that mixing them is almost never what is needed. I have seen valid cases that do it, but it's not on the "used all the time" path. So I'm more than happy with the better ergonomics of traditional async/await in the style of Rust, which sacrifices super easy runtime switching, because the former is used thousands of times more.
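(For contrast, a minimal sketch of the Rust-style ergonomics referred to here, assuming the third-party tokio crate as the runtime, which is one common choice and not something named in this thread: the runtime is picked once at the top, and every call site just uses .await instead of threading an Io parameter through each function.)

    // Sketch only; assumes the `tokio` crate with its networking features enabled.
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpStream;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        // The runtime choice lives in the attribute above; the body just awaits.
        let mut stream = TcpStream::connect("example.com:80").await?;
        stream.write_all(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n").await?;
        let mut body = Vec::new();
        stream.read_to_end(&mut body).await?;
        println!("got {} bytes", body.len());
        Ok(())
    }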

u/rowanG077

Karma: 3279 · Cake day: January 4, 2019