Readit News
jltsiren commented on The time I didn't meet Jeffrey Epstein   scottaaronson.blog/?p=953... · Posted by u/pfdietz
gscott · 3 days ago
People seem to forget how many companies Bill Gates put out of business by using their designs. It takes years to sue and win damages, minus lawyer fees. Then he tried to whitewash his reputation by giving the money away.
jltsiren · 3 days ago
I think it's the opposite. People remember how Bill Gates got rich. They remember that the damage he caused mostly affected capitalists and professionals in developed countries. His businesses mostly didn't abuse labor in developing countries. He didn't cause that much environmental damage. He didn't undermine democracy and society all that much.

People remember that Bill Gates played the game and won, and the damage he caused was mostly limited to the economic sphere and to other people playing the same game. That's why they are willing to give Gates a chance to redeem himself by using his money for good.

jltsiren commented on 1 kilobyte is precisely 1000 bytes?   waspdev.com/articles/2026... · Posted by u/surprisetalk
fsckboy · 6 days ago
Musicians use numbering systems that are actually far more confused than anything discussed here. How many notes are in an OCTave? "Do re mi fa so la ti do" is eight, but that last do is part of the next octave, so an octave is 7 notes. (If we count transitions instead, it's the same thing: starting with the first do as zero, re is 1, ..., and we again get 7.)

The same confusion, and more, is engendered when talking about "fifths" etc.

jltsiren · 6 days ago
You can blame the Romans for that, as they practiced inclusive counting. Their market days, which occurred once every 8 days, were called nundinae, because the next market day was the ninth day counting from the previous one. (And by the same logic, Jesus rose from the dead on the third day.)
jltsiren commented on 1 kilobyte is precisely 1000 bytes?   waspdev.com/articles/2026... · Posted by u/surprisetalk
pif · 6 days ago
I understand the usual meaning, but I use the correct meaning when precision is required.
jltsiren · 6 days ago
When precision is required, you either use kibibytes or define your kilobytes explicitly. Otherwise there is a real risk that the other party does not share your understanding of what a kilobyte means in that context, and then the numbers you use are good to at most one significant figure.
jltsiren commented on Data Processing Benchmark Featuring Rust, Go, Swift, Zig, Julia etc.   github.com/zupat/related_... · Posted by u/behnamoh
pron · 7 days ago
What you're saying may (sometimes) be true, but that's not why Java's performance is hard to beat, especially as programs evolve (I was programming in C and C++ since before Java even existed).

In a low-level language, you pay a higher performance cost for a more general (abstract) construct, e.g. static vs. dynamic dispatch, or the Box/Rc/Arc progression in Rust. If a certain subroutine or object requires the more general access even once, you pay the higher price almost everywhere. In Java, the situation is the opposite: you use a more general construct, and the compiler picks an appropriate implementation per use site. E.g. dispatch is always logically dynamic, but if at a specific use site the compiler sees that the target is known, the call will be inlined (C++ compilers sometimes do that too, but not nearly to the same extent, because a JIT can perform speculative optimisations without proving they're correct); if a specific `new Integer...` doesn't escape, it will be "allocated" in a register, and if it does escape, it will be allocated on the heap.

The problem with Java's approach is that optimisations aren't guaranteed, and sometimes an optimisation can be missed. But on average they work really well.

The problem with a low-level language is that over time, as the program evolves and features (and maintainers) are added, things tend to go in one direction: more generality. So over time, the low-level program's performance degrades and/or you have to rethink and rearchitect to get good performance back.

As to memory locality, there's no issue with Java's approach, only a missing feature: flattening objects into arrays. This feature is now being added (also in a general way: a class can declare that it doesn't depend on identity, and the compiler then transparently decides when to flatten it and when to box it).

Anyway, this is why it's hard, even for experts, to match Java's performance without significantly higher effort that isn't a one-time cost but carries over (and in fact grows) across the software's lifetime. It can be manageable and maybe worthwhile for smaller programs, but cost, performance, or both suffer more and more with bigger programs as time goes on.

jltsiren · 7 days ago
From my perspective, the problem with Java's approach is memory, not computation. For example, low-level languages treat types as convenient lies you can choose to ignore at your own peril. If it's more convenient to treat your objects as arrays of bytes/integers (maybe to make certain forms of serialization faster), or the other way around (maybe for direct access to data in a memory-mapped file), you can choose to do that. Java tends to make solutions like that harder.

Java's performance may be hard to beat in the same task. But with low-level languages, you can often beat it by doing something else due to having fewer constraints and more control over the environment.

jltsiren commented on Leaked chats expose the daily life of a scam compound's enslaved workforce   wired.com/story/the-red-b... · Posted by u/smurda
mapt · 7 days ago
Slavery was replaced by wage labor because it was more productive in the long run - that's part of the economists' founding narrative. But I think they tend not to emphasize that it was also simply a lot more flexible for a business in a competitive market to rent labor than to own it, ceteris paribus.

Quasi-slave status persisted in many situations for a long time, being a local maximum for various management situations. Penal slaves in the postwar American South were in many cases treated worse than their chattel-slave parents and grandparents, partially because they were rented out to managers who had not paid for them and thus had no stake in their survival.

jltsiren · 7 days ago
Slavery effectively disappeared in most of Christian Europe towards the end of the Middle Ages, because the Church opposed keeping Christian slaves. (Similarly, Islamic Europe had banned Muslim slaves.) As Christianity spread, slaves were no longer conveniently available, and society had to adapt.

In densely populated areas, that meant systems like serfdom. Agricultural land was a scarce resource mostly owned by the elite. Most peasants were nominally free but tied to the land, with obligations towards whoever owned the land. Peasants farmed land owned by the local lord and paid rent with labor. And if the lord sold the land, the peasants and their obligations went with it.

jltsiren commented on Data Processing Benchmark Featuring Rust, Go, Swift, Zig, Julia etc.   github.com/zupat/related_... · Posted by u/behnamoh
pron · 8 days ago
> it's probably because you are trying to write Java in C++ or Rust

Well, sure. In principle, we know that for every Java program there exists a C++ program that performs at least as well, because HotSpot is such a program (i.e. the Java program itself can be seen as a C++ program with some data as input). The question is: can you match Java's performance without significantly increasing the cost of development, and especially of evolution, in a way that makes the tradeoff worthwhile? That is quite hard to do, and it gets harder the bigger the program gets.

> I was not familiar with the term "object flattening", but apparently it just means storing data by value inside a struct. But data layout is exactly the thing you should be thinking about when you are trying to write performant code.

Of course, but that's why Java is getting flattened objects.

> As a first approximation, performance means taking advantage of throughput and avoiding latency, and low-level languages give you more tools for that

Only at the margins. These benefits are small and they're getting smaller. More significant performance benefits can only be had if virtually all objects in the program have very regular lifetimes - in other words, can be allocated in arenas - which is why I think it's Zig that's particularly suited to squeezing out the last drops of performance that are still left on the table.

Other than that, there's not much left to gain in performance (at least after Java gets flattened objects), which is why the use of low-level languages has been shrinking for a couple of decades now and continues to shrink. Perhaps that will change when AI agents can actually code everything, but then they might as well be programming in machine code.

What low-level languages really give you through better hardware control is not performance but the ability to target very restricted environments without much memory (one of Java's greatest performance tricks is trading RAM for CPU savings on memory management), assuming you're willing to put in the effort. For the same reason, they're also useful for things that are supposed to sit in the background, such as kernels and drivers.

jltsiren · 7 days ago
> The question is can you match Java's performance without significantly increasing the cost of development and especially evolution in a way that makes the tradeoff worthwhile?

This question is mostly about the person and their way of thinking.

If you have a system optimized for frequent memory allocations, it encourages you to think in terms of small independently allocated objects. Repeat that for a decade or two, and it shapes you as a person.

If you, on the other hand, have a system that always exposes the raw bytes underlying the abstractions, it encourages you to consider the arrays of raw data you are manipulating. Repeat that long enough, and it shapes you as a person.

There are some performance gains from the latter approach. The gains are effectively free, if the approach is natural for you and appropriate to the problem at hand. Because you are processing arrays of data instead of chasing pointers, you benefit from memory locality. And because you are storing fewer pointers and have less memory management overhead, your working set is smaller.

jltsiren commented on Data Processing Benchmark Featuring Rust, Go, Swift, Zig, Julia etc.   github.com/zupat/related_... · Posted by u/behnamoh
pron · 8 days ago
> Is there something they are doing wrong?

Yes. The most common issues are heap misconfiguration (which matters more in Java than any compiler configuration does in other languages) and benchmarks that don't simulate realistic workloads in terms of both memory usage and concurrency. Another big issue is that the effort put into the programs is not the same. Low-level languages do allow you to get better performance than Java if you put in significant extra work. Java aims to be "the fastest" for a "normal" amount of effort, at the expense of some control that could translate to better performance in exchange for significantly more work, both at initial development time and especially during evolution/maintenance.

E.g. I know of a project at one of the world's top 5 software companies where they wanted to migrate a real Java program to C++ or Rust to get better performance (it was probably Rust, because there are some people out there who really want to try Rust). Unsurprisingly, they got significantly worse performance (probably because low-level languages are not good at memory management when concurrency is at play, or at concurrency in general). But they wanted the experiment to be a success, so they put in a tonne of effort - I'm talking many months - hand-optimising the code, and in the end they managed to match Java's performance or even exceed it by a bit (but admitted it was ultimately wasted effort).

If the performance of your Java program doesn't more-or-less match or even exceed the performance of a C++ (or other low level language) program then the cause is one of: 1. you've spent more effort optimising the other program, 2. you've misconfigured the Java program (probably a bad heap-size setting), or 3. the program relies on object flattening, which means the Java program will suffer from costly cache misses (until Valhalla arrives, which is expected to be very soon).

jltsiren · 8 days ago
In my experience, if your C++ or Rust code does not perform as well as Java, it's probably because you are trying to write Java in C++ or Rust. Java can handle a large number of small heap-allocated objects shared between threads really well. You can't reasonably expect to match its performance on such workloads with the rudimentary tools provided by the C++ or Rust standard library. If you want performance, you have to structure the C++/Rust program in a fundamentally different way.

I was not familiar with the term "object flattening", but apparently it just means storing data by value inside a struct. But data layout is exactly the thing you should be thinking about when you are trying to write performant code. As a first approximation, performance means taking advantage of throughput and avoiding latency, and low-level languages give you more tools for that. If you get the layout right, efficient code should be easy to write. Optimization is sometimes necessary, but it's often not very cost-effective, and it can't save you from poor design.

jltsiren commented on Prism   openai.com/index/introduc... · Posted by u/meetpateltech
i2km · 12 days ago
This is going to be the concrete block which finally breaks the back of the academic peer review system, i.e. it's going to be a DDoS attack on a system which didn't even handle the load before LLMs.

Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

jltsiren · 12 days ago
Or it makes gatekeepers even more important than before. Every submission to a journal will be desk-rejected, unless it is vouched for by someone one of the editors trusts. And people won't even look at a new paper, unless it's vouched for by someone / published in a venue they trust.
jltsiren commented on The bachelor tax – what it costs in taxes to be single   bachelor-tax.vercel.app/... · Posted by u/wkaisertexas
triceratops · 13 days ago
> Why is taxing households together the correct thing

Hypothetically, if the household splits up due to a divorce, its assets are divided 50:50 (this varies by jurisdiction). Usually (again depending on the jurisdiction), the lower-earning spouse also gets alimony to even out the difference in income resulting from the new situation, at least for a few years.

Clearly then the state believes assets owned and income earned by either one of the couple belong equally to both (something I agree with personally: it's called a partnership). If that's the case, how could it be wrong to tax the household as a single entity?

jltsiren · 12 days ago
The fundamental question is whether the primary unit of society is the household or the individual. If you assume that society consists of individuals, people should be taxed individually, spouses should be allowed to choose in advance how their assets would be divided in a divorce, and alimony should only be paid to support underage children.
jltsiren commented on India and EU announce landmark trade deal   bbc.com/news/articles/crr... · Posted by u/Palmik
fooker · 13 days ago
Sure, if they want to pay decent salaries.

But no, you can make 3-4x in the US. That’s not an exaggeration. And before someone says ‘free healthcare’, big-tech employers in the US provide pretty nice insurance for employees that caps maximum out of pocket expenses to about a week of your salary.

EU (except Zurich and London) tech salaries have sort of stagnated to the point that you make about the same in Bangalore and spend significantly more.

jltsiren · 13 days ago
Those "decent salaries" have caused a lot of trouble in the US. They are probably not that good for the society, even if they attract foreign talent.

There is not much difference in labor's share of GDP between the US and the EU. People who work for a living get a similar share of the value they create in both blocs on average (maybe a bit less in the US), but it's less evenly distributed in the US.

The top 10% of earners are now responsible for ~50% of consumer spending. That doesn't mean billionaires and capitalists, but upper-middle-class professionals and other high earners. The economy is great on average, but most people don't feel it.

u/jltsiren
Karma: 5422 · Cake day: May 14, 2017