the same and even more confusion is engendered when talking about "fifths" etc.
In a low-level language, you pay a higher performance cost for a more general (abstract) construct, e.g. static vs. dynamic dispatch, or the Box/Rc/Arc progression in Rust. If a certain subroutine or object requires the more general access even once, you pay the higher price almost everywhere. In Java, the situation is reversed: you use the more general construct, and the compiler picks an appropriate implementation per use site. E.g. dispatch is always logically dynamic, but if at a specific use site the compiler sees that the target is known, the call will be inlined (C++ compilers sometimes do that too, but not nearly to the same extent, because a JIT can perform speculative optimisations without proving they're correct); if a specific `new Integer...` doesn't escape, it will be "allocated" in a register, and if it does escape it will be allocated on the heap.
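Here's a minimal Java sketch of the two optimisations mentioned (the class and method names are made up for illustration): a logically dynamic call site the JIT can speculatively devirtualise, and a non-escaping allocation that escape analysis can keep off the heap:

```java
interface Shape { double area(); }

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class HotLoop {
    // s.area() is logically dynamic dispatch, but if the JIT observes that
    // only Circle ever reaches this call site, it can speculatively
    // devirtualise and inline the call (deoptimising if another Shape
    // implementation later shows up here).
    static double sumAreas(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    // The boxed Integer never escapes this method, so escape analysis can
    // "allocate" it in a register instead of on the heap.
    static int nonEscaping(int x) {
        Integer boxed = Integer.valueOf(x);
        return boxed + 1;
    }
}
```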
The problem with Java's approach is that optimisations aren't guaranteed, and sometimes an optimisation can be missed. But on average they work really well.
The problem with a low-level language is that over time, as the program evolves and features (and maintainers) are added, things tend to go in one direction: more generality. So over time, the low-level program's performance degrades and/or you have to rethink and rearchitect to get good performance back.
As to memory locality, there's no issue with Java's approach, only a missing feature: flattening objects into arrays. This feature is now being added (also in a general way: a class can declare that it doesn't depend on identity, and the compiler then transparently decides when to flatten it and when to box it).
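Roughly, under Project Valhalla's draft syntax (JEP 401, not yet final and subject to change), that declaration is expected to look something like this; `Point` is a made-up example:

```java
// A value class gives up object identity (no identity-based ==, no
// synchronising on instances); in exchange, the JVM is free to flatten
// its fields directly into containers instead of storing references.
value class Point {
    private final double x;
    private final double y;
    Point(double x, double y) { this.x = x; this.y = y; }
    double x() { return x; }
    double y() { return y; }
}

class Demo {
    // With flattening, this can be laid out as one contiguous
    // [x0, y0, x1, y1, ...] block rather than an array of pointers
    // to separately allocated heap objects.
    Point[] points = new Point[1_000_000];
}
```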
Anyway, this is why it's hard, even for experts, to match Java's performance without a significantly higher effort - one that isn't a one-time cost but carries over (in fact, gets worse) across the software's lifetime. It can be manageable and maybe worthwhile for smaller programs, but cost, performance, or both suffer more and more as the program grows and ages.
Java's performance may be hard to beat in the same task. But with low-level languages, you can often beat it by doing something else due to having fewer constraints and more control over the environment.
Quasi-slave status persisted in many situations for a long time, being a local maximum for various management situations. Penal slaves in the postwar American South were in many cases treated worse than their chattel-slave parents and grandparents, partly because they were rented out by owners who hadn't paid for them to managers who, as renters, had no stake in their survival.
In densely populated areas, that meant systems like serfdom. Agricultural land was a scarce resource mostly owned by the elite. Most peasants were nominally free but tied to the land, with obligations towards whoever owned the land. Peasants farmed land owned by the local lord and paid rent with labor. And if the lord sold the land, the peasants and their obligations went with it.
Well, sure. In principle, we know that for every Java program there exists a C++ program that performs at least as well, because HotSpot is such a program (i.e. the Java program itself can be seen as a C++ program with some data as input). The question is whether you can match Java's performance without significantly increasing the cost of development, and especially of evolution, in a way that makes the tradeoff worthwhile. That is quite hard to do, and it gets harder and harder the bigger the program gets.
> I was not familiar with the term "object flattening", but apparently it just means storing data by value inside a struct. But data layout is exactly the thing you should be thinking about when you are trying to write performant code.
Of course, but that's why Java is getting flattened objects.
> As a first approximation, performance means taking advantage of throughput and avoiding latency, and low-level languages give you more tools for that
Only at the margins. These benefits are small and they're getting smaller. More significant performance benefits can only be had if virtually all objects in the program have very regular lifetimes - in other words, can be allocated in arenas - which is why I think it's Zig that's particularly suited to squeezing out the last drops of performance that are still left on the table.
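For what it's worth, Java itself now exposes arena-style lifetimes for off-heap memory through the Foreign Function & Memory API (java.lang.foreign, final since JDK 22); a minimal sketch:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

class ArenaDemo {
    static void process() {
        // Every allocation made from this arena shares one lifetime:
        // it's all freed at once when the try block exits - the "very
        // regular lifetimes" case described above.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment buf = arena.allocate(ValueLayout.JAVA_DOUBLE, 1024);
            for (long i = 0; i < 1024; i++) {
                buf.setAtIndex(ValueLayout.JAVA_DOUBLE, i, i * 0.5);
            }
        } // everything allocated from the arena is deallocated here
    }
}
```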
Other than that, there's not much left to gain in performance (at least once Java gets flattened objects), which is why the use of low-level languages has been shrinking for a couple of decades now and continues to shrink. Perhaps that would change when AI agents can actually code everything, but then they might as well be programming in machine code.
What low-level languages really give you through better hardware control is not performance, but the ability to target very restricted environments without much memory (one of Java's greatest performance tricks is spending RAM to save CPU on memory management), assuming you're willing to put in the effort. For the same reason, they're also useful for things that are supposed to sit in the background, such as kernels and drivers.
This question is mostly about the person and their way of thinking.
If you have a system optimized for frequent memory allocations, it encourages you to think in terms of small independently allocated objects. Repeat that for a decade or two, and it shapes you as a person.
If you, on the other hand, have a system that always exposes the raw bytes underlying the abstractions, it encourages you to consider the arrays of raw data you are manipulating. Repeat that long enough, and it shapes you as a person.
There are some performance gains from the latter approach. The gains are effectively free, if the approach is natural for you and appropriate to the problem at hand. Because you are processing arrays of data instead of chasing pointers, you benefit from memory locality. And because you are storing fewer pointers and have less memory management overhead, your working set is smaller.
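A hypothetical Java illustration of the two mindsets (the names are made up):

```java
class Particles {
    // "Small objects" mindset: an array of references, each element a
    // separate heap allocation; iteration chases a pointer per element.
    static final class Particle { double x, v; }
    Particle[] asObjects = new Particle[1_000_000];

    // "Raw arrays" mindset: contiguous primitive arrays; iteration
    // streams through cache lines with no indirection and no
    // per-element object header.
    double[] xs = new double[1_000_000];
    double[] vs = new double[1_000_000];

    Particles() {
        for (int i = 0; i < asObjects.length; i++) asObjects[i] = new Particle();
    }

    void stepObjects(double dt) {
        for (Particle p : asObjects) p.x += p.v * dt; // one dereference per element
    }

    void stepArrays(double dt) {
        for (int i = 0; i < xs.length; i++) xs[i] += vs[i] * dt; // sequential access
    }
}
```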
Yes. The most common issues are heap misconfiguration (which matters more in Java than any compiler configuration does in other languages) and benchmarks that don't simulate realistic workloads in terms of both memory usage and concurrency. Another big issue is that the effort put into the programs is not the same. Low-level languages do allow you to get better performance than Java if you put in significant extra work. Java aims to be "the fastest" for a "normal" amount of effort, at the expense of giving up some control that could translate to better performance in exchange for significantly more work, not just at initial development time but especially during evolution/maintenance.
E.g. I know of a project at one of the world's top-5 software companies where they wanted to migrate a real Java program to C++ or Rust to get better performance (it was probably Rust because there are some people out there who really want to try Rust). Unsurprisingly, they got significantly worse performance (probably because low-level languages are not good at memory management when concurrency is at play, or at concurrency in general). But they wanted the experiment to be a success, so they put in a tonne of effort - I'm talking many months - hand-optimising the code, and in the end they managed to match Java's performance or even exceed it by a bit (but admitted it was ultimately wasted effort).
If the performance of your Java program doesn't more-or-less match or even exceed the performance of a C++ (or other low-level language) program, then the cause is one of: 1. you've spent more effort optimising the other program, 2. you've misconfigured the Java program (probably a bad heap-size setting), or 3. the program relies on object flattening, which means the Java program will suffer from costly cache misses (until Valhalla arrives, which is expected to be very soon).
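To make the heap-size point concrete, this is the kind of configuration I mean: pin the heap explicitly rather than inheriting machine-dependent defaults, so runs are comparable. The sizes and jar name below are hypothetical placeholders, not recommendations:

```
# -Xms/-Xmx fix the initial and maximum heap size;
# -XX:+UseG1GC picks the collector explicitly instead of by ergonomics.
java -Xms8g -Xmx8g -XX:+UseG1GC -jar benchmark.jar
```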
I was not familiar with the term "object flattening", but apparently it just means storing data by value inside a struct. But data layout is exactly the thing you should be thinking about when you are trying to write performant code. As a first approximation, performance means taking advantage of throughput and avoiding latency, and low-level languages give you more tools for that. If you get the layout right, efficient code should be easy to write. Optimization is sometimes necessary, but it's often not very cost-effective, and it can't save you from poor design.
Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...
Hypothetically, if the household splits up due to a divorce, its assets are divided 50:50 (this varies by jurisdiction). Usually (again depending on the jurisdiction) the lower-earning spouse also gets alimony to even up the difference in income resulting from the new situation, at least for a few years.
Clearly then the state believes assets owned and income earned by either one of the couple belong equally to both (something I agree with personally: it's called a partnership). If that's the case, how could it be wrong to tax the household as a single entity?
But no, you can make 3-4x in the US. That’s not an exaggeration. And before someone says ‘free healthcare’, big-tech employers in the US provide pretty nice insurance for employees that caps maximum out-of-pocket expenses at about a week of your salary.
European (except Zurich and London) tech salaries have sort of stagnated to the point that you make about the same as in Bangalore, and spend significantly more.
There is not much difference in labor share of GDP between the US and the EU. People who work for a living get a similar share of the value they create in both blocs on average (maybe a bit less in the US), but it's less evenly distributed in the US.
The top 10% of earners are now responsible for ~50% of consumer spending. That doesn't mean billionaires and capitalists, but upper-middle-class professionals and other high earners. The economy is doing great on average, but most people don't feel it.
People remember that Bill Gates played the game and won, and the damage he caused was mostly limited to the economic sphere and to other people playing the same game. That's why they are willing to give Gates a chance to redeem himself by using his money for good.