The literature on this is too hard to summarize in a post, but basically it turns into an empirical-scientific question of making predictions about model features and testing those predictions scientifically.
There are infinite things I don’t understand, some because I’m too young, some too old, but mostly just because I haven’t yet gone down that path in life. For what it’s worth, I was born in 1982.
Imagine this guy's post if you substituted "too old" with "black" or "female" (or "male"). It would be cut down quickly, and yet here we're expected to laugh along.
What I hate about ageism most of all is that it makes it impossible to have any kind of discussion about the real merits and demerits of things whenever there's some kind of new-versus-established dynamic to them. In certain circles, there seems to be a false, pervasive assumption that what's new and popular among the young is better, and that uptake is just inhibited by creaky old folks; in other circles there seems to be an assumption that what's old and established among the older crowd is that way because it's superior.
The reality is that some established products are established because they are so great; and other products are great because they address limitations of existing products. But once you bring age of critics or advocates into the mix, it's all over because someone starts slyly looking at their pals over their shoulder and dismissing the discourse as due to youth or age.
I've been on both sides of this, as someone the same age as the author, and it's infuriating. There are products my generation grew up with that I never adopted because of concerns about them, and now it's the young, trendy thing to do to abandon them. There are new products that are overhyped imho because they solve problems that never really existed, but the wheel gets reinvented anyway because of the constant need for people to brand themselves as innovators. On the other hand, there are new products that finally exist that I wish everyone would take up, but people don't, because of old products that should never have become as popular as they did, or because of the vagaries of network effects, fads, and so on.
So this person doesn't get Facebook Stories or whatever the hell it is. Fine. Is there anything wrong with that? No. Can't we talk about that? Why does it have to become about age, even if he's doing it through self-deprecating (humblebragging?) humor?
More competition is required, which to my mind suggests introducing public pharmaceuticals, but also deregulating drugs in general.
It always comes from one direction: engineering.
Benchmarks have multiple audiences and multiple uses. What serves one customer (perhaps a microarchitect tuning a pipeline) does not serve another (a company that has to choose a particular component for a product), nor a third (a professor looking at historical trends).
Of course if you try to turn a screw with a hammer it's not going to work, so choose the right benchmark for your analysis.
My overall sense is that there's been a pullback from general benchmarking compared to, say, 15 years ago, and it's unfortunate, because it leaves benchmarking to the developers of languages, compilers, and whatnot. That gives them an opportunity to show off the best-case scenarios for their languages, but also to hide the areas of weakness -- and those hidden areas are often the land mines for anyone deciding whether to invest resources in a new language.
Having a standard, comprehensive set of problems helps address this "hiding." I also think there's value in naive benchmark programs as well as "expert" tuned ones: not everyone is going to optimize every single scenario in every language.
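To make that concrete, here's a minimal sketch (the workload and names are made up for illustration, not taken from any real suite) that times a naive and a hand-tuned version of the same task side by side, so the gap between them gets reported instead of hidden:

    import timeit

    # Hypothetical workload: sum of squares of the first n integers.
    def sum_squares_naive(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    def sum_squares_tuned(n):
        # Closed form: 0^2 + 1^2 + ... + (n-1)^2 = (n-1)*n*(2n-1)/6
        return (n - 1) * n * (2 * n - 1) // 6

    N = 100_000
    assert sum_squares_naive(N) == sum_squares_tuned(N)

    for impl in (sum_squares_naive, sum_squares_tuned):
        t = timeit.timeit(lambda: impl(N), number=100)
        print(f"{impl.__name__}: {t:.4f}s for 100 runs")

Reporting both numbers is the point: the naive version is what most users will actually write, and the tuned one is what a language's advocates will showcase.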
The one thing I've never seen implemented well is some measure of "ergonomics" or "high-level" versus "low-level" aspects of a language, which also seems important to me. Some of that is going to be subjective but some of it not.
The idea behind IQ is that lots of different cognitive tasks are correlated.
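For a concrete (entirely synthetic) sketch of what "correlated" buys you: if you simulate task scores that each load on one shared latent factor plus noise, every pairwise correlation comes out positive, and the first principal component recovers something close to that factor. All numbers below are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_tasks = 1000, 5

    # Hypothetical model: each person's scores share one latent factor ("g").
    g = rng.normal(size=(n_people, 1))           # latent general ability
    loadings = np.full((1, n_tasks), 0.7)        # how strongly each task reflects g
    noise = 0.7 * rng.normal(size=(n_people, n_tasks))
    scores = g @ loadings + noise

    # All pairwise task correlations come out positive.
    print(np.round(np.corrcoef(scores.T), 2))

    # The first principal component tracks the shared factor.
    eigvals, eigvecs = np.linalg.eigh(np.cov(scores.T))
    pc1 = scores @ eigvecs[:, -1]                # eigh sorts ascending; last = largest
    print(round(abs(np.corrcoef(pc1, g.ravel())[0, 1]), 2))

This is the toy version of the "g factor" argument, not evidence for it; real test batteries are of course messier.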
The short answer is that it's difficult to say from their results, and they don't explicitly test that, but it looks like it.
Yes, it is sold as such. But the appeal is fading fast.
Academia nowadays is something between a beauty pageant and the Hunger Games, where people endure underpayment for years in the hope of maybe one day getting the famed tenured position.
We know that COVID-19 has a high mortality rate (and a high rate of requiring ICU care). "High" doesn't mean 50% or higher; it means high compared to influenza or high compared to the capacity the US health system is intended to handle.
This article shouldn't be reassuring. Read this line in particular:
"If I were at home with similar symptoms, I probably would have gone to work as usual."
That should scare the fuck out of you.
I'm not even going to try this time; I'm just going to say to everybody reading this: superdeterminism is not at all the same thing as determinism. It is a far stronger assumption, with far, far more unintuitive consequences for our understanding of nature. If you're reading this and just thinking "superdeterminism is okay because there's no free will," then you've been suckered by this article into believing a massive oversimplification.
I think it's worth thinking through and delineating superdeterminism to its utmost limits even if I wouldn't necessarily say I find it compelling.
I do wonder why the authors are so quick to reject nonreductionism, though, as nonreductionism seems fairly reasonable to me. Maybe I have a different idea of nonreductionism, but it seems to me that rejecting it is akin to accepting Laplace's demon, which, as far as I understand, has been disproved. Basically, at some point the information in a system supersedes that of any system that might represent it faithfully, in part because of measurement effects -- there are a lot of parallels with QM issues.