Readit News
MITSardine commented on Why export templates would be useful in C++ (2010)   warp.povusers.org/program... · Posted by u/PaulHoule
jcranmer · a month ago
It's worth pointing out that export templates were removed from C++ based on feedback from the most expert implementers of C++ on implementing the feature, which unanimously resolved to "don't": https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14....

It's also worth noting that the main advantage this article claims is in fact listed as "phantom advantage #1" in the paper arguing for removal. The paper doesn't do a good job of explaining this for lay people, but the basic problem is that export templates end up working entirely backwards from the way you'd think they work: rather than letting you hide implementation details from other compilation units, they actually force you to expose all of the implementation details, even ones not normally surfaced to the user, in ways that let common slight differences between translation units snowball into catastrophic errors.

The feature people want is to not put the template body in a header. But the problem with C++ templates is that template bodies are instantiated at their first point of use. As the post notes, extern templates let a dedicated translation unit provide the instantiations with a usefully hidden definition, so long as you manually list all of the instantiations you might need. This is possible in modern C++, and is used in cases where the universe of possible template arguments is relatively small and self-contained (say, string libraries or numeric libraries).

Export templates theoretically allow you to delegate the expansion of a template without manually listing all of the possible expansions. But the compilation unit that contains the body doesn't know--it can't know--all of the possible expansions. So the compilation unit that uses the exported template, the only one that knows the expansions it needs, has to generate the expansion. From the body contained in the other compilation unit. Which means the first compilation unit needs to export all of the details necessary to recreate the body for an arbitrary template instantiation. And, were the feature to have lasted long enough for multiple versions of C++ to come into play, a current C++ compiler would have to figure out how to instantiate a template which is half C++26 and half C++98, and I don't mean "C++98 source code compiled with C++26", I mean "code compiled in C++98 mode with C++98 semantics."

At the end of the day, it turns out to be easier to literally include the entire source code of the template as an included header file than it is to do all of the magic required to make export templates work.

MITSardine · a month ago
Wouldn't a decent compromise be to extend the capabilities of the C preprocessor, so that we could iterate over types at preprocessing time?

That way, we could instantiate for a great number of types more easily, while keeping implementations in source files. You can always do an instantiation macro, but it still becomes painful when the number of template arguments (and their possible values) increases.

I know there's the Boost PP library but I personally find it incomprehensible once you try to nest two+ loops.

Surely there's a reason the CPP hasn't budged?

MITSardine commented on AMD continues to chip away at Intel's x86 market share   tomshardware.com/pc-compo... · Posted by u/speckx
bee_rider · a month ago
I imagine it would be kind of hard to switch away from Intel in the workstation/cluster space.

Like you have to replace OneAPI, which sounds easy because it’s just one thing, but like do you really want to replace BLAS, LAPACK, MPI, ifort/icc… and then you still need to find a sparse matrix solver…

MITSardine · a month ago
What do you mean by this? I've been using those libraries on mac ARM and AMD processors, are you referring to intel-specific implementations? How about the sparse matrix solver, what do you use?
MITSardine commented on Weighting an average to minimize variance   johndcook.com/blog/2025/1... · Posted by u/ibobev
pvillano · a month ago
What's the goal of this article?

There exists a problem in real life that you can solve in the simple case, and invoke a theorem in the general case.

Sure, it's unintuitive that I shouldn't go all in on the smallest variance choice. That's a great start. But, learning the formula and a proof doesn't update that bad intuition. How can I get a generalizable feel for these types of problems? Is there a more satisfying "why" than "because the math works out"? Does anyone else find it much easier to criticize others than themselves and wants to proofread my next blog post?

MITSardine · a month ago
This all hinges on the fact that the variance is homogeneous to X^2, not X. If we look at the standard deviation instead, we have the expected homogeneity: stddev(tX) = |t| stddev(X). However, it is *not additive*: rather, stddev(sum_i t_i X_i) = sqrt(sum_i t_i^2 stddev(X_i)^2), assuming independent variables.

Quantitatively speaking, t^2 and (1-t)^2 are both < 1 iff 0 < t < 1. As such, the standard deviation of a convex combination of independent variables is *always strictly smaller* than the convex combination of the standard deviations of the variables. In other words, stddev(sum_i t_i X_i) < sum_i t_i stddev(X_i) whenever all the t_i are in (0,1).

What this means in practice is that the standard deviation of a convex combination (that is, with positive coefficients summing to 1) of any number of random variables is always smaller than the corresponding convex combination of their standard deviations.

MITSardine commented on Think in math, write in code (2019)   jmeiners.com/think-in-mat... · Posted by u/alabhyajindal
lxe · a month ago
I think the author makes a good point about understanding structure over symbol manipulation, but there's a slippery slope here that bothers me.

In practice, I find it much more productive to start with a computational solution - write the algorithm, make it work, understand the procedure. Then, if there's elegant mathematical structure hiding in there, it reveals itself naturally. You optimize where it matters.

The problem is math purists will look at this approach and dismiss it as "inelegant" or "brute force" thinking. But that's backwards. A closed-form solution you've memorized but don't deeply understand is worse than an iterative algorithm you've built from scratch and can reason about clearly.

Most real problems have perfectly good computational solutions. The computational perspective often forces you to think through edge cases, termination conditions, and the actual mechanics of what's happening - which builds genuine intuition. The "elegant" closed-form solution often obscures that structure.

I'm not against finding mathematical elegance. I'm against the cultural bias that treats computation as second-class thinking. Start with what works. Optimize when the structure becomes obvious. That's how you actually solve problems.

MITSardine · a month ago
Math isn't about memorizing closed-form solutions, but analyzing the behavior of mathematical objects.

That said, I mostly agree with you, and I thought I'd share an anecdote where a math result came from a premature implementation.

I was working on maximizing the minimum value of a set of functions f_i that depend on variables X. I.e., solve max_X min_i f_i(X).

The f_i were each cubic, so F(X) = min_i f_i(X) was piecewise cubic. X was of dimension 3xN, N arbitrarily large. This is intractable to solve directly: F being non-smooth (its derivatives are discontinuous), you can't simply throw it at Newton's method or a gradient descent, and non-differentiable optimization methods were out of the question due to cost.

To solve this, I'd implemented an optimizer that moved one variable at a time x, such that F(x) was now a 1d piecewise cubic function that I could globally maximize with analytical methods.

This was a simple algorithm where I intersected graphs of the f_i to figure out where they're minimal, then maximize the whole thing analytically section by section.

In debugging this, something jumped out: coefficients corresponding to second and third derivative were always zero. What the hell was wrong with my implementation?? Did I compute the coefficients wrong?

After a lot of head scratching and back and forth in the code, I went back to the scratchpad, looked at these functions more closely, and realized they're cubic in the variables jointly, but linear in any single variable. This should have been obvious, as each was the determinant of a matrix whose columns or rows depended linearly on the variables. Noticing this would have been first-year math curriculum.

This changed things radically, as I could now recast my maxmin problem as a Linear Program, which has very efficient numerical solvers (e.g. Dantzig's simplex algorithm). These give you the global optimum to machine precision, and are very fast on small problems. As a bonus, I could actually move three variables at once, not just one, as those were separate rows of the matrix. Or I could even move N at once, as those were separate columns. This beat the differentiable-optimization approaches people had been using (based on regularizations of F) on all counts: quality of the extrema and speed.
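For readers unfamiliar with the trick, the recast described above is presumably the standard epigraph reformulation: once each f_i is affine in the chosen block of variables x, say f_i(x) = a_i^T x + b_i (hypothetical notation), the maxmin becomes a linear program by introducing an auxiliary scalar t:

```latex
\max_{x}\ \min_i f_i(x)
\quad\Longleftrightarrow\quad
\begin{aligned}
&\max_{x,\,t}\ t\\
&\text{s.t.}\ a_i^{\mathsf T} x + b_i \ge t \quad \text{for all } i
\end{aligned}
```

At the optimum, t equals the smallest f_i(x), so maximizing t maximizes the minimum.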

The end result is what I'd consider one of the few things not busy work in my PhD thesis, an actual novel result that brings something useful to the table. To say this has been adopted at all is a different matter, but I'm satisfied with my result which, in the end, is mathematical in nature. It still baffles me that no-one had stumbled on this simple property despite the compute cycles wasted on solving this problem, which coincidentally is often stated as one of the main reasons the overarching field is still not as popular as it could be.

From this episode, I deduced two things. Firstly, the right a priori mathematical insight can save a lot of time otherwise spent designing ill-suited algorithms, then implementing and debugging them. I don't recall exactly, but this took me about two months or so, as I tried different approaches. Secondly, the right mathematical insight can be easy to miss. I had been blinded by the fact that no-one had solved this problem before, so I assumed it must have a hard solution; something as trivial as this was not even imaginable to me.

Now I try to be a little more careful and not jump into code right away when meeting a novel problem, and at least consider if there isn't a way it can be recast to a simpler problem. Recasting things to simpler or known problems is basically the essence of mathematics, isn't it?

MITSardine commented on Why effort scales superlinearly with the perceived quality of creative work   markusstrasser.org/creati... · Posted by u/eatitraw
conorbergin · a month ago
Very funny to put a bibtex citation under such a small piece of work
MITSardine · a month ago
We'd be fortunate if everything that gets cited were this short (it often has a similar amount of useful information)
MITSardine commented on Work after work: Notes from an unemployed new grad watching the job market break   urlahmed.com/2025/11/05/w... · Posted by u/linkregister
wiz21c · a month ago
When I hire, I always look at personal github projects. They are a hint that the person loves coding and loves creating software. I'm not looking for 1000 stars projects, and I don't even look at the kind of project, just the fact that the candidate has done some work in his spare time.

If there's no github project, I ask the candidate what website, web communities he watches/participates in regularly. I check if they are related to programming or building software (bonus point if you read HN :-)).

Both are good signs that there is an interest in the job that goes beyond paycheck.

MITSardine · a month ago
While those are certainly indicators of interest, is their absence an indicator of lack of interest? In other words, "has side projects" is sufficient to prove "likes programming", but I don't think it's necessary.

There's only 24 hours in a day, and mastering any serious curriculum already takes the majority of those.

Then there's family: some have parents to support, and/or a spouse they can't just ignore all evening to sit in front of a terminal typing git push.

Lastly, plenty of people learn or tinker on their own, but they don't all have the (socioeconomically loaded) reflex of marketing themselves with a github repo.

Of my whole Bachelor's, Master's and PhD cohorts, I haven't known one person to have a github repo outside of employment. Some were building little things on the side but sharing them informally, never in public.

What you're looking for is people with no social life, no responsibilities towards anyone or anything (even just being an immigrant can be a huge time and energy sink), and with the social background to confidently market themselves by putting what's objectively noise online.

MITSardine commented on Work after work: Notes from an unemployed new grad watching the job market break   urlahmed.com/2025/11/05/w... · Posted by u/linkregister
margorczynski · a month ago
The most baffling thing is that even now the H1Bs, etc. are still pouring in. How can you say there is a shortage of IT talent and you need to import them where most grads can't find any work?
MITSardine · a month ago
Are junior / new grads getting sponsored for H1Bs, or senior people?
MITSardine commented on Work after work: Notes from an unemployed new grad watching the job market break   urlahmed.com/2025/11/05/w... · Posted by u/linkregister
MITSardine · a month ago
I just wanted to comment on the "out of distribution" solution the author proposes, partly for the young grads on this forum.

Going "out of distribution" in abilities also means your job prospects go "out of distribution". When you specialize, so too does the kind of position you'd be the better fit for. This can mean radically fewer possibilities, and strong geographic restrictions.

To give an example, my PhD topic concerned something "that's everywhere" but, when you look at things more closely, there's only < 10 labs (by lab, I mean between 1 and 3 permanent researchers and their turnover staff) in the world working on it, and around that many companies requiring skills beyond gluing existing solutions together, in which case they'd just as well hire a cheaper (and more proficient) generalist with some basic notions.

This isn't even a very abstract, very academic field, it's something that gets attacked within academia for being too practical/engineering-like on occasion.

I understand the "belly of the curve" gets automated away, but consider that the tail end of the curve - producing knowledge and solutions to novel problems - has itself been automated for a long time, since Gutenberg's invention of the printing press, if not oral communication: once found, the solutions scale very well.

A researcher's job is, almost by definition, to work themselves out of a job, and this has been the case since long before AI. Once the unknown has been known, a new unknown must be found and tackled. There's very, very few places in the world that truly innovate (not implementing a one-off novel solution produced in some academic lab) and value those skills.

I don't mean to be overly bleak, but it doesn't necessarily follow from this automation that the freed salary mass will go towards higher-level functions; just as likely (if not more), this goes towards profits first.

MITSardine commented on Paradise Lost   alexandermigdal.com/parad... · Posted by u/adharmad
soco · 5 months ago
But is the difference mostly in the society, or in their lost youth? I reckon it's mostly in the second, as my parents didn't do rivers of schnapps (not a vodka land there) or fistfights at the times when I was doing them. Where I will agree though, is that my youth ran much different than my kid's (aka much wilder and that was quite common) but again, isn't this difference always showing between generations? Shortly put, aren't we witnessing more of the same history?
MITSardine · 5 months ago
I think it's both.

You can see it with smoking. Only 20 years ago (I'll take France as the example), you could smoke indoors in restaurants and many public places and, a few years before that, even in classrooms.

Recently, it was forbidden in public parks and similar places. A segment of the population, perhaps still a minority, would like it outlawed even in e.g. restaurant and bar terraces.

Since France remains behind on this compared to the US, you can get a time-traveling experience regarding this right now, by boarding a plane from Paris to e.g. Boston or back. You won't find a terrace where you're allowed to smoke (or even vape) in Boston (at least I didn't) or, if you're American, you might find the French very comfortable smoking right under your nose.

There is no doubt these are good policies as far as public health is concerned. But they outline how, in the tug-of-war between personal freedom and safety, there has been a tendency lately for safety to win.

Other examples: as far as I'm aware, there weren't even speed limits on roads outside built-up areas before the 70s (France, again). Drinking and driving (below the threshold of manifest drunkenness, and the tolerance must have been pretty lenient) was not illegal until 1970. You could ride a moped, including on high-speed roads (though they wouldn't break 40 km/h unless derestricted), from the age of 14 without any paperwork; nowadays it requires a minimal license. I'm very doubtful people didn't know this was a dangerous situation, or that they didn't care; I think they simply prioritized freedom and simplicity over living in a zero-risk society.

I point to these safety regulations because they're where the compromise between social order and personal freedom is most evident, but they're only manifestations of a broader change of the Zeitgeist, in my mind.

Another clear tendency seems to be an obsession with so-called insecurity which, according to politicians and a worryingly large portion of the French (judging by election results), is at an unbearable all-time high; in fact, violent crime has never been so rare (for comparison, the murder rate is 1/5th that of the US).

Everyone seems obsessed with risk and danger, be it the right with being mugged, or the left with precarity (which I can better sympathize with). It seems one half of the population wants to live in a police state with no personal freedom, and the other in an absolute socialist state with no financial freedom.

Just to conclude, it also seems the newer generation has been found less promiscuous, which, to me, is not a very adventurous attitude.

MITSardine commented on Sumo – Simulation of Urban Mobility   eclipse.dev/sumo/... · Posted by u/Stevvo
MITSardine · 5 months ago
This looks really polished. I've always found crowd and traffic simulation fascinating.

The Projects page is worth looking at too.
