Readit News
sidkshatriya · 3 years ago
Geometric mean of (time + gzipped source code size in bytes) seems statistically wrong.

What if you expressed time in nanoseconds? Or source code size in megabytes? The rankings could change. The culprit is the '+'.

I would think the geometric mean of (time x gzipped source code size) is the correct way to compare languages. It would not matter what the units of time or size are in that case.

[Here the geometric mean is the geometric mean of (time x gzipped size) of all benchmark programs of a particular language.]
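
A minimal sketch (Python, with made-up numbers rather than the benchmarks game's data) of why the product is unit-invariant while the sum is not:

  import math

  def geomean(xs):
      return math.exp(sum(math.log(x) for x in xs) / len(xs))

  # (time in seconds, gzipped size in bytes) for two hypothetical languages
  lang_a = [(2.0, 400), (1.5, 900), (3.0, 500)]
  lang_b = [(1.0, 800), (2.5, 700), (2.0, 900)]

  for scale in (1, 1e9):  # time in seconds, then in nanoseconds
      by_sum = {n: geomean([t * scale + s for t, s in p])
                for n, p in (("A", lang_a), ("B", lang_b))}
      by_prod = {n: geomean([t * scale * s for t, s in p])
                 for n, p in (("A", lang_a), ("B", lang_b))}
      print(scale, sorted(by_sum, key=by_sum.get), sorted(by_prod, key=by_prod.get))

  # The '+' ranking flips between the two time scales; the 'x' ranking cannot,
  # because rescaling time multiplies every product score by the same constant.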

ntoskrnl · 3 years ago
Yep this is correct. Adding disparate units is almost always nonsensical. You can confirm with a scientific calculator like insect:

  $ insect '5s + 10MB'
    Conversion error:

      Cannot convert unit MB (base units: bit)
                  to unit s

  $ insect '5s * 10MB'
  50 s·MB

smegsicle · 3 years ago
units, frink, insect oh my
tuukkah · 3 years ago
I think the summed numbers might be unitless; at least all the other numbers are relative to the fastest/smallest entry. That is, what would make sense is score(x) = time(x) / time(fastest) + size(x) / size(smallest) instead of score(x) = (time(x) + size(x)) / score(best).
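
In rough Python (made-up numbers; "fastest" and "smallest" just mean the best observed entries), the two formulas side by side:

  time_x, size_x = 3.2, 650               # this entry: seconds, gzipped bytes
  time_fastest, size_smallest = 1.1, 480   # best time and smallest size overall

  # normalize each dimension first, then add: both terms are unitless ratios
  normalized_score = time_x / time_fastest + size_x / size_smallest

  # add the raw quantities first: the result now depends on whether time is
  # measured in seconds or nanoseconds, which is the objection above
  raw_score = time_x + size_x
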
dwattttt · 3 years ago
It's not necessarily wrong to add disparate units like this. It implicitly weights one unit relative to the other. Changing to nanoseconds just gives more weight to the time metric in the combined benchmark. You could instead weight them explicitly without changing units; if you cared more about size, you could add a multiplier to it.
sidkshatriya · 3 years ago
You really don't know what the right weight is to balance time and gzipped size. Multiplying them together sidesteps the whole issue and puts time and size on a par with each other regardless of the individual unit scaling.

The whole point of benchmarks is to protect against accidental bias in your calculations, and adding them goes totally against my intuition. If you did want to give time more weight, I would raise it to some power. Example: the geometric mean of (time x time x source size) would give time much more importance in an arguably more principled way.
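
A sketch of that weighting idea (Python, illustrative values only):

  import math

  def geomean(xs):
      return math.exp(sum(math.log(x) for x in xs) / len(xs))

  programs = [(2.0, 400), (1.5, 900), (3.0, 500)]   # (time, gzipped size) per program

  unweighted = geomean([t * s for t, s in programs])
  time_heavy = geomean([t * t * s for t, s in programs])  # time counted twice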

igouy · 3 years ago
> The culprit is the '+'

That annotation does seem to have caused much frothing and gnashing.

Here's how the calculation is made — "How not to lie with statistics: The correct way to summarize benchmark results."

[pdf] http://www.cse.unsw.edu.au/~cs9242/11/papers/Fleming_Wallace...

yorwba · 3 years ago
That paper is only about the reasoning behind taking the geometric mean, it doesn't have anything to say on the "time + gzipped source code size in bytes" part.
agentgt · 3 years ago
I really wish they aggregated the metric of build time (+ whatever).

That is a huge metric I care about.

You can figure it out somewhat by clicking on each language benchmark, but it is not aggregated.

BTW, as a biased guy in the Java world, I can tell you this is one area where Java is actually mostly the winner, apparently even beating out many scripting languages.

igouy · 3 years ago
Do Java "build time" measurements include class loading and JIT compilation? :-)
kaba0 · 3 years ago
Does C “build time” include the time it takes to load the binary from disk?
_b · 3 years ago
I'd be interested to see "C compiled with Clang" added as another language to the benchmarks game. In part because digging into Clang vs gcc benchmarks is always interesting, and in part because, as Rust & Clang share the same LLVM backend, it would shed light on how much of the C vs Rust difference comes from frontend language stuff vs backend code gen stuff.
IshKebab · 3 years ago
With a totally arbitrary conversion of 1 second = 1 gzipped byte.

This is basically meaningless. I don't see why you'd even need to do this. You can easily show code size and performance on the same graph.
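
For instance, a throwaway matplotlib sketch (made-up points) of that combined view:

  import matplotlib.pyplot as plt

  langs = ["A", "B", "C"]
  sizes = [480, 950, 620]   # gzipped source bytes
  times = [1.2, 3.4, 2.1]   # seconds

  plt.scatter(sizes, times)
  for name, s, t in zip(langs, sizes, times):
      plt.annotate(name, (s, t))
  plt.xlabel("gzipped source size (bytes)")
  plt.ylabel("time (s)")
  plt.show()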

NeutralForest · 3 years ago
This presentation is pretty bad: there should be more context, some kind of color scheme or labels instead of text in the background, spacing between the languages represented, other benchmarks than the geometric mean, etc.
gus_massa · 3 years ago
> other benchmarks than the geometric mean

The text is not clear enough, but "geometric mean" is not the benchmark. The 11 problems are listed in https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

The results of the 11 problems are combined into a single number using the "geometric mean". Some people prefer the "geometric mean", other people prefer the "arithmetic mean" to combine the numbers, other people prefer the maximum, and there are many other methods (like the average excluding both extremes).
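
For illustration (Python, made-up per-program results for one language), the usual ways to collapse them:

  import math

  results = [1.3, 2.0, 4.5, 1.1, 3.2]   # e.g. relative times for 5 programs

  geometric_mean = math.exp(sum(map(math.log, results)) / len(results))
  arithmetic_mean = sum(results) / len(results)
  maximum = max(results)
  # "average excluding both extremes": drop the best and worst, then average
  trimmed = sum(sorted(results)[1:-1]) / (len(results) - 2)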

NeutralForest · 3 years ago
>The text is not clear enough, but "geometric mean" is not the benchmark.

Thanks, that makes more sense; that's another issue of missing context, then. I don't have anything against geometric means, but basic statistics like average, max, min, ... should be available as well.

kibwen · 3 years ago
For comparing multiple implementations of a single benchmark in a single language, this sort of data would be interesting as a 2D plot, to see how many lines it takes to improve performance by how much. But for cross-language benchmarking this seems somewhat confounding, as the richness of standard libraries varies between languages (and counting the lines of external dependencies sounds extremely annoying: not only do you have to decide whether to include standard libraries (including libc), you also need to find a way not to penalize those with many lines devoted to tests).
benstrumental · 3 years ago
Not exactly what you're looking for, but here are some 2D plots of code size vs. execution time with geometric means of fastest entries and smallest code size entries of each language:

https://twitter.com/ChapelLanguage/status/152442889069266944...

simion314 · 3 years ago
And when you want to make the code readable you try to space things out, split things into small functions, and use longer, clearer variable names. I guess they are asking for running the code through a minifier so their implementation gains some points.
benstrumental · 3 years ago
I can't find the documentation for it, but you can see here that they measure the size of the source file after gzip compression, which reduces the advantage of code-golf solutions:

https://salsa.debian.org/benchmarksgame-team/benchmarksgame/...
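
The idea is roughly this (a Python sketch of the same measurement, not the benchmarks game's actual script):

  import gzip

  def gzipped_size(path):
      with open(path, "rb") as f:
          return len(gzip.compress(f.read()))

  # whitespace, long identifiers and repeated boilerplate compress well, so
  # golfing the source buys less than a raw character count would suggest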

kibwen · 3 years ago
On other benchmarks they measure the size of source code after it's been run through compression, as a way to normalize that. Not sure if that's been done here, but it should be.
mrtranscendence · 3 years ago
I'm not sure I see the problem. What does it matter that program A is shorter than program B because language A has a richer standard library? Program A still required less code.
kibwen · 3 years ago
Because that's not all that's being measured here: you're also mixing in performance, and it's impossible to tell at a glance whether a score is attributable to one, the other, or both.
Thaxll · 3 years ago
The thing they should change is to forbid nonsense like:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Actually, if you look at all the top .NET Core submissions, the only fast ones are the ones using low-level intrinsics, etc.

spullara · 3 years ago
All of the languages now have that trash in them. I'd like a "naive" benchmarks game where you write the code straight forwardly in a normal style for the language.
igouy · 3 years ago
> Actually, if you look at all the top .NET Core submissions, the only fast ones are the ones using low-level intrinsics, etc.

Do you mean "fast" like a C program using low level intrinsics?

arunc · 3 years ago
Just curious, why does this benchmark not include D language? I remember seeing it a few years ago. Was it removed recently?