Geometric mean of (time + gzipped source code size in bytes) seems statistically wrong.
What if you shifted time to nanoseconds? Or measured source code size in megabytes? The rankings could change. The culprit is the '+'.
I would think the geometric mean of (time x gzipped source code size) is the correct way to compare languages. It would not matter what the units of time or size are in that case.
[Here the geometric mean is the geometric mean of (time x gzipped size) of all benchmark programs of a particular language.]
I think the summed numbers might be unitless. At least all the other numbers are relative to the fastest/smallest entry. That is, what would make sense is score(x) = time(x) / time(fastest) + size(x) / size(smallest) instead of score(x) = (time(x) + size(x)) / score(best).
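A toy sketch of the '+' problem, with made-up languages and figures: the sum-based score can flip its ranking when the time unit changes, while the product-based score cannot.

    from math import prod

    def geomean(xs):
        return prod(xs) ** (1 / len(xs))

    # Hypothetical (time, gzipped size) pairs for two imaginary languages.
    lang_a = [(2.0, 480), (2.5, 520)]    # slower but smaller programs
    lang_b = [(1.0, 900), (1.2, 950)]    # faster but larger programs

    for scale, unit in [(1, "seconds"), (1000, "milliseconds")]:
        sum_a = geomean([t * scale + s for t, s in lang_a])
        sum_b = geomean([t * scale + s for t, s in lang_b])
        prod_a = geomean([t * scale * s for t, s in lang_a])
        prod_b = geomean([t * scale * s for t, s in lang_b])
        # The sum-based winner depends on the unit; the product-based one does not.
        print(unit, "sum says A wins:", sum_a < sum_b,
              "product says A wins:", prod_a < prod_b)

The normalised sum suggested just above avoids the same trap for a similar reason: each term becomes a dimensionless ratio, so changing the unit rescales numerator and denominator alike.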
It's not necessarily wrong to add disparate units like this. It's implicitly weighting one unit to the other. Changing to nanoseconds just gives more weight to the time metric in the unified benchmark. You could instead explicitly weight them without changing units, if you cared about the size more you could add a multiplier to it.
You really don't know what the right weight is to balance time and gzipped size. Multiplying them together sidesteps the whole issue and puts time and size on a par with each other regardless of the individual unit scaling.
The whole point of benchmarks is to protect against accidental bias in your calculations. Adding them seems totally against my intuition. If you did want to give time more weight then I would raise it to some power. Example: geometric mean of (time x time x source size) would give time much more importance in an arguably more principled way.
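A small follow-on sketch of that weighting idea, again with invented numbers: squaring time inside the product gives it twice the pull, and a uniform change of units still scales every language equally, so it cannot reorder them.

    from math import prod

    def geomean(xs):
        return prod(xs) ** (1 / len(xs))

    def score(entries, time_weight=2):
        # geometric mean of time**w * size; w > 1 gives time more influence
        return geomean([(t ** time_weight) * s for t, s in entries])

    entries = [(2.0, 480), (2.5, 520)]            # hypothetical (seconds, bytes)
    twice_the_time = [(2 * t, s) for t, s in entries]
    twice_the_size = [(t, 2 * s) for t, s in entries]

    # Doubling every time multiplies the score by 4, doubling every size by 2;
    # applied uniformly across languages, neither changes the ranking.
    print(score(twice_the_time) / score(entries))   # -> 4.0
    print(score(twice_the_size) / score(entries))   # -> 2.0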
That paper is only about the reasoning behind taking the geometric mean, it doesn't have anything to say on the "time + gzipped source code size in bytes" part.
I really wish they aggregated the metric of build time (+ whatever).
That is a huge metric I care about.
You can figure it out somewhat by clicking on each language benchmark, but it is not aggregated.
BTW, as a biased guy in the Java world, I can tell you this is one area where Java is actually mostly the winner, apparently even beating out many scripting languages.
I'd be interested to see "C compiled with Clang" added as another language to the benchmark games. In part, digging into Clang vs gcc benchmarks is always interesting, and in part, as Rust & Clang share the same LLVM backend, it would shed light on how much of the C vs Rust difference is from frontend language stuff vs backend code gen stuff.
This presentation is pretty bad: there should be more context, some kind of color scheme or labels instead of text in the background, spacing between the languages represented, other benchmarks than the geometric mean, etc.
For comparing multiple implementations of a single benchmark in a single language, this sort of data would be interesting as a 2D plot, to see how many lines it takes to improve performance by how much. But for cross-language benchmarking this seems somewhat confounding, as the richness of standard libraries varies between languages (and counting the lines of external dependencies sounds extremely annoying: not only do you have to decide whether to include standard libraries, libc included, you also need a way not to penalize those that devote many lines to tests).
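A rough sketch of the kind of plot described above, with invented data (the line counts and timings are made up): one point per implementation of the same benchmark in the same language.

    import matplotlib.pyplot as plt

    # Invented data: how many source lines each implementation took
    # versus the run time it achieved.
    lines = [40, 55, 85, 120, 160]
    seconds = [12.0, 9.0, 4.5, 2.0, 1.3]

    fig, ax = plt.subplots()
    ax.scatter(lines, seconds)
    ax.set_xlabel("source lines")
    ax.set_ylabel("run time (s)")
    ax.set_yscale("log")
    ax.set_title("hypothetical single-benchmark, single-language comparison")
    plt.show()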
Not exactly what you're looking for, but here are some 2D plots of code size vs. execution time with geometric means of fastest entries and smallest code size entries of each language:
And when you want to make the code readable you try to space things out, split things into small functions, and use longer, clearer variable names. I guess they are asking for the code to be run through a minifier so their implementation gains some points.
I can't find the documentation for it, but you can see here that they measure the size of the source file after gzip compression, which reduces the advantage of code-golf solutions:
On other benchmarks they measure the size of source code after it's been run through compression, as a way to normalize that. Not sure if that's been done here, but it should be.
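A minimal sketch of that kind of measurement; whether the site strips comments or whitespace first, or which compression level it uses, isn't specified in this thread, so treat this as an approximation and the file name as hypothetical.

    import gzip
    from pathlib import Path

    def gzipped_size(path: str) -> int:
        # Size in bytes of the gzip-compressed source file.
        return len(gzip.compress(Path(path).read_bytes()))

    print(gzipped_size("nbody.c"))   # hypothetical source file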
I'm not sure I see the problem. What does it matter that program A is shorter than program B because language A has a richer standard library? Program A still required less code.
Because that's not all that's being measured here: you're also mixing in performance, and it's impossible to tell at a glance whether a score is attributable to one or the other or both.
All of the languages now have that trash in them. I'd like a "naive" benchmarks game where you write the code straight forwardly in a normal style for the language.
That annotation does seem to have caused much frothing and gnashing.
Here's how the calculation is made — "How not to lie with statistics: The correct way to summarize benchmark results."
[pdf] http://www.cse.unsw.edu.au/~cs9242/11/papers/Fleming_Wallace...
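Roughly, the paper's argument is that once results are expressed as ratios to some reference, only the geometric mean gives the same ordering no matter which reference you normalise to. A toy illustration with invented timings:

    from math import prod
    from statistics import mean

    times = {"X": [10.0, 20.0, 40.0], "Y": [12.0, 15.0, 45.0]}  # made-up seconds

    def geomean(xs):
        return prod(xs) ** (1 / len(xs))

    def ratios(system, baseline):
        return [a / b for a, b in zip(times[system], times[baseline])]

    for baseline in ("X", "Y"):
        geo = {s: geomean(ratios(s, baseline)) for s in times}
        ari = {s: mean(ratios(s, baseline)) for s in times}
        # Geometric mean: X beats Y under either baseline.
        # Arithmetic mean of ratios: the winner depends on the baseline.
        print(baseline, geo["X"] < geo["Y"], ari["X"] < ari["Y"])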
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
This is basically meaningless. I don't see why you'd even need to do this. You can easily show code size and performance on the same graph.
The text is not clear enough, but "geometric mean" is not the benchmark. The 11 problems are listed in https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
The results of the 11 problems are combined using the "geometric mean" into a single number. Some people prefer the "geometric mean", other people prefer the "arithmetic mean" to combine the numbers, other people prefer the maximum, and there are many other methods (like the average excluding both extremes).
Thanks, that makes more sense; that's another issue of missing context, then. I don't have anything against geometric means, but there should be basic statistics like average, max, min, ... available as well.
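Those summaries are all cheap to compute from the per-problem numbers; a small sketch with invented scores for one language across the 11 problems:

    from math import prod
    from statistics import mean

    scores = [1.1, 0.9, 1.3, 2.5, 0.8, 1.0, 1.2, 7.0, 0.95, 1.05, 1.4]  # hypothetical

    geometric  = prod(scores) ** (1 / len(scores))
    arithmetic = mean(scores)              # the single 7.0 outlier pulls this up
    worst_case = max(scores)
    best_case  = min(scores)
    trimmed    = mean(sorted(scores)[1:-1])  # "average excluding both extremes"

    print(geometric, arithmetic, worst_case, best_case, trimmed)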
https://twitter.com/ChapelLanguage/status/152442889069266944...
https://salsa.debian.org/benchmarksgame-team/benchmarksgame/...
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Actually, if you look at all the top .NET Core submissions, the only fast ones are the ones using low-level intrinsics, etc.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Do you mean "fast" like a C program using low level intrinsics?
2009