I don’t see anything that would even point in that direction.
Curious to understand where these thoughts are coming from.
I don't see how anyone could fail to see the problem.
This seems to be a result of using an overly simplistic model of progress. After a company makes a breakthrough, finding the next one requires exploring many more paths, so it is much easier to catch up than to find a breakthrough. Even if you get lucky and find the next breakthrough before everyone catches up, they will probably catch up before you find the breakthrough after that. Someone only runs away with the lead if, each time they make a breakthrough, making the next one is easier than catching up.
Consider the following game:
1. N parties take turns rolling a D20. If anyone rolls 20, they get 1 point.
2. If any party is 1 or more points behind, they only need to roll 19 or higher to score a point. That is, being behind gives you a slight advantage in catching up.
As points accumulate, the players end up with nearly identical scores.
I ran a simulation of this game for 10,000 turns with 5 players:
Game 1: [852, 851, 851, 851, 851]
Game 2: [827, 825, 827, 826, 826]
Game 3: [827, 822, 827, 827, 826]
Game 4: [864, 863, 860, 863, 863]
Game 5: [831, 828, 836, 833, 834]
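A minimal sketch of such a simulation (my own code, assuming "behind" means below the current maximum score and that each player rolls once per turn):

    import random

    def play(n_players=5, turns=10_000):
        scores = [0] * n_players
        for _ in range(turns):
            top = max(scores)
            for i in range(n_players):
                # Trailing players score on 19 or 20; leaders only on 20.
                threshold = 19 if scores[i] < top else 20
                if random.randint(1, 20) >= threshold:
                    scores[i] += 1
        return scores

    for game in range(1, 6):
        print(f"Game {game}: {play()}")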
The problem with the comment is it does not make ownership of the next steps clear. The maintainer should've either taken the ball ("I will report this to...") or made it clear that the reporter should hold the ball ("We can't fix this here. You should report it to..."). Instead they used the passive voice ("The issue should be reported to..."), which means no one owns it, which means it won't be done, which is a bad way to handle a bug report.
I was surprised that so many projects used this naive implementation. The non-naive implementation is faster and much smaller.
I find it useful to characterise algorithmic performance, e.g. O(1), O(n), O(n^2), etc.
What do you find useless about it - and do you know of something that works better?
Much better - research (not always possible)
Best - benchmark it using realistic data from your use case
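As a rough sketch of that last step using Python's timeit (the two candidates here are hypothetical stand-ins; substitute your own implementations and data shaped like your real workload):

    import random
    import timeit

    # Stand-in for realistic data from your use case.
    data = [random.randrange(1_000_000) for _ in range(10_000)]
    lookup = set(data)
    queries = [random.randrange(1_000_000) for _ in range(1_000)]

    def candidate_scan():    # O(n) membership test
        return sum(q in data for q in queries)

    def candidate_hash():    # O(1) average membership test
        return sum(q in lookup for q in queries)

    print("list scan :", timeit.timeit(candidate_scan, number=10))
    print("set lookup:", timeit.timeit(candidate_hash, number=10))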
This is quite a bold claim. Yeah, "in the real world" you often need to take the constant factors into consideration... but to go so far as to say that Big O is useless? It's very useful for figuring out what is likely to work (or not) for your problem, given that you know the data size. I've used it for exactly that many, many times.
Big O says how fast the cost grows, but not what the cost actually is: 2n and 1,000,000n are both O(n), yet they differ by a factor of 500,000.
Does something like that happen in the real world? Yes, but of course not in such drastic ways.
It's not the only way to optimize things but there's a reason no one sorts with bubble sort.
Complexity estimations or just benchmarks are much better at that.
You should never limit yourself to big O notation when comparing algorithms.
Uh, we also assume that before we start, we can look at the design of "the machine", know how many states there can be, and, from the speed of the machine, know how many new states are visited each second. So we will know how long we have to wait before we see either (a) or (b).
Big O notation also suffers from this - it's almost useless for real-world problems.
Examples:
- an algorithm with O(2^n) complexity might be faster for your problem than an O(1) algorithm
- the same algorithm may be O(2^n) or O(n³) depending on how you define the size function (see the knapsack sketch below)
This is not straightforward.
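To make the second point concrete (my illustration, not from the original comment): the classic 0/1 knapsack dynamic program does O(n·W) work, which is polynomial if the input "size" is taken to be the capacity W itself, but exponential if it is the number of bits needed to encode W.

    def knapsack(values, weights, W):
        # dp[w] = best total value achievable with capacity w.
        dp = [0] * (W + 1)
        for v, wt in zip(values, weights):
            for w in range(W, wt - 1, -1):
                dp[w] = max(dp[w], dp[w - wt] + v)
        return dp[W]

    # O(n * W) table updates: adding one bit to W doubles the work.
    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220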