That is why it is important that new ideas can be discussed freely, which wasn't the case in Galileo's time.
Asking out of ignorance. What tangible impact did Plato's work have on the world?
I can list all the ways the people you mentioned made an impact on the world.
If someone today solves an obscure problem of similar difficulty to Dijkstra's graph search, they won't be called great anything. It is the impact that matters, not the difficulty of the problem or the IQ of the inventor. Great scientists, philosophers and artists had the foresight, luck, and ability to develop something that subsequently had a massive impact on the world.
A point in the article that I agree with is that great philosophers shouldn't be treated as flawless geniuses. Great people of the past made massively impactful contributions, but that doesn't make them the ultimate authority on their field. Answers to today's problems aren't going to be found by dissecting footnotes of Aristotle, Turing or Einstein, but in the work of people who will make contributions today that will shape the future.
So, these are just the top X searches from some time period, presumably after some processing and filtering (NSFW results eliminated, capitalization and spelling corrected, maybe some categories omitted, etc).
Perhaps, but originally we weren't discussing 0; we were starting with an arbitrarily large number. This problem is exacerbated when you say:
> I disagree that increasing "exponentially"... is always faster ... Both versions have the same worst-case running time
Worst case, sure, but how do you compute the average? You could take a sample of n integers from 0 to infinity and count the guesses until you found the correct answer, but of course you can't actually do this, because you can't easily draw a random integer between 0 and infinity to test with. That said, it is fairly clear that x * x is faster than 2x, which is faster than x + 1, but you can only prove this once you pick an arbitrary limit less than infinity -- and the benefits of the better algorithm only become evident as you approach infinity.
Seems like a catch-22. Perhaps there is some expert in set theory as applied to computer science who can provide the appropriate formal context for determining the better algorithm here.
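For what it's worth, you don't need set theory to compare the three growth strategies empirically below some finite limit. Here's a minimal sketch (the function and strategy names are mine, not from any prior comment) that counts how many probes each strategy needs to reach or pass a target, starting from 2 so that squaring actually grows:

```python
def probes(grow, target, start=2):
    """Count applications of `grow` until the guess reaches the target.

    We start at 2 rather than 1 because squaring 1 never grows.
    """
    guess, count = start, 0
    while guess < target:
        guess = grow(guess)
        count += 1
    return count

for target in (10, 1_000, 1_000_000):
    linear = probes(lambda x: x + 1, target)    # ~N probes
    doubling = probes(lambda x: 2 * x, target)  # ~log2(N) probes
    squaring = probes(lambda x: x * x, target)  # ~log2(log2(N)) probes
    print(target, linear, doubling, squaring)
```

For a target of 1,000,000 this prints roughly 999998 / 19 / 5, which is the "fairly clear" ordering above made concrete -- though "better on average" still depends on how the targets are distributed.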
As for O notation, it is an interesting tool, but one that has had no application in any realm of actual programming I have worked in, and I have never bothered to learn it just to dazzle people in algorithm-based interviews (perhaps because I do more nuts-and-bolts work rather than optimization).
------------------
1. In the initial probing for the upper bound, there is no point in growing faster than 2x each time
If we double the probe each time, we'll find the upper bound in O(log N) time. Then, we'll need O(log N) additional time to find the real answer. That makes the entire algorithm O(log N).
Suppose instead, in the initial probing we grow the bound faster, say by squaring each time. We'll find the upper bound faster, but we'll still need additional O(log N) time to find the real answer. So, we didn't really make the algorithm (asymptotically) faster - it is still O(log N).
(I glossed over some details in the explanation, but even if you work out the math exactly - which is not that hard to do - the conclusion holds.)
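The two phases above can be sketched in a few lines. This is just an illustrative implementation of the doubling strategy described here, with names of my own choosing:

```python
def exponential_search(is_too_low):
    """Find the smallest N for which is_too_low(N) is False.

    Phase 1: double a probe until it overshoots  -> O(log N) probes.
    Phase 2: binary search between the last two probes -> O(log N) steps.
    Total: O(log N).
    """
    hi = 1
    while is_too_low(hi):
        hi *= 2
    lo = hi // 2  # last probe that was still too low (0 if hi == 1)
    # Invariant: lo is too low, hi is not.
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if is_too_low(mid):
            lo = mid
        else:
            hi = mid
    return hi

secret = 1337
print(exponential_search(lambda n: n < secret))  # prints 1337
```

Note that both phases are O(log N) individually, which is why squaring in phase 1 can't improve the overall bound: phase 2 still dominates.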
------------------
2. There is no point starting from a number greater than 1
There is no way to pick a "good" starting number - should it be 1,000? 1,000,000,000? 10^100? (10^100)^100? You might as well start from 1. That way, you guarantee that you'll find small numbers fast and large numbers still in an asymptotically optimal time.
------------------
As someone with a fairly strong theoretical computer science background, I can see the intended meaning behind the interviewer's question and answer. But it is a theoretical question. There definitely are a lot of highly valuable software developers out there who couldn't answer it.
Evolution tuned us to survive in a world where food is scarce and periods of starvation are frequent. In the rare occasion that you have excess food, you eat it and build up fat reserves. The saved up fat will help you survive the period of starvation that undoubtedly follows.
any thoughts on why?
Look at the presenter's experience:
> Over a six month period, I led the project to rewrite a top 100 website using a new software stack. Doing so, we used HAProxy, Varnish, Nginx, PHP-FPM, Symfony2, Syslog-ng, Redis and MySQL to create a platform that handles 100 million page views per day and has room to grow.
There are tons of companies out there eager to hire someone with that experience.
There are some alternatives, like continue.dev or JetBrains' own AI offering, but no Cursor or Claude Code. You can get Claude (Sonnet 3.7/4) through JetBrains plugins or others, but Anthropic doesn't officially support them, and the same goes for Cursor.