> This article is about optimizing a tiny bit of Python code by replacing it with its C++ counterpart.
So it's C++ rather than Python.
The question was about monetization, not about your engineer's syndrome.
I do have some links in the post that point to the product, but yeah, I agree the context is a bit vague for the HN crowd. I added a small notice to the post, which will hopefully fix that.
FYI, I'm not trying to promote the product; it's the story I want to share.
Cheers!
I also share the feeling it could have been a Tell HN instead of a Show HN.
But let's be honest, of course you want to promote your product. It's not a bad thing to admit that.
"The mirrors guiding this light, made of sandwiched layers of silicon and molybdenum, are ground so precisely that, if scaled to the size of Germany, they would have no bumps bigger than a millimetre" <https://www.economist.com/business/2020/02/29/how-asml-becam...>
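To get a feel for what that comparison implies, here's a quick back-of-the-envelope calculation. The mirror diameter (0.3 m) and Germany's north-south extent (876 km) are my own assumed figures, not numbers from the article:

```python
# Back-of-the-envelope check of the "size of Germany" claim.
mirror_diameter_m = 0.3    # assumed EUV mirror diameter (hypothetical figure)
germany_extent_m = 876e3   # assumed north-south extent of Germany
max_bump_scaled_m = 1e-3   # "no bumps bigger than a millimetre"

# How much the mirror would be magnified to reach the size of Germany.
scale_factor = germany_extent_m / mirror_diameter_m

# The claim then implies this upper bound on bumps on the real mirror.
implied_roughness_m = max_bump_scaled_m / scale_factor

print(f"scale factor: {scale_factor:.2e}")
print(f"implied max bump on the real mirror: {implied_roughness_m * 1e9:.2f} nm")
```

Under those assumptions, the claim works out to a surface tolerance of a few tenths of a nanometre, i.e. roughly atomic scale.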
I have mentioned it elsewhere on HN, but I've handled dry ice and liquid nitrogen quite a bit, first in college and later just for fun. Several years back, I noticed a large price hike in dry ice, as well as a drop in availability (less available in my city, vanishing from small towns). I was told by someone in my supply chain that federal regulations around dry ice had changed, leaving only a few players in the game and, of course, driving the price up.
I wonder if now, a decade or so later, the law of unintended consequences has reared its often invisible head.
Also, patient 1 in Italy was a healthy marathon runner who ended up in intensive care at the very least; not sure whether he has died or not.
I must confess, when I tried to answer the question I got it wrong! (I feel silly.) I only realised my mistake when I plugged it into GPT-4o and it came back with the correct answer:
https://chatgpt.com/share/6eb5fa36-e0fd-4417-87d1-64caf06c34...
Worth noting that the prompts from the experiment include "To answer the question, DO NOT OUTPUT ANY TEXT EXCEPT following format that contains final answer: ### Answer:", so it appears they are stopping the models from 'thinking out loud'. If I add that to the prompt, GPT-4o gets it consistently wrong...
https://chatgpt.com/share/7e6a7201-dd2b-43c6-8427-76e5b003ca...
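If you want to reproduce this yourself, the restriction is just a suffix appended to the question. Here's a minimal sketch assuming the `openai` Python client; the question text and model name are placeholders, not the experiment's exact values:

```python
# Placeholder question in the style of the experiment (hypothetical wording).
QUESTION = ("Alice has 3 brothers and she also has 6 sisters. "
            "How many sisters does Alice's brother have?")

# The restrictive suffix quoted from the experiment, forcing an answer-only reply.
SUFFIX = ("To answer the question, DO NOT OUTPUT ANY TEXT EXCEPT "
          "following format that contains final answer: ### Answer:")

prompt = f"{QUESTION}\n{SUFFIX}"

# With the client installed and OPENAI_API_KEY set, the call would look like:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)

print(prompt)
```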
Also worth noting that there are more complex examples where GPT-4o seems to fall down, such as:
> Alice has 3 sisters. Her mother has 1 sister who does not have children - she has 7 nephews and nieces and also 2 brothers. Alice's father has a brother who has 5 nephews and nieces in total, and who has also 1 son. How many cousins does Alice's sister have?
However, I can't honestly say that this one is THAT simple, or that most people would get it right...
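For what it's worth, the counting can be made mechanical. The sketch below encodes one reading of the puzzle (Alice and her 3 sisters are the mother's only children, and nobody is double-counted); under those assumptions it comes out to 5, though other readings are possible:

```python
# One reading of the puzzle, encoded as arithmetic (assumptions noted inline).

alice_and_sisters = 1 + 3   # Alice plus her 3 sisters; assume no brothers

# Mother's side: her childless sister has 7 nephews/nieces, i.e. the
# children of the mother plus the children of the mother's 2 brothers.
aunts_nephews_nieces = 7
maternal_cousins = aunts_nephews_nieces - alice_and_sisters

# Father's side: his brother has 5 nephews/nieces in total, i.e. the
# children of the father plus those of any other siblings of his.
uncles_nephews_nieces = 5
other_paternal_cousins = uncles_nephews_nieces - alice_and_sisters
uncles_own_children = 1     # the uncle's son is also a cousin
paternal_cousins = other_paternal_cousins + uncles_own_children

# Alice's sister has exactly the same cousins Alice does.
total_cousins = maternal_cousins + paternal_cousins
print(total_cousins)  # -> 5
```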