Or we would have had 27 bit addresses and run into problems sooner.
Some more interesting history reading here:
https://www.internetsociety.org/blog/2016/09/final-report-on...
To compete, other countries need their own VC system, which is a bit tricky. It likely requires government funding or other incentives to get it off the ground and ramping up. Then you also need to incentivize VCs to stay in your country.
At least that's my 2 cents.
> If you don't understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.
Invariably, after engaging the brain, the real fix was usually quite simple - but, also invariably, it was hidden behind 2-3 levels of indirection in reasoning.
On the other hand, I had rather pleasant results when “pair-debugging”: demanding it explain why, or just correcting it when it was about to go astray, certainly had an effect - in return I got some really nice spotting of “obvious” but small things I might have missed otherwise.
That said, the definition of “going astray” varies - from innocently jumping to what looked like unsupported conclusions, to blatantly telling me something was equal to true right after ingesting the log with a printout showing the opposite.
In my case I can’t even remember the last time Claude 3.7/4 gave me wrong info, as it seems very intent on always doing a web search to verify.
A not-so-subtle example from yesterday: Claude Code claimed that assertion Foo was true, right after ingesting logs containing “assertion Foo: false”.
But now, you're wondering if the answer the AI gave you is correct or something it hallucinated. Every time I find myself putting factual questions to AIs, it doesn't take long for it to give me a wrong answer. And inevitably, when one raises this, one is told that the newest, super-duper, just released model addresses this, for the low-low cost of $EYEWATERINGSUM per month.
But worse than this, if you push back on an AI, it will fold faster than a used tissue in a puddle. It won't defend an answer it gave. This isn't a quality that you want in a teacher.
So, while AIs are useful tools in guiding learning, they're not magical, and a healthy dose of scepticism is essential. Arguably, that applies to traditional learning methods too, but that's another story.
This phrase is now an inside joke, used as a reply to someone quoting LLM info as “facts”.
All take quite an effort to master; until then they might slow one down or be outright counterproductive.
Some (for me) useful pointers to that regard for both:
1. https://www.agner.org/optimize/instruction_tables.pdf - an extremely nice resource on micro architectural impacts of instructions
2. https://llvm.org/docs/CommandGuide/llvm-mca.html - tooling from the LLVM project that lets you see some of these effects in real machine code
3. https://www.intel.com/content/www/us/en/developer/articles/t... - shows you whether the above matches reality (besides the CPU alone, more often than not your bottleneck is actually memory accesses, at least on a first access that wasn’t triggered by a hardware prefetcher or a hint to it). On Linux it would be staring at “perf top” results.
So, the answer is, as so often, “it depends”.
The effects for me (living in Brussels city centre, so quite noisy - police, ambulance, sometimes loud tourists past midnight, and a bit of construction at 6am nearby to keep it real :-) ) were very pronounced:
From needing 9 hours and feeling groggy in the mornings anyway, to easily going on 7-8, feeling very refreshed and alert each day.
A cool side effect was that this superpower works also while traveling - so, I no longer care how noisy the airco is in the hotel room, being next to the lift, or having the window above the lively bar.
The only downside with those earplugs is that they are good for maybe 3-4 nights before they're too squished to be useful; but the upsides more than make up for it.
This is just pretending that if you have a cat and a dog in two bags and you call it “a bag”, it’s one and the same thing…
To put it another way, ask a professional comedian to complete a joke with a punchline. It's very likely that they'll give you a funny surprising answer.
I think the real explanation is that good jokes are actually extremely difficult. I have young children (4 and 6). Even 6 year olds don't understand humour at all. Very similar to LLMs they know the shape of a joke from hearing them before, but they aren't funny in the same way LLM jokes aren't funny.
My 4 year old's favourite joke, that she is very proud of creating is "Why did the sun climb a tree? To get to the sky!" (Still makes me laugh of course.)
In the adult world model there is absolutely no contradiction in the joke you mention - it’s just a bit of cute nonsense.
But in a child’s world this joke might be capturing the apparent contradiction - the sky is “in the tree”, so it must have climbed it, to be there (as they would have to do), yet they also know that the sun is already in the sky, so it had absolutely no reason to do that. Also, “because it’s already there” - which is a tricky idea in itself.
We take planetary systems and algebra and other things we can’t really perceive for granted, but a child's model of the world is made of concrete objects that mostly need a surface to be on, so the sun is a bit of a conundrum in itself! (Speaking from my own experience, remembering the shift from arithmetic to algebra when I was ~8.)
If it's not too personal a question - I would love to hear what your child would answer if asked why she finds that joke funny. And whether she agrees with my explanation of why it must be funny :-)