Does the author enjoy writing code primarily because they enjoy typing?
Do they not have the mental discipline to think and problem-solve whilst using the living heck out of an AI autocomplete?
What's the fun in manually typing out code that has literally been written, copied, copied again, and then re-copied so many times before that the LLM can predict it?
Isn't it more dangerous to not learn the new patterns / habits / research loops / safety checks that come with AI coding? Surely their future career success will depend on it, unless they are currently working in a very, very niche area that is guaranteed to last the rest of their career.
I'm sorry, but this is an unnatural and absurd reaction to a very natural feeling - the discomfort of being pushed out of our comfort zone by advancing technology, which we are all currently feeling.
But imagine a new employee eager to please - you could easily imagine them OK’ing the document and making the same assumption the LLM did - “why would you randomly throw in that word if it wasn’t relevant”. Maybe they would ask about it though…
Google search has the same problem as LLMs - some meanings of a query cannot be disambiguated from the context of the search alone, but the algorithm has to best-guess anyway.
The cheaper input context for LLMs gets, and the larger the context window grows, the more context you can throw into the prompt, and the more often these ambiguities can be resolved.
Imagine in your gorilla-in-the-steps example: if the LLM was given the steps but you also included the full text of Slack, Notion, and Confluence as a reference in the prompt, it might succeed. I do think this is a weak point in LLMs though - they seem to really, really not like correcting you unless you display a high degree of skepticism, and then they go to the opposite extreme and make up problems just to please you. I’m not sure how the labs are planning to solve this…
It seems like many recommendations are to use at least 75-100, or even 128. Being fairly conservative, if you had 10k hosts hashing 1B passwords a second, it would take 7.5 years worst case to crack [1]. If a particular site neglects to use a slow hash function and salting, it's easy to imagine bad actors precomputing rainbow tables that would make attacks relatively easy.
You can rebut that that's still a crazy amount of computation needed, but since it's reusable, I find it easy to believe it's already being done. For comparison, if the passwords have 100 bits of entropy, it would take those same 10k servers over 4 billion years to crack the password.
[1]: (2^71 / 1e9 / 10000 / (60 * 60 * 24 * 365)) ≈ 7.5
If that's not true and the password is being stored using MD5 (something that's been NIST-banned at this point for over a decade), then honestly all bets are off, and even 128 bits of entropy might not be enough.
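The footnote's arithmetic can be sanity-checked directly. A minimal sketch (my function name and parameter defaults, chosen to match the comment's assumptions of 10k hosts at 1B hashes/sec each):

```python
def worst_case_years(bits: int, hashes_per_sec_per_host: float = 1e9,
                     hosts: int = 10_000) -> float:
    """Worst-case years to exhaust a 2**bits password space
    across all hosts at the given per-host hash rate."""
    total_hashes = 2 ** bits
    seconds = total_hashes / (hashes_per_sec_per_host * hosts)
    return seconds / (60 * 60 * 24 * 365)

print(round(worst_case_years(71), 1))   # ~7.5 years, matching [1]
print(worst_case_years(100))            # over 4 billion years
```

The same formula reproduces both figures in the thread: ~7.5 years at 71 bits, and over 4 billion years at 100 bits.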
1. React burst on the scene in 2014
2. the hyperscale FANG companies were dominating the architecture meta with microservices, tooling, etc., which worked for them at 500+ engineers but made no sense for smaller companies.
3. there was a growing perception that "Rails doesn't scale" as selection bias kicked in - companies that successfully used Rails to grow were then big enough to justify migrating off to microservices, or whatever.
4. Basecamp got caught up in the DEI battles and got a ton of bad press at the height of it.
5. Ruby was legitimately seen as slow.
The big companies that stuck with Rails (GH, Shopify, Gitlab, etc, etc) did a ton of work to fix Ruby perf, and it shows. Shopify in particular deserves an enormous amount of credit for keeping Ruby and Rails going. Their continued existence proves that Rails does, in fact, scale.
Also the meta - tech-architecture and otherwise - seems to be turning back to DHH's favor, make of that what you will.
I suppose it'd be nice to have one less thing to manage, but I'm wondering if there are any obvious gotchas to moving these features over to SQLite and PostgreSQL.
The main argument for caching in the DB (the slight increase in latency going from in-memory -> DB-in-memory is more than countered by the cheapness of DB cache space, allowing you to have tons more cache) is one of those brilliant ideas that I would like to try at some point.
Solid job - I'm just 100% happy with Sidekiq at this point; I don't understand why I'd switch and introduce potential instability/issues.
The point of Jensen’s inequality, if I understand correctly, is that you’d underestimate the value of holding using a basic estimate approach, because you’d underestimate the compounding cash flow from growth?
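A toy illustration of that effect (hypothetical numbers, purely to show the mechanism): compounded growth (1 + g)^n is convex in g, so valuing at the *average* growth rate understates the *average* compounded outcome.

```python
n = 10                      # years of compounding
growth_paths = [0.0, 0.20]  # two equally likely growth rates (made up)

avg_growth = sum(growth_paths) / len(growth_paths)          # 10%
value_at_avg_growth = (1 + avg_growth) ** n                 # ~2.59x
avg_of_values = sum((1 + g) ** n for g in growth_paths) / 2 # ~3.60x

# Jensen's inequality for convex f: E[f(g)] >= f(E[g])
assert avg_of_values > value_at_avg_growth
```

Plugging the average growth rate into the compounding formula gives ~2.59x, while averaging the two compounded outcomes gives ~3.60x - the "basic estimate" is an underestimate.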
https://waymo.com/safety/impact/#downloads
There isn't much to the data available for download, but it works out to 268 accidents / 22,199,000 miles ≈ 0.0000121 accidents per mile, or ~1.2 accidents per 100,000 miles. Figuring your father drives 15k miles per year, times 30 years and rounding up to 500k miles, Waymo has a recorded 6 accidents to your father's 0.
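For the curious, the arithmetic above is just (numbers taken from the comment; I haven't independently re-verified them against Waymo's download):

```python
accidents = 268
waymo_miles = 22_199_000

rate_per_mile = accidents / waymo_miles  # ~1.21e-5 accidents per mile
rate_per_100k = rate_per_mile * 100_000  # ~1.2 accidents per 100k miles

# 15k miles/year * 30 years = 450k, rounded up to 500k miles
dad_miles = 500_000
expected_accidents = rate_per_100k * dad_miles / 100_000
print(round(expected_accidents))         # ~6
```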
Not sure why that's an interesting comparison, however.
Assuming your dad is good at not driving when he shouldn't (tired/drunk/angry), he's not on the road when it's worrisome. I don't worry about getting into accidents with drivers who aren't on the road, I worry about the tired/drunk/angry drivers I do have to share the road with. Waymo at 2:15am after the bars let out is much less worrisome than any other car at that time, because I have no idea who's in that other car. Your father could be the safest driver ever, but I have no idea if it's him in the other car, or if that driver is totally blacked out and shouldn't be driving.
I think it’s interesting because:
1) It gives Waymo a higher target to shoot for - it hasn’t “solved” self-driving just because it’s safer than the average driver. I am so impressed by Waymo, but some of this article smacked of premature “mission accomplished” vibes; the fact that it accepted the comparison to the average driver without caveat is an example of that.
2) As a matter of policy, everyone can agree that a Waymo ride home for the tipsy is good, but policy will have trouble convincing good drivers such as my dad to take Waymos everywhere. Not to mention most drivers irrationally think they’re way better than average - that will affect policy in a real way.
Ive + Altman is perceived as a viable successor to the Ive + Jobs partnership that made Apple successful.
Apple is weak and doesn't seem capable of innovating anymore, nor do they seem to understand how to build AI into products.
There's an opportunity to build an Apple-sized hardware wearables company with AI at its core, just as Altman built ChatGPT and disrupted Google-sized search.
"Apple-sized" more than justifies a 5B valuation.