And to answer your question: the video is of a guest on The View, whom I don't know, citing the exact numbers I quoted. I was quoting that person, on a segment that aired yesterday or so.
What's interesting is that instead of discussing what I said before the link, you chose to try to make me feel bad for linking a video.
Keep it classy, HN.
Learn more about that here: https://github.com/Brayden/starbasedb/issues/12
If they had linked to the instructions in their post (or, better yet, to a one-click install of a VS Code extension), it would help a lot with adoption.
(BTW, I consider it malpractice that they are at the top of Hacker News with a model that is of great interest to a large portion of the users here, and they do not have a monetizable call to action on the featured page.)
Copilot is mostly useful for staying in the zone: it lets me focus on a larger task while it gets some of the details right (which it does ~90% of the time).
On the other hand, I use ChatGPT/Claude for more open-ended tasks (e.g. "I got this <insert obscure> error", "how do I configure this framework so that xxx") which previously I would have googled, hoping to find a Stack Overflow answer or a doc page somewhere. For this use case I'd say it's ~50% successful, but I often have to deal with hallucinations. Sometimes just following up with "Are you sure?" helps, but it's hit or miss.
As I said at the beginning, it's mostly a marginal improvement. It has definitely saved me time, but thus far nothing I couldn't have done myself by spending a little more time. Largely it is a nice-to-have, not a need-to-have.
People who get 'dumber' from technologies (like the GPS example a commenter mentioned, a nice one, but also auto-correct, IntelliSense, and similar helper tools) are choosing the path of least resistance, and of least learning, for themselves. Arguably they are already 'dumb', since they choose not to learn.
It's all OK, I would say; there's no rule that forces people to learn or be smart. But if you do want to learn, avoid _constantly_ using technologies that hamper your learning.
In the end, be responsible for yourself, and don't blame external things for preventing you from doing something when you chose to engage with them yourself. (If it weren't by choice it might be a different case, but no one is forcing anyone to use AI...)
Not sure about "real", but one can have useful distances that are not symmetric, like the distance between cities measured in travel time or in gallons of fuel.
In comparison, both of your examples are much closer to norms, as they both satisfy the triangle inequality.
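As a toy sketch (the city names and numbers are made up), here's an asymmetric "travel time" distance that still satisfies the triangle inequality, which makes it a quasi-metric rather than a metric:

```python
# Hypothetical one-way travel times between three cities: going one
# direction is faster than the other (think uphill vs. downhill), so
# symmetry fails, yet the triangle inequality still holds.
times = {
    ("A", "B"): 1.0, ("B", "A"): 3.0,
    ("A", "C"): 2.0, ("C", "A"): 4.0,
    ("B", "C"): 1.5, ("C", "B"): 2.0,
}

def d(x, y):
    """Travel-time 'distance' from x to y (0 for a city to itself)."""
    return 0.0 if x == y else times[(x, y)]

cities = ["A", "B", "C"]

# Symmetry fails: d(A, B) != d(B, A).
assert d("A", "B") != d("B", "A")

# ...but the triangle inequality d(x, z) <= d(x, y) + d(y, z) holds
# for every triple, so this is a quasi-metric.
for x in cities:
    for y in cities:
        for z in cities:
            assert d(x, z) <= d(x, y) + d(y, z)

print("asymmetric, but triangle inequality holds")
```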
For reference, this is what I’m referring to when I say a “norm”:
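For the curious, the usual axioms: a function $\|\cdot\|$ on a real vector space is a norm if

```latex
\begin{aligned}
\|x\| &\ge 0, \qquad \|x\| = 0 \iff x = 0 && \text{(positive definiteness)}\\
\|\alpha x\| &= |\alpha|\,\|x\| && \text{(absolute homogeneity)}\\
\|x + y\| &\le \|x\| + \|y\| && \text{(triangle inequality)}
\end{aligned}
```

Note that the induced distance $d(x, y) = \|x - y\|$ is automatically symmetric, since $d(y, x) = \|-(x - y)\| = |-1|\,\|x - y\| = d(x, y)$; that's exactly why an asymmetric distance can't come from a norm.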
The idea of having client and server decoupled via a REST API that is itself discoverable, and that allows independent deployment, seems like a great advantage.
However, the article lacks even the simplest example of an API done the “wrong” vs. the “right” way. Say I have a TODO API: how do I make it use HATEOAS? (Also, who’s coming up with these acronyms… smh.)
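For what it's worth, here's a rough sketch of what a hypermedia-driven TODO response might look like next to a plain one; the URLs and link-relation names are made up for illustration, not taken from the article:

```python
# A plain ("wrong", in the article's framing) REST response: the client
# has to hard-code URL patterns like /todos/{id}/complete itself.
plain = {"id": 7, "title": "Buy milk", "completed": False}

# The same resource with HATEOAS-style hypermedia controls: the server
# tells the client what actions are available and where, via "_links".
# (Relation names and URLs here are hypothetical.)
hateoas = {
    "id": 7,
    "title": "Buy milk",
    "completed": False,
    "_links": {
        "self":     {"href": "/todos/7"},
        "complete": {"href": "/todos/7/complete", "method": "POST"},
        "delete":   {"href": "/todos/7", "method": "DELETE"},
        "all":      {"href": "/todos"},
    },
}

def link(resource, rel):
    """Follow a link by relation name instead of constructing a URL."""
    return resource["_links"][rel]["href"]

print(link(hateoas, "complete"))  # -> /todos/7/complete
```

The point is that the client navigates by relation name ("complete", "delete") and never builds URLs itself, so the server can move endpoints around without breaking clients.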
Overall, the article comes across more as academic pontification on “what not to do” than as actionable advice.