I'm also someone who refuses to pay for it, so maybe the paid versions do better. Who knows.
I know you probably meant "augment fact checking" here, but using LLMs for answering factual questions is the single worst use-case for LLMs.
The fact that it provides those relevant links is what allows it to replace Google for a lot of purposes.
For any job-hunters, it's important you forget this during interviews.
In the past I've made the mistake of trying to convey this in system design interviews.
Some hypothetical startup app:
> Interviewer: "Well, what about backpressure?"
> "That's not really worth considering for this amount of QPS."
> Interviewer: "Why wouldn't you use a queue here instead of a cron job?"
> "I don't think it's necessary for what this app is, but here are the tradeoffs."
> Interviewer: "How would you choose between a SQL and a NoSQL db?"
> "Doesn't matter much. Whatever the team has the most expertise in."
These are not the answers they're looking for. You want to fill the whiteboard with boxes and arrows until it looks like you've got Kubernetes managing your Kubernetes.
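(Aside, for anyone who hasn't run into the term: backpressure just means the producer slows down when the consumer can't keep up. A minimal sketch in Python using a bounded queue; all names and numbers here are made up purely for illustration:)

```python
import queue
import threading

# A bounded queue is the simplest form of backpressure: when the
# consumer falls behind, put() blocks and the producer is forced
# to slow down instead of exhausting memory.
jobs = queue.Queue(maxsize=100)

def handle(item):
    pass  # stand-in for whatever real work the app does

def consumer():
    while True:
        item = jobs.get()
        handle(item)
        jobs.task_done()

threading.Thread(target=consumer, daemon=True).start()

for i in range(10_000):
    jobs.put(i)  # blocks whenever 100 items are already pending

jobs.join()  # wait for the consumer to drain the queue
```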
I think three things about what you're saying:
1. The answers you're giving don't provide a lot of signal (the queue one being the exception). The question that's implicitly being asked is not just what you would choose, but why you would choose it. What factors would drive you to a particular decision? What are you thinking about when you provide an answer? You're not really verbalizing your considerations here.
A good interviewer will pry at you to get the signal they need to make a decision. So if you say that back-pressure isn't worth worrying about here, they'll ask you when it would be, and what you'd do in that situation. But not all interviewers are good interviewers, and sometimes they'll just say "I wasn't able to get much information out of the candidate" and the absence of a yes is a no. As an interviewee, you want to make the interviewer's job easy, not hard.
2. Even if the interviewer is good and does pry the information out of you, they're probably going to write down something like "the candidate was able to explain sensibly why they'd choose a particular technology, but it took a lot of prodding and prying to get the information out of them -- communications are a negative." As an interviewee, you want to communicate all the information your interviewer is looking for proactively, not grudgingly and reluctantly. (This is also true when you're not interviewing.)
3. I pretty much just disagree on that SQL/NoSQL answer. Team expertise is one factor, but those technologies have significant differences; depending on what you need to do, one of them might be way better than the other for a particular scenario. Your answer there is just going to get dinged for indicating that you don't have experience in enough scenarios to recognize this.
I'm sure it is very configurable, but every visual I've seen of this thing looks awful and not something I'd want to look at while working. But I understand we all have different tastes.
But even in the blog post I'm struggling with the 'why?' here. Am I to understand the primary benefits are improved battery life and increased developer productivity from tests running faster? Is that it?
I travel an inordinate amount and have never found a MacBook's battery life to be insufficient. I struggle to even remember the last time I used my computer long enough to drain the battery and wasn't near a power outlet. I work from ski lodges, planes, my car. This has never been a problem for me. Not once. This just feels like a really bad metric to optimize for given a typical developer's schedule and work arrangement.
> On the flip side, we'll get a massive boost in productivity from being able to run our Ruby on Rails test suites locally much faster.
Is this not just a Ruby issue? I don't know what Basecamp's or HEY's codebases look like on the inside, but they don't feel like projects whose test suites should require a completely different OS or hardware arrangement. I haven't used Ruby in a decade, but I do recall it being frustratingly slow. That seemed to be an understood and accepted reality among teams that adopt it.
Anyway, I feel like a better 'why you should do this' is in order, especially if it's being mandated for developers in a company.
On the note of all the Linux marketing, Jonathan Blow summed it up best:
> The people who would historically be excited about a new operating system can't do that any more, because everyone is too helpless to even conceive of a new OS.
> So they have to get excited about a mildly different arrangement of bloatware from That OS From 35 Years Ago.
> But as long as you give it a Cool Name, everything is good.
> Elon: makes car company (when everyone thinks electric cars will never work), rocket company (the rockets land themselves), Neuromancer brain chip company.
> Computer Nerds: Noooooo I can’t make an OS because drivers and adoption!!!!1
———
And from another thread:
> It would be nice to have an OS with a proper job system as a core component. No legacy threads or mutexes at all. Everything is designed to be fine-grained parallel for modern 16+ core CPUs.
> For starters, every API is asynchronous command buffers with an optional slower/easier noob API on top. There are a lot of things that could tremendously simplify userspace as well.
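I can only guess what that would look like concretely, but a command-buffer-style API (in the spirit of Vulkan/Metal command buffers, not any real OS interface) might be sketched roughly like this. Everything here, names included, is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch only: record operations into a buffer, submit
# the whole batch, and get back a handle to wait on. No real OS
# exposes this interface.

class CommandBuffer:
    def __init__(self):
        self.commands = []

    def write(self, path, data):
        self.commands.append(("write", path, data))
        return self

    def read(self, path):
        self.commands.append(("read", path))
        return self

class Kernel:
    def __init__(self, workers=16):
        # the "job system": a pool sized to the machine's cores
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def submit(self, buf):
        # returns immediately; the caller blocks only when it needs results
        return self.pool.submit(self._run, buf.commands)

    def _run(self, commands):
        # ordering is preserved within a buffer; separate buffers
        # are free to run in parallel across cores
        results = []
        for op, *args in commands:
            if op == "write":
                path, data = args
                with open(path, "wb") as f:
                    f.write(data)
            elif op == "read":
                with open(args[0], "rb") as f:
                    results.append(f.read())
        return results

kernel = Kernel()
buf = CommandBuffer().write("/tmp/demo.txt", b"hello").read("/tmp/demo.txt")
handle = kernel.submit(buf)   # asynchronous submit
print(handle.result())        # -> [b'hello']
```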
> Learning how to use LLMs in a coding workflow is trivial. There is no learning curve. You can safely ignore them if they don’t fit your workflows at the moment.
Learning how to use LLMs in a coding workflow is trivial to start, but you sour on them early if you don't learn to adapt both your workflow and the model's. It's easy to get a trivially good result and then be disappointed by the follow-up, and it's easy to start on something the model is bad at and conclude it's worthless.
The outright dismissal of Cursor, for example, suggests the author never learned how to work with it. Now, it's certainly limited, and some people just prefer Claude Code; I'm not saying that's unfair. But it requires a process adaptation.
I don't think the software engineering field is particularly rational; it mostly follows trends, or whatever looks good or familiar. We have a proclivity to assume that anything old is legacy. Most developers have never studied any CS history and are quite young, so they're bound to reinvent the wheel as well.
I think it's fine to use older technology if it's the right fit for the problem. Since the tech is battle-tested, you can read up on why it went out of fashion and minimize the risks of using it. It's "predictably disappointing".