It's important to consider the bigger picture here. Consider a scenario where you spend twice as long delivering features in order to get everything perfect. Assume, for the sake of argument, that users will "like" half the features we ship, and we'll throw out the rest. In that scenario, it's better to reduce quality and ship faster, because half of your features will be "thrown out" anyway.
This happens in the real world, albeit to a less extreme extent. But the point stands. That's why we have product teams that try to reduce the likelihood of a feature being tossed out and time being wasted, and QA teams to catch development bugs, so that we deliver value on top of robust systems.
As long as these aren't catastrophic, affects-all-users, brings-down-the-servers bugs, you're probably writing the optimal number of bugs to balance the trade-off in value delivery.
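The trade-off above can be put in back-of-the-envelope terms. This is a toy sketch with made-up numbers (the time budget, per-feature costs, and the 50% like rate are all assumptions for illustration):

```python
# Toy model of the speed-vs-perfection trade-off (all numbers are made up).
# "Perfect" mode doubles the time per feature, so half as many features ship.
# Either way, users only "like" half of what ships.

def liked_features(total_time, time_per_feature, like_rate=0.5):
    shipped = total_time // time_per_feature  # how many features fit in the budget
    return shipped * like_rate                # expected number users actually like

fast = liked_features(total_time=100, time_per_feature=5)      # 20 shipped -> 10 liked
perfect = liked_features(total_time=100, time_per_feature=10)  # 10 shipped -> 5 liked

assert fast > perfect  # same like rate, so shipping faster delivers more liked features
```

Since the like rate applies to both modes equally, the faster mode always delivers more liked features; perfection only pays off if it meaningfully raises the like rate.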
I'm working at an established org right now, where the team is still remote-first. I tried suggesting this, but got pushback, and the team actually settled on the opposite. For example, they want any optional changes (e.g. suggestions) on pull requests discussed in private rather than left as comments, which 90% of the time means calls. They seem to dislike discussion threads in Slack and want meetings instead. I've also noticed things like the person who reviews a pull request being the one who has to merge it and essentially take responsibility for it, versus just giving approval and letting the author merge it and make sure everything is okay after CD.
I'm very much the opposite: I prefer to have things in writing and like asynchronous communication. But when it is written messages, people usually either ask for a call or just open with "Hey." I actually made this a while ago, hah: https://quick-answers.kronis.dev Either way, people also really seem to dislike writing README files, leaving more than a handful of code comments, or making the occasional onboarding script or introducing tooling to automate things. I don't get it.
Location: Northern Virginia
Remote: Required
Willing to relocate: No
Technologies: Full-stack, React, TypeScript, Next.js, Python, Java, Rust, multiple CSPs, infra, CI/CD, Spark, ETL pipelines, everything in between
Résumé/CV: Provided via email
Email: adamp319 (at) gmail
With the search paradigm this wasn't as much of an issue, because the answers were presented as "here's a bunch of websites that appear to deal with the question you asked". It was then up to the reader to decide which of those sites to visit, and therefore which viewpoints to see.
With an LLM answering the question, this is critical.
To paraphrase a recent conversation I had with a friend: "in the USA, can illegal immigrants vote?" has a single truthful answer ("no" obviously). But there are many places around the web saying other things (which is why my friend was confused). An LLM trawling the web could very conceivably come up with a non-truthful answer.
This is possibly a bad example, because the truth is very clearly written down by the government, based on exact laws. It just happened to be a recent example that I encountered of how the internet leads people astray.
A better example might be "is dietary saturated fat a major factor for heart disease in Western countries?". The current government publications (which answer "yes") for this are probably wrong based on recent research. The government cannot be relied upon as a source of truth for this.
And, generally, allowing the government to decide what is true is probably a path we (as a civilisation) do not want to take. We're seeing how that pans out in Australia and it's not good.
It is very similar. Google decides what to present to you on the front page. I'm sure there are metrics on how few people get past the front page. Heck, isn't this just Google Search's business model? Determining what you see (i.e. what is "true") via ads?
In much the same way that the Councils of Carthage chose to omit the acts of Paul and Thecla in the New Testament, all modern technology providers have some say in what is presented to the global information network, more or less manipulating what we all perceive to be true.
Recent advancements have just made this problem much more apparent to us. But look at history and see how few women priests there are in various Christian churches, and you'll notice that even a small omission can have broad impacts on society.
The future promised in Star Trek and even Apple's Knowledge Navigator [2] from 1987 still feels distant. In those visions, users simply asked questions and received reliable answers - nobody ever had to fact-check the answers.
Combining two broken systems - compromised search engines and unreliable LLMs - seems unlikely to yield that vision. Legacy ad-based search has devolved into a wasteland of misaligned incentives and conflicts of interest, and has proliferated content farms across the web, optimized for ads and algorithms instead of humans.
The path forward requires solving the core challenge: actually surfacing the content people want to see, not what intermediaries want them to see - which means a different business model in search, one where there are no intermediaries. I do not see a way around this. Advancing models without advancing search is like having a Michelin-star chef work with spoiled ingredients.
I am cautiously optimistic we will eventually get there, but boy, we will need a fundamentally different setup in terms of incentives involved in information consumption, both in tech and society.
It's been busy:
- Reading books, currently "The Mom Test"
- Looking for a startup community and being disappointed with what my city has to offer
- Deciding between VC vs. bootstrapping, and taking a firmer stance to try the latter first. This includes rejecting ODF.
- Talking with like-minded people. I now have a better understanding of what the MVP should look like.
Receiving so much support has been very encouraging; I announced it more publicly at https://nullderef.com/blog/quit-job-2024/
It's scary. But working has never been so fun!
I've tried twice to read this, but it loses me about 10% in for some reason. Is it worth continuing past that? Does it get "better"? Or does that just signal that the whole book isn't for me?