Not that non-rationalists are any better at reasoning, but they do at least benefit from some intellectual humility.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
1. Years ago, Acme Corp sets up an FAQ page and creates a goo.gl link to the FAQ.
2. Acme goes out of business. They take the website down, but the goo.gl link is still accessible on some old third-party content, like social media posts.
3. Eventually, the domain registration lapses, and a bad actor takes over the domain.
4. Someone stumbles across a goo.gl link in a reddit thread from a decade ago and clicks it. Instead of going to Acme, they now go to a malicious site full of malware.
With the new policy, if enough time passes without anyone clicking the link, Google deactivates it, and the user in step 4 gets a 404 from Google instead.
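As a rough illustration of the difference, here's a sketch of how you might check where a short link points without actually following it, assuming Node 18+ and its built-in fetch with a manual redirect; the goo.gl URL is a made-up placeholder, not a real link.

```typescript
// Sketch: inspect a short link without following the redirect.
// Assumes Node 18+ (built-in fetch); the short URL below is a made-up placeholder.
async function inspectShortLink(shortUrl: string): Promise<void> {
  const res = await fetch(shortUrl, { redirect: "manual" });

  if (res.status >= 300 && res.status < 400) {
    // Still active: the shortener redirects, possibly to a domain that has
    // since changed hands (step 3 above).
    console.log(`Redirects to: ${res.headers.get("location")}`);
  } else if (res.status === 404 || res.status === 410) {
    // Deactivated: the user gets an error page from the shortener instead of
    // being bounced to whoever owns the old domain now.
    console.log("Link has been deactivated by the shortener.");
  } else {
    console.log(`Unexpected status: ${res.status}`);
  }
}

inspectShortLink("https://goo.gl/abc123").catch(console.error);
```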
I was browsing the code and noticed this forms library was using Supabase, presumably a paid service if this OSS library takes off. I just can't grasp why a custom form-building library needs a third-party managed database included. Scale, maybe?
These are genuine questions, as I'm woefully unaware of the state of HTML forms / frontend in 2025.
On the technical side, these form builders can save a decent amount of development effort. Sure, it's easy to build a basic HTML form, but once you start factoring in things like validation, animations, transitions, conditional routing, error handling, localization, accessibility, and tricky UI like date pickers and fancy dropdowns, making a really polished form is actually a lot of work. You either have to cobble together a bunch of third-party libraries and try to make them play nicely together, or you end up building your own reusable, extensible, modular form library.
It's one of those projects that sounds simple, but scope creep is almost inevitable. Instead of spending your time building things that actually make money, you're spending time on your form library because suddenly you have to show different questions on the next screen based on previous responses. Or you have to handle right-to-left languages like Arabic, and it's not working in Safari on iOS. Or your predecessor failed to do any due diligence before deciding to use a datepicker widget that was maintained by some random guy at a web agency in the Midwest that went out of business five years ago, and now you have to fork it because there's a bug that's impacting your company's biggest client.
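To give a sense of how quickly even the "simple" conditional case grows, here's a minimal sketch of routing between questions based on earlier answers; the field names and types are made up for illustration, not from any particular library.

```typescript
// Minimal sketch of conditional routing between form questions.
// The Question type and field names are made up for illustration.
type Question = {
  id: string;
  label: string;
  // Decide which question comes next based on everything answered so far.
  next: (answers: Record<string, string>) => string | null;
};

const questions: Record<string, Question> = {
  hasPet: {
    id: "hasPet",
    label: "Do you have a pet?",
    next: (a) => (a.hasPet === "yes" ? "petType" : "done"),
  },
  petType: {
    id: "petType",
    label: "What kind of pet?",
    next: () => "done",
  },
  done: {
    id: "done",
    label: "Thanks!",
    next: () => null,
  },
};

// Walk the form for a given set of answers, skipping questions that don't apply.
function route(answers: Record<string, string>): string[] {
  const visited: string[] = [];
  let current: string | null = "hasPet";
  while (current) {
    visited.push(questions[current].label);
    current = questions[current].next(answers);
  }
  return visited;
}

console.log(route({ hasPet: "no" }));                  // ["Do you have a pet?", "Thanks!"]
console.log(route({ hasPet: "yes", petType: "dog" })); // ["Do you have a pet?", "What kind of pet?", "Thanks!"]
```

And that's before validation, localization, and accessibility ever touch the branching logic.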
Or, instead of all that, you could just pay Typeform a fraction of one engineer's salary and never have to think about those things again.
The thing is, this feature leaned on every bit of experience and wisdom we had as a team: things like making sure the model is right, making sure the system makes sense overall and all the pieces fit together properly.
I don't know that "4x" is how it works. In this case, the AI let us really tap into the experience and skill we already had. It made us faster, but if we were missing the experience and wisdom part, we'd just be more prolific at creating messes.
However, with LaTeX, the output of the first run is often an input to the second run, so you get notably different results if you only compile it once vs. compiling twice. When I last wrote LaTeX about ten years ago, I usually encountered this with page numbers and tables of contents, since the page numbers couldn't be determined until the layout was complete. So the first pass would get the bulk of the layout and content in place, and then the second pass would do it all again, but this time with real page numbers. You would never expect to see something like this in a modern compiler, at least not in a way that's visible to the user.
(That said, it's been ten years, and I never compiled anything as long or complex as a PhD thesis, so I could be wrong about why you have to compile twice.)
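For what it's worth, the usual minimal example looks something like this (the file name and section titles are just placeholders): on the first pass the .toc and .aux files don't exist yet, so the table of contents comes out empty and \pageref prints "??"; the second pass reads those files back in and fills everything in.

```latex
% Placeholder file name: thesis.tex
% Compile twice: pdflatex thesis.tex && pdflatex thesis.tex
%
% Pass 1 typesets the document and writes section/page data to thesis.toc
% and thesis.aux (the table of contents prints empty, \pageref prints "??").
% Pass 2 reads those files back in, so the ToC and the page reference below
% come out with real page numbers.
\documentclass{article}
\begin{document}

\tableofcontents

\section{Introduction}
The results begin on page~\pageref{sec:results}.

\newpage
\section{Results}\label{sec:results}
Placeholder text.

\end{document}
```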