Not really. You can say this about smaller US cities, but NYC is absolutely a city where more than 90% of people can live without the daily use of a car.
(The simplest reason for this has nothing to do with car ownership or desirability per se: it's because NYC's food delivery happens by bike or moped.)
“…relies on the data source being able to seek backwards on its changelog. But Postgres throws changelogs away once they're consumed, so the Postgres data source can't support this operation”
Dan’s understanding is incorrect: Postgres logical replication allows each consumer to maintain a bookmark in the WAL, and Postgres retains the WAL until the consumer acknowledges receipt of a portion and advances that bookmark. Evidently, he tried our product briefly, hit an issue (or thought he did), investigated it briefly, and concluded that he understood the technology better than people who have spent years working on it.
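The retention model is easy to sketch. The toy Python below (hypothetical names, not actual Postgres code) models the same idea as Postgres replication slots: each consumer holds a bookmark into an append-only changelog, and entries are discarded only once every consumer has acknowledged past them, which is why a consumer can always seek backwards to its bookmark:

```python
class Changelog:
    """Toy model of WAL retention with per-consumer bookmarks (replication slots)."""

    def __init__(self):
        self.entries = []      # (lsn, payload) pairs, oldest first
        self.next_lsn = 0
        self.bookmarks = {}    # consumer name -> last acknowledged lsn

    def register(self, consumer):
        # A new consumer starts before the oldest retained entry.
        self.bookmarks[consumer] = -1

    def append(self, payload):
        self.entries.append((self.next_lsn, payload))
        self.next_lsn += 1

    def read(self, consumer):
        """Return all entries past this consumer's bookmark. Re-reading works
        because nothing is discarded until *every* consumer acknowledges it."""
        lsn = self.bookmarks[consumer]
        return [(l, p) for (l, p) in self.entries if l > lsn]

    def acknowledge(self, consumer, lsn):
        self.bookmarks[consumer] = lsn
        # Discard only entries acknowledged by all registered consumers.
        floor = min(self.bookmarks.values())
        self.entries = [(l, p) for (l, p) in self.entries if l > floor]
```

A slow consumer pins the log: even after a fast consumer acknowledges everything, the entries stay retained for the one that hasn't.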
Don’t get me wrong, it is absolutely possible for the experts to be wrong and one smart guy to be right. But at least part of what’s going on in this post is an arrogant guy who thinks he knows better than everyone, coming to snap conclusions that other people’s work is broken.
I wonder weekly whatever happened to that company. I wish it had taken off.
I get 100+ emails like this “handbook” a day and discard all of them. Want my attention? Spend your ad dollars on it, literally.
I’m not saying that LLMs can’t be useful, but I do think it’s a darn shame that we’ve given up on creating tools that deterministically perform a task. We know we make mistakes and take a long time to do things. And so we developed tools to decrease our fallibility to zero, or to allow us to achieve the same output faster. But that technology needs to be reliable; and pushing the envelope of that reliability has been a cornerstone of human innovation since time immemorial. Except here, with the “AI” craze, where we have abandoned that pursuit. As the saying goes, “to err is human”; the 21st-century update will seemingly be, “and it’s okay if technology errs too”. If any other foundational technology had this issue, it would be sitting unused on a shelf.
What if your compiler only generated the right code 99% of the time? Or if your car only started 9 times out of 10? All of these tools can be useful, but when we are so accepting of a lack of reliability, more things go wrong, and potentially at larger and larger scales and magnitudes. When (if some folks are to be believed) AI is writing safety-critical code for an early-warning system, or deciding when to use bombs, or designing and validating drugs, what failure rate is tolerable?
This does not follow. By your own assumptions, getting you 80% of the way there in 10% of the time would save you 18% of the overall time, if the first 80% typically takes 20% of the time. 18% time reduction in a given task is still an incredibly massive optimization that's easily worth $200/month for a professional.
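Spelled out, under the comment's own stated assumptions:

```python
total = 1.0             # total task time, normalized to 1
first_80_share = 0.20   # the first 80% of the work takes 20% of the time
speedup = 0.10          # the tool does that portion in 10% of its usual time

time_with_tool = first_80_share * speedup + (total - first_80_share)
saved = total - time_with_tool
print(f"{saved:.0%}")  # 18%
```

The remaining 80% of the time (the "last 20%" of the work) is untouched, so the overall saving is 20% minus 2%, i.e. 18%.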
Dario said in a recent interview that they never switch to a lower-quality model (in the sense of one with different parameters) during times of load. But he left room for interpretation on whether they could still use quantization or sparsity, and his answer wasn't clear enough to tell whether they use a shallower beam search or other cheaper sampling techniques.
He said the only time you might get a different model itself is when they are A/B testing just before a new, announced release.
And I think he clarified this all applied to the web UI, not just the API.
(edit: I'm rate limited on hn, here's the source in reply to the below https://www.youtube.com/watch?v=ugvHCXCOmm4&t=42m19s )
You're spot on. But for the rest of the forum:
The most commonly accepted mouse-to-human conversion is: H = D × (3/37)
Where D = the mouse dose in mg/kg and H = the human-equivalent dose in mg/kg.
So if a 25 g mouse eats 0.1% of its bodyweight in taurine, that comes out to 1000 mg/kg, which translates to about 81 mg/kg for a human. If you weigh 100 kg, an equivalent daily dose for you is 8.1 grams/day.
The rat equation is similar, but 6/37 rather than 3/37.
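In code, the conversion is a one-liner (the function name is mine; the scaling factors are the ones above):

```python
def animal_to_human_dose(animal_dose_mg_per_kg, factor):
    """Scale an animal dose (mg/kg) to a human-equivalent dose (mg/kg)."""
    return animal_dose_mg_per_kg * factor

MOUSE = 3 / 37  # mouse -> human
RAT = 6 / 37    # rat -> human

# A 25 g mouse eating 0.1% of its bodyweight: 25 mg / 0.025 kg = 1000 mg/kg
human_mg_per_kg = animal_to_human_dose(1000, MOUSE)   # ~81.1 mg/kg
grams_per_day_100kg = human_mg_per_kg * 100 / 1000    # ~8.1 g/day for a 100 kg person
```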