Another one could be https://github.com/opencode-ai/opencode
Looks very similar to a Chrome extension I use for a similar goal: Reader View - https://chromewebstore.google.com/detail/ecabifbgmdmgdllomnf...
There are almost certainly blind spots in the Jepsen test generators too--that's part of why designing different generators is so helpful!
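To make the point concrete: in Jepsen, a generator is essentially a stream of randomized client operations, and different generators exercise different slices of the state space. Here is a minimal Python sketch of the idea (the names, operation mix, and dict shape are illustrative assumptions, not Jepsen's actual Clojure API):

```python
import random

def make_generator(keys, ops=("read", "write", "cas"), seed=None):
    """Yield an endless stream of randomized register operations.

    Varying the op mix, key set, or weights produces a *different*
    generator, each with its own blind spots -- which is why running
    several of them is useful.
    """
    rng = random.Random(seed)
    while True:
        op = rng.choice(ops)
        key = rng.choice(keys)
        if op == "read":
            yield {"f": "read", "key": key}
        elif op == "write":
            yield {"f": "write", "key": key, "value": rng.randint(0, 9)}
        else:  # compare-and-set: only succeeds if current value == "from"
            yield {"f": "cas", "key": key,
                   "from": rng.randint(0, 9), "to": rng.randint(0, 9)}

# Draw a short history; in a real harness these ops would be applied
# to the system under test while a checker validates the results.
gen = make_generator(keys=["x", "y"], seed=42)
history = [next(gen) for _ in range(5)]
```

A checker (linearizability, bank-balance invariants, etc.) then validates the recorded history; the generator only decides *which* operations get attempted.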
After reading the Jepsen report on TigerBeetle, the related blog post, and briefly reviewing the Antithesis integration code in the GitHub workflow, I'm trying to better understand the testing scope.
My core question is: could the bugs detected by the Jepsen test suite also have been found by the Antithesis integration?
This question comes from a few assumptions I made, which may be incorrect:
- I thought TigerBeetle was already comprehensively tested by its internal test suite and the Antithesis product.
- I had the impression that the Antithesis test suite was more robust than Jepsen's, so I was surprised that Jepsen found an issue that Antithesis apparently did not.
I'm wondering if my understanding is flawed. For instance:
1. Was the Antithesis test suite not fully capable of detecting this specific class of bug?
2. Was this particular part of the system not yet covered by the Antithesis tests?
3. Am I fundamentally comparing apples and oranges, misunderstanding the different strengths and goals of the Jepsen and Antithesis testing suites?
I would greatly appreciate any insights that could help me understand this better. I want to be clear that my goal is to educate myself on these topics, not to make incorrect assumptions or assign responsibility.
Is it possible to mention in the article the per-model cost of running that benchmark?
https://news.ycombinator.com/item?id=40675527
https://news.ycombinator.com/item?id=43187603
Maybe this could help people be more thoughtful about how they invest their attention.