Readit News
davidsainez commented on Ghostty is now non-profit   mitchellh.com/writing/gho... · Posted by u/vrnvu
catlover76 · 2 months ago
I always found the fact that he named a company after himself to be pretty off-putting, personally

Also, didn't said company piss people off in some way that led to Open Tofu being created?

davidsainez · 2 months ago
Ever heard of Debian or Linux?
davidsainez commented on Arcee Trinity Mini: US-Trained Moe Model   arcee.ai/blog/the-trinity... · Posted by u/hurrycane
davidsainez · 2 months ago
Excited to put this through its paces. It seems most directly comparable to GPT-OSS-20B. Comparing their numbers on the Together API: Trinity Mini is slightly less expensive ($0.045/$0.15 v $0.05/$0.20) and seems to have better latency and throughput numbers.
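The quoted prices can be turned into a blended-cost comparison. A quick sketch, assuming the figures are USD per million input/output tokens (the 3:1 input-to-output ratio is an illustrative assumption, not from the post):

```python
# Hypothetical blended-cost comparison for the Together API prices quoted
# above. Prices are assumed to be USD per 1M tokens; the 3:1 input:output
# workload ratio is an illustrative assumption.
def blended_cost(input_price, output_price, input_tokens, output_tokens):
    """Total cost in USD, with prices given in USD per 1M tokens."""
    return (input_price * input_tokens + output_price * output_tokens) / 1_000_000

workload = (3_000_000, 1_000_000)  # 3M input tokens, 1M output tokens

trinity = blended_cost(0.045, 0.15, *workload)  # Trinity Mini
gpt_oss = blended_cost(0.05, 0.20, *workload)   # GPT-OSS-20B

print(f"Trinity Mini: ${trinity:.3f}")  # $0.285
print(f"GPT-OSS-20B:  ${gpt_oss:.3f}")  # $0.350
```

On this assumed workload shape, the cheaper output rate dominates the difference.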
davidsainez commented on Arcee AI Trinity Mini and Nano – US based open weight models   arcee.ai/blog/the-trinity... · Posted by u/BarakWidawsky
sosodev · 2 months ago
If the performance is comparable to Qwen3 in practice that's quite impressive.

Half the dataset being synthetic is interesting. I wonder what that actually means. They say that Datology needed 2048 H100s to generate the synthetic data. Does that mean they were generating data using other open weight LLMs? Seems like that would undermine the integrity of a "US based" dataset.

davidsainez · 2 months ago
Why would that undermine its integrity? AFAICT there is a selection of "open" US-based LLMs to choose from: Google's Gemma, Microsoft's Phi, Meta's Llama, and OpenAI's GPT-OSS, with Phi licensed under MIT and GPT-OSS under Apache 2.0.
davidsainez commented on How we built the v0 iOS app   vercel.com/blog/how-we-bu... · Posted by u/MaxLeiter
MaxLeiter · 2 months ago
If you can point out how we actually lock you in, that would be more constructive than blanket accusations. I recommend reading the linked post
davidsainez · 2 months ago
I find the existence of opennext convincing proof of lock-in: https://blog.logrocket.com/opennext-next-js-portability/

Personally, I don’t bother with nextjs at all.

davidsainez commented on Migrating the main Zig repository from GitHub to Codeberg   ziglang.org/news/migratin... · Posted by u/todsacerdoti
GaryBluto · 2 months ago
Denying code not on its merits but on its source is childish.
davidsainez · 2 months ago
But to determine its merit a maintainer must first donate their time and read through the PR.

LLMs reduce the effort to create a plausible PR down to virtually zero. Requiring a human to write the code is a good indicator that A. the PR has at least some technical merit and B. the human cares enough about the code to bother writing a PR in the first place.

davidsainez commented on Migrating the main Zig repository from GitHub to Codeberg   ziglang.org/news/migratin... · Posted by u/todsacerdoti
GaryBluto · 2 months ago
Is it really a surprise that the project that declared a blanket ban on LLM-generated code is also emotional and childish in other areas?
davidsainez · 2 months ago
Not wanting to review and maintain code that someone didn't even bother to write themselves is childish?
davidsainez commented on Migrating the main Zig repository from GitHub to Codeberg   ziglang.org/news/migratin... · Posted by u/todsacerdoti
bigyabai · 2 months ago
The absolute state of Github is that I use it dozens of times a day and it works flawlessly, for free, with intermittent outages.

Microsoft is doing more with Github than I can say for most of their products. I won't go to bat for the Xbox or Windows teams, but Github is... fine. Almost offensively usable.

davidsainez · 2 months ago
> works flawlessly

> intermittent outages

Those seem like conflicting statements to me. Last outage was only 13 days ago: https://news.ycombinator.com/item?id=45915731.

Also, there have been increasing reports of open source maintainers dealing with LLM generated PRs: https://news.ycombinator.com/item?id=46039274. GitHub seems perfectly positioned to help manage that issue, but in all likelihood will do nothing about it: '"Either you have to embrace the AI, or you get out of your career," Dohmke wrote, citing one of the developers who GitHub interviewed.'

I used to help maintain a popular open source library and I do not envy what open source maintainers are now up against.

davidsainez commented on Claude Opus 4.5   anthropic.com/news/claude... · Posted by u/adocomplete
dheerkt · 3 months ago
Based on their past usage of "interleaved tool calling", it means that the tool can be used while the model is thinking.

https://aws.amazon.com/blogs/opensource/using-strands-agents...

davidsainez · 3 months ago
AFAICT, kimi k2 was the first to apply this technique [1]. I wonder if Anthropic came up with it independently or if they trained a model in 5 months after seeing kimi’s performance.

1: https://www.decodingdiscontinuity.com/p/open-source-inflecti...
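As a rough illustration of what "interleaved" means here: instead of the model finishing all of its reasoning before calling a tool, tool results are spliced between thinking segments, so later reasoning can condition on earlier tool output. A minimal sketch of the two transcript shapes (the message structure is illustrative, not any vendor's actual API schema):

```python
# Illustrative transcript shapes only; not a real vendor API.
# Sequential: all thinking happens before the tool call.
sequential = [
    {"type": "thinking", "text": "Plan the whole answer up front..."},
    {"type": "tool_call", "name": "search", "args": {"q": "kimi k2"}},
    {"type": "tool_result", "text": "..."},
    {"type": "answer", "text": "..."},
]

# Interleaved: thinking resumes after the tool result arrives.
interleaved = [
    {"type": "thinking", "text": "Need a fact before continuing..."},
    {"type": "tool_call", "name": "search", "args": {"q": "kimi k2"}},
    {"type": "tool_result", "text": "..."},
    {"type": "thinking", "text": "Given that result, refine the plan..."},
    {"type": "answer", "text": "..."},
]

def thinking_after_tool(transcript):
    """True if any thinking segment appears after a tool result."""
    seen_tool = False
    for msg in transcript:
        if msg["type"] == "tool_result":
            seen_tool = True
        elif msg["type"] == "thinking" and seen_tool:
            return True
    return False

print(thinking_after_tool(sequential))   # False
print(thinking_after_tool(interleaved))  # True
```

The distinguishing feature is simply whether reasoning segments can follow tool results within a single turn.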

u/davidsainez

Karma: 148 · Cake day: November 6, 2024
About
https://bsky.app/profile/davidsainez.com