Readit News
J_Shelby_J commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
J_Shelby_J · 3 days ago
No one cares more about rust than Gophers.
J_Shelby_J commented on The Core of Rust   jyn.dev/the-core-of-rust/... · Posted by u/zdw
t8sr · 4 days ago
I've taught programming to some people who had no previous experience with it, and I can tell you that the list of concepts you have to learn at once is basically as long for Python, the quintessential "beginner" language.

The author's argument feels intellectually dishonest for that reason. Especially glaring is the comparison to JavaScript. The latter has an insane amount of concepts to deal with to do anything, including some truly bizarre ones, like prototypes.

Rust is hard to learn, IMO, for precisely two reasons:

1) The borrow checker is in an uncomfortable place, where it's dumb enough that it rejects perfectly valid code, but smart enough that it's hard to understand how it works.

2) As the author points out, there are a lot of levers available for low-level control, precise allocation, etc.
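Point 1 in action: a minimal sketch (hypothetical function names) of the classic "get-or-insert" pattern, which the borrow checker rejects even though the code is sound, alongside the rewrite it accepts:

```rust
use std::collections::HashMap;

// Rejected, even though it's sound: in the checker's view the early
// `return` keeps `map` immutably borrowed for the whole function body.
//
// fn get_or_default(map: &mut HashMap<u32, String>, k: u32) -> &String {
//     if let Some(v) = map.get(&k) {
//         return v; // error[E0502]: cannot borrow `*map` as mutable
//     }
//     map.insert(k, String::new());
//     map.get(&k).unwrap()
// }

// Accepted rewrite using the entry API, which does the same thing.
fn get_or_default(map: &mut HashMap<u32, String>, k: u32) -> &String {
    map.entry(k).or_default()
}

fn main() {
    let mut m = HashMap::new();
    assert_eq!(get_or_default(&mut m, 1), "");
    m.insert(2, "hi".to_string());
    assert_eq!(get_or_default(&mut m, 2), "hi");
    println!("ok");
}
```

The rejected version is exactly the "dumb enough / smart enough" discomfort: the code is fine, but you need to understand the checker's model of borrows to see why it complains and how `entry` sidesteps it.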

With respect to the second point, the author describes a language he'd like to see: green threads, compiler deciding on allocation, fewer choices and easy thread safety.

This language already exists (minus algebraic types). It's called Go. It's perfectly fine, well-designed and good for beginners. Some people don't like its aesthetics, but that's not reason enough to invent it again, only with Rust-inspired syntax.

J_Shelby_J · 4 days ago
> the list of concepts you have to learn at once is basically as long for Python, the quintessential "beginner" language

IMO, Python is great. But deploying Python is more work than learning Rust, and the tooling requires continual education: I hear uv is what all the serious Python teams are using now. Better learn it or be left behind!

Meanwhile, vanilla Rust has everything you need. I'm glad I understood the sunk cost fallacy and got out of Python after many months of coming up to speed with it.

J_Shelby_J commented on Steam can't escape the fallout from its censorship controversy   polygon.com/steam-paypal-... · Posted by u/SilverElfin
trenchpilgrim · 10 days ago
It's more than love - I have friends who will actively refuse to buy games on other stores, even for significantly better prices.

(To be fair - I think some stores like the Epic Games store actively make playing games on them worse, e.g. I played Alan Wake 2 through Epic and the achievements notifications were massively distracting, ruining many scary and dramatic moments, and they turn themselves back on every time you launch the game...)

J_Shelby_J · 10 days ago
It’s because Steam is 100% an aberration in our modern world. The closest comparison is, maybe, Costco?

And we who are dependent on Steam know how bad things would be if Steam weren’t this unicorn. Gaben is the rare feudal lord whose people show up to battle for him because they know it’s good for them. All he had to do was not abuse his monopoly for the past 25 years, as the meme goes.

J_Shelby_J commented on Typed languages are better suited for vibecoding   solmaz.io/typed-languages... · Posted by u/hosolmaz
J_Shelby_J · 22 days ago
Writing Rust, the LLM almost never gets function signatures and return types wrong.

That just leaves the business logic to sort out. I can only imagine that IDEs will eventually pair directly with the compiler for instant feedback to fix generations.

But Rust also has traits, lifetimes, async, and other type flavors that multiply complexity and cause issues. It’s also a language still in progress… I’m about to add a “don’t use once_cell; it’s part of std now” to my system prompt. So it’s not all sunshine, and I’m deeply curious how a purely vibe-coded Rust app would turn out.
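On that system-prompt point: what models keep reaching for from the once_cell crate is now covered by `std::sync::OnceLock` (stable since Rust 1.70). A minimal sketch:

```rust
use std::sync::OnceLock;

// Replaces once_cell::sync::OnceCell / Lazy for the common case:
// a lazily initialized, thread-safe static with no external crate.
static CONFIG: OnceLock<String> = OnceLock::new();

fn config() -> &'static str {
    // The closure runs at most once; later calls return the cached value.
    CONFIG.get_or_init(|| "default-config".to_string())
}

fn main() {
    assert_eq!(config(), "default-config");
    assert_eq!(config(), "default-config"); // second call reuses the value
    println!("ok");
}
```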

J_Shelby_J commented on Ollama's new app   ollama.com/blog/new-app... · Posted by u/BUFU
underlines · 25 days ago
Heads up, there’s a fair bit of pushback (justified or not) on r/LocalLLaMA about Ollama’s tactics:

    Vendor lock-in: AFAIK it now uses a proprietary llama.cpp fork and builds its own registry on ollama.com in a kind of Docker way (I heard Docker people are actually behind Ollama), and it's a bit difficult to reuse model binaries with other inference engines due to their use of hashed filenames on disk, etc.

    Closed-source tweaks: Many llama.cpp improvements haven’t been upstreamed or credited, raising GPL concerns. They have since switched to their own inference backend.

    Mixed performance: The same models often run slower or give worse outputs than plain llama.cpp. A tradeoff for convenience, I know.

    Opaque model naming: Rebrands or filters community models without transparency. The biggest fail was calling the smaller DeepSeek-R1 distills just "DeepSeek-R1", adding to massive confusion on social media and among "AI content creators" claiming you can run "THE" DeepSeek-R1 on any potato.

    Difficult-to-change context window default: Using Ollama as a backend, it is difficult to change the default context window size on the fly, leading to hallucinations and endless output loops, especially for agents / thinking models.
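For what it's worth, Ollama's REST API does accept a per-request override via `options.num_ctx` (model name and size here are illustrative); the pain is that many frontends don't expose it:

```shell
# Override the default context window for a single /api/generate request
# (model name and num_ctx value are illustrative).
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello",
  "options": { "num_ctx": 8192 }
}'
```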
---

If you want better, (in some cases more open) alternatives:

    llama.cpp: Battle-tested C++ engine with minimal deps, and faster thanks to many optimizations

    ik_llama.cpp: High-perf fork, even faster than default llama.cpp

    llama-swap: YAML-driven model swapping for your endpoint.

    LM Studio: GUI for any GGUF model, no proprietary formats, with the llama.cpp optimizations available from a GUI

    Open WebUI: Front-end that plugs into llama.cpp, ollama, MPT, etc.

J_Shelby_J · 25 days ago
And llama.cpp has a decent GUI out of the box.
J_Shelby_J commented on Ollama's new app   ollama.com/blog/new-app... · Posted by u/BUFU
numbers · 25 days ago
Does anyone have a suggestion on running LLMs locally on a Windows PC and then accessing them (through an app / GUI) on a Mac? My Windows PC is a gaming PC with a pretty good GPU and I'd like to take advantage of that.
J_Shelby_J · 25 days ago
Start the llama.cpp server with the GUI accessible to your Mac?
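Concretely, something like this on the Windows box (flags from recent llama.cpp builds; the model path is illustrative):

```shell
# Bind to all interfaces so the Mac can reach llama.cpp's built-in web UI
# at http://<windows-ip>:8080 (model path is illustrative).
llama-server -m ./models/some-model.gguf --host 0.0.0.0 --port 8080
```

The same endpoint also serves an OpenAI-compatible API, so most Mac chat frontends can point at it directly.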
J_Shelby_J commented on The Rising Cost of Child and Pet Day Care   marginalrevolution.com/ma... · Posted by u/surprisetalk
legitster · a month ago
A lot of the "regulations" baked into childcare aren't really going to move the needle - the inefficiencies are baked into the model. I don't care how lax your state is, you're not going to run a successful childcare center with 100 kids and 1 caretaker. So the cost of childcare is always going to be linked strongly to median prevailing wages.

While I think Baumol may have something to say here, I think we should listen to Henry George a bit more. The lion's share of overhead for a daycare goes towards its real estate costs. Similarly, it's a growing share of cost of living for both the workers and the customers. And as Henry George pointed out, the cost of housing goes up without creating economic value for anyone.

J_Shelby_J · a month ago
The knock-on effect of high housing costs is the driving force behind price increases and declining quality of service everywhere.

It’s an absurd situation here in California, where we are some of the richest people in history, but because the cost of housing is so high, the price of everything is driven to the breaking point… meanwhile, the important jobs required for a well-run country are too expensive to afford, and so our transit, healthcare, service industry, etc. are awful.

J_Shelby_J commented on There is no memory safety without thread safety   ralfj.de/blog/2025/07/24/... · Posted by u/tavianator
blub · a month ago
This comment highlights a very important philosophical difference between the Rust community and the communities of other languages:

- in other languages, it’s understood that perhaps the language is vulnerable to certain errors and one should attempt to mitigate them. But more importantly, those errors are one class of bug and bugs can happen. Set up infra to detect and recover.

- in Rust the code must be safe, must be written in a certain way, must be proven correct to the largest extent possible at compile time.

This leads to the very serious, solemn attitude typical of Rust developers. But the reality is that most people just don’t care that much about a particular type of error as opposed to other errors.

J_Shelby_J · a month ago
> This leads to the very serious, solemn attitude typical of Rust developers.

Opposite, really. I like Rust because I can be carefree and have fun.

J_Shelby_J commented on Reflections on OpenAI   calv.info/openai-reflecti... · Posted by u/calvinfo
uh_uh · a month ago
> I don't actually believe it's true, I think it's pure hype and LLMs won't even approximate AGI.

Not sure how you can say this so confidently. Many would argue they're already pretty close, at least on a short time horizon.

J_Shelby_J · a month ago
Many would argue that you should give them a billion dollars funding, and that’s what they’re doing when they say AGI is close.

There is a decade-plus worth of implementation details and new techniques to invent before we have something functionally equivalent to Jarvis.

J_Shelby_J commented on Show HN: We made our own inference engine for Apple Silicon   github.com/trymirai/uzu... · Posted by u/darkolorin
AlekseiSavin · a month ago
You're right, modern edge devices are powerful enough to run small models, so the real bottleneck for a forward pass is usually memory bandwidth, which defines the upper theoretical limit for inference speed. Right now, we've figured out how to run computations in a granular way on specific processing units, but we expect the real benefits to come later when we add support for VLMs and advanced speculative decoding, where you process more than one token at a time.
J_Shelby_J · a month ago
VLMs = very large models?

u/J_Shelby_J

Karma: 820 · Cake day: January 17, 2022