materiallie commented on Systems Programming with Zig   manning.com/books/systems... · Posted by u/signa11
materiallie · 3 months ago
Zig certainly has a lot of interesting features and good ideas, but I honestly don't see the point of starting a major project with it. With alternatives like Rust and Swift, memory safety is simply table stakes these days.

Yes, I know Zig does a lot of things to help the programmer avoid mistakes. But the last time I looked, it was still possible to make mistakes.

The only time I would pick something like C, C++, or Rust is if I were planning to build a multi-million-line, performance-sensitive project, in which case I want total memory safety. For most "good enough" use cases, garbage collectors work fine and I wouldn't bother with a systems programming language at all.

That leaves me a little bit confused about the value proposition of Zig. I suppose it's a "better C". But like I said, for serious industry projects starting in 2025, memory safety is just table stakes.

This isn't meant to be a criticism of Zig or all of the hard work put into the language. I'm all for interesting projects. And certainly there are a lot of interesting ideas in Zig. I'm just not going to use them until they're present in a memory safe language.

I am actually a bit surprised by the popularity of Zig on this website, given the strong dislike towards Go. From my perspective, the two languages are very similar in that they both decided to "unsolve already solved problems". We know how to guarantee memory safety; multiple programming languages have implemented it in a variety of ways. Why would I use a new language that takes away a guarantee (memory safety) that languages like Rust, Java, and Swift already give me?

materiallie commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
alfalfasprout · 4 months ago
It's funny always seeing comments like this. I call them "skill issue" comments.

The reality is the author very much understands what's available today. Zed, after all, is building out a lot of AI-focused features in its editor and that includes leveraging SOTA LLMs.

> It's certainly not perfect, but it works about as well as, if not better than, a human junior engineer. Sometimes it can't solve a bug, but human junior engineers get in the same situation too.

I wonder if comments like this are more of a reflection on how bad the hiring pool was even a few years ago than a reflection of how capable LLMs are. I would be distraught if I hired a junior eng with less wherewithal and capabilities than Sonnet 3.7.

materiallie · 4 months ago
This is a very friendly and cordial response, given that the parent comment was implying that the creators of Zed don't actually know how to build software. Based on their credentials building Rails CRUD apps, I suppose.
materiallie commented on Use Your Type System   dzombak.com/blog/2025/07/... · Posted by u/ingve
default-kramer · 5 months ago
I think checked exceptions were maligned because they were overused. I like that Java supports both checked and unchecked exceptions. But IMO checked exceptions should only be used for what Eric Lippert calls "exogenous" exceptions [1]; and even then most of them should probably be converted to an unchecked exception once they leave the library code that throws them. For example, it's always possible that your DB could go offline at any time, but you probably don't want "throws SQLException" polluting the type signature all the way up the call stack. You'd rather have code assuming all SQL statements are going to succeed, and if they don't your top-level catch-all can log it and return HTTP 500.

[1] https://ericlippert.com/2008/09/10/vexing-exceptions/
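
A minimal Java sketch of that pattern (class, table, and handler names are hypothetical): the checked SQLException is converted to an unchecked exception at the library boundary, so the code in between stays clean and only the top-level catch-all deals with failure.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ExceptionBoundary {
    // Unchecked wrapper for exogenous DB failures.
    static class DataAccessException extends RuntimeException {
        DataAccessException(Throwable cause) { super(cause); }
    }

    // Library boundary: catch the checked SQLException here and rethrow
    // it unchecked, so "throws SQLException" doesn't pollute every
    // signature up the call stack.
    static String findName(Connection conn, long id) {
        try (PreparedStatement stmt =
                 conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            stmt.setLong(1, id);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (SQLException e) {
            throw new DataAccessException(e);
        }
    }

    // "Far away" handler: everything below this assumes SQL succeeds;
    // if it doesn't, the top-level catch-all logs it and returns HTTP 500.
    static int handleRequest(Connection conn, long id) {
        try {
            return findName(conn, id) != null ? 200 : 404;
        } catch (DataAccessException e) {
            e.printStackTrace(); // stand-in for real logging
            return 500;
        }
    }
}
```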

materiallie · 5 months ago
Put another way: errors tend to either be handled "close by" or "far away", but rarely "in the middle".

So Java's checked exceptions force you to write verbose and pointless code in all the wrong places (the "in the middle" code that can't handle and doesn't care about the exception).

materiallie commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
alonsonic · 5 months ago
But there are a ton of LLM-powered products on the market.

I have a friend in finance who uses LLM-powered products for financial analysis; he works at a big bank. Just now Anthropic released a product to compete in this space.

Another friend in real estate uses LLM-powered lead qualification products; he runs marketing campaigns, and the AI handles the initial interaction via email or phone and then ranks the lead in their CRM.

I have a few friends who run small businesses and use LLM-powered assistants to manage all their email comms and agendas.

I've also talked with startups in legal and marketing that are doing very well.

Coding is the theme that's talked about the most on HN, but there are a ton of startups and big companies creating value with LLMs.

materiallie · 5 months ago
It feels like there's a lot of shifting goalposts. A year ago, the hype was that knowledge work would cease to exist by 2027.

Now we are trying to hype up enhanced email autocomplete and data analysis as revolutionary?

I agree that those things are useful. But it's not really addressing the criticism. I would have zero criticisms of AI marketing if it was "hey, look at this new technology that can assist your employees and make them 20% more productive".

I think there's also a healthy dose of skepticism after the internet and social media age. Those were also society altering technologies that purported to democratize the political and economic system. I don't think those goals were accomplished, although without a doubt many workers and industries were made more productive. That effect is definitely real and I'm not denying that.

But in other areas, the last 3 decades of technological advancement have been a resounding failure. We haven't made a dent in educational outcomes or intergenerational poverty, for instance.

materiallie commented on Human coders are still better than LLMs   antirez.com/news/153... · Posted by u/longwave
drodgers · 7 months ago
I don't know how someone could be following the technical progress in detail and hold this view. The progress is astonishing, and the benchmarks are becoming saturated so fast that it's hard to keep track.

Are there plenty of gaps left between here and most definitions of AGI? Absolutely. Nevertheless, how can you be sure that those gaps will remain given how many faculties these models have already been able to excel at (translation, maths, writing, code, chess, algorithm design etc.)?

It seems to me like we're down to a relatively sparse list of tasks and skills where the models aren't getting enough training data, or are missing tools and sub-components required to excel. Beyond that, it's just a matter of iterative improvement until 80th percentile coder becomes 99th percentile coder becomes superhuman coder, and ditto for maths, persuasion and everything else.

Maybe we hit some hard roadblocks, but room for those challenges to be hiding seems to be dwindling day by day.

materiallie · 7 months ago
I think benchmark targeting is going to be a serious problem going forward. The recent Nate Silver podcast on poker performance is interesting: basically, LLMs still suck at playing poker.

Poker tests intelligence. So what gives? One interesting thing is that, for whatever reason, poker performance isn't used as a benchmark in the LLM showdown between big tech companies.

The models have definitely improved in the past few years. I'm skeptical that there's been a "breakthrough", and I'm growing more skeptical of the exponential growth theory. It looks to me like the big tech companies are just throwing huge compute and engineering budgets at the existing transformer tech to improve benchmarks one by one.

I'm sure if Google put 10 engineers and a dozen million dollars toward improving Gemini's poker performance, it would improve. But the idea behind AGI and the exponential growth hypothesis is that you shouldn't have to do that, because the AI gets smarter in a general sense all on its own.

materiallie commented on Human coders are still better than LLMs   antirez.com/news/153... · Posted by u/longwave
Buttons840 · 7 months ago
LLMs aren't my rubber duck, they're my wrong answer.

You know that saying that the best way to get an answer online is to post a wrong answer? That's what LLMs do for me.

I ask the LLM to do something simple but tedious, and then it does it spectacularly wrong, then I get pissed off enough that I have the rage-induced energy to do it myself.

materiallie · 7 months ago
This is my experience, too. As a concrete example, I'll often need to write a mapper function to convert between a protobuf type and a Go type. The types are mirror images of each other, and I feed the complete APIs of both into my prompt.

I've yet to find an LLM that can reliably generate mapping code from proto.Foo{ID string} to gomodel.Foo{ID string}.
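
Concretely, the kind of mapper I mean is roughly this (types simplified and hypothetical; the real ones have many more fields):

```go
package mapper

// Hypothetical stand-ins for the protoc-generated type and the
// hand-written domain model; they mirror each other exactly.
type ProtoFoo struct {
	ID   string
	Name string
}

type ModelFoo struct {
	ID   string
	Name string
}

// FooFromProto is the mechanical, field-by-field conversion I'm
// asking the LLM to write for me.
func FooFromProto(p *ProtoFoo) *ModelFoo {
	if p == nil {
		return nil
	}
	return &ModelFoo{
		ID:   p.ID,
		Name: p.Name,
	}
}
```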

It still saves me time, because even 50% accuracy is still half the code I don't have to write myself.

But it makes me feel like I'm taking crazy pills whenever I read about AI hype. I'm open to the idea that I'm prompting wrong, need a better workflow, etc. But I'm not a Luddite; I've "reached up and put in the work" and am always trying to learn new tools.
