Readit News
daxfohl commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
openasocket · 3 days ago
I've worked almost exclusively on a large Golang project for over 5 years now and this definitely resonates with me. One component of that project is required to use as little memory as possible, and so much of my life has been spent hitting rough edges with Go on that front. We've hit so many issues where the garbage collector just doesn't clean things up quickly enough, or we get issues with heap fragmentation (because Go, in its infinite wisdom, decided not to have a compacting garbage collector) that we've had to try and avoid allocations entirely. Oh, and when we do have those issues, it's extremely difficult to debug. You can take heap profiles, but those only tell you about the live objects in the heap. They don't tell you about all of the garbage and all of the fragmentation. So diagnosing the issue becomes a matter of reading the tea leaves. For example, the heap profile says function X only allocated 1KB of memory, but it's called in a hot loop, so there's probably 20MB of garbage that this thing has generated that's invisible on the profile.
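For what it's worth, one blunt way to surface that invisible churn is to diff the runtime's monotonic allocation counter around the hot path. This is just a sketch, not the project's actual tooling; `allocChurn` is a made-up helper:

```go
package main

import (
	"fmt"
	"runtime"
)

var sink []byte // forces the allocations below to escape to the heap

// allocChurn reports how many bytes fn allocates, including garbage the
// GC may already have reclaimed, by diffing the monotonic TotalAlloc
// counter. A live-heap (inuse_space) profile never shows this churn.
func allocChurn(fn func()) uint64 {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)
	fn()
	runtime.ReadMemStats(&after)
	return after.TotalAlloc - before.TotalAlloc
}

func main() {
	churn := allocChurn(func() {
		for i := 0; i < 1000; i++ {
			sink = make([]byte, 1024) // ~1KB of garbage per iteration
		}
	})
	fmt.Printf("~%d KB allocated\n", churn/1024)
}
```

Wrapping the suspect hot loop this way at least turns "reading the tea leaves" into a number, even if it can't tell you where the fragmentation lives.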

We pre-allocate a bunch of static buffers and re-use them. But that leads to a ton of ownership issues, like the append footgun mentioned in the article. We've even had to re-implement portions of the standard library because they allocate. And I get that we have a non-standard use case, and most programmers don't need to be this anal about memory usage. But we do, and it would be really nice to not feel like we're fighting the language.
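The append footgun in question, as a minimal sketch: two appends onto the same pre-allocated buffer silently share one backing array.

```go
package main

import "fmt"

func main() {
	buf := make([]byte, 0, 8) // pre-allocated, reused buffer

	a := append(buf, "abc"...) // fits within cap: writes into buf's array
	b := append(buf, "xyz"...) // also fits: writes into the SAME array

	// The second append clobbered the first; a and b alias one array.
	fmt.Println(string(a), string(b)) // prints "xyz xyz", not "abc xyz"
}
```

Nothing here is a compile error or a data race the tooling will flag; ownership of the backing array is purely a convention the code has to uphold.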

daxfohl · 2 days ago
Embed a local redis or sqlite instance?
daxfohl commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
JCM9 · 4 days ago
So to summarize:

My boss said we were gonna fire a bunch of people “because AI” as part of some fluff PR to pretend we were actually leaders in AI. We tried that a bit, it was a total mess and we have no clue what we’re doing, I’ve been sent out to walk back our comments.

daxfohl · 3 days ago
Boss->VP: "We need to fire people because AI"

VP->Public: "We'll replace all our engineers with AI in two years"

Boss->VP: "I mean we need to fire VPs because AI"

VP->Public: "Replacing people with AI is stupid"

daxfohl commented on Teaching GPT-5 to Use a Computer   prava.co/archon/... · Posted by u/Areibman
daxfohl · 7 days ago
Very cool. I've been thinking for a while that this is where things will end up. While custom AI integrations per service/product/whatever can be better and more efficient, there's always going to be stuff that doesn't have AI integrations but that your workflow needs to use.

Without this, AI is going to be limited and kludgy. If I want AI to run an FEA simulation on some CAD model, I have to wait until the FEA software, the CAD software, the corporate models repo, etc., all have AI integrations, and then create some custom agent that glues them all together. Once AI can just control the computer effectively, it can look up the instruction manuals for each of these pieces of software online and have at it e2e like a human would. It can even ping you over Slack if it gets stuck on something.

I think once stuff like this becomes possible, custom AI integrations will become less necessary. I'm sure they'll continue to exist for special cases, but the other nice thing about a generic computer-use agent is that you can record the stream and see exactly what it's doing, so a huge increase in observability. It can even demo to human workers how to do things because it works via the same interfaces.

daxfohl commented on Sunny days are warm: why LinkedIn rewards mediocrity   elliotcsmith.com/linkedin... · Posted by u/smitec
etra0 · 8 days ago
LinkedIn posts really read like an alternative reality (which I would not like to be a part of, lol).

I cannot take seriously most of what I read over there. The comments are also often toxic, the whole business is... just weird.

What's funny, as a personal anecdote: I've found more jobs through Twitter (pre-X) than through LinkedIn.

Seriously. And I've tried using LinkedIn for job hunting.

daxfohl · 7 days ago
Yeah, I think the "pro-LinkedIn" comments here are probably valid, with the caveat that eventually everyone will quit using LinkedIn if there isn't more substance on it at some point.

The way it's headed, it feels like AI will be writing 99% of posts at some point, and who wants to consume that? IDK, maybe lots of people, or at least maybe lots of people will keep consuming it because of how good AI will get at fine-tuning to their eyeballs, even if they know they hate reading it.

daxfohl commented on California unemployment rises to 5.5%, worst in the U.S. as tech falters   sfchronicle.com/californi... · Posted by u/littlexsparkee
groby_b · 8 days ago
Your cynical hunch is very much wrong. That change was easier to swallow on massive capitalization than for smaller businesses.
daxfohl · 7 days ago
Yeah, actually it looks like you're right about the tax code impacts. It doesn't seem like it'd be causing layoffs just yet, but it could definitely be putting hiring on pause due to the uncertainty.
daxfohl commented on California unemployment rises to 5.5%, worst in the U.S. as tech falters   sfchronicle.com/californi... · Posted by u/littlexsparkee
tqi · 9 days ago
IMO it's far too early for "AI" to have had a meaningful effect on Software company hiring. A more plausible explanation for me is that between roughly 2012 and 2022, there was a tremendous increase in the supply of SWE talent (via undergraduate CS programs massively increasing enrollment, boot camps, immigration, etc), fueled primarily by ZIRP. On the demand side, ZIRPy VC funding primarily went to bullshit Crypto and (to a lesser extent) bullshit Metaverse companies, most of which have not panned out, meaning there is a dearth of late stage and newly public companies to hire said talent.
daxfohl · 9 days ago
And old unicorns like Airbnb and Uber now having to compete with traditional hotels and taxis again.

I think Elon's takeover of Twitter set something of a precedent too: if he could cut headcount as much as he did and still have a functioning product, then why can't I?

BTW I also don't think it has much to do with that engineering tax deferral code change that people keep talking about. My cynical hunch is that that topic keeps getting seeded by the billionaires who have the most to gain by reversing it, and hey maybe they'll hire an extra engineer or two afterward just to be good sports, but it's not going to reverse any major employment trends.

daxfohl commented on Evaluating LLMs playing text adventures   entropicthoughts.com/eval... · Posted by u/todsacerdoti
daxfohl · 12 days ago
A while ago I tried something similar but tried to boil it down to the simplest thing I could come up with. I ended up making a standard maze into a first-person perspective where it unfolds one step at a time, and seeing if a model could solve it without re-entering areas it had already fully explored. They all failed.

Setup: a maze generator builds a square maze and puts the start and end on opposite corners. It doesn't show the full maze to the LLM; it just has the LLM explore one square at a time like a text adventure. It tells the LLM which directions from its current position have walls (relative directions: front, back, left, right). The LLM then chooses between move forward, turn left, and turn right. That's pretty much it.
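A minimal sketch of such a setup (hypothetical names, not the original code; `generate_maze` is a standard depth-first backtracker):

```python
import random

# Absolute directions as (dy, dx); index order: 0=N, 1=E, 2=S, 3=W.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def generate_maze(n, seed=None):
    """Depth-first backtracker over an n x n grid. walls[y][x] is the set
    of blocked absolute directions (0-3) for cell (y, x). Start is (0, 0),
    goal is the opposite corner (n-1, n-1)."""
    rng = random.Random(seed)
    walls = [[{0, 1, 2, 3} for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    stack = [(0, 0)]
    seen[0][0] = True
    while stack:
        y, x = stack[-1]
        nbrs = [(d, y + DIRS[d][0], x + DIRS[d][1]) for d in range(4)]
        nbrs = [(d, ny, nx) for d, ny, nx in nbrs
                if 0 <= ny < n and 0 <= nx < n and not seen[ny][nx]]
        if not nbrs:
            stack.pop()
            continue
        d, ny, nx = rng.choice(nbrs)
        walls[y][x].discard(d)                 # knock down the wall...
        walls[ny][nx].discard((d + 2) % 4)     # ...from both sides
        seen[ny][nx] = True
        stack.append((ny, nx))
    return walls

class Agent:
    """First-person view: a position, a facing, and relative observations."""
    def __init__(self, walls):
        self.walls, self.y, self.x = walls, 0, 0
        self.facing = 2  # start facing south, into the maze

    def observe(self):
        """Report walls relative to facing, as the LLM would see them."""
        return {name: ((self.facing + i) % 4) in self.walls[self.y][self.x]
                for i, name in enumerate(["front", "right", "back", "left"])}

    def step(self, action):
        if action == "turn left":
            self.facing = (self.facing - 1) % 4
        elif action == "turn right":
            self.facing = (self.facing + 1) % 4
        elif action == "move forward" and not self.observe()["front"]:
            dy, dx = DIRS[self.facing]
            self.y, self.x = self.y + dy, self.x + dx
```

The driver loop then just feeds `observe()` to the model each turn and passes its chosen action to `step()`.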

First attempt: Just maintain all the above in a chat, step by step. It'd get lost pretty quickly and readily start re-exploring already-explored areas. Not very surprising, as we all know they can get lost in long chat threads. The chat model seemed to just go forward or turn right forever (which can work in some mazes), whereas the thinking model did seem to avoid backtracking until it hit the T-junction of a wrong turn, where it always seemed to go back and forth forever.

Second attempt: After each step, tell the LLM to "externalize" everything it knew about the maze, and then feed that to a brand new LLM context. The idea was to avoid long chat context problems and see if the LLM could adequately represent its internal state and knowledge such that a "new" LLM could take over. This really didn't end up working any better. The biggest problem was that sometimes it would think that "turn left" would also change the position, and sometimes not. There were other issues too, so I didn't go much further with this approach.

Third attempt: Tell the LLM the premise of the game, and tell it to create a python state machine that stores all the state information it would need to represent its progress through the maze, and then to emit specific keywords when it needed to interact with it (and I added some code that served as a proxy). This also didn't work great. The state machine was close, but one thing it always forgot to do was relate index with direction. So if it's "in cell (5, 5) and facing up", it wouldn't know whether "forward" would be an increase or decrease in the x or y index.
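The missing piece it kept forgetting can be pinned down in a few lines (a sketch, assuming (row, col) indexing with row 0 at the top):

```python
# Absolute headings in clockwise order; each maps to its (row, col) delta.
HEADINGS = ["up", "right", "down", "left"]
DELTAS = {"up": (-1, 0), "right": (0, 1), "down": (1, 0), "left": (0, -1)}

def forward(pos, heading):
    """'Move forward' from pos given an absolute heading: position changes."""
    dr, dc = DELTAS[heading]
    return (pos[0] + dr, pos[1] + dc)

def turn(heading, direction):
    """'Turn left'/'turn right' changes heading only, never position."""
    i = HEADINGS.index(heading)
    return HEADINGS[(i - 1) % 4] if direction == "left" else HEADINGS[(i + 1) % 4]
```

With this, "in cell (5, 5) facing up" unambiguously makes "forward" a decrease in the row index; the LLM's generated state machines kept leaving that mapping implicit.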

I was also amused by its sycophancy here. I'd ask it

"Would adding a map to the state machine output be useful?"

"Yes, that is a great idea, let's do that!"

It'd do a great job of adding the map, but then I'd ask, "Does a map create more opportunity for confusion?"

"Yes, that's an excellent insight, let's remove it!"

"No, really, you're the LLM, you're the one who's going to be using this app. I'm asking you, what do you think?"

"Whatever you want to do, just tell me"

Eventually, as the OP pointed out, these costs do add up pretty quickly. All I was after was "does externalizing the state help solve some of the long chat context problems", and the answer was "no" enough for me.

EDIT: Note that in all cases, they 100% emitted valid commands. And also I never noticed a case where "move forward" was chosen when there was a wall in front of them, nor "turn" when they were in the middle of a corridor.

daxfohl commented on If you're remote, ramble   stephango.com/ramblings... · Posted by u/lawgimenez
Aurornis · 22 days ago
> I'm with the other commenters who agree in spirit, but would hate the details in the post

This seems to happen a lot: Someone writes some highly exaggerated career advice that has good intent at the core but turns into overly weird suggestions by the end. They might be trying to be memorable or to make an impact by exaggerating the advice.

Then some people, often juniors, take it literally and start practicing it. They think they’re doing some secret that will make them the best employee. Their coworkers and managers are more confused than impressed and think it’s just a personality quirk.

As a manager I found it helpful to skim Reddit and other sites for semi-viral advice blogs like this. With enough juniors in a company there’s a chance one of them will suddenly start doing the thing written in a shared post like this. Knowing why they’re doing it is a good way to help defuse the behavior (assuming they don’t really benefit but rather do it because they perceive it will look good)

daxfohl · 21 days ago
"When a metric becomes a target, it ceases to be a useful metric"


daxfohl commented on The Math Is Haunted   overreacted.io/the-math-i... · Posted by u/danabramov
kevinbuzzard · 24 days ago
Most mathematicians aren't doing formalization themselves, but my impression is that a lot of them are watching with interest. I get asked "is my job secure?" quite a lot nowadays. Answer is "currently yes".
daxfohl · 24 days ago
Okay, yeah my response is ultimately based on the one conversation about it that I've had with the one prof of the one math class I've taken in the last 30 years, so take that with a grain of salt.

(Tangentially, I'm so so so so angry that universities stopped offering remote classes after covid. I'd been wanting to take a bunch of classes for a long time, but it's just not feasible when you've got a full-time job in the 'burbs. I managed to get through measure theory and quantum mechanics while the window was open, and it was great. I planned to get through a few more in differential geometry and algebraic topology, but then the window closed. Feels like I'll pretty much have to wait until retirement at this point. Oh well, first-world problems.)

Edit: oh and actually, a follow-up question: are these tools useful for _learning_ advanced mathematics? I looked at topology in Lean and its approach is very non-standard, which makes me question whether I'd actually be learning math or mainly be learning how to finagle things into Lean-friendly representations while missing the higher-level concepts.
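For context, the non-standard flavor is that mathlib builds topology on filters rather than ε-δ or open-set chasing. A sketch of what a continuity/limit statement looks like there (assuming current mathlib names):

```lean
import Mathlib

-- mathlib phrases limits via filters: `Tendsto f (𝓝 a) (𝓝 b)` reads
-- "f tends to b near a", with no ε-δ in sight.
example (f : ℝ → ℝ) (a : ℝ) (hf : Continuous f) :
    Filter.Tendsto f (nhds a) (nhds (f a)) :=
  hf.tendsto a
```

It's elegant once it clicks, but it is a genuinely different vocabulary from the one most topology textbooks teach.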

u/daxfohl

Karma: 4621 · Cake day: January 12, 2014