Readit News
stuartaxelowen commented on Getting good results from Claude Code   dzombak.com/blog/2025/08/... · Posted by u/ingve
bgirard · 20 days ago
I'm playing with Claude Code to build an ASCII Factorio-like. I first had it write code without much supervision. It quickly added most of the core features you'd expect (save/load, options, debug, building, map generation, belts, crafting, smart belt placing, QoL). Then I started fixing minor bugs, and each time it would break something, e.g. tweaking movement broke belts. So I prompted it to add Playwright automation. But it wasn't able to write good-quality tests and have them all pass; the tests were full of sleep calls, etc.

So I looked at the code more closely: it was driving the game from the React frontend with useEffect instead of a proper game engine. It's also not great at following hook rules or understanding their timing in advanced scenarios. So now I'm prompting it to use a proper tick-based game engine, rebuilding the game from there, and doing code reviews. It's going 'slower' now, but it's going much better.
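For anyone unfamiliar with the pattern, the fix described above can be sketched as a fixed-timestep tick loop that decouples simulation from rendering. This is a generic, hypothetical sketch (the `World` class and tick rate are made up, not the commenter's actual code):

```python
import time

TICK_RATE = 60        # simulation ticks per second (assumed value)
DT = 1.0 / TICK_RATE  # fixed timestep, in seconds

class World:
    """Minimal stand-in for game state (belts, buildings, player, ...)."""
    def __init__(self):
        self.tick_count = 0

    def tick(self):
        # All game logic advances here, deterministically, one step at a time.
        # This is what makes the game testable without sleep() calls.
        self.tick_count += 1

def run(world, real_seconds):
    """Fixed-timestep loop: simulation time is decoupled from wall-clock time."""
    accumulator = 0.0
    last = time.monotonic()
    deadline = last + real_seconds
    while time.monotonic() < deadline:
        now = time.monotonic()
        accumulator += now - last
        last = now
        while accumulator >= DT:
            world.tick()
            accumulator -= DT
        # render(world) would go here, independent of the tick cadence
```

Because `tick()` is a pure state transition, tests can drive the world forward a known number of ticks and assert on the result, with no timing dependence.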

My goal is to make a Show HN post when I have a good demo.

stuartaxelowen · 19 days ago
It sounds like you implicitly delegated many important design decisions to Claude? In my experience it helps to first discuss the architecture and core components of the problem with Claude, then either tell it what to do for the high-leverage decisions, or provide it with the relevant motivating context to allow it to make the right decisions itself.
stuartaxelowen commented on Apache ECharts   echarts.apache.org/en/ind... · Posted by u/tomtomistaken
lucasfcosta · 5 months ago
We've tested almost every visualization library under the sun when building Briefer (https://briefer.cloud) and I can confidently say that Apache ECharts is the best.

The main issue with other libraries is that they're either:

(a) ugly, (b) difficult to use (i.e. having to do things imperatively), or (c) not flexible enough

Apache ECharts solves all three problems. It's pretty by default; it lets us compute the declarative spec for the graphs in the back-end and send only the desired spec to the front-end for rendering; and it's flexible enough that we can support everything traditional BI tools can do.
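The back-end-spec workflow described above can be sketched as follows. The `xAxis`/`yAxis`/`series` keys are real ECharts option fields; the data, function name, and shape of `rows` are illustrative assumptions:

```python
import json

def build_bar_chart_spec(rows):
    """Compute a declarative ECharts 'option' server-side.

    The client only needs to pass the received JSON to
    echarts.setOption(option) — no chart logic on the front-end.
    """
    return {
        "xAxis": {"type": "category", "data": [r["label"] for r in rows]},
        "yAxis": {"type": "value"},
        "series": [{"type": "bar", "data": [r["value"] for r in rows]}],
    }

# Hypothetical query result from the back-end
rows = [{"label": "a", "value": 3}, {"label": "b", "value": 5}]
spec_json = json.dumps(build_bar_chart_spec(rows))
```

Because the spec is plain data, it can be built, cached, and tested entirely server-side.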

We've never had to extend the lib to do anything new, everything we need is already there.

Glad to see this great piece of work at the top of HN.

stuartaxelowen · 5 months ago
Did you compare to vega/vega lite? Curious to hear how they compared!
stuartaxelowen commented on Isolating complexity is the essence of successful abstractions   v5.chriskrycho.com/journa... · Posted by u/chriskrycho
gsf_emergency · 7 months ago
>The question is first of all whether we have written them down anywhere

The only hard thing in software: papers please (easily accessible documentation)

stuartaxelowen · 7 months ago
The hard part about documentation is that it requires a component that can be comprehensibly and sufficiently documented. So much software is seen as provisional that even its authors think "well, we'll document the v1", not realizing that their prototype is just that.
stuartaxelowen commented on Isolating complexity is the essence of successful abstractions   v5.chriskrycho.com/journa... · Posted by u/chriskrycho
bb88 · 7 months ago
Python showed what relaxed typing could do, and as it turns out we could go a long way without types. But there are use cases for types, and even Python admitted as much when it added type annotations.

However, when I was a kid I would put a firecracker next to an object. I didn't bother running the scenario through a compiler to see if the object was of type Explodable() and had an explode() method that would be called.

stuartaxelowen · 7 months ago
Python showed that you can be wrong about your types and still build a successful product.
stuartaxelowen commented on A love letter to Apache Echarts   alicegg.tech//2024/02/14/... · Posted by u/zer0tonin
MikeOfAu · 2 years ago
Surprising to me that Vega and Vega-Lite don't get a lot more love
stuartaxelowen · 2 years ago
Completely - they simplify the act of transforming data into visualizations down to the essential association of dimensions to visualization concerns like axes and color.
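That dimension-to-encoding association is literally the shape of a Vega-Lite spec. A minimal sketch (expressed as a Python dict since we're using Python throughout; the data values are made up, but `mark` and `encoding` are real Vega-Lite spec fields):

```python
# Fields from the data are mapped onto x, y, and color encoding channels —
# the spec is nothing more than that association, plus a mark type.
vega_lite_spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [
        {"month": "Jan", "sales": 10, "region": "east"},
        {"month": "Jan", "sales": 7, "region": "west"},
    ]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "month", "type": "nominal"},
        "y": {"field": "sales", "type": "quantitative"},
        "color": {"field": "region", "type": "nominal"},
    },
}
```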
stuartaxelowen commented on Ask HN: What is a 2024-2030 moat for AI    · Posted by u/jonas_kgomo
nextos · 2 years ago
It's hard to say. Good moats, at least mid-term, are rarely algorithms. It's usually data and infrastructure.

However, for some applications I am interested in, I think that robust representation learning solutions could give a significant edge.

But that is mostly an open problem in high-dimensional spaces.

stuartaxelowen · 2 years ago
And not just data, but hard-to-match data generators. I'm skeptical of the defensibility of any given licensed dataset.
stuartaxelowen commented on Ditching PaaS: Why I Went Back to Self-Hosting   shubhamjain.co/2023/01/18... · Posted by u/shubhamjain
throwawaaarrgh · 2 years ago
You saved $35 a month but spent 3x as much time maintaining and tweaking your self hosting. I guess we know how much your time is worth!
stuartaxelowen · 2 years ago
My preferred way of looking at this is "your project costs you this much to keep being alive". Paying the cost upfront means ongoing maintenance costs are essentially zero, so your projects need to clear a much lower bar to stay alive.
stuartaxelowen commented on Rethinking serverless with FLAME   fly.io/blog/rethinking-se... · Posted by u/kiwicopple
chrismccord · 2 years ago
Thanks! I try to address this thought in the opening. The issue with this approach is you are scaling at the wrong level of operation. You're scaling your entire app, i.e. webserver, in order to service specific hot operations. Instead, what we want (and often reach for FaaS for) is granular elastic scale. The idea here is we can do this kind of granular scale for our existing app code rather than smashing the webserver/workers scale buttons and hoping for the best. Make sense?
stuartaxelowen · 2 years ago
If you autoscale based on CPU consumption, doesn't macro-level scaling achieve the same thing? Or is the worry scaling small-scale services, where the marginal scaling increment is a higher multiple, e.g. waste from unused capacity?
stuartaxelowen commented on Bad numbers in the “gzip beats BERT” paper?   kenschutte.com/gzip-knn-p... · Posted by u/ks2048
skrebbel · 2 years ago
Can anyone explain to me how a compression algorithm can beat an LLM at anything? Isn’t that like saying horses are better than graffiti?

I’m sure the answer is in there somewhere but I’m not well versed in AI and I simply can’t figure it out.

stuartaxelowen · 2 years ago
Many other replies here are wrong - the primary reason is that the LLMs were used on completely out-of-distribution data (e.g. trained on English, evaluated on a completely different language that shared some characters). The points about compression's relatedness to understanding are valid, but they are not the primary reason the LLMs underperformed relative to naive compression.
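For context, the paper's method pairs gzip with normalized compression distance (NCD) and kNN. A simplified 1-NN sketch of that idea (the example strings and function names are illustrative, not from the paper):

```python
import gzip

def clen(s: str) -> int:
    """Length of the gzip-compressed text, a proxy for its information content."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: texts that share structure
    compress better together than apart, giving a smaller distance."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify_1nn(query: str, labeled):
    """Pick the label of the nearest training example under NCD
    (the paper uses kNN; 1-NN shown here for brevity)."""
    return min(labeled, key=lambda item: ncd(query, item[0]))[1]
```

No training is involved, which is exactly why the comparison is sensitive to what the *LLM* was trained on: the compressor sees only the evaluation texts themselves.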
stuartaxelowen commented on MLOps is mostly data engineering   cpard.xyz/posts/mlops_is_... · Posted by u/dpbrinkm
stuartaxelowen · 2 years ago
Feature stores are essentially materialized views (aside from any realtime feature resolution needed). I think it's a good thing that there is specialized effort being taken here, though: features stores are an abstraction that could be useful in other domains also, and this surge in interest is an opportunity for us to make better tools.
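The materialized-view analogy can be made concrete with a toy sketch: a batch job precomputes per-key features (the "view refresh"), and the serving layer does point lookups against the result. Everything here — the event shape, feature names, and classes — is hypothetical:

```python
from collections import defaultdict

def materialize_features(events):
    """Batch job: fold raw events into per-user features,
    analogous to refreshing a materialized view."""
    view = defaultdict(lambda: {"order_count": 0, "total_spend": 0.0})
    for e in events:
        row = view[e["user_id"]]
        row["order_count"] += 1
        row["total_spend"] += e["amount"]
    return dict(view)

class FeatureStore:
    """Serving layer: low-latency point lookups against the materialized view,
    with a default row for unseen keys."""
    def __init__(self, view):
        self.view = view

    def get(self, user_id):
        return self.view.get(user_id, {"order_count": 0, "total_spend": 0.0})
```

The realtime path mentioned in the comment would sit alongside this, merging fresh events into the batch-computed row at read time.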

u/stuartaxelowen

Karma: 654 · Cake day: March 26, 2015
About
Founder at Thought Vector. Doing NLP in Seattle and Tokyo. Need easy text auto-tagging? Check out taggit.io.

stuart at axelbrooke dot com
