Readit News
zellyn commented on ASCII characters are not pixels: a deep dive into ASCII rendering   alexharri.com/blog/ascii-... · Posted by u/alexharri
mwillis · 24 days ago
Fantastic technique and deep dive. I will say, I was hoping to see an improved implementation of the Cognition cube array as the payoff at the end. The whole thing reminded me of the blogger/designer who, years ago, showed YouTube how to render a better favicon by using subpixel color contrast, and then IIRC they implemented the improvement. Some detail here: https://web.archive.org/web/20110930003551/http://typophile....
zellyn · 24 days ago
+1 to wanting to see the Cognition logo with contrast. It was set up as the target, but no payoff!

Lovely article, and the dynamic examples are :chefs-kiss:

zellyn commented on Total monthly number of StackOverflow questions over time   data.stackexchange.com/st... · Posted by u/maartin0
0xfaded · a month ago
I once published a method for finding the closest distance between an ellipse and a point on SO: https://stackoverflow.com/questions/22959698/distance-from-g...

I consider it the most beautiful piece of code I've ever written and perhaps my one minor contribution to human knowledge. It uses a method I invented, is just a few lines, and converges in very few iterations.

People used to reach out to me all the time with uses they had found for it, it was cited in a PhD and apparently lives in some collision plugin for unity. Haven't heard from anyone in a long time.

It's also my test question for LLMs, and I've yet to see my solution regurgitated. Instead they generate some variant of Newton's method; ChatGPT 5.2 gave me an LM implementation and acknowledged that Newton's method is unstable (it is, which is why I went down the rabbit hole in the first place).

Today I don't know where I would publish such a gem. It's not something I'd bother writing up in a paper, and SO was the obvious place where people who wanted an answer to this question would look. Now there is no central repository; instead everyone individually summons the ghosts of those passed in loneliness.

zellyn · a month ago
Please, start a blog! Hugo + GitHub hosting makes it laughably simple. (Or pick a different stack; that’s just mine.)

Even if you’re worried it’ll be sparse and crappy, isn’t an Internet full of idiosyncratic personal blogs what we all want?

If you want help or encouragement, reach out: zellyn@ most places

zellyn commented on 10 years of personal finances in plain text files   sgoel.dev/posts/10-years-... · Posted by u/wrxd
binarin · a month ago
I've tried to track personal finances several times, but it only started to work when I discovered the idea (from https://github.com/adept/full-fledged-hledger) that you need to treat the whole PTA story more like a project compilation:

- Everything is managed by a build system that can track dependencies

- Inputs from financial institutions are kept in the repo as is

- Those inputs are converted by your scripts to .csv files that are readable by PTA import engine

- There are rules files that describe how to convert .csv lines to PTA entries

- Generated files are included from per-year PTA journals (and you can also put any manual transactions there)

The benefit is that you can change any part of this pipeline, and just re-generate the changed parts:

- improve the program that converts to .csv - the converted data immediately gets better across the whole repo

- add/customize import rules - better classification is immediately applied to all of the past data

And with this approach you can start small (like a single month of data from your primary bank) and refine the thing in steps, adding more historical data or more data sources (not only bank statements, but even things like itemized Amazon orders and PayPal slips).
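
For concreteness, the convert-to-.csv step can be tiny. A toy sketch in Python (the column names and output layout here are placeholders, not what any particular bank or import engine expects):

    """Toy converter: raw bank export -> normalized .csv for the PTA import step.

    Column names are made up; adapt to whatever your institution actually emits.
    """
    import csv
    import sys

    def convert(raw_path, out_path):
        with open(raw_path, newline="") as src, open(out_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.writer(dst)
            writer.writerow(["date", "description", "amount"])
            for row in reader:
                writer.writerow([
                    row["Transaction Date"],
                    " ".join(row["Description"].split()),  # collapse odd whitespace
                    row["Amount"],
                ])

    if __name__ == "__main__":
        convert(sys.argv[1], sys.argv[2])

The build system then only needs to know raw export -> generated .csv -> per-year journal, so improving a converter or an import rule and regenerating the whole repo is one command.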

zellyn · a month ago
Thank you! I’ve always procrastinated tracking finances, but as a programmer who believes in reproducible builds, this just clicked.

I just downloaded a bunch of QFX and CSV files and got Claude Code to do this. It took an hour to create the whole system from nothing. Then of course I stayed up until 2am getting CC to create Python rules to categorize things better, and trying to figure out what BEENVERI on my bank statement means.

(If you do this, make Claude generate fingerprints for each transaction so it’s easy to add individual overrides…)
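
A fingerprint can be as simple as a short, stable hash over fields that shouldn’t change between re-imports. Something like this sketch (the field choice is just an illustration, not what Claude actually produced):

    import hashlib

    def txn_fingerprint(date, amount, description):
        """Stable short ID for a transaction, so an override can target one row."""
        normalized = "|".join([date, amount, " ".join(description.split()).lower()])
        return hashlib.sha256(normalized.encode()).hexdigest()[:12]

    # An overrides file can then map fingerprints to categories, e.g.:
    #   3f9c1a2b7d4e,expenses:groceries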

Getting Claude to write a FastAPI backend to serve up a “Subscriptions” dashboard took about 3 minutes, plus another minute or two to add an svg bar graph and change ordering.

Crazy times.

zellyn commented on Show HN: Vibe coding a bookshelf with Claude Code   balajmarius.com/writings/... · Posted by u/balajmarius
spzb · a month ago
I have yet to see a vibe-coded success that isn't a small program that already exists in multiple forms in the training data. Let's see something ground-breaking. If AI coding is so great and is going to take us to 10x or 100x productivity, let's see it generate a new, highly efficient compression algorithm or a state-of-the-art travelling salesman solution.
zellyn · a month ago
trifling.org is an entire Python coding site, offline-first (localStorage after first load), with docs, turtle graphics, canvas, and an avatar editor, vibe coded from start to finish, with all the conversations in the GitHub repo here: https://github.com/zellyn/trifling/tree/main/docs/sessions

This is going to destroy my home network, since I never moved it off the little Lenovo box sitting in my laundry room beside the Eero waypoint, but I’m out of town for three days, so

Granted, the seed of the idea was someone posting about how they wired Pyodide to Ace in 400 lines of JavaScript, so I can’t truly argue it’s non-trivial.

As a light troll of Hacker News, only AI-written contributions are accepted.

[Edit: the true inception of this project was my kid learning Python at school and trinket.io inexplicably putting Python 3, but not 2, behind the paywall. Alas, Securely will not let him and his classmates actually access it.]

zellyn commented on Fabrice Bellard Releases MicroQuickJS   github.com/bellard/mquick... · Posted by u/Aissen
simonw · 2 months ago
Clarification added later: One of my key interests at the moment is finding ways to run untrusted code from users (or generated by LLMs) in a robust sandbox from a Python application. MicroQuickJS looked like a very strong contender on that front, so I fired up Claude Code to try that out and build some prototypes.

I had Claude Code for web figure out how to run this in a bunch of different ways this morning - I have working prototypes of calling it as a Python FFI library (via ctypes), as a compiled Python module, and compiled to WebAssembly and called from Deno, Node.js, Pyodide, and Wasmtime: https://github.com/simonw/research/blob/main/mquickjs-sandbo...

PR and prompt I used here: https://github.com/simonw/research/pull/50 - using this pattern: https://simonwillison.net/2025/Nov/6/async-code-research/

zellyn · 2 months ago
I’m horribly biased, but I think it’s a combination of (1) a knee-jerk reaction to similar-looking but low-value comments, and (2) most people not having played with LLM coding agents, or built their own, enough to immediately get excited about simple, safe sandboxing primitives for that purpose.

And +1000 on linking to your own (or any other well-written) blog.

zellyn commented on If a Meta AI model can read a brain-wide signal, why wouldn't the brain?   1393.xyz/writing/if-a-met... · Posted by u/rdgthree
zellyn · 2 months ago
I’ve long thought it would be unsurprising if we eventually found evidence of certain kinds of telepathy. It would just be too damn useful, and tuning up one exquisitely complex magneto-electro-chemical instrument in close proximity to another similar one seems like a good way to at least get resonance. Who knows?
zellyn commented on Show HN: Walrus – a Kafka alternative written in Rust   github.com/nubskr/walrus... · Posted by u/janicerk
nubskr · 2 months ago
S3 charges per 1,000 PUT requests; not sure how it's sustainable to do it every 250ms, tbh, especially in multi-tenant mode where you can have thousands of 'active' blocks being written to
zellyn · 2 months ago
Guess it beats doing it every 250ms for every topic…
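
Back of the envelope (assuming roughly $0.005 per 1,000 PUTs, the us-east-1 ballpark):

    # One shared flush every 250 ms, regardless of topic count:
    puts_per_month = (1 / 0.25) * 60 * 60 * 24 * 30   # ~10.4M PUTs
    cost_per_writer = puts_per_month / 1000 * 0.005   # ~$52/month

    # A separate 250 ms flush per 'active' block instead multiplies that
    # by the number of blocks, so thousands of them adds up fast.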
zellyn commented on 1D Conway's Life glider found, 3.7B cells long   conwaylife.com/forums/vie... · Posted by u/nooks
Cthulhu_ · 2 months ago
Philosophically and depending on what schools of thought you follow, reality is just a really complex GoL simulation. I'm sure I read about it once, but if we were living in a simulation, would we be able to know?
zellyn · 2 months ago
I enjoy the [GoL -> our “reality” -> outside-the-simulation] comparison. It really drives home how unlikely we would be to understand the outside-the-simulation world.

Of course, there are other variants (see qntm's https://qntm.org/responsibility) where the simulation _is_ a simulation of the world outside. And we have GoL in GoL :-)

zellyn commented on Show HN: Walrus – a Kafka alternative written in Rust   github.com/nubskr/walrus... · Posted by u/janicerk
EdwardDiego · 2 months ago
TBH I don't think anyone can utilise S3 for the active segment, I didn't dig into Warpstream too much, but I vaguely recall they only offloaded to S3 once the segment was rolled.
zellyn · 2 months ago
The Developer Voices interview where Kris Jenkins talks to Ryan Worl is one of the best, and goes into a surprising amount of detail: https://www.youtube.com/watch?v=xgzmxe6cj6A

tl;dr: they write to S3 once every 250ms to save costs. IIRC, their contention is that when you keep things organized by writing a separate file per topic, it's the Linux disk cache being clever that turns the tangle of disk-block arrangement into a clean per-file view. They wrote their own version of that, so they can cheaply checkpoint heavily interleaved chunks of data while their in-memory cache provides a clean per-topic view. I think they may clean things up asynchronously later, but my memory fails me.
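
Very roughly, the shape I took away from the interview is something like the sketch below. Every name here is made up, and durability, acks, and the per-topic index are all glossed over:

    import threading
    import time
    from collections import defaultdict

    class BatchingWriter:
        """Toy sketch: interleave records from many topics into one object per flush."""

        def __init__(self, store, flush_interval=0.25):
            self.store = store                    # anything with put(key, data)
            self.flush_interval = flush_interval  # the 250ms from the interview
            self.buffers = defaultdict(list)      # topic -> pending records
            self.lock = threading.Lock()
            self.seq = 0

        def append(self, topic, record):
            with self.lock:
                self.buffers[topic].append(record)

        def flush_once(self):
            with self.lock:
                pending, self.buffers = self.buffers, defaultdict(list)
            if not pending:
                return
            # One object holds interleaved chunks from every topic; an index
            # (not shown) would record which byte ranges belong to which topic,
            # and the real system's in-memory cache keeps serving a clean
            # per-topic view in the meantime.
            blob = b"".join(b"".join(records) for records in pending.values())
            self.store.put(f"segments/{self.seq:012d}", blob)
            self.seq += 1

        def run(self):
            while True:
                time.sleep(self.flush_interval)
                self.flush_once()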

I don't know how BufStream works.

The thing that really stuck with me from that interview is the 10x cost reduction you can get if you're willing and able to tolerate higher latency and increased complexity and use S3. Apparently they implemented that inside Datadog ("Labrador" I think?), and then did it again with WarpStream.

I highly recommend the whole episode (and the whole podcast, really).

zellyn commented on Human Fovea Detector   shadertoy.com/view/4dsXzM... · Posted by u/AbuAssar
jchw · 3 months ago
180 worked pretty well on my Framework 16.
zellyn · 3 months ago
Ditto on my MacBook Air

u/zellyn

Karma: 4735 · Cake day: March 13, 2008
About
[ my public key: https://keybase.io/zellyn; my proof: https://keybase.io/zellyn/sigs/BQV1Sc3sptnq8uZpwibu9P_fQlT-gTGk8VBi8DGEA68 ]