_venkatasg commented on Ask HN: What Are You Working On? (December 2025)    · Posted by u/david927
_venkatasg · 6 days ago
I was thinking about FizzBuzz and thought it might be cool to benchmark various LLMs to see the highest number they could reach before getting it wrong. FizzBuzz is cool because you can also test whether the model generalizes to variants of the game (divisors of 3 and 7 instead of 3 and 5, for example).

Fun, short and sweet experiment to run over the weekend, with some mildly interesting results :)

https://github.com/venkatasg/fizzbuzz-llm
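The harness described above can be sketched in a few lines. This is a hypothetical illustration, not the code from the linked repo: a ground-truth generator parameterized by divisor/word rules, plus a scorer that finds the first line where a model's output diverges. The function names (`fizzbuzz_line`, `first_mistake`) are made up for this sketch.

```python
def fizzbuzz_line(n, rules):
    """Return the expected output for n under the given rules.

    rules: list of (divisor, word) pairs, checked in order, so the
    classic game is [(3, "Fizz"), (5, "Buzz")] and a variant is
    [(3, "Fizz"), (7, "Buzz")].
    """
    words = "".join(word for divisor, word in rules if n % divisor == 0)
    return words or str(n)


def first_mistake(model_lines, rules):
    """Return the first 1-indexed position where the model's output
    diverges from ground truth, or None if every line matches."""
    for i, line in enumerate(model_lines, start=1):
        if line.strip() != fizzbuzz_line(i, rules):
            return i
    return None
```

For example, `first_mistake(["1", "2", "Fizz", "4", "Fizz"], [(3, "Fizz"), (5, "Buzz")])` flags line 5, since the expected answer there is "Buzz".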

_venkatasg commented on Typeface Proofing Without Pangrams   venkatasg.net/typeproof/... · Posted by u/_venkatasg
PaulHoule · 5 months ago
I'm running a pretty standard Win 11. I looked at the page you made about the differences and looked for those and don't see them.

Pro tip: take a look at the home page vs /new -- people are pretty frustrated with all the "I vibe coded something that almost worked" posts and not voting them up.

_venkatasg · 5 months ago
Should have double-checked the Windows default fonts: Windows doesn't ship with Helvetica, ugh.
_venkatasg commented on Typeface Proofing Without Pangrams   venkatasg.net/typeproof/... · Posted by u/_venkatasg
PaulHoule · 5 months ago
I'm running a pretty standard Win 11. I looked at the page you made about the differences and looked for those and don't see them.

Pro tip: take a look at the home page vs /new -- people are pretty frustrated with all the "I vibe coded something that almost worked" posts and not voting them up.

_venkatasg · 5 months ago
Weird. I see the differences on my end across browsers, on both my phone and desktop. And yeah, I get it; I don't claim this is something amazing that should shoot up in ranking, just that it was my first experience vibe coding. But it certainly works for me! What if you search for a Google web font?
_venkatasg commented on Typeface Proofing Without Pangrams   venkatasg.net/typeproof/... · Posted by u/_venkatasg
PaulHoule · 5 months ago
Doesn't work for me: I switch between Arial and Helvetica but the font doesn't change.
_venkatasg · 5 months ago
The differences are very subtle in lowercase; try uppercase? Or maybe your system doesn't have Helvetica/Arial installed? I assumed they're available on every OS, but that might not be the case.
_venkatasg commented on Typeface Proofing Without Pangrams   venkatasg.net/typeproof/... · Posted by u/_venkatasg
_venkatasg · 5 months ago
Inspired by Jonathan Hoefler's essay illustrating the problems with pangrams as typeface proofs, I wanted to build a simple site that lets you proof any font from Google Fonts with his proof texts. Almost all (~95%) of the site was built with Gemini CLI.
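Loading an arbitrary Google Font at runtime, as the site does, typically means building a Google Fonts CSS2 stylesheet URL for the chosen family and injecting a `<link>` tag. The sketch below is a hypothetical illustration of that mechanism (the helper names are made up, not taken from the actual site):

```python
from urllib.parse import quote


def google_fonts_url(family):
    """Build a Google Fonts CSS2 API stylesheet URL for a font family.

    The css2 endpoint expects spaces in the family name encoded as '+'.
    """
    return ("https://fonts.googleapis.com/css2?family="
            + quote(family).replace("%20", "+"))


def font_link_tag(family):
    """Return a <link> tag that a page could inject to load the font."""
    return f'<link rel="stylesheet" href="{google_fonts_url(family)}">'
```

For example, `google_fonts_url("Open Sans")` yields `https://fonts.googleapis.com/css2?family=Open+Sans`, which serves the `@font-face` rules for that family.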
_venkatasg commented on Zasper: A Modern and Efficient Alternative to JupyterLab, Built in Go   github.com/zasper-io/zasp... · Posted by u/thunderbong
prasunanand · a year ago
I am the author of Zasper.

The unique feature of Zasper is that the Jupyter kernel handling is built with Go's goroutines (lightweight coroutines) and is far superior to how it's done by JupyterLab in Python.

Zasper uses about one fourth of the RAM and one fourth of the CPU that JupyterLab uses. While JupyterLab uses around 104.8 MB of RAM and 0.8 CPUs, Zasper uses 26.7 MB of RAM and 0.2 CPUs.

Other features, like search, are still slow because they haven't been refined yet.

I am building it alone full-time, and this is just the first draft. Improvements will come for sure in the near future.

I hope you liked the first draft.

_venkatasg · a year ago
Just wanna say this is a really cool project, and I can't think of higher praise than hoping I build something as cool as this some day! I've been meaning to learn Go for some time now, and will be referring to Zasper in the future :)
_venkatasg commented on Overcoming the limits of current LLMs   seanpedersen.github.io/po... · Posted by u/sean_pedersen
mitthrowaway2 · a year ago
LLMs don't only hallucinate because of mistaken statements in their training data. It just comes hand-in-hand with the model's ability to remix, interpolate, and extrapolate answers to other questions that aren't directly answered in the dataset. For example if I ask ChatGPT a legal question, it might cite as precedent a case that doesn't exist at all (but which seems plausible, being interpolated from cases that do exist). It's not necessarily because it drew that case from a TV episode. It works the same way that GPT-3 wrote news releases that sounded convincing, matching the structure and flow of real articles.

Training only on factual data won't solve this.

Anyway, I can't help but feel saddened sometimes to see our talented people and investment resources being drawn in to developing these AI chatbots. These problems are solvable, but are we really making a better world by solving them?

_venkatasg · a year ago
Most sentences in the world are not about truth or falsity, so training on a high-quality corpus isn't going to fix 'hallucination'. The complete decoupling of sentences from facts is part of what makes LLMs powerful.
