icarito commented on GoDaddy is auctioning a 15-year-old .org from an FOSS volunteer group – help?   somosazucar.org/... · Posted by u/icarito
icarito · a month ago
OP here. For now, the actual content of the missing domain is available at https://somosazucar.github.io/www-blog/
icarito commented on GoDaddy is auctioning a 15-year-old .org from an FOSS volunteer group – help?   somosazucar.org/... · Posted by u/icarito
icarito · a month ago
Hi all — I help manage somosazucar.org, one of the local volunteer groups of Sugar Labs, the nonprofit behind the open-source Sugar Learning Platform originally developed for the One Laptop Per Child (OLPC) project.

SomosAzúcar has supported open education and children’s digital literacy initiatives across Latin America since 2009.

The domain expired on 2025-10-06, but due to a Postfix configuration issue on sugarlabs.org, GoDaddy’s renewal notices never reached us.

By the time we discovered the problem — about 35 days after expiration — GoDaddy informed us that the domain was already being prepared for auction, and that the only way to recover it would be to bid for it like any other buyer.

It feels wrong that a long-standing nonprofit project could lose its .org domain over a technical mail glitch.

Has anyone here faced something similar with GoDaddy or other registrars?

Is there any way to appeal to PIR (the .org registry) or to GoDaddy executive support to restore the domain before it's auctioned?

Any advice or contacts would be deeply appreciated — this domain represents more than 15 years of open education work.

icarito commented on Show HN: My LLM CLI tool can run tools now, from Python code or plugins   simonwillison.net/2025/Ma... · Posted by u/simonw
kristopolous · 7 months ago
Interesting. What do you use it for beyond the normal chatting?
icarito · 7 months ago
I sometimes use llm from the command line, for instance with a fragment, or by piping in a resource from the web with curl, and then pick up the conversation later with `llm gtk-chat --cid MYCID`.
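A minimal sketch of that workflow (assuming an llm version whose `llm logs` output includes the conversation id; the URL is a placeholder, and MYCID stands in for the real id, as above):

    # pipe a web resource into llm, then continue that conversation in the GTK window
    curl -s https://example.com/article.html | llm "Summarize this page"
    llm logs -n 1               # show the most recent exchange, including its conversation id
    llm gtk-chat --cid MYCID    # reopen that conversation in gtk-llm-chat
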
icarito commented on Show HN: My LLM CLI tool can run tools now, from Python code or plugins   simonwillison.net/2025/Ma... · Posted by u/simonw
icarito · 7 months ago
For all of you using `llm` - perhaps take a look at [Gtk-llm-chat](https://github.com/icarito/gtk-llm-chat).

I put a lot of effort into it - it integrates with the `llm` command line tool and with your desktop, via a tray icon and a nice chat window.

I recently released 3.0.0 with packages for all three major desktop operating systems.
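If you'd rather try it straight from the terminal, something like the following should work, assuming the package is published on PyPI under the same name as the repository (otherwise use the release packages mentioned above):

    llm install gtk-llm-chat    # install the plugin into llm's environment
    llm gtk-chat                # open a new chat window; add --cid to resume a conversation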

icarito commented on Show HN: My LLM CLI tool can run tools now, from Python code or plugins   simonwillison.net/2025/Ma... · Posted by u/simonw
kristopolous · 7 months ago
It's a wildly nontrivial problem if you're trying to only be forward moving and want to minimize your buffer.

That's why everybody else either rerenders (such as rich) or relies on the whole buffer (such as glow).

I didn't write Streamdown for fun - there were genuinely no suitable tools that did what I needed.

Also various models have various ideas of what markdown should be and coding against CommonMark doesn't get you there.

Then there are other things. You have to check individual character width and the language family to do proper word wrap. I've seen a number of interesting tmux and alacritty bugs while doing multi-language support.

The only real break I make is rendering h6 (######) in muted grey.

Compare:

    # emit markdown headings h1..h6, throttled to ~30 bytes/s, to exercise streaming rendering
    for i in $(seq 1 6); do
      printf "%${i}sh${i}\n\n-----\n" | tr " " "#"
    done | pv -bqL 30 | sd -w 30
to swapping out `sd` with `glow`. You'll see glow's lag - waiting for that EOF is annoying.

Also try `sd -b 0.4` or even `-b 0.7,0.8,0.8` for a nice blue. It's a bit easier to configure than the usual catalog of themes that needs recompiling after modification, like with pygments.
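Spelled out, the comparison looks like this (assuming a glow build that reads stdin when given `-`; the flag values are the ones suggested above):

    # same stream through glow: it buffers until EOF, so nothing renders while pv trickles bytes
    for i in $(seq 1 6); do
      printf "%${i}sh${i}\n\n-----\n" | tr " " "#"
    done | pv -bqL 30 | glow -

    # streamdown renders incrementally; -b tints the background blue
    for i in $(seq 1 6); do
      printf "%${i}sh${i}\n\n-----\n" | tr " " "#"
    done | pv -bqL 30 | sd -w 30 -b 0.4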

icarito · 7 months ago
That's right, this is a nontrivial problem that I struggled with too for gtk-llm-chat! I resolved it using the markdown-it-py library for streaming rendering.
icarito commented on Show HN: Light like the Terminal – Meet GTK LLM Chat Front End   github.com/icarito/gtk-ll... · Posted by u/icarito
cma · 8 months ago
What's the startup time now with 9950X3D, after a prior start so the pyc's are cached in RAM?
icarito · 8 months ago
Hey, I felt bad that there was such a delay, so by making sure to lazy-load everything I could, I managed to bring the startup time down from 2.2 seconds to 0.6 on my machine! Massive improvement! Thanks for the challenge!
icarito commented on Show HN: Light like the Terminal – Meet GTK LLM Chat Front End   github.com/icarito/gtk-ll... · Posted by u/icarito
cma · 8 months ago
With a laptop 7735HS, using WSL2, I get 15ms for the interpreter to start and exit without any imports.
icarito · 8 months ago
I've got an i5-10210U CPU @ 1.60GHz.

You've piqued my curiosity. The chat window consistently takes 2.28s to start, while the Python interpreter takes roughly 30ms. I'll be doing some profiling.
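A simple starting point for that profiling (the module name below is a guess; substitute the package's real entry point):

    time python3 -c ''                                    # bare interpreter startup, ~30ms here
    python3 -X importtime -c 'import gtk_llm_chat' 2> imports.log
    sort -t'|' -k2 -rn imports.log | head                 # slowest cumulative imports first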

icarito commented on Show HN: Light like the Terminal – Meet GTK LLM Chat Front End   github.com/icarito/gtk-ll... · Posted by u/icarito
indigodaddy · 8 months ago
Does this work on Mac or Linux only?
icarito · 8 months ago
I'd truly like to know! But I've no access to a Mac to try. If you can, try it and let me know? If it does, please send a screenshot!
icarito commented on Show HN: Light like the Terminal – Meet GTK LLM Chat Front End   github.com/icarito/gtk-ll... · Posted by u/icarito
cma · 8 months ago
What's the startup time now with 9950X3D, after a prior start so the pyc's are cached in RAM?
icarito · 8 months ago
I wonder! In my more modest setup, it takes a couple of seconds perhaps. After that it's quite usable.

u/icarito

Karma: 53 · Cake day: September 25, 2014
About
Sebastian Silva ( sebastian [AT] fuentelibre.org )