Readit News
mythz commented on From M1 MacBook to Arch Linux: A month-long experiment that became permanent   ssp.sh/blog/macbook-to-ar... · Posted by u/articsputnik
mythz · 8 days ago
I've owned several laptops over the years and have come to regret each of my non-Apple laptop purchases: none lasted more than 3-4 years without some hardware failure. The Surface was the worst, dying after 10 months of light, sporadic usage.

Whilst my gen 1 MacBook Air has become too slow for anything, my 2013 Intel MacBook still looks and runs great, and the kids still make good use of it. My latest M2 MacBook is by far the best I've ever owned, with great build quality, performance and battery life; it's the first time I can confidently travel without a power brick.

Whilst Apple's laptop hardware is always best-in-class, I've become increasingly dissatisfied with the direction of macOS and Windows, which IMO have both become power-user-hostile, and I've switched to a Linux desktop full-time. Everyone's been predicting the year of the Linux desktop for 20+ years, but I believe we're at a turning point for Linux adoption, with Windows 11 becoming an intolerable ad/spyware-infested marketing platform and Apple continuing to ignore developers while turning its neglected macOS into a locked-down appliance.

Hopefully Valve can continue its investments in the Steam Deck and Arch Linux to accelerate adoption; its contributions to Proton have IMO already unblocked the biggest barrier. Whilst I'm currently a happy Fedora user, I like the direction, taste, philosophy and community behind Omarchy from what I've seen after kicking the tires in a VM, and will look into switching over after they bring out their ISO.

mythz commented on Nginx introduces native support for ACME protocol   blog.nginx.org/blog/nativ... · Posted by u/phickey
vivzkestrel · 18 days ago
Not gonna lie, setting up Nginx and Certbot inside Docker is the biggest PITA ever. You need certificates to start the NGINX server, but you need the NGINX server running to issue certificates. See the problem? It's made infinitely worse by a tonne of online solutions and blog posts, none of which I could ever get to work. I would really appreciate it if someone documented this extensively for Docker Compose. I don't want to use libraries like nginx-proxy, as customizing that library is another nightmare altogether.
mythz · 18 days ago
What's the issue with nginx-proxy? We've used it for years to handle CI deploying multiple Docker Compose apps to the same server [1] without issue, with a more detailed writeup at [2].

This served us well for many years before migrating to use Kamal [3] for its improved remote management features.

[1] https://docs.servicestack.net/ssh-docker-compose-deploment

[2] https://servicestack.net/posts/kubernetes_not_required

[3] https://docs.servicestack.net/kamal-deploy
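
For anyone hitting the certificate chicken-and-egg problem above, this is roughly the shape of a working nginx-proxy setup: the proxy starts immediately on port 80 with a generated config, the ACME companion answers the HTTP-01 challenges through it, drops certs into a shared volume, and reloads nginx when they appear. A minimal sketch (image names and env vars as per the nginx-proxy/acme-companion docs; the domain, email and app image are placeholders):

```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

  acme-companion:
    image: nginxproxy/acme-companion
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
      DEFAULT_EMAIL: you@example.com       # placeholder
    volumes:
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:rw          # companion writes certs here
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    image: your-app-image                  # placeholder
    environment:
      VIRTUAL_HOST: app.example.com        # placeholder domain
      LETSENCRYPT_HOST: app.example.com

volumes:
  vhost:
  html:
  certs:
  acme:
```

The key design point is that certificates live in a named volume shared by both containers, so neither service needs the other to be fully configured before it can start.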

mythz commented on GitHub is no longer independent at Microsoft after CEO resignation   theverge.com/news/757461/... · Posted by u/Handy-Man
pyuser583 · 20 days ago
How is Rider v. VS?

This is the sort of question I don't trust AI with yet.

mythz · 19 days ago
> How is Rider v. VS?

Rider is far better than VS for everything apart from desktop UI apps and perhaps Blazor WASM hot reloading, which is itself far behind the UX of JS/Vite hot reloading, so I avoid it and just use Blazor static rendering. Otherwise VS tooling is far behind IntelliJ/Rider for authoring web dev assets, incl. TypeScript.

I switched to Rider/VS Code long before moving to Linux, and I'm happy to find they work just as well on Linux. Not a fan of JetBrains' built-in AI integration (which IMO they've fumbled for years), but happy with Augment Code's IntelliJ plugin, which I use in both Rider and VS Code.

mythz commented on Open models by OpenAI   openai.com/open-models/... · Posted by u/lackoftactics
mythz · a month ago
Getting great performance running gpt-oss on 3x A4000's:

    gpt-oss:20b = ~46 tok/s
More than 2x faster than my previous leading OSS models:

    mistral-small3.2:24b = ~22 tok/s 
    gemma3:27b           = ~19.5 tok/s
Strangely getting nearly the opposite performance running on 1x 5070 Ti:

    mistral-small3.2:24b = ~39 tok/s 
    gpt-oss:20b          = ~21 tok/s
Where gpt-oss is nearly 2x slower than mistral-small3.2.

mythz · 25 days ago
OK, the issue is with Ollama, as gpt-oss:20b runs much faster on 1x 5070 Ti with llama.cpp and LM Studio:

    llama-server     = ~181 tok/s
    LM Studio        = ~46 tok/s  (default)
    LM Studio Custom = ~158 tok/s (changed to offload to GPU and switch to CUDA llama.cpp engine)
and llama-server on my 3x A4000 GPU server gets ~90 tok/s vs 46 tok/s on Ollama.
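
The LM Studio numbers above show why full GPU offload matters; the same applies when launching llama-server directly. A command sketch (the HF repo name and port are assumptions; adjust to your setup):

```shell
# Serve gpt-oss-20b with llama.cpp's llama-server.
# --n-gpu-layers 99 requests offloading all layers to the GPU.
llama-server -hf ggml-org/gpt-oss-20b-GGUF --n-gpu-layers 99 --port 8080

# llama-server exposes an OpenAI-compatible API:
curl http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"hello"}]}'
```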

mythz commented on Ollama Turbo   ollama.com/turbo... · Posted by u/amram_art
smlacy · a month ago
Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad.

Thankfully, this may just leave more room for other open source local inference engines.

mythz · a month ago
Same; I was just after a small, lightweight solution where I can download, manage and run local models. Really not a fan of boarding the enshittification train with them.

Always had a bad feeling when they didn't give ggerganov/llama.cpp their deserved credit for making Ollama possible in the first place; a true OSS project would have. It now makes more sense through the lens of a VC-funded project looking to grab as much marketshare as possible while avoiding raising awareness of the OSS projects it depends on.

Together with their new closed-source UI [1], it's time for me to switch back to llama.cpp's CLI/server.

[1] https://www.reddit.com/r/LocalLLaMA/comments/1meeyee/ollamas...

mythz commented on Open models by OpenAI   openai.com/open-models/... · Posted by u/lackoftactics
artembugara · a month ago
Disclaimer: probably dumb questions

so, the 20b model.

Can someone explain what I would need in terms of resources (GPU, I assume) if I want to run 20 concurrent processes, assuming I need 1k tokens/second throughput on each (so 20 x 1k total)?

Also, is this model better than or comparable to gpt-4.1-nano for information extraction, and would it be cheaper to host the 20b myself?

mythz · a month ago
gpt-oss:20b is ~14GB on disk [1], so it fits nicely within a 16GB VRAM card.

[1] https://ollama.com/library/gpt-oss

mythz commented on Modern Node.js Patterns   kashw1n.com/blog/nodejs-2... · Posted by u/eustoria
mythz · a month ago
Good to see Node catching up, although Bun seems to have more developer effort behind it, so I'll typically default to Bun unless I need to run in an environment where Node has better compatibility.
mythz commented on Modern Node.js Patterns   kashw1n.com/blog/nodejs-2... · Posted by u/eustoria
stevage · a month ago
I usually write it like:

    const data = await fetch(url).then(r => r.json())

But it's very easy obviously to wrap the syntax into whatever ergonomics you like.

mythz · a month ago
why not?

    const data = await (await fetch(url)).json()
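
Both styles resolve to the same parsed value; the double await is needed because fetch returns a Promise of a Response, and Response.json() itself returns a Promise. A runnable sketch using a stubbed fetch so it works offline (the stub is hypothetical, standing in for the real Fetch API):

```javascript
// Stub fetch so the example runs without a network; like the real fetch,
// it resolves to an object whose .json() also returns a Promise.
const fetch = async (url) => ({ json: async () => ({ url, ok: true }) });

// Promise-chain style: one await over the whole .then() chain.
const getViaThen = (url) => fetch(url).then(r => r.json());

// Double-await style: await the Response, then await .json().
const getViaAwait = async (url) => await (await fetch(url)).json();
```

Which to use is purely a matter of taste; both produce identical results and error behavior under try/catch.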

mythz commented on PixiEditor 2.0 – A FOSS universal 2D graphics editor   pixieditor.net/blog/2025/... · Posted by u/ksymph
oneshtein · a month ago
It doesn't fit my 2K screen on Fedora (X11/MATE), with no option to scale it down. It looks like only 4K screens are supported.
mythz · a month ago
Yeah, can confirm it looks great on my 4K.
