Readit News
andhuman commented on How to effectively write quality code with AI   heidenstedt.org/posts/202... · Posted by u/i5heu
gspr · 2 days ago
I hear you. And maybe you're right. Maybe I'm deluding myself, but: when I look at my skilled colleagues who vibecode, I can't understand how this is sustainable. They're smart people, but they've clearly switched off. They can't answer non-trivial questions about the details of the stuff they (vibe-)delivered without asking the LLM that wrote it. Whoever uses the code downstream isn't gonna stand (or pay!) for this long-term! And the skills of the (vibe-)authors will rapidly disappear.

Maybe I'm just as naive as those who said that photographs lack the soul of paintings. But I'm not 100% convinced we're done for yet, if what you're actually selling is thinking, reasoning and understanding.

andhuman · 2 days ago
I have this nagging feeling I'm more and more skimming text, not just what the LLMs output, but all types of text. I'm afraid people will get too lazy to read, when the LLM is almost always right. Maybe it's a silly thought. I hope!
andhuman commented on Backseat Software   blog.mikeswanson.com/back... · Posted by u/zdw
andhuman · 10 days ago
One of the more annoying pieces of software that does this is Copilot in Office 365 on the web. Every time (!) I open it, it shows a popup on how to add files to the context. That alone would be annoying, but it also steals focus! So you'll be typing something and suddenly you're not typing anymore, because M$ decided it's time for a popup. I finally learned to just wait for the popup and then dismiss it with Esc. Ugh!
andhuman commented on Ask HN: What's the current best local/open speech-to-speech setup?    · Posted by u/dsrtslnd23
andhuman · 16 days ago
I built this recently. I used NVIDIA Parakeet for STT, openWakeWord for wake word detection, Mistral's Ministral 14b as the LLM, and Pocket TTS for TTS. It fits snugly in my 16 GB of VRAM. Pocket is small and fast and has good enough voice cloning. I first used the Chatterbox Turbo model, which performed better and even supported some simple paralinguistic words like (chuckle) that made it more fun, but it was just a bit too big for my rig.
andhuman commented on Qwen3-TTS family is now open sourced: Voice design, clone, and generation   qwen.ai/blog?id=qwen3tts-... · Posted by u/Palmik
indigodaddy · 18 days ago
How does the cloning compare to pocket TTS?
andhuman · 18 days ago
It’s uncannily good. I prefer it to Pocket, but then again Pocket is much smaller and built for realtime streaming.
andhuman commented on GLM-4.7-Flash   huggingface.co/zai-org/GL... · Posted by u/scrlk
andhuman · 21 days ago
Gave it four of my vibe questions around general knowledge and it didn’t do great. Maybe that’s expected with a model as small as this one. Once llama.cpp support is out, I’ll take it for a spin.
andhuman commented on Pocket TTS: A high quality TTS that gives your CPU a voice   kyutai.org/blog/2026-01-1... · Posted by u/pain_perdu
oybng · 24 days ago
>If you want access to the model with voice cloning, go to https://huggingface.co/kyutai/pocket-tts and accept the terms, then make sure you're logged in locally with `uvx hf auth login` lol
andhuman · 24 days ago
I’ve tried the voice cloning and it works great. I added a 9 s clip and it captured the speaker pretty well.

But don’t make the same mistake I did and use an HF token that doesn’t have read access to repos! The error message said I had to request access to the repo, but I had already done that, so I couldn’t figure out what was wrong. Turns out my HF token only had access to inference.

andhuman commented on OLED, Not for Me   nuxx.net/blog/2026/01/09/... · Posted by u/c0nsumer
andhuman · a month ago
I recently bought the LG with the 4th generation OLED panel, and for me it works for long coding sessions (I use it for work). They changed the pixel arrangement in this generation specifically to improve text legibility.
andhuman commented on Can Claude teach me to make coffee?   lesswrong.com/posts/aZYr5... · Posted by u/paulpauper
andhuman · 2 months ago
Interesting experiment. I would hazard a guess that Google is on top when it comes to these sorts of things (spatial ability), then OpenAI, with Anthropic last. I’d like to see the same experiment using Google’s Live view (or whatever it’s called) in the Gemini app.
andhuman commented on Sick of smart TVs? Here are your best options   arstechnica.com/gadgets/2... · Posted by u/fleahunter
energy123 · 2 months ago
> My only caution is OLED can experience burn-in

The other limitation is lower brightness than miniLED monitors, around 30-60% of the nits in SDR. Whether that matters obviously depends on the ambient light or reflective surfaces near you.

For me, because I'm next to a big window and already squinting at my 400-nit IPS monitor, a < 300-nit OLED is a non-starter, but a 600-nit-in-SDR IPS miniLED is ideal.

This limitation should be temporary, however, because some high-nit OLED TVs are coming to market in 2025, so bright 27-43" OLED monitors will likely follow.

andhuman · 2 months ago
The new LG panels are bright enough. I think they’re called 4th generation WOLED.
andhuman commented on Mistral 3 family of models released   mistral.ai/news/mistral-3... · Posted by u/pember
andhuman · 2 months ago
This is big. The first really big open-weights model that understands images.
