Readit News
bachittle commented on I miss thinking hard   jernesto.com/articles/thi... · Posted by u/jernestomg
bachittle · 8 days ago
The friction didn't disappear with AI tools. It just shifted: it's now about knowing when to trust an AI system and when to dig into things yourself. The key insight is this: don't devalue learning things on your own. AI is a tool, but if the tool messes up, you need other tools in your toolbox. If you've only ever leaned on the AI, you're in trouble the moment it fails on something subtle.
bachittle commented on We're losing our voice to LLMs   tonyalicea.dev/blog/were-... · Posted by u/TonyAlicea10
bachittle · 3 months ago
If you give an LLM enough context, it writes in your voice. But that requires an intelligent model and very thoughtful context development. Most people don't do this because it takes effort, arguably even more effort than just writing the damn thing yourself. It's like trying to teach another person to talk like you: very hard, because in the worst case it takes your entire life story.
bachittle commented on No Socials November   bjhess.com/posts/no-socia... · Posted by u/speckx
bachittle · 3 months ago
I use the following extensions to help with managing my social media intake while on my work computer:

Focused Youtube: https://chromewebstore.google.com/detail/nfghbmabdoakhobmimn... Removes all recommendations and just keeps a search bar. No Shorts rabbit holes or algorithm-driven media consumption.

StayFocusd: https://chromewebstore.google.com/detail/laankejkbhbdhmipfmg... I like the nuclear option: it blocks every site on a list I maintain, so I can't open them at all.

bachittle commented on Claude outage   status.claude.com/inciden... · Posted by u/stuartmemo
bachittle · 3 months ago
Pro tip: if you pay for Claude, also subscribe to status updates here: https://status.claude.com. You may want to add a mail rule that filters these into a tag or folder, since they can be quite spammy, but they have helped me a lot. They tell you which specific models are down and which platforms are affected: Claude web, the app, the API, etc.
bachittle commented on FFmpeg 8.0   ffmpeg.org/index.html#pr8... · Posted by u/gyan
0xbeefcab · 6 months ago
Linking a previous discussion of FFmpeg's inclusion of Whisper in this release: https://news.ycombinator.com/item?id=44886647

This seemed to interest users of this site. tl;dr: they added support for Whisper, an OpenAI speech-to-text model, which should allow auto-generating captions via ffmpeg.

bachittle · 6 months ago
Heads up: Whisper support depends on how your FFmpeg was built. Some packages will not include it yet. Check with `ffmpeg -buildconf` or `ffmpeg -filters | grep whisper`. If you compile yourself, remember to pass `--enable-whisper` and give the filter a real model path.
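For the curious, here is a rough sketch of what driving the filter from a script could look like. This assumes the 8.0 whisper filter exposes `model`, `destination`, and `format` options (verify against `ffmpeg -h filter=whisper` on your build); the input file and model path are placeholders.

```python
# Sketch: generate an SRT caption file with FFmpeg 8.0's whisper audio filter.
# Option names (model, destination, format) are my reading of the 8.0 filter
# docs -- check `ffmpeg -h filter=whisper` on your build before relying on them.
import shutil
import subprocess

def generate_captions(video_path: str, model_path: str, srt_path: str) -> None:
    if shutil.which("ffmpeg") is None:
        raise RuntimeError("ffmpeg not found on PATH")

    # Confirm this build actually ships the whisper filter before using it.
    filters = subprocess.run(
        ["ffmpeg", "-hide_banner", "-filters"],
        capture_output=True, text=True, check=True,
    ).stdout
    if "whisper" not in filters:
        raise RuntimeError("this ffmpeg build was not compiled with --enable-whisper")

    # The filter writes captions to `destination`; audio is consumed, video dropped.
    whisper_filter = f"whisper=model={model_path}:destination={srt_path}:format=srt"
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", video_path,
         "-vn", "-af", whisper_filter, "-f", "null", "-"],
        check=True,
    )

generate_captions("talk.mp4", "models/ggml-base.en.bin", "talk.srt")
```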
bachittle commented on Making LLMs Cheaper and Better via Performance-Efficiency Optimized Routing   arxiv.org/abs/2508.12631... · Posted by u/omarsar
bachittle · 6 months ago
I’m fascinated by this new paradigm. We’ve more or less perfected Mixture-of-Experts inside a single model, where routing happens between subnetworks. What GPT-5 auto (and this paper) are doing is a step further: “LLM routing” across multiple distinct models. It’s still rough right now, but it feels inevitable that this will get much better over time.
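To make the idea concrete, here is a toy sketch of cross-model routing. It is purely illustrative, not the paper's method: the model names, costs, heuristic, and threshold are all made up.

```python
# Toy illustration of LLM routing across distinct models: send prompts a cheap
# heuristic scores as "easy" to a small model and everything else to a stronger
# one. A real router would be learned, not a length/keyword rule like this.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # placeholder pricing

CHEAP = Model("small-fast-model", 0.10)
STRONG = Model("large-reasoning-model", 1.50)

def difficulty_score(prompt: str) -> float:
    """Stand-in for a learned router: crude length and keyword heuristic."""
    hard_markers = ("prove", "refactor", "multi-step", "edge case")
    score = min(len(prompt) / 2000, 1.0)
    score += 0.3 * sum(marker in prompt.lower() for marker in hard_markers)
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.5) -> Model:
    return STRONG if difficulty_score(prompt) >= threshold else CHEAP

print(route("What is 2 + 2?").name)  # -> small-fast-model
print(route("Prove this refactor handles every edge case.").name)  # -> large-reasoning-model
```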
bachittle commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
tankenmate · 6 months ago
It's definitely good to have this as an option, but at the same time more context reduces the quality of the output, because it's easier for the LLM to get "distracted". So I wonder what will happen to the quality of code produced by tools like Claude Code if users don't properly understand the trade-off being made (if they leave it in auto mode and code right up to the auto-compact).
bachittle · 6 months ago
As of now it's not integrated into Claude Code; the announcement only says "We’re also exploring how to bring long context to other Claude products." I'm sure they already know about this issue and are trying to think of solutions before letting users incur more costs on their monthly plans.
bachittle commented on Can modern LLMs count the number of b's in "blueberry"?   minimaxir.com/2025/08/llm... · Posted by u/minimaxir
bachittle · 6 months ago
OpenAI definitely tarnished the name of GPT-5 by allowing these issues to occur. It's clearly a smaller model optimized for cost and speed. Compare it to GPT-4.5, which didn't have these errors but was "too expensive for them".

This is why Anthropic's naming scheme of Haiku, Sonnet, and Opus to represent size is really nice. It prevents this confusion.

bachittle commented on Crush: Glamourous AI coding agent for your favourite terminal   github.com/charmbracelet/... · Posted by u/nateb2022
mbladra · 6 months ago
Woah, I love the UI. Compared to the other coding agents I've used (e.g. Claude Code, aider, opencode), this feels like the most enjoyable to use so far. Anyone try switching LLM providers with it yet? That's something I've noticed to be a bit buggy with other coding agents.
bachittle · 6 months ago
Bubble Tea has always been an amazing TUI framework. I find the React-based TUI (which is what Claude Code uses) buggy, and I always have to work against it.
bachittle commented on Crush: Glamourous AI coding agent for your favourite terminal   github.com/charmbracelet/... · Posted by u/nateb2022
oceanplexian · 6 months ago
Actually not really.

I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support or even the ability to simply add an OpenAI endpoint in the GUI. I guess the maintainers simply don't care. Tried adding it to the backend config and it kept overwriting/deleting my config. Got frustrated and deleted it. Sorry but not sorry, I shouldn't need another cloud subscription to use your app.

Claude Code you can sort of get to work with a bunch of hacks, but it involves setting up a proxy, isn't supported natively, and the tool calling is somewhat messed up.

Warp seemed promising, until I found out the founders would rather alienate their core demographic despite ~900 votes on the GH issue asking for local models: https://github.com/warpdotdev/Warp/issues/4339. So I deleted their crappy app; even Cursor provides some basic support for an OpenAI endpoint.

bachittle · 6 months ago
LM Studio is probably better in this regard. I was able to get LM Studio to work with Cursor, a product known for specifically avoiding support for local models. The only catch: if the tool routes requests through its own servers as a middle-man, which is what Cursor does, you need to port forward so those servers can reach your local endpoint.
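As a quick sanity check before pointing Cursor (or a forwarded public URL) at it, something like this confirms the LM Studio server answers OpenAI-style requests. It assumes LM Studio's local server is running on its default port on my install (yours may differ) and that a model is already loaded; the model name is a placeholder.

```python
# Minimal OpenAI-compatible request against a local LM Studio server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # or your forwarded/public URL for Cursor
    api_key="lm-studio",                  # LM Studio ignores the key; the client just needs one
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier LM Studio shows for your loaded model
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(response.choices[0].message.content)
```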
