Readit News
ako commented on A guide to Gen AI / LLM vibecoding for expert programmers   stochasticlifestyle.com/a... · Posted by u/ChrisRackauckas
recursive · 3 days ago
I think I enjoy programming. Vibe coding removes most of the parts that I like. It already looks like hell. I'm probably a minority, but I don't think I'm alone in this.
ako · 2 days ago
I really like creating software solutions; vibe coding removes the most tedious parts. LLMs allow me to experiment with different solutions and different designs faster.
ako commented on What could have been   coppolaemilio.com/entries... · Posted by u/coppolaemilio
acdha · 6 days ago
Thought exercise: has any of the money Apple has spent integrating AI features produced as much customer good-will as fixing iOS text entry would? One reason for paying attention to quality is that if you don't, over time it tarnishes your brand and makes it easier for competitors to start cutting into your core business.
ako · 6 days ago
Text entry has been mostly fixed with AI: dictation, transcription, and cleanup with AI work well for many use cases, especially longer texts.
ako commented on Claudia – Desktop companion for Claude code   claudiacode.com/... · Posted by u/zerealshadowban
commandar · 7 days ago
On the one hand, it's good that we're seeing a lot of exploration in this space.

On the other, the trend seems to be everyone developing a million disparate tools that largely replicate the same functionality with the primary variation being greater-or-lesser lock-in to a particular set of services.

This is about the third tool this week I've taken a quick look at and thought "I don't see what this offers me that I don't already have with Roo, except only using Claude."

We're going to have to hit a collapse and consolidation cycle eventually, here. There's absolutely room for multiple options to thrive, but most of what I've seen lately has been "reimplement more or less the same thing in a slightly different wrapper."

ako · 7 days ago
As code generation tools improve, this will only get worse: having gen AI build a clone of something with some minor differences will become easier and easier.
ako commented on AI is different   antirez.com/news/155... · Posted by u/grep_it
aorloff · 9 days ago
Every time someone casually throws out UBI my mind goes to the question "who is paying taxes when some people are on UBI ?"

Is there like a transition period where some people don't have to pay taxes and yet don't get UBI, and if so, why hasn't that come yet ? Why aren't the minimum tax thresholds going up if UBI could be right around the corner ?

ako · 9 days ago
You also have to consider the alternative: if there’s no UBI, are you expecting millions to starve? That’s a recipe for civil war; if you have a very large group of people unable to survive, you get social unrest. Either you spend the money on UBI or on police/military suppression to battle the unrest.
ako commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
tobr · 10 days ago
The point isn’t that writing and reading aren’t useful. The point is that they’re different from forming new neurological connections as you familiarize yourself with a problem. LLMs, as far as I know, can’t do that when you use them.
ako · 10 days ago
Does that really matter if the result is the same? They have a brain, they have additional instructions, and with these they can achieve specified outcomes. It would be interesting to see how far we can shrink the brains and still get desired outcomes with the right instructions.
ako commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
tobr · 10 days ago
I’m not an expert on this, so I’m not familiar with what RDF graphs are, but I feel like everything you’re describing happens textually, and used as context? That is, it’s not at all ”learning” the way it’s learning during training, but by writing things down to refer to them later? As you say - ”ask it to summarize the insights for later use” - this is fundamentally different from the types of ”insights” it can have during training. So, it can take notes about your code and refer back to them, but it only has meaningful ”knowledge” about code it came across in training.

To me as a layman, this feels like a clear explanation of how these tools break down, why they start going in circles when you reach a certain complexity, why they make a mess of unusual requirements, and why they have such an incredible nuanced grasp of complex ideas that are widely publicized, while being unable to draw basic conclusions about specific constraints in your project.

ako · 10 days ago
To me it feels very much like a brain: my brain often lacks knowledge, but I can use external documents to augment it. My brain also has limits on what it can remember; I hardly remember anything I learned in high school or university about science, chemistry, or math, so I need to write things down to bring the knowledge back later.

Text and words are the means we use to transfer knowledge in schools, across generations, etc. We describe concepts in words so other people can learn them.

Without words and text we would be like animals, unable to express and think about concepts.

ako commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
tobr · 11 days ago
The article has a very nuanced point about why it’s not just a matter of today’s vs tomorrow’s LLMs. What’s lacking is a fundamental capacity to build mental models and learn new things specific to the problem at hand. Maybe this can be fixed in theory with some kind of on-the-fly finetuning, but it’s not just about more context.
ako · 10 days ago
You can give it some documents, or classroom textbooks, and it can turn those into RDF graphs explaining what the main concepts are and how they are related. An LLM can then use these to solve other problems.

It can also learn new things by trial and error with MCP tools. Once it has figured out some problem, you can ask it to summarize the insights for later use.
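The workflow described here — extract concepts and relations from text, store them as triples, query them later — can be sketched with a toy in-memory triple store. The concept names and relations below are purely illustrative, not output from any real extraction run:

```python
# Minimal sketch: a toy triple store standing in for the RDF graph an
# LLM might produce from a textbook chapter, kept for later retrieval.

# Hypothetical (subject, predicate, object) triples; names are made up
# for illustration.
triples = [
    ("Photosynthesis", "requires", "Sunlight"),
    ("Photosynthesis", "produces", "Oxygen"),
    ("Chlorophyll", "enables", "Photosynthesis"),
]

def related_to(concept):
    """Return every (predicate, object) pair attached to a concept."""
    return [(p, o) for s, p, o in triples if s == concept]

print(related_to("Photosynthesis"))
# [('requires', 'Sunlight'), ('produces', 'Oxygen')]
```

A real pipeline would have the LLM emit such triples in a standard RDF serialization (e.g. Turtle) and load them with a proper RDF library, but the retrieval idea is the same.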

What would you define as an AI mental model?

ako commented on GPT-5 vs. Sonnet: Complex Agentic Coding   elite-ai-assisted-coding.... · Posted by u/intellectronica
quijoteuniv · 17 days ago
Today I used GPT-5 for some OpenTelemetry Collector configs that both Claude and OpenAI models struggled with before and it was surprisingly impressive. It got the replies right on the first try. Previously, both had been tripped up by outdated or missing docs (OTel changes so quickly).

For home projects, I wish I could have GPT-5 plugged into the Claude Code CLI interface. Iteration just works! Looking forward to less babysitting in the future!

ako · 17 days ago
ako commented on Cursor CLI   cursor.com/cli... · Posted by u/gonzalovargas
MrGreenTea · 17 days ago
Have you thought about adding a session-start hook that reads this file and adds it to the context?
ako · 17 days ago
Not yet, but that sounds like a good suggestion.
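A minimal sketch of what such a hook could look like: a shell script whose stdout the agent harness would prepend to the session context. The notes file name and the hook wiring are assumptions for illustration, not a specific tool's API:

```shell
#!/bin/sh
# Hypothetical session-start hook: print a project notes file so the
# agent harness can add it to the model's context.
load_notes() {
  notes_file="${1:-PROJECT_NOTES.md}"   # illustrative default name
  if [ -f "$notes_file" ]; then
    echo "--- project notes (auto-loaded at session start) ---"
    cat "$notes_file"
  else
    echo "no notes file found at $notes_file" >&2
  fi
}

# Demo with a throwaway notes file:
printf 'Remember: run the test suite before committing.\n' > /tmp/demo_notes.md
load_notes /tmp/demo_notes.md
```

Registering the script so it actually fires at session start depends on the tool in use; check its hooks documentation for the exact configuration format.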
ako commented on Cursor CLI   cursor.com/cli... · Posted by u/gonzalovargas
novaleaf · 17 days ago
The problem is that Claude doesn't actually read those or keep them in context unless you prompt it to. It has to be in CLAUDE.md or it'll quickly forget about the contents.
ako · 17 days ago
I've added these instructions to CLAUDE.md and .windsurfrules, and yes, sometimes you have to remind it, but overall it works quite well.

u/ako

Karma: 2451 · Cake day: July 2, 2009
About
andrej at koelewijn dot net