ScotterC commented on AI is different   antirez.com/news/155... · Posted by u/grep_it
mxwsn · 9 days ago
AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.

edit: ability without accountability is the catchier motto :)

ScotterC · 9 days ago
I’m surprised that I don’t hear this mentioned more often, not even in an engineering-leadership context of taking accountability for your AI’s pull requests. But it’s absolutely true. Capitalism runs on accountability and trust, and we are clearly not going to trust a service that doesn’t have a human responsible at the helm.
ScotterC commented on Richard Feynman's blackboard at the time of his death (1988)   digital.archives.caltech.... · Posted by u/bookofjoe
jszymborski · 6 months ago
Surely it can be true that a profoundly wise and consistently effective person holds a belief or utters a phrase that is profoundly unwise and endlessly futile.
ScotterC · 6 months ago
Absolutely true. And paradoxically, they may fully understand that the phrase is profoundly unwise and endlessly futile and yet know the benefit of holding the belief anyway.
ScotterC commented on Cognitive load is what matters   minds.md/zakirullin/cogni... · Posted by u/zdw
codespin · 8 months ago
With LLM-generated code (and any code, really) the interface between components becomes much more important. It needs to be clearly defined so that it can be tested, and so that it doesn't rely on implicit features that could go away if the component were re-generated.

Only when you know for sure the problem can't be coming through from that component can you stop thinking about it and reduce the cognitive load.

ScotterC · 8 months ago
Agreed.

Regarding some of the ‘layered architecture’ discussion from the OP, I’d argue that having many clearly defined modules is not as large a detriment to cognitive load when an LLM is the one interpreting them. This depends on two factors: each module being defined clearly enough that you can be confident the problem lies in the interactions between modules/components rather than inside them, and sharing sufficient context with the LLM so that it stays focused on those interactions instead of force-fitting a solution into one component or missing the problem space entirely.

The latter is a constant nagging issue, but the former is completely doable (types and unit tests help), even though it flies in the face of the mo’ files, mo’ problems effect that creates higher cognitive load for humans.
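To make that concrete, here is a minimal sketch in Python (the module, function, and test names are invented for illustration, not taken from the thread): a narrow, typed entry point plus a unit test is what lets you rule a component out and shift your attention, and the LLM's, to the interactions between components.

```python
# tax.py -- hypothetical narrow-scope module: one typed entry point, no hidden state.
from decimal import Decimal


def sales_tax(subtotal: Decimal, rate: Decimal) -> Decimal:
    """Return the tax owed on `subtotal` at `rate`, rounded to cents."""
    if subtotal < 0 or rate < 0:
        raise ValueError("subtotal and rate must be non-negative")
    return (subtotal * rate).quantize(Decimal("0.01"))


# test_tax.py -- the test that lets you stop suspecting this module while debugging.
import unittest


class SalesTaxTest(unittest.TestCase):
    def test_rounds_to_cents(self):
        self.assertEqual(sales_tax(Decimal("19.99"), Decimal("0.08875")), Decimal("1.77"))

    def test_rejects_negative_input(self):
        with self.assertRaises(ValueError):
            sales_tax(Decimal("-1"), Decimal("0.05"))


if __name__ == "__main__":
    unittest.main()
```

Once a component's contract is pinned down like this, the conversation with the LLM can stay on how components talk to each other rather than on re-deriving what each one does.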

ScotterC commented on Cognitive load is what matters   minds.md/zakirullin/cogni... · Posted by u/zdw
ScotterC · 8 months ago
Looks like a solid post with solid learnings. Apologies for hijacking the thread but I’d really love to have a discussion on how these heuristics of software development change with the likes of Cursor/LLM cyborg coding in the mix.

I’ve done an extensive amount of LLM-assisted coding and our heuristics need to change. Synthesis of a design still needs to be low cognitive load - e.g. how data flows between multiple modules - because you need to be able to verify the actual system, or check that the LLM’s suggestion matches the intended mental model. However, striving for simplicity inside a method/function matters much less. It’s relatively easy to verify that an LLM-generated unit test is working as intended, and the complexity of the code within the function doesn’t matter if its scope is sufficiently narrow.

IMO, identifying the line between the places where “low cognitive load is required” and those where “low cognitive load is unnecessary” changes the game of software development, and it is not often discussed.
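As a small hypothetical sketch of that line (Python; names invented for the example): the function body below is dense enough that reading it carefully is real work, but its scope is narrow and the test is trivial to verify at a glance, so the internal complexity costs little.

```python
# slug.py -- hypothetical example: dense implementation, narrow scope.
import re
import unicodedata


def slugify(title: str) -> str:
    """Turn an arbitrary title into a lowercase, hyphen-separated URL slug."""
    # Terse on purpose; the test below is what a reviewer actually needs to read.
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode("ascii")
    return re.sub(r"[-\s]+", "-", re.sub(r"[^\w\s-]", "", ascii_title).strip().lower())


# test_slug.py -- run with pytest; easy to confirm the intended behavior by eye.
def test_slugify():
    assert slugify("Cognitive Load Is What Matters!") == "cognitive-load-is-what-matters"
    assert slugify("  Héllo,   Wörld  ") == "hello-world"
```

The design-synthesis side - which module owns slug generation and where it gets called - is the part that still has to stay low cognitive load.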

ScotterC commented on The tech utopia fantasy is over   blog.avas.space/tech-utop... · Posted by u/mooreds
creativeSlumber · 9 months ago
You can't solve people problems with technology.
ScotterC · 9 months ago
Many would be quick to argue that they are interlinked but I strongly agree with you - particularly in the context of this thread.

For me, optimism still provides a lot of value in my worldview. What has changed the most for me in the last 25 years is my sense of the limits and boundaries of what tools can do for us.

Technology ‘fixing humanity’ (when that ‘fixing’ is based on your own personal value system) is certainly a fool’s errand. But that limitation shouldn’t get in the way of imagining a utopia that is worth having.

Sure, I miss my childhood feelings of the 90s. But I also never expected a techno utopia to be ‘easy’.

ScotterC commented on Cat as a Service   cataas.com/... · Posted by u/kaycebasques
ScotterC · a year ago
Missed opportunity to have

> Have 1516 cats for meow

instead of ‘now’

ScotterC commented on Partnership with News Corp   openai.com/index/news-cor... · Posted by u/davidbarker
ScotterC · a year ago
Is "FN" Fox News? Or is that separate from News Corp?
ScotterC commented on Tell HN: OpenAI will require pre-purchased API credits from March 25th    · Posted by u/notmytempo
ScotterC · a year ago
My immediate instinct here is that they need cash. Or they’re dealing with a lot of fraud, which is a pain to limit.
ScotterC commented on Launch HN: Glide (YC W24) – AI-assisted technical design docs    · Posted by u/robmck
ivanovm · a year ago
Ok, I got a first cut of the Ruby parser in - the latest commit on Rails was a little sluggish, but it works!

Right now the parser intentionally tags modules as classes; we made some assumptions in the generic parser code for other languages that don't quite align with Ruby's notion of modules, but that'll be adjusted in the near future.

ScotterC · a year ago
Hey, this is awesome. I'm using it and I think it's got some wheels. But please add a quick link to submit feedback in-app! Here's what I've got so far:

- Specific issue: I updated the triage text before submitting to the planning stage. Although the triage text maintained my edits, I'm pretty sure the prompt that went into the first Plan generation did not include them.
- Rough idea: Plans are long and take a while to regenerate. There's fear of submitting for a regeneration when all you want is for it to understand something different about the high-level thoughts. Maybe keeping a history of plans, or even diffing them, could be useful.

Both items above coalesce into a desire to 'rewind' back to the top of the process and try to get the context understood at the beginning, and have that clearly reflected to the user so that as you move through to details it gets easier to handle the nitty gritty without as much editing.

ScotterC commented on Launch HN: Glide (YC W24) – AI-assisted technical design docs    · Posted by u/robmck
ivanovm · a year ago
Ok, I got a first cut of the Ruby parser in - the latest commit on Rails was a little sluggish, but it works!

Right now the parser intentionally tags modules as classes; we made some assumptions in the generic parser code for other languages that don't quite align with Ruby's notion of modules, but that'll be adjusted in the near future.

ScotterC · a year ago
Great. I’ll give it a try today

u/ScotterC

Karma: 2036 · Cake day: June 11, 2010
About
Building something new. Blog: http://www.scottcarleton.com/ · Twitter: @ScotterC (http://www.twitter.com/ScotterC)