doc_manhat commented on AI slows down open source developers. Peter Naur can teach us why   johnwhiles.com/posts/ment... · Posted by u/jwhiles
horsawlarway · 2 months ago
Just anecdotally - I think your reason for disagreeing is a valid statement, but not a valid counterpoint to the argument being made.

So

> Reason: you cannot evaluate the work accurately if you have no mental model. If there's a bug given the system's unwritten assumptions, you may not catch it.

This is completely correct. It's a very fair statement. The problem is that a developer coming into a large legacy project is in this spot regardless of the existence of AI.

I've found that asking AI tools to generate a changeset in this case is actually a pretty solid way of starting to learn the mental model.

I want to see where it tries to make changes, what files it wants to touch, what libraries and patterns it uses, etc.

It's a poor man's proxy for having a subject matter expert in the code give you pointers. But it doesn't take anyone else's time, and as long as you're not just trying to dump the output straight into a PR, it can actually be a pretty good resource.

The key is not letting it dump out a lot of code, and instead using it for directional signaling.

ex: Prompts like "Which files should I edit to implement a feature which does [detailed description of feature]?" or "Where is [specific functionality] implemented in this codebase?" have been real timesavers for me.

The actual code generation has probably been a net time loss.

doc_manhat · 2 months ago
Yeah, fair points. Particularly for larger codebases I could see this being a huge time saver.
doc_manhat commented on AI slows down open source developers. Peter Naur can teach us why   johnwhiles.com/posts/ment... · Posted by u/jwhiles
doc_manhat · 2 months ago
I directionally disagree with this:

> It's common for engineers to end up working on projects which they don't have an accurate mental model of. Projects built by people who have long since left the company for pastures new. It's equally common for developers to work in environments where little value is placed on understanding systems, but a lot of value is placed on quickly delivering changes that mostly work. In this context, I think that AI tools have more of an advantage. They can ingest the unfamiliar codebase faster than any human can, and can often generate changes that will essentially work.

Reason: you cannot evaluate the work accurately if you have no mental model. If there's a bug given the system's unwritten assumptions, you may not catch it.

Having said that, it also depends on how important it is to be writing bug-free code in the given domain, I guess.

I like AI particularly for greenfield stuff and one-off scripts, as it lets you go faster there. Basically you build up the mental model as you're coding with the AI.

Not sure about whether this breaks down at a certain codebase size though.

doc_manhat commented on Postgres LISTEN/NOTIFY does not scale   recall.ai/blog/postgres-l... · Posted by u/davidgu
doc_manhat · 2 months ago
Got up to the TL;DR paragraph. This was a major red flag given the initial presentation of the discovery of a bottleneck:

> When a NOTIFY query is issued during a transaction, it acquires a global lock on the entire database (ref) during the commit phase of the transaction, effectively serializing all commits.

Am I missing something? This seems like something the original authors of the system should have done due diligence on before implementing a write-heavy workload.

doc_manhat commented on Dear diary, today the user asked me if I'm alive   blog.fsck.com/2025/05/28/... · Posted by u/obrajesse
patcon · 3 months ago
Ugh, I hate that I'm about to say this, because I think AI is still missing something very important, but...

What makes us think that "processing emotion" is really such a magical and "only humans do it the right way" sorta thing? I think there's a very real conclusion where "no, AI is not as special as us yet" (esp around efficiency) but also "no, we are not doing anything so interesting either" (or rather, we are not special in the ways we think we are)

For example, there's a paper called "chasing the rainbow" [1] that posits that consciousness is just the subjective experience of being the comms protocol between internal [largely unconscious] neural state. It's just what the compulsion to share internal state between minds feels like, but it's not "the point", and instead an inert byproduct like a rainbow. Maybe our compulsion to express or even process emotion is not some greater reason, but just a way we experience the compulsion of the more important thing: the collective search for interpolated beliefs that best model and predict the world and help our shared structure persist, done by exploring tensions in high dimensional considerations we call emotions.

Which is to say: if AI is doing that with us, role-modelling resolution of tension or helping build or spread shared knowledge alongside us through that process... then as far as the universe cares, it's doing what we're doing, and toward the same ends. Its compulsion having the same origin as ours doesn't matter, so long as it's doing the work that is the reason the universe has given us the compulsion.

Sorry, new thought. Apologies if it's messy (or too casually dropping an unsettling perspective -- I rejected that paper for quite a while, because my brain couldn't integrate the nihilism of it)

[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2017.0192...

doc_manhat · 3 months ago
Conversely this is exactly why I believe LLMs are sentient (or conscious or what have you).

I basically don't believe there's anything more to sentience than a set of capabilities, or at the very least there's nothing beyond this that I should give weight to in my beliefs.

Another comment mentioned philosophical zombies - another way to put it is I don't believe in philosophical zombies.

But the only evidence I have against philosophical zombies is people displaying certain capabilities that I can observe.

Therefore I should not require further evidence to believe in the sentience of LLMs.

doc_manhat commented on For algorithms, a little memory outweighs a lot of time   quantamagazine.org/for-al... · Posted by u/makira
porphyra · 3 months ago
There's not even a proof that P != EXPTIME haha

EDIT: I am a dumbass and misremembered.

doc_manhat · 3 months ago
I think there is, right? It's been a long time, but I seem to remember it following from the time hierarchy theorem.
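
For reference, the standard argument (a sketch; this is the deterministic time hierarchy theorem applied with an exponential bound):

```latex
% Time hierarchy theorem: for time-constructible f,
% \mathrm{DTIME}\!\left(o\!\left(\tfrac{f(n)}{\log f(n)}\right)\right) \subsetneq \mathrm{DTIME}(f(n)).
% Taking f(n) = 2^{2n}, and since every polynomial is eventually below 2^n:
\mathrm{P} = \bigcup_{k \ge 1} \mathrm{DTIME}(n^k)
  \subseteq \mathrm{DTIME}(2^n)
  \subsetneq \mathrm{DTIME}(2^{2n})
  \subseteq \mathrm{EXPTIME}
```

The strict inclusion in the middle holds because \(2^n = o(2^{2n} / \log 2^{2n})\), so P is a proper subset of EXPTIME, hence P ≠ EXPTIME.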
doc_manhat commented on Plain Vanilla Web   plainvanillaweb.com/index... · Posted by u/andrewrn
doc_manhat · 4 months ago
Question - why would you do this in current year? Is it that much more performant? I might be ignorant but frameworks seem to be the lingua franca for a reason - they make your life much easier to manage once set up!
doc_manhat commented on Apparent signs of distress during LLM redteaming   lesswrong.com/posts/MnYnC... · Posted by u/cubefox
doc_manhat · 5 months ago
Yeah I'm firmly on the LLMs are actually sentient train so this was a bit of a distressing read
doc_manhat commented on DOJ will push Google to sell off Chrome   bloomberg.com/news/articl... · Posted by u/redm
doc_manhat · 9 months ago
ITT: panicked Google employees try to convince you this is a very bad thing
