Readit News
camgunz commented on Rick Beato is right to rant about music copyright strikes   savingcountrymusic.com/ri... · Posted by u/breve
camgunz · an hour ago
This is because whatever Google tech is checking for this stuff is too simple, right? Or maybe it's too aggressive because of music industry contracts?
camgunz commented on PSA: Libxslt is unmaintained and has 5 unpatched security bugs   vuxml.freebsd.org/freebsd... · Posted by u/burnt-resistor
camgunz · 2 days ago
Am I right that, while we can't have SQLite because there's only 1 implementation, we can have XSLT even though there's only 1--unmaintained--implementation?
camgunz commented on Blacksky grew to millions of users without spending a dollar   newpublic.substack.com/p/... · Posted by u/benwerd
camgunz · 5 days ago
This is very cool, I'm glad someone else is doing the relay (or whatever it's called) hosting. But that's money, right? Donated money is still money.
camgunz commented on macOS dotfiles should not go in ~/Library/Application Support   becca.ooo/blog/macos-dotf... · Posted by u/zdw
camgunz · 5 days ago
Wow, I never thought about this, but I think I agree. Also, Go uses ~/Library/Application Support for os.UserConfigDir too [0].

[0]: https://pkg.go.dev/os#UserConfigDir

camgunz commented on We put a coding agent in a while loop   github.com/repomirrorhq/r... · Posted by u/sfarshid
doug_durham · 7 days ago
I wonder though. One of the superpowers of LLMs is code reading. I'd say the tools are better at reading than writing. It is very easy to get comprehensive documentation for any code base and build understanding by asking questions. At that point, does it matter that there is a living developer who understands the code? If an arbitrary person with knowledge of the technology stack can get up to speed quickly, is it important to have the original developers around anymore?
camgunz · 6 days ago
> I'd say the tools are better at reading than writing.

No way, models are much, much better at writing code than giving you true and correct information. The failure modes are also a lot easier to spot when writing code: it doesn't compile, tests got skipped, it doesn't run right, etc. If Claude Code gave you incorrect information about a system, the only way to verify is to build a pretty good understanding of that system yourself. And because you've incurred a huge debt here, whoever's building that understanding is going to take much more time to do it.

Until LLMs get way closer (not entirely) to 100%, there's always gonna have to be a human in the loop who understands the code. So, in addition to the above issue you've now got a tradeoff: do you want that human to be able to manage multiple code bases but have to come up to speed on a specific one whenever intervention is necessary, or do you want them to be able to quickly intervene but only in 1 code base?

More broadly, you've also now got a human resource problem. Software engineering is pretty different from monitoring LLMs: most people get into it because they like writing code. You need software experts in the loop, but when the LLMs take the "fun" part for themselves, most SWEs are no longer interested. Thus, you're left with a small subset of an already pretty small group.

Apologists will point out that LLMs are a lot better in strongly typed languages, in code bases with lots of tests, and using language servers, MCP, etc, for their actions. You can imagine more investments and tech here. The downside is models have to work much, much harder in this environment, and you still need a software expert because the failure modes are far more obscure now that your process has obviated the simple stuff. You've solved the "slop" problem, but now you've got a "we have to spend a lot more money on LLMs and a lot more money on a rare type of expert to monitor them" problem.

---

I think what's gonna happen is a division of workflows. The LLM workflows will be cheap and shabby: they'll be black boxes, you'll have to pull the lever over and over again until it does what you want, you'll build no personal skills (because lever pulling isn't a skill), practically all of your revenue--and your most profitable ideas--will go to your rapacious underlying service providers, and you'll have no recourse when anything bad happens.

The good workflows will be bespoke and way more expensive. They'll almost always work, there will be SLAs for when they don't, you'll have (at least some) rights when you use them, they'll empower and enrich you, and you'll have a human to talk to about any of it at reasonable times.

I think the jury's out on whether or not this is bad. I'm sympathetic to the "an LLM brain may be better than no brain" argument, but that's hugely contingent on how expensive LLMs actually end up being and any deleterious effects of outsourcing core human cognition to LLMs.

camgunz commented on We put a coding agent in a while loop   github.com/repomirrorhq/r... · Posted by u/sfarshid
thyristan · 6 days ago
Well, actually there could be a separate step: understanding is done during and after gathering requirements, before and while writing specifications. Only then are specifications turned into code.

But almost no-one really works like that, and those three separate steps are often done ad-hoc, by the same person, right when the fingers hit the keys.

camgunz · 6 days ago
I can use those processes to understand things at a high level, but when those processes become detailed enough to give me the same level of understanding as coding, they're functionally code. I used to work in aerospace, and this is the work systems engineers are doing, and their output is extremely detailed--practically to the level of code. There are downsides of course, but the division of labor is nice because they don't need to like, decide algorithms or factoring exactly, and I don't need to be like, "hmm this... might fail? should there be a retry? what about watchdog blah blah".
camgunz commented on We put a coding agent in a while loop   github.com/repomirrorhq/r... · Posted by u/sfarshid
worldsayshi · 6 days ago
The prevailing counter-narrative around vibe coding seems to be that "code output isn't the bottleneck, understanding the problem is". But shouldn't that make vibe coding a good tool for the tool belt? Use it to understand the outermost layer of the problem, then throw out the code and write a proper solution.
camgunz · 6 days ago
Coding is how I build a sufficiently deep understanding of the problem space--there's no separating coding and understanding for me. I acknowledge there's different ways of working (and I imagine this is one of the reasons a lot of people think they get a lot more value out of LLMs than I do), but like, having Cursor crank code out for me actually slows me down. I have to read all the stuff it does so I can coach it into doing better, and also use its work to build a good mental model of the problem, and all that takes longer than writing the code myself.
camgunz commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
raggi · 10 days ago
For a large LLM I think the science in the end will demonstrate that verbatim reproduction is not coming from verbatim recording, as the structure really isn’t setup that way in the models under question here.

This is similar to the ruling by Alsup in the Anthropic books case that the training is “exceedingly transformative”. I would expect a reinterpretation or disagreement on this front from another case to be both problematic and likely eventually overturned.

I don’t actually think provenance is a problem on the axis you suggest if Alsup's ruling holds. That said, I don’t think that’s the only copyright issue afoot - the Copyright Office's writing on copyrightability of outputs from the machine essentially requires that the output fails the Feist tests for human copyrightability.

More interesting to me is how this might realign the notion of copyrightability of human works further as time goes on, moving from every trivial derivative bit of trash potentially being copyrightable to some stronger notion of, to follow the Feist test, independence and creativity. Further, it raises a fairly immediate question in an open source setting: do many individual small patch contributions themselves actually even pass those tests? They may well not, although the general guidance is to set the bar low - but does a typo fix pass either? There is so far to go down this rabbit hole.

camgunz · 9 days ago
> For a large LLM I think the science in the end will demonstrate that verbatim reproduction is not coming from verbatim recording

We don't need all this (seemingly pretty good) analysis. We already know what everyone thinks: no relevant AI company has had their codebase or other IP scraped by AI bots they don't control, and there's no way they'd allow that to happen, because they don't want an AI bot they don't control to reproduce their IP without constraint. But they'll turn right around and be like, "for the sake of the future, we have to ingest all data... except no one can ingest our data, of course". :rolleyes:

camgunz commented on Apple and Amazon will miss AI like Intel missed mobile   gmays.com/the-biggest-bet... · Posted by u/gmays
camgunz · 13 days ago
People are gonna pretty quickly quit paying for AI--we're well into the "let me see what everyone's talking about" phase and that'll wear off soon. The price is already skyrocketing way ahead of quality or utility, so that'll accelerate the decline. Businesses incorporating AI into their products will scale that back as costs increase, or as they replace the most commonly used functionality with purpose-built code.

The real question is how do we continue the grift? AI's a huge, economy-sustaining bubble, and there's currently no off-ramp. My guess is we'll rebrand ML: it's basically AI, it actually works, and it can use video cards.

AI is a great feature funnel in terms of like, "what workflows are people dumping into AI that we can write purpose-built code for", but it has to transition from revenue generator to loss leader. The enormity of the bubble has made this very difficult, but I have faith in us.

camgunz commented on We accidentally built the wrong internet   karimjedda.com/we-acciden... · Posted by u/ilovefood
camgunz · 13 days ago
The core problem TFA describes isn't complexity, capitalism, or bots. It's the lack of trust. People are fine w/ a few big players (or even one big player) as long as you can trust them, but we don't trust the players anymore: Google, Mastercard, our governments, etc. We think they're all corrupt, and broadly we're right [0].

Blockchain's answer is "OK we give up on trust", but humans can't live that way--or at least strongly don't want to. Successful markets, courts, schools, workplaces, all arise out of a culture of trust and accountability, not the other way around. Unless we hold these institutions accountable they will inevitably decay; our markets will become lemon markets; our courts will become kangaroo courts; our schools will become insipid daycares; our workplaces will become surveillance salt mines. There is no technology that allows us to abdicate our duty to justice and to each other.

There's this episode of Star Trek: TNG [1] where the crew rescues some 20th century humans. One's a blowhard who keeps using ship-wide communications to make random demands, so Picard finally marches down to the guy's quarters to explain that comms are for ship business only. The guy is like "well if they're so important why don't they require an executive key?", to which Picard replies "we're aboard a starship so that is not necessary, we're all capable of exercising self-discipline".

There is no tech, no bureaucracy, no system of rules and regulations that can save a culture unwilling to save itself, whose answer to "what is acceptable to do" is "anything that isn't explicitly illegal, and sometimes explicitly illegal stuff depending on how much money you have". If we spent 1/100 of the effort on community building that we do on zk-SNARKs or whatever the fuck, we simply wouldn't have these problems. Or as the kids say I guess, touch grass.

[0]: https://www.youtube.com/watch?v=-JcQxfhcg2Q

[1]: https://www.youtube.com/watch?v=XQQYbKT_rMg

u/camgunz

Karma: 5708 · Cake day: November 8, 2014
About
Hi! You can reach me at charlie at charlieg dooooot net. Please reach out; I'm happy to correspond.