sigotirandolas commented on What is going on right now?   catskull.net/what-the-hel... · Posted by u/todsacerdoti
flatb · 5 days ago
Thereby self correcting perhaps.
sigotirandolas · 4 days ago
I'd say the opposite: LLMs are a know-it-nothing machine that perfectly suits know-it-alls. Unlike with a human, it isn't that hard to get the machine to say what you want, and then have it generate enough crap to 'defeat' any human challenger.
sigotirandolas commented on What is going on right now?   catskull.net/what-the-hel... · Posted by u/todsacerdoti
mdaniel · 4 days ago
For consideration, one can pretty objectively determine that a programmer is not qualified. Same for a secretary, a CFO, a sysadmin. How would one judge a product manager? That there's no product? That it sucks balls? "We're soliciting feedback and finding product-market fit, iterating, A/B testing, we'll be better next quarter, goto 1"

I wouldn't want that job, but I also don't currently know how to bring demonstrable evidence that they're incompetent, either

I have roughly the same opinion about UX folks, but they don't jam up my day to day nearly as much as PMs

sigotirandolas · 4 days ago
My only answer to this is that the ones at the top, up to the CEO, must be mindful enough to realize this, smart enough to figure out a solution, and brave enough to act on it.

Otherwise, it's only a matter of time until the house of cards collapses and the company stagnates (sadly, the timescale is less house of cards and more coal mine fire).

sigotirandolas commented on What is going on right now?   catskull.net/what-the-hel... · Posted by u/todsacerdoti
chasd00 · 5 days ago
I gave a PPT of 4-5 slides laying out an approach to implementing a business requirement to a very junior dev. I wanted to make sure they understood what was going on, so I asked them to review the slides and then explain them back to me as if I were seeing them for the first time. What I got back was the typical overly verbose and articulate review from ChatGPT or some other LLM. I thought it was pretty funny that they thought it would work, let alone be acceptable to do that. When I called them and asked, "now do it for real", I ended up answering a dozen questions but hung up knowing they actually did understand the approach.
sigotirandolas · 4 days ago
> What I got back was the typical overly verbose and articulate review from ChatGPT or some other LLM. I thought it was pretty funny that they thought it would work, let alone be acceptable to do that.

Did that end up working for you?

I had this same experience recently, and it sank my expectations for that dev through the floor; it just felt so wrong.

I made it abundantly clear that it was substandard work, with comically wrong content and phrasing, hoping that he would understand that I trust _him_ to do the work, but I still saw signs of it all over again later.

I wish there were an answer other than "move on". I'm just lost, and scarred.

sigotirandolas commented on Hyprland – An independent, dynamic tiling Wayland compositor   hypr.land/... · Posted by u/AbuAssar
dogas · 17 days ago
I've attempted many times to adopt Hyprland, but I always come back to swaywm. Stability and speed always seem to be an issue. Both hyprland and the plugins (hyprpm, etc) have an alpha-level quality to them.

I have nothing but respect for vaxerski. He's 100% dedicated to the project and is incredibly prolific. But I feel like they need a better release strategy for those who prioritize stability over the shiny new thing.

sigotirandolas · 17 days ago
What makes me not want to use Hyprland is that the code has all kinds of "YOLO" tells, the kind that make you wonder if something is going to happen some day... for example:

- https://github.com/hyprwm/Hyprland/blob/00da4450db9bab1abfda...

- https://github.com/hyprwm/Hyprland/blob/00da4450db9bab1abfda...

- https://github.com/hyprwm/Hyprland/blob/00da4450db9bab1abfda...

sigotirandolas commented on Seven replies to the viral Apple reasoning paper and why they fall short   garymarcus.substack.com/p... · Posted by u/spwestwood
thomasahle · 2 months ago
> On the other hand general problem solving is, and so far any attempt to replicate it using computer algorithms has more or less failed.

Well, this is what the whole debate is about, isn't it? Can LRMs do "general problem solving"? Can humans? What exactly does it mean?

sigotirandolas · 2 months ago
A lot of it is being able to make reasonable decisions under novel and incomplete information, and being able to reflect on the outcome and refine accordingly.

LLMs' huge knowledge base covers for their inability to reason under incomplete information, but when you find a gap in their knowledge, they are terrible at recovering from it.

sigotirandolas commented on WhatsApp introduces ads in its app   nytimes.com/2025/06/16/te... · Posted by u/greenburger
wvh · 2 months ago
I am conflicted because, to some extent, paying for some of these services feels like paying a blackmailer: one who spies on you, holds a whole ecosystem hostage, and even jeopardises mental health and public discourse.

I pay for email and a few other services. Others, not so much. I find it hard to support some companies financially because I don't agree with their basic modus operandi. It's not the money; it's who it goes to.

If only we could convince large crowds to choose more free alternatives.

sigotirandolas · 2 months ago
To play devil's advocate, this is the kind of all-talk argument the parent was referring to. Once the paid option is available, people will demand that it be [cheaper / better / someone else] and still not pay.

While I don't love my money going to Google, I find YouTube's overall quality astronomically higher than Instagram/Twitter/TikTok/etc., and the amount of censorship/"moderation"/controversy has been relatively limited. When I find something I really want to keep, I have always been able to download it without much trouble.

sigotirandolas commented on Seven replies to the viral Apple reasoning paper and why they fall short   garymarcus.substack.com/p... · Posted by u/spwestwood
TeMPOraL · 2 months ago
> I think the more realistic argument is that the model can generalize, but only by learning shortcuts (e.g. how to pattern match a problem to a likely answer) and simple algorithms (e.g. how to propagate carries in a multiplication).

This is exactly what humans do too. Anything more and we need to use tools to externalize state and algorithms. Pen and paper are tools too.

sigotirandolas · 2 months ago
My thought is that we humans are bad (by computer standards) at arithmetic and memorization because those are not evolutionarily useful on their own.

On the other hand, general problem solving is, and so far any attempt to replicate it using computer algorithms has more or less failed. So it must be more complex than just a few simple heuristics.

Perhaps the answer is just "more compute", but the argument that "because LLMs somewhat resemble human reasoning, we must be really close!" (instead of 25+ years away) seems like wishful thinking, when:

(1) LLMs leverage a much bigger knowledge base than any human can memorize, yet

(2) LLMs fail spectacularly at certain problems and behaviours humans find easy

sigotirandolas commented on Seven replies to the viral Apple reasoning paper and why they fall short   garymarcus.substack.com/p... · Posted by u/spwestwood
Workaccount2 · 2 months ago
People aren't claiming that the models are holding textbooks in their weights; that would just be even more evidence of reasoning (the LLM would have to reason about which textbook to reference, and then extrapolate from the textbook(s) how to solve the problem at hand - pretty much what students in school do: study the textbook and reason from it to answer new test questions).

People are claiming that the models sit on a vast archive of every answer to every question, i.e. when you ask it 92384 x 333243 = ?, the model is just pulling the answer from where it has seen it before. Anything else would necessitate some level of reasoning.

Also in my own experience, people are stunned when they learn that the models are not exabytes in size.

sigotirandolas · 2 months ago
I think the more realistic argument is that the model can generalize, but only by learning shortcuts (e.g. how to pattern match a problem to a likely answer) and simple algorithms (e.g. how to propagate carries in a multiplication). And this only appears intelligent because this pattern matching is really good and backed by a huge amount of compressed/memorized answers.
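To make "simple algorithm" concrete: a minimal sketch, in Python, of the grade-school multiplication routine with its carry propagation spelled out (my own illustration, not something from the thread):

```python
def multiply_digits(a: str, b: str) -> str:
    """Grade-school multiplication of two decimal strings, with explicit carries."""
    # Accumulate raw digit-by-digit products per position, least significant first.
    slots = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            slots[i + j] += int(da) * int(db)
    # Propagate carries: each slot keeps one digit, the overflow moves up.
    carry = 0
    for k in range(len(slots)):
        total = slots[k] + carry
        slots[k] = total % 10
        carry = total // 10
    out = ''.join(map(str, reversed(slots))).lstrip('0')
    return out or '0'

# Using the figures quoted elsewhere in the thread:
assert multiply_digits("92384", "333243") == str(92384 * 333243)
```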

The AI pessimist's argument is that there's a huge gap between the compute required for this pattern matching, and the compute required for human level reasoning, so AGI isn't coming anytime soon.

sigotirandolas commented on Seven replies to the viral Apple reasoning paper and why they fall short   garymarcus.substack.com/p... · Posted by u/spwestwood
Workaccount2 · 2 months ago
LLMs are (it's suspected) a few TB in size.

Gemma 2 27B, one of the top-ranked open-source models, is ~60GB in size. Llama 405B is about 1TB.

Mind you, they train on likely exabytes of data. That alone should be a strong indication that there is a lot more than memorization going on here.

sigotirandolas · 2 months ago
I'm not convinced by this argument. You can fit a bunch of books covering maths up to MSc level in less than 100MB. After that point, more books are mostly redundant information, so maths beyond that doesn't need much more space.

Similarly, TBs of Twitter/Reddit/HN add near-zero new information per comment.

If anything, you can fit an enormous amount of information in 1MB - we just don't need to do it because storage is cheap.

sigotirandolas commented on Human coders are still better than LLMs   antirez.com/news/153... · Posted by u/longwave
tharkun__ · 3 months ago
Fair enough on 'cutting the learning tree' at some points, i.e. ignoring for the moment that you don't yet understand why something works/does what it does. We (should) keep doing that later in life as well.

But unless you're teaching programming to a kid who's never done any math where `x` was a thing, what's so hard about understanding the concept of a variable in programming?

sigotirandolas · 3 months ago
You'd be surprised. Off the top of my head:

Many are conditioned to see `x` as a fixed value for an equation (as in "find x such that 4x=6") rather than something that takes different values over time.

Similarly `y = 2 * x` can be interpreted as saying that from now on `y` will equal `2 * x`, as if it were a lambda expression.

Then later you have to explain that you can actually make `y` be a reference to `x` so that when `x` changes, you also see the change through `y`.

It's also easy to imagine the variable as the literal symbol `x`, rather than as something tied to a scope, with different scopes having different values of `x`.
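For what it's worth, a minimal Python sketch of those misconceptions in order (illustrative only; the comment doesn't tie them to any particular language):

```python
# 1. A variable takes different values over time; it isn't a fixed unknown.
x = 3
x = x + 1                  # x is now 4; this is not an equation to solve

# 2. `y = 2 * x` is a one-time snapshot, not a standing relationship.
y = 2 * x                  # y == 8
x = 10                     # y is still 8; it does not track x
double_x = lambda: 2 * x   # this, by contrast, re-reads x each time it is called
print(double_x())          # 20

# 3. Two names can refer to the same underlying object, so a change made
#    through one is visible through the other (here via a mutable list).
a = [1, 2, 3]
b = a                      # b is another name for the same list, not a copy
a.append(4)
print(b)                   # [1, 2, 3, 4]

# 4. The "same" name in different scopes is a different variable.
def scoped():
    x = 99                 # a local x, unrelated to the module-level x above
    return x

print(scoped(), x)         # 99 10
```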
