I wouldn't want that job, but I also don't currently know how to bring demonstrable evidence that they're incompetent, either
I have roughly the same opinion about UX folks, but they don't jam up my day to day nearly as much as PMs
Otherwise, it's only a matter of time until the house of cards falls down and the company stagnates (sadly, the timescale is less house-of-cards collapse and more slow-burning coal mine fire).
Did that end up working for you?
I had this same experience recently, and it tanked my expectations for that dev; it just felt so wrong.
I made it abundantly clear that it was substandard work with comically wrong content and phrasings, hoping that he would understand that I trust _him_ to do the work, but I still later saw signs of it all over again.
I wish there was something other than "move on". I'm just lost, and scarred.
I have nothing but respect for vaxerski. He's 100% dedicated to the project and is incredibly prolific. But I feel like the project needs a better release strategy for those who prioritize stability over the shiny new thing.
- https://github.com/hyprwm/Hyprland/blob/00da4450db9bab1abfda...
- https://github.com/hyprwm/Hyprland/blob/00da4450db9bab1abfda...
- https://github.com/hyprwm/Hyprland/blob/00da4450db9bab1abfda...
Well, this is what the whole debate is about isn't it? Can LRMs do "general problem solving"? Can humans? What exactly does it mean?
LLMs' huge knowledge base covers for their inability to reason under incomplete information, but when you find a gap in that knowledge, they are terrible at recovering from it.
I pay for email and some other services. Others, not so much. I find it hard to support some companies financially because I don't agree with their basic modus operandi. It's not the money; it's who it goes to.
If only we could convince large crowds to choose more free alternatives.
While I don't love my money going to Google, I find YouTube's overall quality astronomically higher than Instagram/Twitter/TikTok/etc. and the amount of censorship/"moderation"/controversy has been relatively limited. When I find something I really want to keep I have always been able to download it without much trouble.
This is exactly what humans do too. Anything more and we need to use tools to externalize state and algorithms. Pen and paper are tools too.
On the other hand, general problem solving is, and so far any attempt to replicate it using computer algorithms has more or less failed. So it must be more complex than just some simple heuristics.
Perhaps the answer is just "more compute", but the argument that "because LLMs somewhat resemble human reasoning, we must be really close!" (instead of 25+ years away) seems like wishful thinking, when:
(1) LLMs leverage a much bigger knowledge base than any human can memorize, yet
(2) LLMs fail spectacularly at certain problems and behaviours humans find easy
People are claiming that the models sit on a vast archive of every answer to every question, i.e. when you ask it 92384 x 333243 = ?, the model is just pulling the answer from wherever it has seen that before. Anything else would necessitate some level of reasoning.
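A quick back-of-envelope on that multiplication example (rough numbers of my own, not anything measured): just counting how many distinct 5-digit-by-6-digit products exist already makes a pure lookup table implausible.

    # Rough count: how many distinct "a x b" facts exist just for
    # 5-digit-by-6-digit multiplications like 92384 x 333243?
    five_digit = 9 * 10**4   # 10,000 .. 99,999
    six_digit = 9 * 10**5    # 100,000 .. 999,999
    pairs = five_digit * six_digit
    print(f"{pairs:,} distinct products")  # 81,000,000,000
    # The overwhelming majority of those can't have appeared in training,
    # and this is only one narrow slice of arithmetic.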
Also in my own experience, people are stunned when they learn that the models are not exabytes in size.
The AI pessimist's argument is that there's a huge gap between the compute required for this pattern matching, and the compute required for human level reasoning, so AGI isn't coming anytime soon.
Gemma 2 27B, one of the top ranked open source models, is ~60GB in size. Llama 405B is about 1TB.
Mind you, they train on orders of magnitude more data than the weights themselves can hold. That alone should be a strong indication that there is a lot more than memorization going on here.
Similarly, TBs of Twitter/Reddit/HN content add near-zero new information per comment.
If anything you can fit an enormous amount of information in 1MB - we just don't need to do it because storage is cheap.
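To put rough numbers on the size gap (a sketch with assumed figures: the ~1TB weight size mentioned above, and something like 15 trillion training tokens at ~4 bytes each, which is in the ballpark for recent open models):

    # Back-of-envelope: could the weights hold a verbatim copy of the training set?
    model_bytes = 1e12          # ~1 TB of weights (405B params at 16-bit, per the figures above)
    token_count = 15e12         # assumed ~15 trillion training tokens
    bytes_per_token = 4         # rough average for plain text
    corpus_bytes = token_count * bytes_per_token

    print(f"corpus is ~{corpus_bytes / model_bytes:.0f}x the size of the model")
    # ~60x before any compression -- whatever the model retains has to be
    # generalized rather than stored verbatim.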
But unless you're teaching programming to a kid who has never done any math where `x` was a thing, what's so hard about understanding the concept of a variable in programming?
Many are conditioned to see `x` as a fixed value for an equation (as in "find x such that 4x=6") rather than something that takes different values over time.
Similarly `y = 2 * x` can be interpreted as saying that from now on `y` will equal `2 * x`, as if it were a lambda expression.
Then later you have to explain that you can actually make `y` be a reference to `x` so that when `x` changes, you also see the change through `y`.
It's also easy to imagine the variable as the literal symbol `x`, rather than as something tied to a scope, with different scopes having different values of `x`.
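A minimal sketch of those three stumbling blocks in Python (the variable names are just for illustration):

    # 1) Assignment is an action in time, not an equation to solve.
    x = 4
    y = 2 * x        # y is computed *now* from the current value of x
    x = 10
    print(y)         # still 8 -- y does not track x the way a lambda would

    # 2) Getting "y follows x" requires a reference to shared mutable state.
    x = [4]          # a one-element list acting as a mutable cell
    y = x            # y refers to the same object as x
    x[0] = 10
    print(y[0])      # 10 -- the change is visible through y

    # 3) The name `x` is tied to a scope, not to the symbol itself.
    def f():
        x = "local"  # a different x, invisible outside this function
        return x

    x = "global"
    print(f(), x)    # local global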