I've said this before, but I'll say it again: anti-AI people, rather than AI users, are usually the ones who expect AI to be a magical panacea.
The vibe reminds me of some people who are against static typing because "it can't catch logical errors anyway."
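A minimal sketch of why that complaint misses the point, in Python with type hints (function and values are hypothetical, chosen just for illustration): a static checker catches category errors but is, by design, blind to logic errors — and nobody claims otherwise.

```python
def average(values: list[float]) -> float:
    # Logic bug a type checker will NOT catch: dividing by
    # len(values) + 1 type-checks fine but is simply wrong.
    return sum(values) / (len(values) + 1)

# A type checker (e.g. mypy) WOULD reject this misuse, before runtime:
# average("not a list")  # error: incompatible argument type

print(average([2.0, 4.0]))  # wrong answer, yet fully type-correct
```

The tool rules out one whole class of bugs; rejecting it because it doesn't rule out *every* class is the "magical panacea" expectation again.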
The only code you do not have to maintain is code that is fully perfect (100% accurate, 100% of the time). A bus factor of zero is a problem for code that needs to be maintained. If AI falls short of that 100%, then it is generating code that needs to be maintained. Ergo, a bus factor of zero will be a problem for AI-generated code (until such time as that code is perfect).
What’s terrifying is that what would be elected is far worse.
To your other point: the goal post is the statement (I'm paraphrasing here): "It was not a majority of Gazans that voted for Hamas, but a plurality." My rebuttal is that at least half of the population certainly did not vote for Hamas, because the last election was held so long ago that it predates the median age of the country (half the country had not even been born yet).
Could you explain how I shifted goal posts?
I think you might be assuming that we "know" without elections that the majority of the current population is "radicalized". Evidence of pluralities and majorities comes through elections; we don't have that evidence for the current population. Maybe that is what you perceive as shifting the goal posts?
If you're going off of something other than elections as evidence that a plurality of current Gazans support Hamas - please share the data you are using to be "terrifi[ed] of .. what would be elected" (quoting you, @nobodyandpround, with a slight paraphrase to make the grammar work). The population is roughly 2M people; it's difficult to get to any answer other than "we don't know" without a full-blown and free election.
Why is that wrong? I mean, I support that thesis.
> since being a next-token-predictor is compatible with being intelligent.
No. My argument is that, by definition, that is wrong. It's wisdom vs. intelligence, street smart vs. book smart. I think we all agree there is a distinction between wisdom and intelligence. I would define wisdom as being able to recall pertinent facts and experiences. Intelligence is measured in novel situations; it's the ability to act as if one had wisdom.
A next token predictor is, by definition, recalling. The intelligence of an LLM is good enough to match questions to potentially pertinent definitions, but it ends there.
It feels like there is intelligence, for sure. In part it is hard to comprehend what it would be like to know the entirety of every written word with perfect recall - hence essentially no situation is novel. But LLMs fail on anything outside of their training data, and that "outside the training data" territory is the realm of intelligence.
I don't know why it's so important to argue that LLMs have this intelligence. It's just not there, by definition of "next token predictor", which is what an LLM is at its core.
For example, a human being could probably pass through a lot of life by responding with memorized answers to every question that has ever been asked in written history. They wouldn't know a single word of what they are saying, their mind perfectly blank - but they'd be giving very passable and sophisticated answers.
> When mikert89 says "thinking machines have been invented",
Yeah, absolutely they have not. Unless we want to reductio-ad-absurdum the definition of thinking.
> they must become "more than a statistical token predictor"
Yup. As I illustrated by breaking down "smart" into the broad components of 'wisdom' and 'intelligence': through that lens we can see that a next token predictor is great for the wisdom attribute, but it does nothing for intelligence.
>dgfitz argument is wrong and BoiledCabbage is right to point that out.
Why exactly? You're stating a priori that the argument is wrong without saying why.
I'm more shocked that so many people seem unable to come to grips with the fact that something can be a next token predictor and demonstrate intelligence. That's what blows my mind: people unable to see that something can be more than the sum of its parts. To them, if something is a token predictor, clearly it can't be doing anything impressive - even while they watch it do impressive things.
Except LLMs have not shown much intelligence. Wisdom yes, intelligence no. LLMs are language models, not 'world' models. It's the difference between being wise and being smart. LLMs are very wise, as they have effectively memorized the answer to every question humanity has written down. OTOH, they are pretty dumb: LLMs don't "understand" the output they produce.
> To them, if something is a token predictor clearly it can't be doing anything impressive
Shifting the goal posts. Nobody said that a next token predictor can't do impressive things, but at the same time there is a big gap between doing impressive things and claims like "replace every software developer in the world within the next 5 years."
Liberals lie by saying that it wasn't a majority, yet omit a key detail: yes, Hamas won, with 44%.
And if you’re gonna quote wikipedia: “ Tensions between Fatah and Hamas began to rise in 2005 after the death of Yasser Arafat in November 2004. After the legislative election on 25 January 2006, which resulted in a Hamas victory, relations were marked by sporadic factional fighting. This became more intense after the two parties repeatedly failed to reach a deal to share government power, escalating in June 2007 and resulting in Hamas' takeover of Gaza.[35]”
A plurality of Gazans did not vote for Hamas, because half of them were not even born yet. They had no vote.
But, while on the topic: targeting children vs. accepting them as collateral damage are different things. Both can be true.
The judiciary is also notoriously lenient in prosecuting crimes against Gazans.
> Gaza citizens supported Hamas for 20+ years- cheering on 7th of October-and probably still are
This is a lie. The stats do not bear this out. Sure, you can find examples, but a picture or three does not represent 2 million people. Even if it did, we're talking about badly maimed children who are clearly innocent. Last I checked, humanity considers children innocents.
"In fact, Hamas got 44% of party list votes in the 2006 Palestinian legislative elections across Gaza and the West Bank, and lost three of the five districts in Gaza to the secular Fatah party. There has been no election since then."
I’d like to see the proof for TDD; last I heard it slowed development with only minor reliability improvements.
What it boils down to:
- TDD in the hands of a junior is very good. It drastically reduces bugs, and teaches the junior how to write code that can be tested and is not just one big long method of spaghetti with every data structure represented as another dimension on some array.
- TDD in the hands of a midlevel can be a mixed bag. They've learned how to do TDD well, but have not learned when and why TDD can go bad. This creates design damage, where everything is shoe-horned into TDD and a goal of 90% line coverage is a real consideration. This is maximum correctness, but also potentially maximum design damage.
- TDD in the hands of a senior is a power tool. The "right" tests are written for the right reasons, with the right level of coupling, and the tests overall are useful. For every really complicated algorithm I've had to write, TDD was a lifesaver for getting it landed.
Feels a lot like asking someone if they prefer X or Y and they say "X is the industry best practice." My response now is universally a raised eyebrow: "Oh, is it? For which segments of the industry? Why? How do we know it's actually a best practice? Okay, given our context, why would it be a best practice for US?" Juniors don't know the best practices, mid-levels apply them everywhere, and seniors evaluate and consider when best practices are not best practices.
TDD slows development when tests are written blindly, with an eye on code coverage rather than correctness and design. TDD speeds up development by being a good way to catch errors early, and it is one of the best ways to ensure correctness.
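For anyone unfamiliar with the mechanics being debated, here is the TDD loop in miniature (the `slugify` function and its cases are invented for illustration): the tests are written first and fail, then just enough implementation is written to make them pass.

```python
import unittest

# Step 1 (red): write the tests FIRST. They pin down the desired
# behavior before any implementation exists, so they fail initially.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("Rock & Roll!"), "rock-roll")

# Step 2 (green): write just enough implementation to pass,
# then refactor while the tests keep you honest.
def slugify(title: str) -> str:
    words = [w.strip("&!.,") for w in title.lower().split()]
    return "-".join(w for w in words if w)

if __name__ == "__main__":
    unittest.main()
```

The "design damage" point above is about what happens when this loop is applied mechanically to everything in pursuit of a coverage number, rather than where a pinned-down specification actually pays for itself.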