I just disagree with both postulates, and that's fine. The author can go on thinking that life needs to be something specific in order for it to be desirable. I myself like being productive. I also like eating fast food every once in a while. I think I'd be able to go on living (with some happiness to boot) if I never had another productive day or another McD's burger ever again.
Life can be its own end. If we manage to end death by aging, someday there will be children who have never known another world, and they'll marvel at all the death-centric thinking that permeated the societies of their past.
> His argument is precise: the desires that give you reason to keep living (he calls them categorical desires) would either eventually exhaust themselves, leaving you in a state of "boredom, indifference and coldness", or they'd evolve so completely that you'd become a different person anyway. Either way, the You that wanted immortality doesn't get it. You just die from a lack of Self rather than through physical mortality.
That said, I think the author's use of "bag of words" here is a mistake. Not only does "bag of words" have a real technical meaning in NLP, a field adjacent to LLMs, but I don't think the metaphor explains anything. Gen AI tricks laypeople into treating its token inferences as "thinking" because it is trained to replicate the semiotic appearance of doing so. A "bag of words" doesn't sufficiently explain this behavior.
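For anyone unfamiliar with the term: "bag of words" names a specific representation that throws away word order entirely, keeping only counts. A minimal sketch (the function name and sentences are just my illustration, not anything from the article):

```python
from collections import Counter

def bag_of_words(text):
    # Keep only word counts; all ordering and syntax is discarded.
    return Counter(text.lower().split())

# Two sentences with opposite meanings collapse to the same bag:
a = bag_of_words("the dog bit the man")
b = bag_of_words("the man bit the dog")
print(a == b)  # True - identical counts, order lost
```

That's the real meaning of the term, which is exactly why it's a poor metaphor for models whose entire trick is being sensitive to order and context.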
I learned that from trying to use Apple Music to handle my local library. Never again.
- The valuations are only reasonable if they are going to enable mass worker replacement. Yes, there is the machine-god argument, but Wall Street doesn’t buy that.
- The tooling doesn’t have to be capable of replacing workers. The sales people just have to be able to convince execs it is.
- Even ignoring the fact that lots of people would lose their jobs, this replacement would make everything worse, because AI isn’t actually capable of doing those jobs.
- The bubble is based on the assumption everything will get better.
- We need to convince people things will get worse before they actually do.
These tools aren’t useless. They are remarkable. But that doesn’t mean they will meet the hype or justify the valuations. To avoid an economic cataclysm, it’s important that a realistic and measured narrative takes hold fast.
Was it this, or was it that your mother/grandmother was a great cook? I hear a lot of older people talk about how awful their food was, limited ingredients, everything was boiled...
Food also probably tastes better when you're actually hungry, and not able to Doordash whatever you want to eat at any time of day.
That’s not to say you cannot get really good food that isn’t “farm fresh,” but food right out of the ground is, on average, absolutely better.
I think this is a large factor in the turn towards more authoritarian tendencies among the Silicon Valley elites. They spent the 2000s and 2010s being a bit more utopian and laissez-faire, and saw that it got them almost nowhere, because technology doesn’t solve people problems.
Gemini is less a consumer brand name and more a brand name for those of us who care about models.
One point to think about: an entity being tested for intelligence/thinking/etc. only needs to fail once to prove that it is not thinking. The reverse applies too: to prove that a program is thinking, it must succeed in 100% of tests, or the result is failure. And we all know many cases where LLMs are clearly not thinking, just like in my example above. So the case is rather clear for the current gen of LLMs.
Of course the most famous and clear example are the split brain experiments which show post hoc rationalization[0].
And then there’s the Libet experiments[1], showing that your conscious experience is only realized after the triggering brain activity. While they don’t show that you cannot explain why, they do seem to indicate your explanation is post hoc.
0: https://www.neuroscienceof.com/human-nature-blog/decision-ma...
1: https://www.informationphilosopher.com/freedom/libet_experim...
The shrug in the book was people turning their backs and walking away: people who felt their talents were wasted or unequally compensated, or that they were footing an unfair portion of things, and the “shrug” was them walking away. It was a fundamentally individual act, not a collective or corporate one. The central character felt exploited by the company he worked for.
The book has enough problems without also confusing who the author meant when she said “Atlas”. It wasn’t corporations, it was individuals.