It's not complicated to realise that this achieves none of the stated objectives
If I have my inhaler at hand, using it feels like pulling knives out of my lungs - better than before, but the wound remains. And we don't expect people to get much work done if they've been stabbed that day.
I don't know if everyone else has ADHD or what.
If the containers were uniform, it could be a robot production line and small runs wouldn't be an issue given the inputs are somewhat universal.
It's paint blending for the nose.
"War paint" (2003) by Liny Woodhead about Helena Rubenstein and Elizabeth Arden is a fascinating read. It was a book before a stage production.
Then there are the people who for whatever reason need giant over the ear headphones and also need to walk right in the middle of a bidirectional trail.
But saying they aren't thinking yet or like humans is entirely uncontroversial.
Even most maximalists would agree at least with the latter, and the former largely depends on definitions.
As someone who uses Claude extensively, I think of it almost as a slightly dumb alien intelligence - it can speak like a human adult, but makes mistakes a human adult generally wouldn't, and that combination breaks the heuristics we use to judge competency, and often leads people to overestimate these models.
Claude writes about half of my code now, so I'm overall bullish on LLMs, but it saves me less than half of my time.
The savings improve as I learn how to better judge what it is competent at, and where it merely sounds competent and needs serious guardrails and oversight, but there's certainly a long way to go before it'd make sense to argue they think like humans.
LLMs don't have anything like that, which is part of why they aren't great at some aspects of human behaviour. In coding, for example, they struggle to choose an appropriate level of abstraction because they have no fear of things becoming unmaintainable. Their approach to agentic coding is weird because they don't feel the dread of having to start over.
Emotions are important.
"Try mixing everything in your medicine cabinet!"
"Humans should be enslaved by AI!"
"Have you considered murdering [the person causing you problems]?"
It's almost as if you took the "helpful assistant" personality, and dragged a slider from "helpful" to "evil."
In this case the AI being written into the text is evil (i.e. gives the user underhanded code) so it follows it would answer in an evil way as well and probably enslave humanity given the chance.
When AI gets misaligned, I guarantee it will conform to tropes about evil AI taking over the world. I guarantee it.
Clutch your pearls as much as you want about the videos, but forcibly censoring them is going to cause you to continue to lose elections.
Do things that actually make a difference, which means heavy barbell training. Anything else is generally subpar and inefficient; the main issue is that no meaningful progression can be made after the first few weeks.
Heavy compound barbell training (squat, press, bench, deadlift) can be progressed and adapted to for decades. It's also an extremely efficient use of time.
HIIT burpees are the most brutal thing I've found so far that fits in a 5-minute break.
I've seen a list that was supposed to show 20 items of something; it only showed 2, plus a comment: "18 results were omitted due to insufficient permissions".
(ServiceNow has at least three different ways to do permissions; I don't know if this applies to all of them.)