Readit News
kalkin commented on Claude Opus 4 and 4.1 can now end a rare subset of conversations   anthropic.com/research/en... · Posted by u/virgildotcodes
cmrx64 · 16 days ago
Amanda Askell is Anthropic’s philosopher, and this is part of that work.
kalkin · 16 days ago
I'm not quickly finding whether Kyle Fish, who's Anthropic's model welfare researcher, has a PhD, but he did very recently co-author a paper with David Chalmers and several other academics: https://eleosai.org/papers/20241104_Taking_AI_Welfare_Seriou...

kalkin · a month ago
The second article is marginally more on point - 24% fewer crashes for vehicles with lane keeping assist (so my guess at the meaning of the 23% stat may have been wrong). But the 95% confidence interval is 2-42%, and the study acknowledges that its efforts at controlling for confounding factors in the type of cars that have this feature are imperfect. It also took place in the US, so there's certainly no mandate for haptic feedback and I suspect very few cars had it. This is marginally more helpful evidence but not very good, I think--it seems very plausible that audible lane keeping features are helpful while moving your steering wheel (which sounds terrifying) is unhelpful.

As an anecdote, I crashed a car as a teenager thanks in part to panicking (unnecessarily) when a rough highway started moving the car's wheels (which I noticed of course via the steering wheel) without my intending it. Fortunately there were no injuries.

kalkin commented on Objects should shut up   dustri.org/b/objects-shou... · Posted by u/gm678
vanviegen · a month ago
Or you could look at some of the research, which suggests that this feature may in fact reduce fatalities significantly (I'm finding estimates in the 20 to 25% range). Well done, idiot bureaucrat!

https://www.iihs.org/news/detail/fewer-drivers-are-opting-ou...

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...

kalkin · a month ago
What the first article actually says:

"Lane departure warning and prevention systems could address as many as 23% of fatal crashes involving passenger vehicles."

That appears to be something like a stat about how many fatal crashes involve unintentionally leaving a lane. It provides approximately zero evidence in favor of specifically mandating haptic feedback from the steering wheel.

kalkin commented on OpenAI delays launch of open-weight model   twitter.com/sama/status/1... · Posted by u/martinald
ryao · 2 months ago
As someone who has reviewed people’s résumés that they submitted with job applications in the past, I find it difficult to imagine this. The résumés that I saw had no racial information. I suppose the names might have some correlation to such information, but anyone feeding these things into an LLM for evaluation would likely censor the name to avoid bias. I do not see an opportunity for proactive safety in the LLM design here. It is not even clear that they are evaluating whether there is bias in such a scenario when someone did not properly sanitize inputs.
kalkin · 2 months ago
> I find it difficult to imagine this

Luckily, this is something that can be studied and has been. Sticking a stereotypically Black name on a resume on average substantially decreases the likelihood that the applicant will get past a resume screen, compared to the same resume with a generic or stereotypically White name:

https://www.npr.org/2024/04/11/1243713272/resume-bias-study-...
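The parent comment assumes that anyone screening résumés with an LLM would censor the name first. As a minimal sketch of what that looks like (the `redact_name` helper and the sample text are hypothetical, purely for illustration), naive masking is easy to write but only strips the literal name strings; proxies for race such as addresses, affinity groups, or college names survive, which is one reason audits like the study above still matter:

```python
import re

def redact_name(resume_text: str, applicant_name: str) -> str:
    """Naively mask each token of the applicant's name.

    Hypothetical helper for illustration only: it replaces the literal
    name strings, case-insensitively, and nothing else.
    """
    for token in applicant_name.split():
        resume_text = re.sub(re.escape(token), "[CANDIDATE]",
                             resume_text, flags=re.IGNORECASE)
    return resume_text

resume = ("Lakisha Washington\n"
          "Lakisha was president of the Black Engineers Society.")
print(redact_name(resume, "Lakisha Washington"))
# The name is masked, but the organization name still signals race.
```

This is only input sanitization on the user's side; it says nothing about whether the model itself behaves differently when the proxy signals remain.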

kalkin commented on Grok 4   simonwillison.net/2025/Ju... · Posted by u/coloneltcb
ltbarcly3 · 2 months ago
"It feels very credulous to ascribe what happened to a system prompt update. Other models can't be pushed into racism, Nazism, and ideating rape with a system prompt tweak."

You don't even need a system prompt tweak to push chatgpt or claude into nazism, racism, and ideating rape. You can do it just with user prompts that don't seem to even suggest that it should go in that direction.

kalkin · 2 months ago
Evidence?
kalkin commented on Marines being mobilized in response to LA protests   cnn.com/2025/06/09/politi... · Posted by u/sapphicsnail
kalkin · 3 months ago
The Homeland Security Secretary today described LA as a "city of criminals". It's hard to see how it could be anything but willful ignorance or self-delusion at this point to think that the Trump administration's intention is to protect LA residents.
kalkin commented on Marines being mobilized in response to LA protests   cnn.com/2025/06/09/politi... · Posted by u/sapphicsnail
kalkin · 3 months ago
Have any federal buildings been attacked?
kalkin commented on Marines being mobilized in response to LA protests   cnn.com/2025/06/09/politi... · Posted by u/sapphicsnail
kalkin · 3 months ago
I'm confused--do you support peaceful protest, or do you think that protests always descend into anarchy and require assault rifles to be brought out to kill some people?

What if police attack a peaceful protest--say trampling a lone person with horses (https://www.newsweek.com/la-protestor-stomped-police-horseba...) or shooting a reporter standing by herself (https://www.cbsnews.com/news/reporter-los-angeles-protests-r...)? Is there an assault-rifle shaped solution to this kind of anarchy?

kalkin commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
cmdli · 3 months ago
> tptacek wasn't making this argument six months ago.

Yes, but other smart people were making this argument six months ago. Why should we trust the smart person we don't know now if we (looking back) shouldn't have trusted the smart person before?

Part of evaluating a claim is evaluating the source of the claim. For basically everybody, the source of these claims is always "the AI crowd", because those outside the AI space have no way of telling who is trustworthy and who isn't.

kalkin · 3 months ago
If you automatically lump anyone who makes an argument that AI is capable - not even good for the world on net, just useful in some tasks - into "the AI crowd", you will tautologically never hear that argument from anywhere else. But if you've been paying attention to software development discussion online for a few years, you've plausibly heard of tptacek and kentonv, e.g., from prior work. If you haven't heard of them in particular, no judgement, but you've got to have someone you can classify as credible independently of their AI take if you want to be able to learn anything at all from other people on the subject.
kalkin commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
jsheard · 3 months ago
> Fly.io benefits from AI hype because... more slop code gets written and then run on their servers?

I don't know if that's what fly.io is going for here, but their competitors are explicitly leaning into that angle so it's not that implausible. Vercel is even vertically integrating the slop-to-prod pipeline with v0.

kalkin · 3 months ago
As far as I can see this only makes sense if they actually believe that AI will accelerate production code that people will want to pay to keep running. Maybe they're wrong about that, but it's got to be sincere, or the strategy would be incoherent.
