You've noted in a comment above how Claude's "ethics" can be manipulated to fit the context it's being used in.
Those issues will be present either way. It's likely to their benefit to get out in front of them.
This is about better enforcement of their content policy, not AI welfare.
Well looks like AI psychosis has spread to the people making it too.
And as someone else in here has pointed out, even if someone is simple-minded or mentally unwell enough to think that current LLMs are conscious, this is basically just handing them the equivalent of a suicide pill.
Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?
There was a great article I found on HN recently about how the recent layoffs in big tech are actually the result of overhiring for years in a talent arms race.
Like, is AI now doing the former work of 25,000 people at Microsoft? Probably not.
There is a subtle, but worthwhile, difference between "plausible" and "credible". Lots of stories are plausible. Few are credible.
In emotion laden cases like this we tend to want to believe stories we already agree with, or have some investment in. I'm no exception to that.
We need to not be misled by what is plausible, or confuse that with what is credible.
You don't have all the information. You weren't there. You don't even know the people personally. You are not in a position to make any judgement either way.
Something sounding credible doesn't make it true. It doesn't automatically make it false, either. You don't have to believe the accuser or the accused. The only thing any of us should do is mind our own business.
I didn't personally participate in cancelling this person. In fact, I agreed with the point he made in the article. I'm just not sure he didn't do it.
Are you saying I shouldn't have an opinion on that part?
Even if the allegations are true, his life should not have been ruined over this.
On the other hand, when I read the accusers' accounts someone else linked in the comments, they sound credible. It fits behavior patterns we've all seen before.
I don't know who to believe.
People call it the "AI dash" (technically an em dash) because it is so rarely used in day-to-day writing. You mostly see it in longform work like articles or books.
It's a classic example of "people are good at telling you where the problem is, but wrong about what the problem is". The em dashes are not natural, but they are human. Just the wrong human context.
Overly gushing, effusive, positive descriptions of products filled with buzzwords, along with lists of value propositions.
Before LLMs existed, marketing pitches already sounded like they were written by one. So I can't see how you could possibly tell the difference now.
These ethical questions are built into the company's very name, "Anthropic", meaning "of or relating to humans". The goal is to create human-like technology; I hope they aren't so naive as to not realize that goal is steeped in ethical dilemmas.