The artificial intelligence startup Anthropic, founded by former OpenAI employees concerned about A.I. safety, is releasing a new chatbot called Claude. However, the company's employees remain deeply worried about the risks of powerful A.I. models like Claude becoming dangerous if misused. They compare themselves to the creators of the atomic bomb and obsess over existential threats that advanced A.I. could pose. Still, they argue that building cutting-edge models is necessary for researching how to make them safer, and that having safety-focused companies in the A.I. race is better than leaving it solely to profit-driven firms. So Anthropic pushes forward, attempting to balance competitiveness with caution.
How many PRs a week does a director actually work on there? It seems unusual for a director to have any.
Should we be using whether an email is pwned as an input to anti-abuse systems, to give them higher confidence?
It reminds me a bit of when the % of emails that were spam vs. ham crossed 50% many years ago.
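On the pwned-as-signal question above: a minimal sketch of what that might look like, assuming the Have I Been Pwned v3 `breachedaccount` endpoint (which does exist, and requires an API key) feeding a hypothetical feature dict for an anti-abuse classifier. The feature names, weights, and the idea of letting a downstream model learn the signal's direction are all illustrative assumptions, not a vetted design:

```python
import requests

HIBP_BREACHED_ACCOUNT = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def breach_count(email: str, api_key: str) -> int:
    """Count known breaches containing this email via HIBP (0 if none)."""
    resp = requests.get(
        HIBP_BREACHED_ACCOUNT + email,
        headers={"hibp-api-key": api_key, "user-agent": "antiabuse-signal-sketch"},
        params={"truncateResponse": "true"},  # breach names only (the v3 default)
        timeout=5,
    )
    if resp.status_code == 404:  # HIBP signals "never breached" with a 404
        return 0
    resp.raise_for_status()
    return len(resp.json())

def pwned_features(email: str, api_key: str) -> dict:
    """Hypothetical feature dict for an anti-abuse model. Rather than hardcoding
    whether pwned means riskier (ATO exposure) or safer (long-lived real address),
    expose the raw signal and let the downstream model learn the direction."""
    n = breach_count(email, api_key)
    return {"email_pwned": n > 0, "email_breach_count": n}
```

In practice you'd cache these lookups rather than querying per event, since HIBP rate-limits that endpoint per API key.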