Right now I'm using abliterated Llama 3.1. I have no need for vision, and I'd rather spend the saved memory on more context, so 3.2 isn't relevant for me. Llama 3.1 is perfect. But I want to try newer models too.
Until gpt-oss can be uncensored, it's no use to me. But if there was nothing erotic in its training data, it can't be. And no, I never have it do erotic roleplay; I'm not really interested when no real people are involved.
The main appeal of feature flags is simplicity: they're a low-hanging-fruit way to apply per-customer/per-user configuration. Few platforms allow true A/B testing (Amplitude comes to mind, but I'm sure there are more).
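The per-user configuration part can be sketched in a few lines. This is a hypothetical minimal version, not any particular platform's API (the names `Flag`, `bucket`, and `isEnabled` are made up): hash the user id into a stable bucket so the same user always gets the same answer for a given flag, then compare against a rollout percentage.

```typescript
// Minimal per-user feature flag sketch (hypothetical names, no real platform's API).
type Flag = { name: string; rolloutPercent: number };

// Hash flag name + user id to a stable bucket in 0-99, so a given
// user sees a consistent result for a given flag across requests.
function bucket(flagName: string, userId: string): number {
  let h = 0;
  for (const ch of flagName + ":" + userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return h % 100;
}

function isEnabled(flag: Flag, userId: string): boolean {
  return bucket(flag.name, userId) < flag.rolloutPercent;
}
```

The deterministic bucketing is also why flags alone don't give you real A/B testing: they decide *who* sees a feature, but you still need event tracking and analysis on top to measure what that exposure did.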
The guitar solo sounds very unnatural, especially the phrasing, which is totally random. Blues musicians are actually trying to say something through their instrument. This was just a randomly generated solo played by a six-fingered, three-handed robot. No thanks, lol.
> Let's try a different approach.
“Let’s try a different approach” always makes me nervous with Claude too. It usually happens when something critical makes the task impossible, and the correct response would be to stop and tell me the problem. Instead, Claude goes into paperclip mode, making sure the task gets done no matter what.
It's mind-blowing that it happens so often.
Ah, nothing like a double em dash early on to know that the page is not worth reading.
If I’m selling to cash cows in America or Europe it’s not an issue at all.
As long as >10 Mbps download covers 90% of your users, I think it's better to think about making money. Besides, if you don't know that lazy loading exists in 2025, fire yourself lol.
https://www.mcmaster.com/ was found last year to be doing some real magic to make its pages load as fast as possible, even on the crappiest computers.
https://web.archive.org/web/20250208000940/https://www.parad...