Typical reasons are highly specialised models that are cheap and fast to train, lack of censorship, lack of API and usage restrictions, lightweight variants and so on. The reason there's a lot of excitement right now is indeed how fast the space is moving.
https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...
For example: a therapist, a search bot for your diary, a company intranet help bot. Anything where the prompt contains something you don’t want to send to a third party.
Thanks!
It's just that at the moment I'm finding the open source LLM community hard to contextualize from an outside perspective. Maybe it's because things are moving so fast (probably a good thing).
I just know that personally, I'm not going to explore any projects until I know they're near or exceeding GPT-4's performance level. And it's hard to develop an interest in anything other than GPT-4 when comparison is so tough to begin with.
There's really only one thing I care about: How does this compare to GPT-4?
I have no use for models that aren't at that level. Even though this almost certainly isn't at that level, it's hard to tell from the data presented how close or far it is.
I just want to:
* Go to desktop and select one or more folders full of music.
* Right-click -> Play, or drag into the player. Either add to or replace the existing file set.
* Select or de-select shuffle
That's it. Why is this so hard??
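The core of the workflow above really is small. As a rough sketch (not any particular player's code), here is a minimal Python playlist builder with the three behaviors described: take one or more folders, either add to or replace an existing file set, and optionally shuffle. The extension list and function name are my own assumptions for illustration.

```python
import random
from pathlib import Path

# Assumed set of audio extensions; a real player would probe files instead.
AUDIO_EXTS = {".mp3", ".flac", ".ogg", ".wav", ".m4a"}

def build_playlist(folders, existing=None, replace=False, shuffle=False, seed=None):
    """Collect audio files from one or more folders into a playlist.

    replace=True discards the existing file set; otherwise new tracks
    are appended to it. shuffle=True randomizes the final order
    (seed is exposed only to make the shuffle reproducible in tests).
    """
    tracks = [] if (replace or existing is None) else list(existing)
    for folder in folders:
        # Walk each folder recursively; sort for a stable default order.
        for path in sorted(Path(folder).rglob("*")):
            if path.suffix.lower() in AUDIO_EXTS:
                tracks.append(path)
    if shuffle:
        random.Random(seed).shuffle(tracks)
    return tracks
```

Drag-and-drop and the right-click menu are just OS wiring around a function like this; the interesting decision is only append-vs-replace plus a shuffle flag.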
https://bsky.app/profile/rawrmaan.com