Neuro-sama, an AI streamer, is massively popular.
SillyTavern, which lets people make and chat with characters or tell stories with LLMs, feeds OpenRouter 20 million messages a day, and that's only a fraction of its total usage. Anecdotally, I've had non-technical friends learn how to install Git and work an API just to get it running.
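For anyone curious what that setup actually involves: a front end like SillyTavern just sends OpenAI-style chat-completion requests to OpenRouter with your API key. A rough sketch of that kind of call (the model slug, key, and character prompt below are placeholders for illustration, not anything the app ships with):

```python
# Rough sketch of the OpenRouter request a chat front end makes under the hood
# (OpenAI-compatible chat completions). Model slug, key, and character prompt
# are placeholders.
import requests

API_KEY = "sk-or-..."  # your OpenRouter API key

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistralai/mistral-7b-instruct",  # placeholder model slug
        "messages": [
            {"role": "system", "content": "You are Aria, a sarcastic space pirate. Stay in character."},
            {"role": "user", "content": "Aria, where did you hide the cargo?"},
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```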
There are, unfortunately, tons of secretly AI-generated influencers on Instagram.
When Meta started these profiles in 2023, it was less clear how the technology would be used, and most were just celebrity-licensed personas.
I think a few things went wrong. The biggest is that GenAI has the highest value in narrowcast and the lowest value in broadcast. GenAI can do very specific and creative things for an individual, but when spread to everyone or used with generic prompts it starts averaging and becomes boring. It's like Google showing its top searches: they're always just going to be searches for websites. Making a GenAI profile isn't fun because these AIs don't really do interesting things on their own. When I chatted with them, they had very little memory and almost no willingness to do anything interesting.
Second, mega corps are, for better or worse, too risk-averse to make these any fun. GenAI is most wild and interesting when it can run on its own or do unhinged things. There are several people on Twitter who run ongoing LLM chat rooms that get extremely weird and fascinating, but in a way a tech company would never allow. SillyTavern is most interesting/human when the LLM takes things off the rails and challenges or threatens the user. One of the biggest news stories of 2023 was an LLM telling a journalist it loved him. But Meta was never going to make a GenAI that would do self-looping art or have interesting conversations. These LLMs are probably guardrailed into the ground, and probably have watcher models on top of them. You can almost feel that safeness and lack of risk-taking in the boringness of the profiles if you look up the ones they set up in 2023: a football person, a comedy person, a fashion person, all geared toward advice and stuff that's safe and boring.
I suspect these things had almost zero engagement, and they've shuttered most of them. I wonder what Meta was planning with the new ones it was going to roll out.
It boggles my mind that there are people who think this is a good or even okay idea. From a human perspective, all it does is pull the mind ever closer to a fictional, imaginative world rather than encouraging real-life interaction, which I believe is inherently wrong no matter what business strategy is wrapped around it.
OV = Originalversion (original version)
OmU = Originalfassung mit Untertiteln (original with German subtitles)
DF = Deutsche Fassung (German version, i.e. dubbed)
OmeU = Originalversion mit englischsprachigen Untertiteln (original with English subtitles)
But I still think it needs a bit more clarity: if a film listing doesn't have any of these abbreviations, does it default to German audio?
Just a few things of this sort.
I recently arrived in Germany and movie/cinema aggregation is a huge issue and a hassle tbh, so I'll be using this frequently.
A suggestion, if I may: could you add the movie's language, if possible? It would be great if there's any way to fetch that info and display it directly on your website, instead of having to visit every multiplex's website to check.
Either way, thank you for this!
I can barely enjoy a full day out without having to bring my phone or debit card, because the register won't accept cash.
But Bard can also give you images as output, along with links to those images (which may or may not be correct), and GPT-3.5 can't do that. Still, in the general case, I'd say GPT-3.5 is way more reliable than Bard.
Nightmare fuel.
I know we're not there yet, and perhaps we won't ever be, or we'll be there in 10 years, but we don't have to map every single thing in the brain in order to exploit the foundational behavior.
Also, the more we can model the mind's workings, the more some sod is going to exploit that for commercial benefit, e.g. advertising, propaganda, etc.
I agree. I don't know why, but when I read this article and the responses here stating how things can be related and measured, I suddenly felt this sigh? disappointment? at 'demystifying the mind'.
E.g., I first gave it a photo of a passage inside Basel Main Train Station that included the text 'Sprüngli', a Swiss brand. The model got that part correct, but it suggested Zurich, which wasn't the case.
The second picture was a lot tougher. It was an inner courtyard of a museum in Metz, and the model missed right from the start; after roaming around a bit (in terms of places), it just went back to its first guess, a museum in Paris. It recognized that the photo was from some museum or a crypt, but the city name 'Metz' never even occurred in its reasoning.
All in all, it's still pretty cool to see it reason and make sense of the image, but for less widely photographed places it doesn't perform well.