Many such cases...
Still, I don't really see this going anywhere. There are already so many "slightly better C++" languages out there (D, cppfront/cpp2, Carbon, Zig), and pretty much none of them sees wider adoption, for the same reason: no matter how simple or ergonomic the interop with C++ is, the switching cost is still high and the benefit tends to be marginal. Almost all of them either include garbage collection or don't fully guarantee memory safety. Choosing a restricted subset of C++ and enforcing it with an opinionated linter and static analyzer goes a long way and gets you most of the benefits of these new languages, so organizations tend to just do that.
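As a concrete sketch of that last point, a team might pin down the subset with something like a clang-tidy configuration enforced in CI (the check selection here is purely illustrative, not a recommendation for any particular codebase):

    # .clang-tidy: opt into a restricted, checked subset of C++
    Checks: '-*,modernize-*,cppcoreguidelines-*,bugprone-*,concurrency-*'
    WarningsAsErrors: '*'   # violations fail the build instead of just warning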
The exception is Rust: in spite of all its downsides, it has the killer feature of guaranteed memory safety without garbage collection, so that's the one seeing constantly increasing institutional support.
Native image generation was first introduced in Gemini 1.5 Flash, if I'm not wrong, and then OpenAI released it for 4o, which took over the internet with Ghibli art.
We had already been getting good-quality images from almost all image generators (Midjourney, OpenAI, and other providers), but the thing that made this special was its truly "multimodal" nature. Here's what I mean:
When you used to ask ChatGPT to create an image, it would rephrase your prompt and internally send it to DALL-E; similarly, Gemini would send it to Imagen. Those were separate diffusion models, and in your next turn they had little to no context about what was in the previous image.
With native image generation, the same model understands audio, text, and even image tokens, and doesn't need to rely on a separate diffusion model internally. I don't think either OpenAI or Google has released how they trained it, but my guess is that it's partially auto-regressive and partially diffusion; I'm not sure, though.
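A toy sketch of the difference (every class and function name here is invented for illustration; this is not any vendor's actual API):

    # Old style: a text-only chat model rewrites the prompt and hands it
    # off to a separate, stateless diffusion model, which never sees the
    # chat history or the previous image.
    class ChatModel:
        def rewrite(self, prompt: str) -> str:
            return f"detailed painting, {prompt}"

    class DiffusionModel:
        def generate(self, prompt: str) -> str:
            return f"<image rendered from: {prompt!r}>"

    def pipeline_image_gen(chat, diffusion, user_prompt):
        return diffusion.generate(chat.rewrite(user_prompt))

    # Native style: one model consumes interleaved text/image tokens, so
    # the previous image stays in context and "make the hat red" can
    # refer back to it directly.
    class MultimodalModel:
        def generate(self, context: list[str]) -> str:
            return f"<image conditioned on {len(context)} context items>"

    def native_image_gen(model, history, user_prompt):
        return model.generate(history + [user_prompt])

    print(pipeline_image_gen(ChatModel(), DiffusionModel(), "a cat in a hat"))
    print(native_image_gen(MultimodalModel(),
                           ["<tokens of the previous cat image>"],
                           "now make the hat red"))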
I imagine this is for anti-jailbreak moderation reasons, which is understandable.
Unless the model incorporates an actual chess engine (Fritz 5.32 from 1998 would suffice), it will not do well.
I am a reasonably skilled player (FM), so I can evaluate far better than LLMs can. I imagine even advanced beginners could tell after a few prompts when an LLM is talking nonsense about chess.
Now, of course, playing chess is not what LLMs are good at, but this just goes to show that LLMs are not a full path to AGI.
The other beauty of providing chess positions is that leaking your prompts into LLM training sets is no worry, because you just use a new position each time. Little worry of running out of positions...
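For what it's worth, here's a minimal sketch of that workflow using the python-chess library (the library choice and the random-walk position generator are my assumptions, not something described above):

    import random
    import chess  # pip install python-chess

    def fresh_position(plies=24, seed=None):
        # Reach a new, legal position by playing random moves from the start.
        rng = random.Random(seed)
        board = chess.Board()
        for _ in range(plies):
            moves = list(board.legal_moves)
            if not moves:            # hit mate/stalemate early
                break
            board.push(rng.choice(moves))
        return board

    board = fresh_position()
    print(board.fen())               # paste this FEN into the prompt

    llm_reply = "Nf3"                # hypothetical answer from the model
    try:
        board.parse_san(llm_reply)
        print("at least a legal move")
    except ValueError:               # python-chess move errors subclass ValueError
        print("illegal move: nonsense detected")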
EDIT: News about their brand consolidation [1]
[0]: https://web.archive.org/web/20010405033628/http://choopa.com...
And apparently it was all done, at least in the beginning, because they hired smart people and let them do what they wanted.
I daily a black LF20W-1A, and I also use the A168W-1 and AE1200WHD. The faces and designs are way more interesting to me, and they're more affordable.
I wish I'd never gotten the Apple Watch...