I guess today's my day: https://xkcd.com/1053/
Will give it a try, thanks!
Possibly I could do much more prompt tuning to nudge OpenAI/Anthropic models in the direction I want, but with the same prompts Gemini often gives me answers/structure/tone I like much better.
Example: I had Claude 3.7 generating embedded images and captions along with responses. The same prompt in Gemini gave much more varied and flavorful pictures.
And honestly, even with LLM assistance, getting ImageMagick to output a 1200x600 image with two SVGs next to each other, each correctly resized to fill its half of the image, sounds pretty tricky. Probably easier (for Claude) to achieve with HTML and CSS.
That's nowhere near enough reason to think we've hit a plateau - the pace has been super fast; give it a few more months before calling that...!
I think the opposite about the features - they aren't gimmicks at all, but indeed they aren't part of the core AI. Rather, they're important "tooling" adjacent to the AI that we need in order to actually leverage it. The LLM field, in popular usage, is still in its infancy. Even if the models don't improve (though I expect they will), we have a TON of room to greatly improve usability and capability through these features: how we interact, how we feed them information, tool calls, etc.