First, start with a pile of money and the vague notion of an AI tool. Usefulness will come later, surely, definitely, before we run out of the pile of money.
Modern theming systems have high-DPI support, which is in theory an upgrade, but the desktop appearance zeitgeist has skewed so flat and dull that the extra pixels make no material difference.
Yeah, this is definitely one of the saddest things about modern UI fashion. We have the highest-resolution, highest-DPI, cleanest-looking extra-bright, extra-deep-black HDR OLED screens, and... we've got flatter UI than ever, UI that would've looked dull even on a 90s CRT.
You could even, if it didn't come by default already, have the active title bar in a different color.
Maybe 99% of people didn't use this. Maybe they hired an authoritarian at GNOME to make Adwaita the "one theme to rule them all". But it used to feel like the style choices for my own computer's GUI belonged more to me as a user.
Often, that meant picking a theme that I liked, from the very active theme-design community (the Mac Themes Garden lists more than 3,000 themes, although I'm not a Mac person) and then just tweaking a color here or there.
We now praise dark mode as some big achievement, but... we -had- dark mode, before. The Mac Themes Garden has countless "dark mode" themes.
My go-to test for checking hallucinations is "Tell me about Mercantour park" (a national park in southeastern France).
Easily half of the facts are invented: non-existent mountain summits, brown bears (no, there are none), villages that are elsewhere, wrong advice ("dogs allowed" - no, they are not).
LLMs are not encyclopedias.
Give an LLM the context you want to explore, and it will do a fantastic job of telling you all about it. Give an LLM access to web search, and it will find things for you and tell you what you want to know. Ask it "what's happening in my town this week?", and it will answer that with the tools it is given. Not out of its oracle mind, but out of web search + natural language processing.
Stop expecting LLMs to -know- things. Treating LLMs like all-knowing oracles is exactly what separates those finding huge productivity gains with them from those who can't get anything productive out of them.
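The "retrieval plus language processing, not oracle recall" pattern described above can be sketched in a few lines. Everything here is hypothetical: `search_web` is a stand-in for whatever real search API you'd wire in, and the model call is faked with string assembly purely to show the shape of the flow, not any particular SDK.

```python
# Minimal sketch of the "LLM + web search" pattern, assuming a hypothetical
# search tool. No real search API or LLM SDK is used here.

def search_web(query: str) -> list[str]:
    """Stand-in for a real search tool; returns snippets to ground the answer on."""
    # A real implementation would call a search API and return the top results.
    return [
        f"snippet about {query!r} from source 1",
        f"snippet about {query!r} from source 2",
    ]

def answer_with_tools(question: str) -> str:
    """Ground the answer in retrieved context instead of relying on model 'memory'."""
    snippets = search_web(question)    # 1. retrieve fresh, external facts
    context = "\n".join(snippets)      # 2. build grounded context
    # 3. a real LLM call would go here, prompted with context + question;
    #    we return the assembled prompt to show what the model actually sees.
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

print(answer_with_tools("what's happening in my town this week?"))
```

The point of the sketch: the model's job shifts from recalling facts (where it hallucinates Mercantour summits) to reading and summarizing what the tool just fetched.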