I drive a Toyota that is nearly old enough to run for US Senator. Every control in the car is visible, clearly labeled, and distinct to the touch, at all times. Operation isn't impeded by routine activity or maintenance (e.g. a battery change).
Because it can be trivially duplicated, this is minimally capable engineering. Yet automakers everywhere lack even this level of competence. By reasonable measure, they are poor at their job.
I'm sympathetic, but I think it's a disservice to the designers to present it like that:
> Every control in the car is visible
No. And that would be horrible.
Every control _critically needed while driving_ is visible and accessible. Controls that matter less can be smaller and more convoluted, or hidden outright.
The levers to adjust seat height and position are hidden while still accessible. The latch to open the car hood can (should?) be less accessible and can be harder to find.
There are a myriad of subtle and opinionated choices to make the interface efficient. There's nothing trivial or really "simple" about that design process, and IMHO brushing over that is part of what leads us to the current situation where car makers just ignore these considerations.
I think Blizzard got this right in StarCraft and WarCraft III. During a game, there is a 3x3 (or 3x5) grid on the bottom right that sort of looks like a numpad. When you have units selected, the grid will show the actions, and only those actions, that correspond to those units.
Technically, you never see "all" actions - you only see the actions that make sense for the selected units. However, because there is a predictable place where the actions will show up, and because you know those are all the actions that are there, it never feels confusing.
On the contrary, it lets you quickly learn what the different skills are for each unit.
There is also a "default" action that will happen when you right-click somewhere on the map. What this default action will do is highly context specific and irregular: e.g. right-clicking on an enemy unit will trigger an attack order, but only if your selected unit actually has a matching weapon, otherwise it will trigger a move order. Right-clicking a resource item will issue a "mine" order, but only if you have selected a worker, etc etc.
Instead of trying to teach you all those rules, or to let you guess what the action is doing, the UI has two simple rules:
- How the default action is chosen may be complicated, but it will always be one of the actions from the grid.
- If a unit is following an action, that action will be highlighted in the grid.
This means the grid doubles as a status display to show you not just what the unit could do but also what it is currently doing. It also lets you learn the specifics of the default action by yourself, because if you right-click somewhere, the grid will highlight the action that was issued.
The irony is that in the actual game, you almost always use the default action and very rarely actually click the buttons in the grid. But I think the grid is still essential for those reasons: As a status display and to let you give an order explicitly if the default isn't doing what you want it to do.
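Those two rules are simple enough to sketch in code. A toy model in Python (all the names and the unit/action tables here are hypothetical illustrations, not taken from the actual games):

```python
# Toy model of the two grid rules described above. The unit types and
# actions are made up for illustration.

# Rule of the UI: the grid lists every action a unit can take.
GRID = {
    "worker": ["move", "mine", "build"],
    "marine": ["move", "attack", "hold"],
}

def default_action(unit: str, target: str) -> str:
    """Resolve a right-click. However convoluted the choice is,
    it must always be one of the unit's grid actions (rule 1)."""
    if target == "enemy" and "attack" in GRID[unit]:
        action = "attack"       # has a matching weapon -> attack order
    elif target == "resource" and "mine" in GRID[unit]:
        action = "mine"         # workers mine resources
    else:
        action = "move"         # fallback for everything else
    assert action in GRID[unit]  # rule 1 enforced
    return action

def highlighted(unit: str, current_order: str) -> str:
    """Rule 2: the grid doubles as a status display by highlighting
    whichever action the unit is currently following."""
    assert current_order in GRID[unit]
    return current_order
```

The point of the `assert` in `default_action` is the whole design insight: the right-click logic can be arbitrarily context-specific, but because its result is always drawn from the visible grid, the player can learn it by watching which button lights up.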
The counterexample would be the C&C games: The UI there only has the right-click mechanic, without any buttons, with CTRL and ALT as modifier keys if you want to give different orders. But you're much more on your own to memorize what combination of CTRL, ALT, selected unit, target unit and click will issue which order.
The older designs weren't perfect, but they generally respected that you might need to adjust something without thinking too hard or taking your eyes off the road.
I think there is a difference between “hidden” (like the notification and control centers on an iPhone) and “out of the way less visible but still there”, like a car seat adjustment lever on the side of the seat.
I disagree. I only want minimalist functionality, and therefore it's reasonable to have ALL controls always present and physical.
Someone needs to have the courage to say no to features that will get people killed.
A simple gun doesn't jam in the heat of battle.
My 1989 Toyota Corolla has manual windows, and that is great.
It allows UI designers to add nearly endless settings and controls where they were previously limited by dash space. It's similar to how everything having flash storage for firmware allows shipping buggy products, which manufacturers justify because they can always fix it with a firmware update.
Is this true given all the chips modern cars have, all the programming that must be done, and all the complex testing and QA required for the multitude of extra functions?
I would gladly gladly keep my AC, heat, hazards, blinkers, wipers, maybe a few other buttons and that's it. I don't need back cameras, lane assist, etc.
I find it hard to believe it's cheaper to have all the cameras, chips, and other digital affordances rather than a small number of analog buttons and functions.
One of the reasons I purchased a (newer but used Mazda) was because it still has buttons and knobs right next to the driver's right hand in the center console. I can operate parts of the car without even having to look.
(another reason was because it still has a geared transmission instead of a CVT, but that's a separate discussion)
This is often repeated but I don't believe it for a second. I have a 90s vehicle which is based on 60s/70s technology. A replacement switch for a fog light is about £10 on eBay, and I know I am not paying anywhere near cost, i.e. I am being ripped off.
This implies it's a consequential cost: that building with tactile controls would boost the (already considerable) purchase price high enough to impact sales.
If tactile controls were a meaningful cost difference, then budget cars with tactile controls shouldn't be common - in any market.
Not just that: wiring into the single control bus is easier, otherwise you are stuck doing an analog-to-digital conversion anyway. Even in new cars that have separate controls, these are mostly capacitive buttons or dials that simply send a fixed signal on the bus (so your dial will spin all the way around, because it isn't actually the single volume control on the radio, but just a "turn the volume up or down" control).
Most of the cost savings is in having a single bus to wire up through the car, but then everything needs a little computer in it to send on that bus... so a screen wins out.
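To picture why such a dial spins freely: in this architecture the dial owns no state; it just publishes relative messages on the shared bus, and the head unit interprets them. A toy sketch with entirely hypothetical names (real cars speak CAN/LIN frames, not Python objects):

```python
# Toy model of a relative dial on a shared message bus. All names are
# invented for illustration; this is not any real automotive API.

class Bus:
    """Single shared bus: every component publishes and subscribes."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, topic, payload):
        for callback in self.subscribers:
            callback(topic, payload)

class VolumeDial:
    """Spins endlessly: it has no absolute position, it only
    reports clicks in either direction."""
    def __init__(self, bus):
        self.bus = bus
    def turn(self, clicks):  # +n clockwise, -n counterclockwise
        self.bus.publish("volume/delta", clicks)

class HeadUnit:
    """The radio owns the actual volume state and clamps it."""
    def __init__(self, bus):
        self.volume = 10
        bus.subscribe(self.on_message)
    def on_message(self, topic, payload):
        if topic == "volume/delta":
            self.volume = max(0, min(30, self.volume + payload))

bus = Bus()
radio = HeadUnit(bus)
dial = VolumeDial(bus)
dial.turn(+3)    # volume goes from 10 to 13
dial.turn(-20)   # clamps at 0 instead of going negative
```

The dial hardware stays dumb and cheap; all the logic lives behind the bus, which is exactly why a screen, which is just another bus endpoint, wins on cost.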
I'm not sure if this is actually true at the volumes produced by the big carmakers. You'd very quickly reach volumes where material cost becomes the largest component.
The good news over here is that the European NCAP is now mandating they put a bunch of those physical controls back if they want a 5-star safety rating. Would not be sorry to say good bye to the awful touchscreen UI in my car...
Youtuber/engineer William Osman had a great rant some time back when he bought a new microwave and it came with a ton of buttons, his argument being that a microwave only really needs one (and ideally it's just a dial instead of a button).
My previous one lasted more than 20 years, from when my parents bought it for me when I went to study until some time in my 40s. It was still functional, but its dial had become loose and it didn't look that great anymore.
The one I bought after that follows the new pattern: it has buttons up the wazoo, and who even knows what they do? To be honest, I just need one power setting with a timer and maybe a defrost option.
I have a commercial microwave with exactly one dial[1]. It's great. It's more expensive than a "normal" microwave, but the UI is great, the construction is really solid and it's easy to clean. There's no external moving parts—no annoying rotating tray on the inside, and no visible latch on the door. It's clearly meant to take some abuse.
At first it was a bit annoying because frozen meals sometimes want you to run it at lower power and this microwave has no power setting. If that's a problem, I imagine there's some other similar model that does. But in practice, just running it at full power for shorter seems to work just as well.
It would look much nicer if it didn't have a cooking guide printed on it.
In Europe, I saw some consumer-grade microwaves with similarly minimalist designs, like these Gorenje microwaves[2] with two dials. I'd have gotten one of those, but I couldn't easily find them in the US. But I also did not look especially hard.
When I was looking to buy a microwave myself I wanted to buy one that has exactly two dials and two buttons.
Power, time, start, stop.
It turns out that, luckily, there is one like that made: the Y4ZM25MMK. Also, as a bonus, no clock.
That said, I realized only very late that the function dial actually has a marker to show which function it selects: an extremely shallow, colorless groove.
My parents still have theirs. It needs some resto love, but it’s still fully functional. I’ve already put my foot down in terms of who’s inheriting it.
My microwave has more controls and settings than an Airbus. I have no idea what any of them even do. All a microwave needs is a timer. I've never yet had a case where I want less power.
My current microwave is 20 years old. It has a lot of buttons. I love the buttons it has. Its sensor modes are spot on.
I stab a potato and cover it in butter and salt, put it on a plate, press "potato" and it's cooked just perfect every time. Doesn't matter if it's big or small, it's just right.
When I have a plate of leftovers I just press reheat and it's perfect pretty much every time. Could be pork chops and Mac and cheese, could be a spaghetti with marinara sauce, could be whatever. Toss it in, lightly cover, press reheat, and it's good.
When I want to quickly thaw out some ground beef or ground sausage, I just toss it in, press defrost, enter the weight to a tenth of a pound, and it's defrosted without really being cooked yet.
Back when I microwaved popcorn, just pressing the popcorn button was spot on. Didn't matter what the bag size was, didn't matter the brand, the bag was always pretty much fully popped and not burned.
Despite being the same age it's still in excellent working order while yours with the dials fell apart.
I had similar discussions with my father who started his career in the 80s as an engineer, and has been a CEO for the last ~15 years. The discussion was a bit broader, about engineering and quality/usability in everything.
His perspective was that companies were "run" by engineers first, then a few decades later by managers, and then by marketing.
Who knows what's next, maybe nothing (as in all decisions are accidentally made by AI because everyone at all levels just asks AI). Could be better than our current marketing-driven universe.
While I agree with your sentiment, designing and manufacturing custom molds for each knob and function (including premium versions) instead of just slapping a screen on the dash does have a cost.
> designing and manufacturing custom molds for each knob and function ... dash does have a cost.
Manufacturing car components already involves design work and custom molds, does it not? Compared to the final purchase price, the cost of adding knobs to that stack seems inconsequential.
Lots of comments that a few plastic knobs, switches, wiring add to the cost. Yes. But buttons and knobs are more intuitive, less distracting, can be operated blind while keeping eyes on the road.
So guess what Mr.Auto Manufacturer, you can keep your hifi $30K-70K touchscreen surveillance machine on your lot. I'll keep driving my 20+ year old Corolla until you learn to do better.
I commented on here about the surge in US car manufacturing recruiters contacting me about working on their new car systems. The HN opinion seemed to be that they are complete disasters and to stay away if I value my sanity.
> By reasonable measure, they are poor at their job.
I don't think you can make this assertion without knowing what they were tasked with doing. I very much doubt they were tasked with making the most user friendly cockpit possible. I suspect they were required to minimize moving parts (like switches and buttons) and to enable things like Sirius, iPhone and Android integration, etc.
Power abhors a vacuum. Choosing to not change is viewed as failure to innovate, even if the design suffers. Planned obsolescence is as old as the concept of yearly production models themselves, and likely older, going back to replacement parts manufacturing and standardized production overtaking piecework.
It’s a race to the bottom to be the least enshittified versus your market competitors. Usability takes a backseat to porcine beauty productization.
I notice an interesting phenomenon here and elsewhere. There is this complaint where everyone agrees that the current state of affairs sucks. There are some (perhaps limited, but still) ways to improve it, and yet, they don’t get much traction. My very brief research produced this list of cars with limited touch screens:
Toyota 4Runner, Toyota Tacoma, Jeep Wrangler, Nissan Frontier, Ford Maverick, Ford Bronco, Jeep Gladiator, Mazda MX-5 Miata
I wonder what kind of cars you guys drive.
Stranger still, if someone comes up with an idea of how to improve that thing that sucks, frequently the reaction is very negative. Sadly, the whole thing more and more gets into “old man yelling at the cloud” territory.
I think the article overlooks that it is not really an accident that apps and operating systems are hiding all their user interface affordances. It's an antipattern to create lock in, and it tends to occur once a piece of software has reached what they consider saturation point in terms of growth where keeping existing users in is more important than attracting new ones. It so turns out that the vast majority of software we use is created by companies in exactly that position - Google, Apple, Microsoft, Meta etc.
It might seem counterintuitive that hiding your interface stops your users leaving. But it works because it changes your basis of assumptions about what a device is and your relationship with it. It's not something you "use", but something you "know". They want you to feel inherently linked to it at an intuitive level, such that leaving their ecosystem is like losing a part of yourself. Once you've been through the experience of discovering "wow, you have to swipe up from a corner in a totally unpredictable way to do an essential task on a phone", and you build into your world of assumptions that this is how phones are, the thought of moving to a new type of phone and learning all that again is terrifying. It's no surprise at all that all the major software vendors are doing this.
I think you picked a hypothesis and assumed it was true and ran with it.
Consider that all the following are true (despite their contradictions):
- "Bloated busy interface" is a common complaint about Google, Apple, Microsoft, and Meta products. People here share a blank VS Code canvas and complain about how busy the interface is compared to their zero-interface vim setup.
- flat design and minimalism are/were in fashion (have been for few years now).
- /r/unixporn and most linux people online who "rice" their linux distros do so by hiding all controls from apps because minimalism is in fashion
- Have you tried GNOME recently?
A minimal interface where most controls are hidden is a certain look that some people prefer. Plenty of people prefer to "hide the noise", and if they need something, they are perfectly capable of looking it up. It's not like digging in manuals is the only option.
If I had to pin most of this on anything I’d pick two:
- Dribbble-driven development, where the goal is to make apps look good in screenshots with little bearing to their practical usability
- The massive influx of designers from other disciplines (print, etc) into UI design, who are great at making things look nice but don’t carry many of the skills necessary to design effective UIs
Being a good UI designer means seeking out existing usability research, conducting new research to fill in the gaps, and understanding the limits of the target platform, on top of having a good footing in the fundamentals. The role is part artist, part scientist, and part engineer. It's knowing when to put ego aside and admit that the beautiful design you just came up with isn't usable enough to ship. It's not just a sense for aesthetics and the ability to wield Photoshop or Figma or whatever well.
This is not what hiring selects for, though, and that’s reflected in the precipitous fall in quality of software design in the past ~15 years.
I agree with you it's very fashion driven and hence you see it in all kinds of places outside the core drivers of it. But my argument is, those fashions themselves are driven by the major players deciding to do this for less than honorable reasons.
I do think it's likely more passive than active. People at Google aren't deviously plotting to hide buttons from the user. But what is happening is that when these designs get reviewed, nobody is pushing back - when someone says "but how will the user know to do that?", it doesn't get listened to. Instead the people responsible are signing off on it saying, "it's OK, they will just learn that; once they get to know it, then it will be OK". It's all passive, but it's based on an implicit assumption that users are staying around: optimising for the ones that do, making it harder for the ones that want to come and go or stop in temporarily.
Once three or four big companies start doing it, everybody else cargo cults it and before you know it, it looks like fashion and GNOME is doing it too.
I think you picked a hypothesis and assumed it was true and ran with it.
The tone of your post and especially this phrase is inappropriate imo. The GP's comment is plausible. You're welcome to make a counter-argument, but you seem to be claiming without evidence that there was no thinking behind their post.
UI trends tend to mirror how people structure their broader environments. Minimalism is super hot outside of software design too. Millennial Gray is a cliche for a reason. Frutiger Aero wasn't just limited to technology. The video for JLo's debut single is a pretty good example of this aesthetic: https://www.youtube.com/watch?v=lYfkl-HXfuU
God, no. I switched to xfce when GNOME decided that they needed to compete with Unity by copying whatever it did, no matter how loudly their entire user base complained.
It's a double edged sword though in that it can discourage users from trying their interface.
Apple's interface shits me because it's all from that one button, and I can never remember how to get to settings because I use that interface so infrequently, so Android feels more natural. I.e. Android has done its lock-in job, but Apple has done itself a disservice.
(Not entirely fair, I also dislike Apple for all the other same old argument reasons).
I see nonprofit OSS projects doing it too, and wonder if they're just trendchasing without thinking. Firefox's aggravating redesigns fall under this category, as does Gnome and the like.
I get why you would hide interface elements to use the screen real estate for something else.
I have no idea why some interfaces hide elements and leave the space they'd taken up unused.
IntelliJ does this, for example, with the icons above the project tree. There is this little target disc that moves the selection in the project tree to the file currently open in the active editor tab. You have to know the secret spot on the screen where it is hidden and if you move your mouse pointer to the void there, it magically appears.
Why? What is the rationale behind going out of your way to implement something like this?
Some people complain about "visual clutter". Too many stimuli in the field of view assault their attention, and ruin their concentration. Such people want everything that's not in the focus of attention be gone, or at least be inconspicuous.
Some people are like airliner pilots. They enjoy every indicator to be readily visible, and every control to be easily within reach. They can effortlessly switch their focus.
Of course, there is a full range between these extremes.
The default IDE configuration has to do a balancing act, trying to appeal to very different tastes. It's inevitably a compromise.
Some tools have explicit switches: "no distractions mode", "expert mode", etc, which offer pre-configured levels of detail.
This is why we used to have customizable toolbars, and relevant actions still accessible via context menu and/or main menu, where the respective keyboard shortcuts were also listed. No need to compromise. Just make it customizable using a consistent framework.
Intellij on Windows also buries the top menus into a hamburger icon and leaves the entire area they occupied empty! Thankfully there is an option to reverse it deep in the settings, but having it be the default is absolutely baffling.
Microsoft pulls the same BS. Look at Edge. Absolute mess. No menu. No title bar. What application am I even using?
This stupidity seems to have spread across Windows. No title bars or menus... now you can't tell what application a Window belongs to.
And you can't even bring all of an application's windows to the foreground... Microsoft makes you hover over it in the task bar and choose between indiscernible thumbnails, one at a time. WTF? If you have two Explorer windows open to copy stuff, then switch to other apps to work during the copy... you can't give focus back to Explorer and see the two windows again. You have to hover, click on a thumbnail. Now go back and hover, and click on a thumbnail... hopefully not the same one, because of course you can't tell WTF the difference between two lists of files is in a thumbnail.
And Word... the Word UI is now a clinic on abject usability failure. They have a menu bar... except WAIT! Microsoft and some users claim that those are TABS... except that it's just a row of words, looking exactly like a menu.
So now there's NO menu and no actual tabs... just a row of words. And if you go under the File "menu" (yes, File), there are a bunch of VIEW settings. And in there you can add and remove these so-called "tabs," and when you do remove one, the functionality disappears from the entire application. You're not just customizing the toolbar; you're actually disabling entire swaths of features from the application.
It's an absolute shitshow of grotesque incompetence, in a once-great product. No amount of derision for this steaming pile is too much.
> I get why you would hide interface elements to use the screen real estate for something else.
Except that screens on phones, tablets, laptops and desktops are larger than ever. Consider the original Macintosh from 1984 – large, visible controls took up a significant portion of its 9" display (smaller than a 10" iPad, monochrome, and low resolution.) Arguably this was partially due to users being unfamiliar with graphical interfaces, but Apple still chose to sacrifice precious and very limited resources (screen real estate, compute, memory, etc.) on a tiny, drastically underpowered (by modern standards) system in the 1980s for interface clarity, visibility, and discoverability. And once displays got larger the real estate costs became negligible.
I agree, I know those buttons are there and how to activate them, but I still occasionally stare blankly at the screen wondering where the buttons are before remembering I need to hover them
> I have no idea why some interfaces hide elements and leave the space they'd taken up unused.
UI has been taken over by graphic designers, and human interaction experts have been pushed out. It happened as we started calling it "user experience" rather than "user interface", because people started to worry about the emotional state of the user rather than about being a tool. It became form over function, and now we have to worry about "holding it wrong", when in reality machines are here to serve humans, not the other way around.
An IDE, and the browser example given below, are tools I'll spend thousands of hours using in my life. The discoverability is only important for a small percentage of that, while viewing the content is important for all of it.
This is exactly when I will have the 'knowledge in the head'.
The other day I was locked out of my car: the key fob button wouldn't work. Why didn't I just use my key to get in? First, you need to know there is a hidden key inside the fob. Second, because there doesn't appear to be a keyhole on the car door, you also have to know that you need to disassemble a portion of the car door handle to expose the keyhole.
Hiding critical car controls is hostile engineering. In this, it doesn't stand out much in the modern car experience.
While this makes several cars a terrible choice for rentals, I do wish car owners would take maybe half an hour of their day after spending a couple thousand to read through the manual that came with their car. The manual doesn't just tell you how to change the radio station, it also contains a lot of safety information and instructions for how to act when something goes wrong.
How can I trust a driver to take things like safe maximum load into account when they don't even know they can open their car if their battery ever goes flat?
Because cars have had consistent keys and doors for how many years now? That's half the thing about cars - you shouldn't need to read the manual to know how to open a fucking door. Maybe for service timings, etc, but you should be able to get into any car and be able to use it for completely normal use cases *without* reading a manual.
This also happened to me in a rental. We drove it off the lot to our hotel a half-hour away before we discovered the remote was busted, with all of our possessions locked inside.
I did know that there must be a physical key (unless Tesla?), and the only way I found the keyhole was because a previous renter had scratched the door handle to shit trying to access the very same keyhole.
All of which you should know, and can be easily found with a quick google. The moment we got a car with no physical key my first question was “what’s the backup option and how does it work”.
Basic knowledge about the things you own isn’t hard. My god there is a lot of old man shakes fist at cloud in here.
This is such an Apple user take.
"Yes you can do that, but you're not supposed to so it's hidden behind so many menus that you can't find it except by accident and since I use it, I say sowwy to my phone every night before I go to sleep to make sure Apple doesn't get maddy mad at me"
This is what happens when "designers" who are nothing more than artists take control of UI decisions. They want things to look "clean" at the expense of discoverability and forget that affordances make people learn.
Contrast this with something like an airplane cockpit, which while full of controls and assuming expert knowledge, still has them all labeled.
I still don't understand why desktop OSes now have mobile style taskbar icons that are twice as large as they need to be, grouped together so you need to hover to see which instance of what is what, and then click again to switch to the one you actually want if you can even figure out what it even is with just a thumbnail without any labels. All terminal windows look the fucking same!
Win NT-Vista style, aka the way web browsers show tabs with an icon + label is peak desktop UX for context switching and nobody can convince me otherwise. GNOME can't even render taskbars that way.
Most people coming into the workforce today have grown up on iOS and Android. To them, the phone is the default, the computer used to be what grownups use to do work. Watching them start using computers is very similar to those videos from the 80s and 90s of office workers using a computer for the first time.
The appification of UI is a necessary evil if you want people in their mid twenties or lower to use your OS. The world is moving to mobile-first, and UI is following suit, even in places it doesn't make sense.
Give a kid a UI from the 90s, styled after industrial control panels, and they'll be as confused as you are with touch screen designs. Back in the day, stereos used to provide radio buttons and sliders for tuning, but those devices aren't used anymore. I don't remember the last device I've used that had a physical toggle button, for instance.
UI is moving away from replicating the stereos from the 80s to replicating the electronics young people are actually using. That includes adding mobile paradigms in places that don't necessarily make sense, just like weird stereo controls were all over computers for no good reason.
If you prefer the traditional UX, you can set things up the way you want. Classic Shell will get you your NT-Vista task bar. Gnome Shell has a whole bunch of task bar options. The old approach may no longer be the default one, but it's still an option for those that want it.
Most people are intimidated by airplane cockpits. I think you’re right that specialists in certain situations where they’re familiar have much higher tolerance for visual density because, to them, it isn’t dense, it’s meaningful.
Most people for most situations, using most phone apps, do not have that familiarity. Mobile design has to simultaneously provide a lot of power and progressively disclose it such that it keeps users at or just past their optimal level of comfort, and that involves tradeoffs to hide some things and expose others at different levels of depth.
So while I agree that a lot of mobile design, and OS design in particular, pulls back way too far on providing affordances for actions, I would not use an airplane cockpit as a good guide, unless you’re also talking about a specialist tool.
Next you’ll be complaining that the taps in your house don’t have a label telling you that they need to be twisted and in what direction.
Phones aren't 747s, and guess what: every normal person who goes into an airplane cockpit and isn't a pilot is so overwhelmed by all the controls that they wouldn't know what anything did.
Interface designers know what they’re doing. They know what’s intuitive and what isn’t, and they’ve refined down to an art how to contain a complicated feature set in a relatively simple form factor.
The irony of people here with no design training claiming they could do a better job than any "so called designer" shows incredible levels of egotism and disrespect toward a mature field of study.
Also demonstrably, people use their phones really quite well with very little training, that’s a modern miracle.
... and then they ignore it?
It triggers me when someone calls hidden swipe gestures intuitive. It's the opposite of affordance, which these designers should be familiar with if they are worth their salaries.
Very slightly unrelated, but this trend is one of the reasons I went Android after the iPhone removed the home button. I think it became meaningfully harder to explain interactions to older users in my family and just when they got the hang of "force touch" it also went away.
First thing I do on new Pixel phones is enable 3 button navigation, but lately that's also falling out of favor in UI terms, with apps assuming bottom navigation bar and not accounting for the larger spacing of 3 button nav and putting content or text behind it.
Similarly the disappearing menu items in common software.
Take a simple example: Open a read-only file in MS Word. There is no option to save? Where's it gone? Why can I edit but not save the file?
A much better user experience would be to enable and not hide the Save option. When the user tries to save, tell them "I cannot save this file because of blah" and then tell them what they can do to fix it.
The Mac HIG specifies exactly this: don’t hide temporarily unavailable options, disable them. Disabling communicates to the user the relationships between data, state, etc and adds discoverability.
I half agree. The save option should be disabled, since there is something very frustrating about enabling a control that cannot be used. However, there could be a label (or a warning button that displays such a label) explaining why the option is disabled.
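The disable-with-explanation pattern the last few comments describe can be sketched in a few lines. This is a hypothetical model, not any real toolkit's API; the `ControlState` and `save_control_state` names are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlState:
    """A control is always visible: either enabled, or disabled with a
    human-readable reason that can be shown as a label or tooltip."""
    enabled: bool
    reason: Optional[str] = None

def save_control_state(file_is_read_only: bool) -> ControlState:
    """State of a hypothetical Save control: never hidden, disabled with
    an explanation when saving is impossible."""
    if file_is_read_only:
        return ControlState(
            enabled=False,
            reason="This file is read-only. Use 'Save As' or change its permissions.",
        )
    return ControlState(enabled=True)
```

Because the control stays on screen either way, the user can learn the relationship between the file's state and the Save command instead of wondering where the option went.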
I am the same, long time Android user and when I borrow my wife's iPhone it is an exercise in frustration. Interactions are hidden, not intuitive, or just plain missing.
Now that Pixel cameras outclass iPhone cameras, and even Samsung is on par, there is really no reason to ever switch to the Apple ecosystem anymore IMO.
I had the same story, which is why the last phone I got for my grandma was an iPhone SE (which still has the home button). This way, no matter where she ends up, there's this large and obvious thing that she can press to return back to the familiarity of the home screen.
I am firmly in the “key UI elements should be visible” camp. I also agree that Apple violates that rule occasionally.
However, I think they do a decent job at resisting it in general, and specifically I disagree that removing the home button constitutes hiding a UI element. I see it as a change in interaction, after which the gesture is no longer “press” but “swipe”, and the UI element is not a button but the edge of the screen itself. It is debatable whether it is intuitive or better in general, but I personally think it is rather similar to double-clicking an icon to launch an app, or right-clicking to invoke a context menu: neither has any visual cues, both are used all the time for some pretty key functions, but as soon as it becomes an intuition it does not add friction.
You may say Apple is way too liberal in forcing new intuitions like that, and I would agree in some cases (like address bar drag on Safari!), but would disagree in case of the home button (they went with it and they firmly stuck with it, and they kept around a model with the button for a few more years until 2025).
Regarding explaining the lack of home button: on iOS, there is an accessibility feature that puts on your screen a small draggable circle, which when pressed displays a configurable selection of shortcuts—with text labels—including the home button and a bunch of other pretty useful switches. Believe it or not, I know people who kept this circle around specifically when hardware home button was a thing, because they did not want to wear out the only thing they saw as a moving part!
>the gesture is no longer “press” but “swipe” and the UI element is not a button but edge of the screen itself.
Right, but while it's obvious to everyone that a button is a control, it's not obvious that an edge is a control. On top of that, swiping up from the bottom edge triggers two completely different actions depending on exactly when/where you lift your finger off the screen.
Why not move the physical home button to the back of the phone?
We have a user interface design rule that keyboard shortcuts and context menus must only be "shortcuts" for commands that are discoverable via clear buttons or menus. That probably makes our apps old-fashioned.
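That rule can even be enforced mechanically. A hypothetical sketch, assuming a simple registry where a shortcut may only be bound to a command that already has a visible menu item (the `CommandRegistry` API is invented for illustration):

```python
class CommandRegistry:
    """Enforces the rule: keyboard shortcuts are only 'shortcuts' for
    commands that are already discoverable via a visible menu item."""

    def __init__(self):
        self.menu_commands = {}  # command id -> menu path, e.g. "File > Save"
        self.shortcuts = {}      # key combo -> command id

    def add_menu_item(self, command_id: str, menu_path: str) -> None:
        self.menu_commands[command_id] = menu_path

    def bind_shortcut(self, keys: str, command_id: str) -> None:
        if command_id not in self.menu_commands:
            raise ValueError(
                f"{command_id!r} has no visible menu item; shortcuts may "
                "only accelerate commands that are discoverable elsewhere"
            )
        self.shortcuts[keys] = command_id
```

Binding `Ctrl+S` to a command registered under "File > Save" succeeds; binding a shortcut to a command with no menu presence raises, which keeps hidden-only functionality out of the app by construction.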
I recall learning that the four corners of the screen are the most valuable screen real estate, because it's easy to move the mouse to those locations quickly without fine control. So it's user-hostile that for Windows 11 Microsoft moved the default "Start" menu location to the center. And I don't think they can ascribe it to being mobile-first. Maybe it's "touch-first", where mouse motion doesn't apply.
I think the centered icons on W11 were done for one reason and one reason only: ripping off MacOS (probably because it's what the design team uses themselves and it felt familiar to them). There is no sensible UX reason to do it, and even in MacOS it's a detriment to its interface.
Centered taskbar app indicators work well on 16:9 and wider screens. On such screens, UI elements placed at corners are less prominent; you have to exert a deliberate effort to look there, unlike on the tiny 4:3 screens of yore. Ever since I got my ultrawide, and long before Windows 11 made this a default, I've been using bottom-centered app buttons and Start/widget in the left corner. On narrower and smaller screens (like a laptop), I find a vertical taskbar more ergonomic, and centered/top-aligned app buttons don't matter as much there in practice.
I don't think it's a macOS ripoff, they would've also ripped off more of the dock if that was the goal. For instance, you would've been able to do things like "pin the task bar to the side".
I think they wanted the start menu to be front and center. And honestly, that just sounds like a good idea, because it is where you go to do stuff that's not on your desktop already. But clicking a button in the bottom left and having the menu open in the middle would look weird, so centering the icons would make sense.
I think there are better ways to do it and I'm sure they've been tried, but they would probably confuse existing Windows users even more.
Yes! This is how things should be. And additionally, I want to see all the keyboard shortcuts visible on the menu items they activate. And every tooltip that pops up when you hover over a button should also show whatever keyboard shortcut activates that function. It's the best way for novice users to notice and learn the keyboard shortcuts for the things they care about without having to go elsewhere to look them up.
I think it's user-hostile that 'maximise' is next to 'close'. After moving my mouse so far, I need to start using fine control if I want to maximise it. I want more of the program and, if I fail, I get none of it - destructively!
Corners and edges are rarely used that way. They should be. See "Fitts's Law".[1]
My metaverse client normally presents a clean 3D view of the world. If you bring the cursor to the top or bottom of the screen, the menu bar and controls appear. They stay visible as long as the cursor is over some control, then, after a few seconds, they disappear.
This seems to be natural to users. I deliberately don't explain it, but everybody finds the controls, because they'll move the mouse and hit an edge.
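For reference, Fitts's law in its common Shannon formulation predicts pointing time as T = a + b * log2(1 + D/W), where D is the distance to the target and W its width along the axis of motion. Screen edges and corners behave like very wide targets, because the cursor cannot overshoot them. A rough sketch, with made-up a/b constants (real values are measured per device and per user):

```python
import math

def fitts_time_ms(distance_px: float, width_px: float,
                  a_ms: float = 100.0, b_ms: float = 150.0) -> float:
    """Shannon formulation of Fitts's law: T = a + b * log2(1 + D/W).
    The a/b constants here are illustrative, not measured."""
    return a_ms + b_ms * math.log2(1 + distance_px / width_px)

# A small button far away is slow to acquire; an edge or corner acts
# like a huge target because the cursor stops against it.
small_button = fitts_time_ms(800, 20)    # e.g. a 20 px button, 800 px away
edge_target = fitts_time_ms(800, 2000)   # edge modeled as a very wide target
```

Under this model the edge target is several times faster to hit than the small button, which is why corner-anchored controls and edge-triggered menus feel so effortless.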
Something which drives me mad is how modern operating systems (both desktop and mobile) keep hiding file system paths. There used to be a setting on OSX which let you show the address bar in Finder (though it wasn't default) but nowadays it seems to be impossible (unless you get some third-party extension) and I have to resort to using the terminal. It's bonkers.
It makes it impossible to locate files later when I need to move or transfer them.
My working theory, which I hold quite confidently, is that anything that doesn't test well with new users in usability testing focus groups or A/B testing eventually gets the axe. But the people conducting that testing are - intentionally or unintentionally - optimizing for the wrong metric: "how quickly and easily can someone who has never seen this app before figure out how to do this action." That's the wrong thing to optimize for at a macro scale. It might make your conversions go up for a while, but at a long term cost of usability, capability, and discoverability that enrages the users that you want to convert into advanced, loyal, word of mouth evangelists for your app because they love it.
When people who are not thinking in that bigger-scale, zoomed-out, societal-level perspective conduct A/B testing or usability testing in a lab or focus group setting, they focus on the wrong metrics (the ones that make an immediate, short-term KPI go up) and then promote the resulting objectively worse UX designs as being evidence-based and data-driven.
It has been destroying software usability for the last 20 years and doing a deep disservice to subsequent generations who are growing up without having been exposed to TRULY thoughtful UX except very rarely.
I have this issue when links are shared directly to a file on SharePoint.
It's often more useful to share the directory it's in rather than the file itself. MS Office does have a way to get that information, but you have to look for it.
Instead of trying to teach you all those rules, or to let you guess what the action is doing, the UI has two simple rules:
- How the default action is chosen may be complicated, but it will always be one of the actions from the grid.
- If a unit is following an action, that action will be highlighted in the grid.
This means the grid doubles as a status display to show you not just what the unit could do but also what it is currently doing. It also lets you learn the specifics of the default action by yourself, because if you right-click somewhere, the grid will highlight the action that was issued.
The irony is that in the actual game, you almost always use the default action and very rarely actually click the buttons in the grid. But I think the grid is still essential for those reasons: As a status display and to let you give an order explicitly if the default isn't doing what you want it to do.
The counterexample would be the C&C games: The UI there only has the right-click mechanic, without any buttons, with CTRL and ALT as modifier keys if you want to give different orders. But you're much more on your own to memorize what combination of CTRL, ALT, selected unit, target unit and click will issue which order.
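Those two rules are easy to express in code. A toy sketch of the first rule, with invented unit fields and action names (this is not Blizzard's actual data model):

```python
def action_grid(unit: dict) -> list:
    """The grid shows exactly the actions valid for the selected unit."""
    actions = ["move", "stop", "hold"]
    if unit.get("weapon"):
        actions.append("attack")
    if unit.get("worker"):
        actions.append("mine")
    return actions

def default_action(unit: dict, target: str) -> str:
    """Context-sensitive right-click: however convoluted the selection
    logic, the result is always one of the actions shown in the grid."""
    grid = action_grid(unit)
    if target == "enemy" and "attack" in grid:
        return "attack"
    if target == "resource" and "mine" in grid:
        return "mine"
    return "move"
```

Highlighting `default_action(unit, target)` in the grid after a right-click would then give exactly the status-display behavior described above: the player sees which of the visible actions the irregular default logic actually picked.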
Right, because it's fucking ridiculous to expect a driver to fumble through menus while driving.
I would gladly gladly keep my AC, heat, hazards, blinkers, wipers, maybe a few other buttons and that's it. I don't need back cameras, lane assist, etc.
I find it hard to believe it's cheaper to have all the cameras, chips, and other digital affordances rather than a small number of analog buttons and functions.
(another reason was because it still has a geared transmission instead of a CVT, but that's a separate discussion)
This implies it's a consequential cost. Building with tactile controls would take the (already considerable) purchase price and boost that high enough to impact sales.
If tactile controls were a meaningful cost difference, then budget cars with tactile controls shouldn't be common - in any market.
Most of the cost savings is in having a single bus to wire up through the car, then everything needs a little computer in it to send on that bus...so a screen wins out.
My previous one lasted more than 20 years, from when my parents bought it for me when I went to study until some time in my 40s. It was still functional, but its dial had become loose and it didn't look that great anymore.
The one I bought after that follows the new pattern: it has buttons up the wazoo, and who even knows what they do? To be honest, I just need one power setting with a timer and maybe a defrost option.
At first it was a bit annoying because frozen meals sometimes want you to run it at lower power and this microwave has no power setting. If that's a problem, I imagine there's some other similar model that does. But in practice, just running it at full power for shorter seems to work just as well.
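There is a reason that works: conventional (non-inverter) microwaves implement lower power settings by pulsing the magnetron at full power on a duty cycle, so "50% power" mostly means "on half the time". A toy model, with an invented function name:

```python
def magnetron_on_seconds(total_seconds: float, power_percent: float) -> float:
    """Duty-cycle model of a non-inverter microwave: the magnetron runs
    at full power for power_percent of the total time. Inverter models,
    which vary output continuously, are the exception."""
    return total_seconds * power_percent / 100.0
```

Under this model, 4 minutes at 50% delivers the same magnetron-on time as 2 minutes at full power; the difference is only how the heat gets time to spread between pulses.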
It would look much nicer if it didn't have a cooking guide printed on it.
In Europe, I saw some consumer-grade microwaves with similarly minimalist designs, like these Gorenje microwaves[2] with two dials. I'd have gotten one of those, but I couldn't easily find them in the US. But I also did not look especially hard.
[1]: https://www.amazon.com/dp/B00ZTVIPZ2?ref_=ppx_hzsearch_conn_...
[2]: https://international.gorenje.com/products/cooking-and-bakin...
Power, time, start, stop.
It turns out that, luckily, there is one like that made: the Y4ZM25MMK. As a bonus, no clock.
That said, I realized only very late that the function dial actually has a marker to show which function it selects. An extremely shallow colorless groove.
The 1967 Amana Radarange (https://media.npr.org/assets/img/2017/08/28/microwave_custom...) had two dials: short duration under 5 minutes and a long duration out to something like 30 minutes.
My parents still have theirs. It needs some resto love, but it’s still fully functional. I’ve already put my foot down in terms of who’s inheriting it.
I stab a potato and cover it in butter and salt, put it on a plate, press "potato" and it's cooked just perfect every time. Doesn't matter if it's big or small, it's just right.
When I have a plate of leftovers I just press reheat and it's perfect pretty much every time. Could be pork chops and Mac and cheese, could be a spaghetti with marinara sauce, could be whatever. Toss it in, lightly cover, press reheat, and it's good.
When I want to quickly thaw out some ground beef or ground sausage, I just toss it in, press defrost, put in a weight to a tenth of a pound, and it's defrosted without really being cooked yet.
Back when I microwaved popcorn, just pressing the popcorn button was spot on. Didn't matter what the bag size was, didn't matter the brand, the bag was always pretty much fully popped and not burned.
Despite being the same age it's still in excellent working order while yours with the dials fell apart.
His perspective was that companies were "run" by engineers first, then a few decades later by managers, and then by marketing.
Who knows what's next, maybe nothing (as in all decisions are accidentally made by AI because everyone at all levels just asks AI). Could be better than our current marketing-driven universe.
Why is this so expensive it can't even be put into a premium car today when it used to be ubiquitous in even the cheapest hardware a few decades ago?
Manufacturing car components already involves design work and custom molds, does it not? Compared to the final purchase price, the cost of adding knobs to that stack seems inconsequential.
So guess what Mr.Auto Manufacturer, you can keep your hifi $30K-70K touchscreen surveillance machine on your lot. I'll keep driving my 20+ year old Corolla until you learn to do better.
I don't think you can make this assertion without knowing what they were tasked with doing. I very much doubt they were tasked with making the most user friendly cockpit possible. I suspect they were required to minimize moving parts (like switches and buttons) and to enable things like Sirius, iPhone and Android integration, etc.
It’s a race to the bottom to be the least enshittified versus your market competitors. Usability takes a backseat to porcine beauty productization.
Toyota 4Runner, Toyota Tacoma, Jeep Wrangler, Nissan Frontier, Ford Maverick, Ford Bronco, Jeep Gladiator, Mazda MX-5 Miata
I wonder what kind of cars you guys drive.
Stranger still, if someone comes up with an idea of how to improve that thing that sucks, frequently the reaction is very negative. Sadly, the whole thing more and more gets into “old man yelling at the cloud” territory.
It might seem counterintuitive that hiding your interface stops your users leaving, but it does, because it changes your basis of assumptions about what a device is and your relationship with it. It's not something you "use", but something you "know". They want you to feel inherently linked to it at an intuitive level, such that leaving their ecosystem is like losing a part of yourself. Once you've been through the experience of discovering "wow, you have to swipe up from a corner in a totally unpredictable way to do an essential task on a phone", and you build into your world of assumptions that this is how phones are, the thought of moving to a new type of phone and learning all that again is terrifying. It's no surprise at all that all the major software vendors are doing this.
Consider that all the following are true (despite their contradictions):
- "Bloated busy interface" is a common complaint about products from Google, Apple, Microsoft, and Meta. People here share a blank vscode canvas and complain about how busy the interface is compared to their 0-interface vim setup.
- Flat design and minimalism are/were in fashion (and have been for a few years now).
- /r/unixporn and most linux people online who "rice" their linux distros do so by hiding all controls from apps because minimalism is in fashion
- Have you tried GNOME recently?
A minimal interface where most controls are hidden is a certain look that some people prefer. Plenty of people prefer to "hide the noise", and if they need something, they are perfectly capable of looking it up. It's not like digging in manuals is the only option.
- Dribbble-driven development, where the goal is to make apps look good in screenshots with little bearing to their practical usability
- The massive influx of designers from other disciplines (print, etc) into UI design, who are great at making things look nice but don’t carry many of the skills necessary to design effective UIs
Being a good UI designer means seeking out existing usability research, conducting new research to fill in the gaps, and understanding the limits of the target platform, on top of having a good footing in the fundamentals. The role is part artist, part scientist, and part engineer. It’s knowing when to put ego aside and admit that the beautiful design you just came up with isn’t usable enough to ship. It’s not just a sense for aesthetics and the ability to wield Photoshop or Figma or whatever well.
This is not what hiring selects for, though, and that’s reflected in the precipitous fall in quality of software design in the past ~15 years.
I do think it's likely more passive than active. People at Google aren't deviously plotting to hide buttons from the user. But what is happening is that when these designs get reviewed, nobody is pushing back - when someone says "but how will the user know to do that?", it doesn't get listened to. Instead the people responsible are signing off on it, saying "it's OK, they will just learn that; once they get to know it, then it will be OK". It's all passive, but it's based on an implicit assumption that users are staying around, and it optimises for the ones that do, making it harder for the ones that want to come and go or stop in temporarily.
Once three or four big companies start doing it, everybody else cargo cults it and before you know it, it looks like fashion and GNOME is doing it too.
The tone of your post, and especially this phrase, is inappropriate imo. The GP's comment is plausible. You're welcome to make a counter-argument, but you seem to be claiming without evidence that there was no thinking behind their post.
God, no. I switched to xfce when GNOME decided that they needed to compete with Unity by copying whatever it did, no matter how loudly their entire user base complained.
Why would I try GNOME again?
Apple's interface shits me because it's all from that one button, and I can never remember how to get to settings because I use that interface so infrequently, so Android feels more natural. I.e. Android has done its lock-in job, but Apple has done itself a disservice.
(Not entirely fair, I also dislike Apple for all the other same old argument reasons).
I have no idea why some interfaces hide elements and leave the space they'd taken up unused.
IntelliJ does this, for example, with the icons above the project tree. There is this little target disc that moves the selection in the project tree to the file currently open in the active editor tab. You have to know the secret spot on the screen where it is hidden and if you move your mouse pointer to the void there, it magically appears.
Why? What is the rationale behind going out of your way to implement something like this?
Some people are like airliner pilots: they want every indicator readily visible and every control easily within reach, and they can effortlessly switch their focus. Others prefer a minimal cockpit, with everything tucked away until it is needed.
Of course, there is a full range between these extremes.
The default IDE configuration has to do a balancing act, trying to appeal to very different tastes. It's inevitably a compromise.
Some tools have explicit switches: "no distractions mode", "expert mode", etc, which offer pre-configured levels of detail.
This stupidity seems to have spread across Windows. No title bars or menus... now you can't tell what application a window belongs to.
And you can't even bring all of an application's windows to the foreground... Microsoft makes you hover over it in the task bar and choose between indiscernible thumbnails, one at a time. WTF? If you have two Explorer windows open to copy stuff, then switch to other apps to work during the copy... you can't give focus back to Explorer and see the two windows again. You have to hover, click on a thumbnail. Now go back and hover, and click on a thumbnail... hopefully not the same one, because of course you can't tell WTF the difference between two lists of files is in a thumbnail.
And Word... the Word UI is now a clinic on abject usability failure. They have a menu bar... except WAIT! Microsoft and some users claim that those are TABS... except that it's just a row of words, looking exactly like a menu.
So now there's NO menu and no actual tabs... just a row of words. And if you go under the File "menu" (yes, File), there are a bunch of VIEW settings. And in there you can add and remove these so-called "tabs," and when you do remove one, the functionality disappears from the entire application. You're not just customizing the toolbar; you're actually disabling entire swaths of features from the application.
It's an absolute shitshow of grotesque incompetence, in a once-great product. No amount of derision for this steaming pile is too much.
Except that screens on phones, tablets, laptops and desktops are larger than ever. Consider the original Macintosh from 1984 – large, visible controls took up a significant portion of its 9" display (smaller than a 10" iPad, monochrome, and low resolution.) Arguably this was partially due to users being unfamiliar with graphical interfaces, but Apple still chose to sacrifice precious and very limited resources (screen real estate, compute, memory, etc.) on a tiny, drastically underpowered (by modern standards) system in the 1980s for interface clarity, visibility, and discoverability. And once displays got larger the real estate costs became negligible.
UI has been taken over by graphic designers, and human interaction experts have been pushed out. It happened as we started calling it "user experience" rather than "user interface", because people started to worry about the emotional state of the user rather than about the tool. It became form over function, and now we have to worry about holding it wrong, when in reality machines are here to serve humans, not the other way around.
Don’t quote me on this, but I vaguely remember there being an option to toggle hiding it, if not in the settings it is in a context menu on the panel.
That thing is a massive time saver, and I agree—keeping it hidden means most people never learn it exists.
An IDE, and the browser example given below, are tools I'll spend thousands of hours using in my life. The discoverability is only important for a small percentage of that, while viewing the content is important for all of it.
This is exactly when I will have the 'knowledge in the head'.
How can I trust a driver to take things like safe maximum load into account when they don't even know they can open their car if their battery ever goes flat?
For example (Kia Carnival): Holding the lock button on the fob for 20 seconds will automatically close the sliding doors and any open windows.
I did know that there must be a physical key (unless Tesla?), and the only way I found the keyhole was because a previous renter had scratched the doorknob to shit trying to access the very same keyhole.
Basic knowledge about the things you own isn’t hard. My god there is a lot of old man shakes fist at cloud in here.
Contrast this with something like an airplane cockpit, which while full of controls and assuming expert knowledge, still has them all labeled.
Win NT-Vista style, aka the way web browsers show tabs with an icon + label is peak desktop UX for context switching and nobody can convince me otherwise. GNOME can't even render taskbars that way.
The appification of UI is a necessary evil if you want people in their mid twenties or lower to use your OS. The world is moving to mobile-first, and UI is following suit, even in places it doesn't make sense.
Give a kid a UI from the 90s, styled after industrial control panels, and they'll be as confused as you are with touch screen designs. Back in the day, stereos used to provide radio buttons and sliders for tuning, but those devices aren't used anymore. I don't remember the last device I've used that had a physical toggle button, for instance.
UI is moving away from replicating the stereos from the 80s to replicating the electronics young people are actually using. That includes adding mobile paradigms in places that don't necessarily make sense, just like weird stereo controls were all over computers for no good reason.
If you prefer the traditional UX, you can set things up the way you want. Classic Shell will get you your NT-Vista task bar. Gnome Shell has a whole bunch of task bar options. The old approach may no longer be the default one, but it's still an option for those that want it.
Most people for most situations, using most phone apps, do not have that familiarity. Mobile design has to simultaneously provide a lot of power and progressively disclose it such that it keeps users at or just past their optimal level of comfort, and that involves tradeoffs to hide some things and expose others at different levels of depth.
So while I agree that a lot of mobile design, and OS design in particular, pulls back way too far on providing affordances for actions, I would not use an airplane cockpit as a good guide, unless you’re also talking about a specialist tool.
Stop shaking your fist at a cloud.
No they don't. The article refutes your points entirely, as does everyone else here who has been confounded by puzzling interfaces.
Not having anything to do with Google is a pretty good reason I think.
And they aren't even consistent from app to app. That's perhaps the most frustrating thing.
[1] https://en.wikipedia.org/wiki/Fitts%27s_law
I will die on this hill.