haswell · 3 years ago
If my years as a product manager taught me anything, it’s that users absolutely do not know what they want in the moment.

With enough time exploring the problem space, it becomes easier to tease out the real needs of the user. But this doesn’t happen overnight.

Asking a user to interact with one of these chat interfaces is like asking them what they want - every time they use the software.

This cognitive load would make me personally seek alternative tools.

AlotOfReading · 3 years ago
It's important to distinguish between the limitations of current technologies and the possibilities of natural language.

Imagine if all natural language interfaces were like talking to a personal assistant. Sometimes you might not vocalize what you want properly, but we're highly adapted to that sort of communication as humans and the assistant can almost always fill in the gaps based on their knowledge of you or ask clarification questions.

What makes natural language so infuriating as a computer interface is that it's nothing like that. The models are so limited and constrained that you can't actually speak to them like a human, you have to figure out the limitations of the model first and translate your human-centric ideas into it. That's a huge amount of cognitive load and in the worst cases (e.g. Alexa), the result isn't even worth the effort.

kilgnad · 3 years ago
Actually, that's not the main problem with the current state-of-the-art LLM (chatGPT). You can speak to chatGPT like a human, and while it won't necessarily give you the answer you're looking for, more often than not it will give you an answer very much in line with what another human expert would give.

The infuriating thing about chatGPT is that it lies and gives inaccurate info. It will often creatively craft an answer that looks remarkably real and just give it to you.

Not sure if you've played with chatGPT in depth, but this thing is on another level. I urge you to read this: https://www.engraved.blog/building-a-virtual-machine-inside/. What happens in that article is mind-blowing, all the way to the ending. The task the author had chatGPT do shows that you don't actually need to figure out its "constraints". It's so unconstrained it can literally do a lot of what you ask it to.

makeitdouble · 3 years ago
> Imagine if all natural language interfaces were like talking to a personal assistant.

I don’t have a personal assistant, so I’ll compare that to hotel receptionists.

Every time I booked a room directly through a human, it was a long and (to me) unpleasant series of back-and-forths: understanding what they can offer, what I precisely need in that moment, what price I’m willing to pay, how my mind changed as I understood their pricing structure, and then what happens with two rooms instead of one. And requesting an additional towel because we waterboarded one, but not the “face” towel, the other one of middle size; no, actually it was thick and coarse; oh, was that actually a mop? etc.

It would be infuriating except for the fact that I’m talking to an actual human I have empathy with, we’re stuck together in that interaction and we’ll try to make the best of it.

By contrast, online booking and assistance interfaces are sub-par and have many other flaws, but I’ll never choose the chatty route first if I have a choice.

Even at their best, at a ‘general intelligence’ level of AI, I think chat interfaces still won’t be a great interface for anything requiring more than two steps of interaction.

Swizec · 3 years ago
> It's important to distinguish between the limitations of current technologies and the possibilities of natural language

And yet any time a person says "Lemme know if I can help", my first thought is that I don't even know what's in their wheelhouse to help me with. Will they help if I ask for someone to shovel snow? Clean out my gutters? Or are they offering to help with introductions to people with money? Do they even know people with money?

Existenceblinks · 3 years ago
Narrow the vocabulary down into something domain-specific.
andsoitis · 3 years ago
> Imagine if all natural language interfaces were like talking to a personal assistant. Sometimes you might not vocalize what you want properly, but we're highly adapted to that sort of communication as humans and the assistant can almost always fill in the gaps based on their knowledge of you or ask clarification questions.

There is understanding of natural language and then there's comprehension and critical thinking deeper down. Today's natural language interfaces solve for the former but not the latter. They don't anticipate, they don't originate novel solutions, they can't change their minds, they certainly cannot read the air, etc.

marcosdumay · 3 years ago
Well, if you have an empathic model that can anticipate the needs of the user, yeah, any interface that you put before it will be easy to use.

This is also bordering on human-equivalent intelligence. At a bare minimum, it would need to be a general AI.

jimmaswell · 3 years ago
> you have to figure out the limitations of the model first and translate your human-centric ideas into it

This is the same process as communicating with another human. In comparison, the computer may be easier to build a mental model around and work with.

gibspaulding · 3 years ago
Alexa is basically a CLI without man pages or useful error messages.
hinkley · 3 years ago
I think there's a disconnect between the realizations that:

- there are no adults, we are just old children playing at being adults

- "giving people what they want" exists on a spectrum from pandering (up to and including prostitution) to assisted suicide

These are ugly truths and it's down to 'requirements' people and ethicists to find a way to dance this dance. Treating people like they don't know their own minds without letting on that's what you're doing is probably one of the hardest things I've seen done in the software world.

IanCal · 3 years ago
A chat interface is much more tolerant of this, because it implies a back and forth with clarification. Current one step dumb voice interfaces are a problem.
kilgnad · 3 years ago
Yes. This ^. ChatGPT is especially good at evolving and revising the main idea through a full-on conversation. It is not just a query-and-answer machine; it is a full-on conversational intelligence. The parent is incorrect: chatGPT is literally perfect for what they're describing.

I feel people are attacking the things chatGPT excels at out of fear. Things like creativity, originality, true understanding of what's going on. chatGPT is GOOD at these things but people try to attack it.

The main problems with chatGPT are truthfulness, honesty, accuracy and consistency. It gets shit wrong but out of fear people need to attack all aspects of chatGPT's intelligence.

I find it unlikely the parent even tried to have a conversation with chatGPT about a product at all. A lot of our dismissals are largely surface level and not evidence based. You can bounce thoughts and product ideas off this thing and it will run with you all the way into a parallel universe if you ask it to.

haswell · 3 years ago
I interpreted the article to be calling attention to the situations when the tolerance of a chat interface is outweighed by a more efficient mode of information discovery that might be better suited to a specific use case.

In other words, if you're building a new product, don't just slap a chat interface on it because AI is good now.

This is not a claim that chat is never the right option.

tbihl · 3 years ago
>Asking a user to interact with one of these chat interfaces is like asking them what they want - every time they use the software.

Asking what a user wants would mean having a competent customer service representative, and it would be simple, like asking me to drive home from work.

Voice prompts require me to intuit the customer support structure in order to guess where the path is to reach my category of issue. It's like asking me to walk home from work in the sewer system.

haswell · 3 years ago
For use cases that are well-suited to a conversational interface, that's great, and new AI advances will make chatbots more powerful than they've ever been.

But not every use case is a chatbot use case, and I think that's the key point of the article.

The sudden viability of a conversational interface fluent enough to revolutionize the experience of conversation itself does not suddenly make that interface the best fit for all use cases.

I still find it far more pleasant to browse to a page and see a list of clearly displayed options that I can absorb at a glance and get on to what I really need to accomplish in the moment.

Even a perfect conversationalist can't remove the extra friction involved in disclosing information. The question is whether that loss of efficiency is outweighed/nullified by a better overall experience.

nanidin · 3 years ago
This resonates with me and my use of Siri. As soon as I get outside of the common tasks I use it for (setting timers, unit conversions in the kitchen, turn on/off the lights), I’m either spending time trying to come up with the correct incantation or pulling out my phone to get to an app that does it better.

dmix · 3 years ago
Your analogy is building software, though, which is an extremely complicated, domain-heavy specialization. I don't think people are suggesting users will be asking a chatbot to do crazy hard stuff like translating their complicated business problems into software interfaces (at least not yet).

The use cases for AI/chatbots will likely remain niche, but there are still tons of niche areas a language interface could fill, where the user has the appropriate specialization/skill to do it on their own.

It is still ultimately an interesting design/UX question. It's too bad the OP blog post didn't provide some real-life examples.

haswell · 3 years ago
The point was less to draw an analogy and more to reflect on how I've seen users behave when exploring software for the first time.

When testing new concepts, observing users try things out reveals a spectrum of expectations about where things should be, and how to achieve a task. So we try to find the combination of things that surprises people the least, as much of the time as possible.

And when a new user doesn't find the chosen approach perfectly intuitive, this is usually a temporary problem, because learning where something is takes care of this with a few repetitions. Product tours help.

An equivalent chat interface might be able to adapt on the fly to a wide range of user types, but this still doesn't imply anything about the core usability of the product and whether or not someone prefers to interact with a chatbot. Put another way, some use cases just aren't a good fit for a chatbot, even a very very good one.

I do agree that though niche, there are a lot of interesting opportunities with a sufficiently fluent conversational AI.

fragmede · 3 years ago
> users absolutely do not know what they want in the moment.

People know what they want in a general sense. They just need to be told that they need your product, though.

I need new clothes, but I don't know that I specifically wanted a black Nike T-shirt made of special exercise polyester until I saw the model in the ad wearing one.

haswell · 3 years ago
I think this is a very different concept than the state of mind someone is in when trying to understand how a piece of software works.

This obviously depends on the type of software, but users often struggle to articulate the actual problem they're trying to solve, and it's difficult to know what solution to look for when you haven't fully grasped the problem yet.

If I don't know what the solution looks like, I don't know what to look for, and this is where good software steps in and shows the user what to do next without making that an onerous process in perpetuity.

nipponese · 3 years ago
Depends on the need. If they need someone to further explain a specific concept from their homework, they definitely know what they want.

Also, if they just want a refund on an airline ticket, again, they know.

haswell · 3 years ago
In the context of my comment, knowing what they want was more about users being able to tell me what they think they should do next in the software's interface (whether that's a GUI, terminal, or bot) to achieve their desired goal.

In other words, what should we build as a product team to satisfy this user's need?

The thing they need in the moment is often not obvious or apparent to them until they see it. This is why we iterate on UI concepts. Some work, some don't. Most of the things that work don't come from users who tell us "put this button here".

So the point I was making was more about trying to determine: "what are the things I can even ask the computer?".

There are clearly use cases that are better suited for this than others. Anything that follows a simple question/answer format is probably a great fit.

hulitu · 3 years ago
> Asking a user to interact with one of these chat interfaces is like asking them what they want - every time they use the software.

The ribbon is the same. Good luck finding something in it.

But this seems to be the future.

frosted-flakes · 3 years ago
The ribbon (particularly in Microsoft Office) solves the problem that users don't know what they want, because it lays out all the options in front of them in a clearly organized hierarchy, with hover previews and textual labels for every command except for extremely well-known commands (like Bold/Italic). This is very different from chat interfaces, which are opaque black boxes and in order to get anything out of them users have to express their wish in words, with nothing to start from.
haswell · 3 years ago
While I don't mind the ribbon, I also don't think it's a UX triumph. It will always be possible to find examples of bad UX.

That does not mean there isn't a better visual representation out there, or that replacing it with a conversational interface is a natural alternative.

noobermin · 3 years ago
But let's again remind you all: ChatGPT, and AI in general, is a tool to replace developers and designers with weaker, more generic tools, because management does not want to pay for labor costs (some would say they want a free ride). They absolutely do not care that it's a worse solution; they just want to eke out savings so they can get promoted, then ditch it when shtf. It just has to work convincingly until they can move on to greener pastures.
krisoft · 3 years ago
> This cognitive load would make me personally seek alternative tools.

I would prefer a smart and resourceful personal assistant taking care of me over any other interface ever conceived.

The reason people use the Uber app, the Airbnb interface, or the Google search bar instead of texting their personal assistant what they want is that they simply can’t afford one.

The only question is if we can make a cost efficient version of that personal assistant.

haswell · 3 years ago
While I might agree with you for a certain class of use cases, the point is that not all use cases fit this kind of interface.

For the ones that do, they’ll just keep getting better.

layer8 · 3 years ago
> like asking them what they want - every time they use the software.

That reminds me of https://en.wikipedia.org/wiki/Where_do_you_want_to_go_today%..., which apparently wasn’t successful.

garrickvanburen · 3 years ago
This ^

Also, discoverability in modern UIs (including & especially chat UIs) is so poor, how are we supposed to learn/remember what the system can do?

jgautsch · 3 years ago
Some users do, and they're right. Talk to those ones often, it's much faster than the full build/measure/learn ceremony.
haswell · 3 years ago
Oh absolutely. Those users are what made the job rewarding. Learning from them was invaluable.

They’re also rare, at least in the specific domain I was focused on.

jillesvangurp · 3 years ago
True, and yet people employ other people to do things for them and get them to do things simply by talking to them. That works because a smart enough person doesn't need to be given a lot of detail and will understand things from some very high level instructions; or even do things proactively.

AI is heading in a direction where it won't need a whole lot of micromanaging to be useful. Things like Alexa, Google Assistant, and Siri are as of now completely obsolete. They were nice five years ago, but they got stuck with use cases that are a combination of low-value and unimaginative. I mainly use this stuff for things like setting alarms and timers. Reason: it's good enough for that, and I don't like messing with my phone when I'm cooking since I have to wipe my hands first.

Doing better than that requires a deeper understanding of language (check) and context (no meaningful integrations yet, but it seems doable). It's not really going to replace a normal UI; it would be more like managing somebody doing something on your behalf. You are not using it, but directing and guiding it. The AI does most of the work. It's not tedious, because you get results more quickly than anything you would be able to do using any kind of UI.

I'd love an AI secretary that can take simple prompts and prepare emails that reply to other emails, summarize what's in my unread messages, or figure out a good slot in my calendar to invite some people to. This is annoying if you have to go into each application and then type out what you want and then triple check what comes back before sending it off. But that's not how you would work with a human secretary either. You'd give high level prompts, "send them a reply that I'm not interested", "are there any messages that need my attention", "what's a good slot for meeting X to discuss Y ... can you set up the meeting".

These are fairly simple things that well paid executives actually pay people to do. An AI would maybe not be able to do all of these things perfectly right now. But it could compose a message and then allow you to edit and send. Or it could summarize some information for you and give you the highlights. Etc. You could do that over a phone while on the move or just talking to a device. I don't think we're that far away from something usable in this space. I'd use it if something like that came along and worked well enough. And I have a hunch that this could become a lot better than just that fairly soon.
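
A minimal sketch of that draft-then-confirm loop, with a placeholder llm() standing in for whatever model actually sits behind it (everything here is invented for illustration):

    def llm(prompt):
        # Placeholder: a real model call goes here.
        return "[draft reply based on: " + prompt[:50] + "...]"

    def secretary(email_body, instruction):
        # High-level prompt in, reviewable draft out; nothing is
        # sent without the user seeing it first.
        draft = llm(f"Reply saying: {instruction}\n\nOriginal:\n{email_body}")
        print(draft)
        if input("Send this? [y/n] ").lower() == "y":
            print("(handing off to the mail client)")
        else:
            print("(discarded)")

    secretary("Can we meet Tuesday?", "I'm not interested")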

So it's no surprise that MS, the world's largest provider of tools used by secretaries, is investing in and partnering with OpenAI. This stuff is so obvious that they must have more than a few people with half a brain who figured out something similar ages ago. I would not be surprised to see them launch something fairly concrete fairly soon. Maybe it will fail. Or maybe they get something half-decent working.

jermaustin1 · 3 years ago
I can't help but agree fully. It's worse on telephones, where they might be doing NLP but cannot understand and parse accents and dialects. I remember my late grandmother trying to call AT&T a couple of years ago (just before COVID), and the robot would ask: "What can I help you with?" and then could not understand how she pronounced "pay my bill", because she said "PAY mah BEEEEEL".

But just hitting 0 did nothing, so after 5 minutes of her repeating "PAY mah BEEEEL" over and over, I took the phone from her and did it. From then on she would have to have other people pay her bill over the phone.

Doing this to a much more complex user interface, where I have no clue what I'm supposed to ask for and no way of knowing what I don't know, is a dystopian future I'm glad my grandmother won't have to endure.

passwordoops · 3 years ago
As a 40-something white male with a neutral, Urban Canadian English accent (so the ideal NLP user), even I have difficulty with voice assistants and ABHOR being forced to use one. My wife does have an accent and, like your late grandmother, always has great difficulty with these, usually requiring my intervention.

Unfortunately, NLP is "modern" and "eliminates drag" according to current design-think. What's needed is a shift from thinking about "User Experience" to the real lived human experience when designing interfaces.

chinchilla2020 · 3 years ago
That's the issue. The academic and research UI/UX spaces tend to reject user feedback. The explanation is that "You must train the users to like it".

Deeply unpopular changes that are gaining traction in industry but hated by users include:
1. removal of all buttons from devices in favor of screens
2. voice bots and text bots
3. gesture interfaces

morgante · 3 years ago
> Unfortunately, NLP is "modern" and "eliminates drag" according to current design-think.

Citation needed. Most serious UX designers are well aware of the limitations in chat-based interfaces.

hacker_9 · 3 years ago
It's poor thinking on their part to provide only a talking interface. I don't think I've encountered that personally; there is always a way to use the keypad, which I will always use anyway. Even when they understand my voice, the keypad is just 10x faster. And if you've made the call before, you can type on the keypad before the robot on the other side is done talking.
tiagod · 3 years ago
There are plenty of services in Portugal that only have the damned robots. They're also adding the most infuriating chatbots, which they pretty much force you to go through before you get to a human. Can't wait for the day this is all banned.
passwordoops · 3 years ago
I don't have data, but more and more seem to be going voice-only. Some US-based airlines come to mind, and one of the banks I deal with. It's fun when they ask for my "16 to 20 digit client number".
imbnwa · 3 years ago
My grandfather speaks in a thick, archaic Northern Georgia brogue. I can't imagine anything parsing his speech correctly, since to the untrained ear it sounds like one long, continuously oscillating grumble, sorta like Boomhauer from King of the Hill but deeper and with more bass. You can generally hear him pronounce "shit" pretty clearly, though.

Thoreandan · 3 years ago
English was my mother's 5th language, I can relate to acting as interpreter for family.

I'm reminded of the "Voice Recognition Lift" sketch from the Scottish comedy Burnistoun - https://www.youtube.com/watch?v=TqAu-DDlINs

GrinningFool · 3 years ago
For future reference, when these systems offer you a choice of things to say, like "Pay my bill", "Check my balance", etc., they are usually backed by numeric input too. You can press the number corresponding to the voice option provided; in this example, 1) pay bill, 2) check balance.
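
A toy sketch of why that usually works: the digit and the phrase resolve to the same menu slot, so the keypad bypasses speech recognition entirely (the menu and phrases here are invented):

    MENU = {
        "1": "pay_bill",
        "2": "check_balance",
        "pay my bill": "pay_bill",
        "check my balance": "check_balance",
    }

    def handle(user_input):
        # Unrecognized input falls back to re-reading the menu.
        return MENU.get(user_input.strip().lower(), "repeat_menu")

    print(handle("1"))              # -> pay_bill
    print(handle("Pay my bill"))    # -> pay_bill
    print(handle("PAY mah BEEEL"))  # -> repeat_menu

That last case is the accent failure described upthread; pressing 1 sidesteps it.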
LouisSayers · 3 years ago
When I moved to London I went to the supermarket and asked in my New Zealand accent where the eggs are.

"The what?" The assistant replied, "the eegs" I replied.

"I don't think we sell those" he said.

I switched to an American accent and he was finally able to understand.

mLuby · 3 years ago
Is there a website or CLI that turns the phone tree back into a proper text interface?
hinkley · 3 years ago
Regional accents are terrible that way. Are you sure it was "BEEEEL"? There are places where "bill" is two syllables. I'm surprised you didn't get "PAY mah BEE ILL"
mk_stjames · 3 years ago
This parallels a longstanding critique I have of many modern user interfaces compared to slightly older software in the field of what I would consider 'power user' engineering tools: programs like FE tools, CAD, and other CAE packages. These are the kinds of programs that hit their stride from the late '90s to 2007-ish, when they just slammed tons of toolbars around the edges of the screen, sometimes nesting functions or switching context between workbenches, but ultimately allowing the user to have everything exposed at once if needed. As screen real estate grew with higher-res and larger monitors, the icons got smaller, which was even better for the power user: you could fit even more on the screen if you wanted!

But starting around 2008-2009 I noticed a trend, and it continues to this day: the power-user-oriented layouts started being replaced with more 'friendly', larger-icon, children's-game-looking UI. Intuitive graphical icons were replaced with stylish, monotone shit that looks like a graphic design student's dream but conveys less instant information.

I blame some of this shift on the move in Office to the Ribbon system and developers trying to imitate that, but some software I've seen takes that and does it much worse.

I want all my functions laid out and accessible. Like this blog post mentions, sometimes I don't know what I want to do until I see it. I want to be able to explore the entire space before I know what it all does, maybe.

Using natural language can be very powerful if it augments these systems, but for many tools it isn't a replacement. Often I think new software is designed around looking impressive and fast to upper-level management, at the expense of usability for the power users, who are ultimately the ones who get things done.

klabb3 · 3 years ago
> Intuitive graphical icons were replaced with stylish, monotone shit that looks like a graphic design student's dream, but conveyed less instant information.

Design is the art of signal-to-noise ratio, or in simpler terms, balance and harmony. If you overuse any modality (lines, text, color, nesting), you increase the noise level. If you underutilize a modality (for instance, if your whole UI is monochrome), you reduce your signal bandwidth.

Every trend gets mindless followers who throw the baby out with the bathwater without even realizing it. But trends also bring a grain of gold to the table.

For instance, monotone icons fit many more elements into the same screen real estate than text, and by not using color on them you keep a larger color budget for other elements, which you can use to convey progress, status, or anything else important.

Good uses of monotone icons are text formatting (bold, justify, etc.), display options (column view, tree view, etc.), and toolbars (like in Photoshop or 3D tools). Many apps from the 2010 era overused colored icons, and I’m glad those went away. Some FOSS apps still suffer from that.

urthor · 3 years ago
Very interesting actually.
Def_Os · 3 years ago
Instead of the Ribbon, don't you think it was the rise of tablets that influenced these design changes?
birdstheword5 · 3 years ago
> Often I think new software is designed around looking impressive and fast to upper level management

Bingo, and it also impresses the upper-level management at customer companies, i.e. the ones who make the decision to buy the software without having to use it themselves.

LASR · 3 years ago
100% Agree.

When it comes to SeriousBusiness™, chat bots don't have sufficient constraints to extract specific input from free-form text.

Applications are ultimately delivering value in a specific set of use-cases. Only some of those use-cases can be easily retrofitted with a chat-first interface.

Consider something like Photoshop or Figma. There are so many ways you can issue commands that don't make sense, e.g. "Change the font-size on this color palette."

Any sophisticated app will have these kinds of constraints.

The user interface is not there only to accept input from the user. It also implicitly teaches the user some of the constraints in their apps.

Without that, you're shifting the burden of understanding and maintaining the constraints to the user. And you're left with a (much smarter) version of "Hey Siri, do xyz...".
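
To make that concrete, here's a toy sketch of the kind of constraint check a GUI gets for free by only rendering valid controls, and which a chat layer has to re-derive after parsing (the object types and commands are invented):

    # Which commands make sense for which object types. A GUI encodes
    # this by only showing valid controls; a chat layer must check it
    # after parsing free-form text.
    VALID_COMMANDS = {
        "text_layer": {"set_font_size", "set_color"},
        "color_palette": {"set_color"},
    }

    def apply_command(command, target_type):
        if command not in VALID_COMMANDS.get(target_type, set()):
            return f"'{command}' doesn't apply to a {target_type}."
        return f"Applied {command} to {target_type}."

    print(apply_command("set_font_size", "color_palette"))
    # -> 'set_font_size' doesn't apply to a color_palette.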

This is a common ideation trap I see with PMs at the moment. The underlying problem, again, is that the human doesn't understand the limits of what their apps can do for them. As a second-order problem, even if they did, humans can be bad at describing what they want to do.

kwertyoowiyop · 3 years ago
It could be good, if the interface actually understood more sentences. Usually it’s “tell me in a few words what you want,” which I do; it answers “I’m not sure what you want”; I try again; it gives up and reads off a list of things to say, none of which are what I want; then I start yelling “agent,” “operator,” and various curse words. Or “sales,” on the theory that they’re most likely to connect me to someone they think will give them money.
chinabot · 3 years ago
It can only be good when the computer understands EVERY sentence, every accent, every nuance and understands context and threads.
wrycoder · 3 years ago
When I turn on closed captioning on Zoom, I get a very good transcript of what’s being said in real time. It even backtracks and corrects, after it’s had half a second to reconsider the input.
vonnik · 3 years ago
This is a limiting perspective, inherently pessimistic about LLMs.

The best NLP interfaces will be asking questions to the users, in order to figure out what their real problem is. This is similar to what teachers and therapists do. It is not a lazy interface, but a natural one. The chatbot will step the user through a decision tree in situations where the user doesn't know how to ask questions or frame the problem.

azhenley · 3 years ago
I had some research grants to investigate these "inquisitive interfaces".

Blog post on the initial idea: An inquisitive code editor: Overcome bugs before you know you have them https://austinhenley.com/blog/inquisitivecodeeditor.html

Grant proposal on the bigger idea: Inquisitive Programming Environments as Learning Environments for Novices and Experts https://austinhenley.com/pubs/Henley2021NSFCAREER.pdf

frosted-flakes · 3 years ago
A decision tree. Also known as a phone tree, which has been around for nigh-on 40 years now. You don't need AI for that.
polygamous_bat · 3 years ago
Decision trees are inherently limited in the inputs they can take from the end user (yes/no, etc.). The hope here, as I understand it, is to take free-form input from the user and map it back onto one of the branches of the decision tree.
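
A minimal sketch of that mapping, with a toy similarity score standing in for a real intent classifier or embedding model (the branches and threshold are invented):

    from difflib import SequenceMatcher

    # Free-form input is mapped back onto one of the node's fixed
    # branches instead of requiring an exact match.
    BRANCHES = {"pay my bill": "billing", "check my balance": "balance"}

    def score(utterance, option):
        # Stand-in similarity; a real system would use an intent
        # classifier or embeddings here.
        return SequenceMatcher(None, utterance.lower(), option).ratio()

    def route(utterance):
        best = max(BRANCHES, key=lambda opt: score(utterance, opt))
        if score(utterance, best) < 0.5:
            # Below threshold: ask a clarifying question, don't guess.
            return "Did you mean one of: " + ", ".join(BRANCHES) + "?"
        return BRANCHES[best]

    print(route("I wanna pay mah bill"))  # -> billing

The threshold is the interesting knob: too high and you're back to a rigid phone tree, too low and the bot silently guesses wrong.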
chinchilla2020 · 3 years ago
The worst online MUD I ever played had a tutorial puzzle that was something like this.

"Now, use the mechanisms on the catapult to launch the catapult"

There was no other explanation of what your options were.

I tried: 'pull the lever', 'release the spring', 'fire the catapult', 'pull back the lever', 'use the lever'.

It finally turned out to be something like "release the lever".

The problem with chat is that you are attaching it to a rigid user interface that has a tiny subset of options compared to the breadth of human language. The user has to probe the awful chatbot for these options.

ctoth · 3 years ago
Can't each and every one of these criticisms be also leveled at CLIs? Don't we like CLIs? I notice I am confused.

> The least it could do is intelligently give me a starting point for typing in a prompt. The tyranny of the blank textbox is real.

Seems LLMs would be way better at this sort of thing: "what can I do here?", instead of "do I type help? man? apropos?"

__MatrixMan__ · 3 years ago
We expect interactive help in a CLI, and tab completion. We expect errors that tell us what we've done wrong. These things quickly expose the underlying "shape" of the interface.

When chatting with an AI, you don't know that the bot was 41% sure that this was the right path for you, chosen out of five other options with lower scores. It just takes you where it thinks you want to go without sharing that structural information with you.
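
A sketch of what sharing that structure might look like, with made-up intents and scores standing in for a real classifier's output:

    # Hypothetical ranked intents from a classifier.
    candidates = [
        ("track_order", 0.41),
        ("cancel_order", 0.22),
        ("refund", 0.15),
        ("change_address", 0.12),
        ("contact_support", 0.10),
    ]

    best_intent, best_score = candidates[0]
    if best_score >= 0.8:
        print("Doing:", best_intent)
    else:
        # Expose the "shape" of the interface, the way tab completion
        # and error messages do for a CLI.
        print(f"I think you want '{best_intent}' ({best_score:.0%} sure).")
        print("Other options:", ", ".join(name for name, _ in candidates[1:]))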

marcosdumay · 3 years ago
Hum... If your CLI doesn't have a manual, yes, that applies.
simplyluke · 3 years ago
I think this is why ChatGPT has done so well with engineers. Engineers like command lines. The vast, vast majority of computer users, however, don’t like command lines.

The same thing happened around 2016 with a flash in the pan of chatbot startups, which mostly failed. Building a command line in Messenger/Slack was cool to engineers but not a super viable business.

ChatGPT is a proof of concept of a transformative technology, but it’s not what the product that gets mass adoption will look like.

hunter2_ · 3 years ago
I agree very much, but CLIs are mostly for power users and developers these days. Normal users were on CLIs decades ago but have been primarily on GUIs since then, for essentially the same reasons that TFA argues. I think we can focus on normal users for the purpose of this discussion.
hulitu · 3 years ago
> Can't each and every one of these criticisms be also leveled at CLIs? Don't we like CLIs? I notice I am confused.

It is not about text. In CLIs you have a set of commands. In those interfaces you have some hidden commands which you must trigger with keywords.

hinkley · 3 years ago
Node and npm are notorious for having command-line flags almost none of which are listed in the help, and less than half of which are documented on the website. I'm running node and npm flags in production deployment pipelines right now that I found on Stack Overflow or in the bug database and that exist nowhere on the official websites. And if you look at the historic documentation, the current situation is a factor-of-3 improvement over even the Node 10 or 12 era, which was itself far better than the Node < 1.0 period.

What you say is true of good CLIs, not tire fires like Node.js. So you're both right depending on context.

Pxtl · 3 years ago
While magic-word linguistic interfaces and command lines do share some discoverability problems, my keyboard has never failed because it couldn't understand my accent.
hunter2_ · 3 years ago
If we're comparing CLIs with text chat bots, accents affect neither. I don't think the concept of NLP generally implies voice more than text, and TFA specifically discusses textboxes.