I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about and how I write about it. This sort of curation can only come from a thinking, feeling human being.
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone bothered to write down, but I actually go out in the real world and ask questions.
I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do all of this have an almost insulting understanding of the jobs they are trying to replace.
The problem is that so many things have been monopolized or oligopolized by equally mediocre actors that quality ultimately no longer matters; it's not like people have any options.
You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.
Apply the same for software (have you seen how bad tech is lately?) or basically any kind of vertical with a nontrivial barrier to entry where someone can't just say "this sucks and I'm gonna build a better one in a weekend".
You are right. We are seeing a transition from the user as a customer to the user as a resource. It's almost like a cartel of shitty treatment.
I don't work for the public transit company; I introduce immigrants to Berlin's public transit. To answer the broader question, good documentation is one of the many little things that affect how you feel about a company. The BVG clearly cares about that, because their marketing department is famously competent. Good documentation also means that fewer people will queue at their service centre and waste an employee's time. Documentation is the cheaper form of customer service.
Besides, how people feel about the public transit company does matter, because their funding is partly a political question. No one will come to defend a much-hated, customer-hostile service.
> You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.
Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.
"well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it."
First, I understand what you're saying and generally agree with it, in the sense that that is how the organization will "experience" it.
However, the answer to "will it lead to a noticeable drop in revenue" is actually yes. The problem is that it won't lead to a traceable drop in revenue. You may see the numbers go down. But the numbers don't come with labels why. You may go out and ask users why they are using your service less, but people are generally very terrible at explaining why they do anything, and few of them will be able to tell you "your documentation is just terrible and everything confuses me". They'll tell you a variety of cognitively available stories, like the place is dirty or crowded or loud or the vending machines are always broken, but they're terrible at identifying the real root causes.
This sort of thing is why not only is everything enshittifying, but even as the entire world enshittifies, everybody's metrics are going up up up. It takes leadership willing to go against the numbers a bit to say, yes, we will be better off in the long term if we provide quality documentation, yes, we will be better off in the long term if we use screws that don't rust after six months, yes, we will be better off in the long term if we don't take the cheapest bidder every single time for every single thing in our product but put a bit of extra money in the right place. Otherwise you just get enshittification-by-numbers until you eventually go under and get outcompeted and can't figure out why because all your numbers just kept going up.
Also consider that while the OP looks like a skilled, experienced individual, all too often the documentation is not being written by someone with that context, but rather by someone unskilled and with little real empathy. Quality is quite often very poor, to the point where, as shitty as genai can be, it is still an improvement. Bad UX and writing outnumber the good. The successes of big companies and the best-known government services are the exception.
That's one way to frame it. Another one is that sometimes people are stuck in a situation where all the options that come to their mind have repulsive consequences.
As always, some consequences are deemed more immediate and others will seem more remote. And often the incentives can be quite at odds between short-term and long-term expectations.
>this sucks and I'm gonna build a better one in a weekend
Hey, this is me looking at the world this morning. Bear with me, the bright new harmonious world should be there on Monday. ;)
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is his actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
I like to think of coding as gathering knowledge about some problem domain. Everything a team learns about the problem becomes encoded in changes to the program source. The program is only a manifestation of human minds. Now, if programmers are largely replaced with LLMs, the team is no longer gathering that knowledge; there is no intelligent entity whose understanding of the problem increases over time, who can help drive future changes and make good business decisions.
Well said. I try to capture and express this same sentiment to others through the following expression:
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. Eg. Technical writing needs soul, User interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
Your ability to articulate yourself cleanly comes across in this post in a way that I feel AI is striving for and never quite reaching.
I completely agree that the ambitions of AI proponents to replace workers are insulting. You hit the nail on the head with pointing out that we simply don't write everything down. And the more common-sense or well-known something is, the less likely it is to be written down, yet the more likely an AI might need it to align itself properly.
I like the cut o' your jib. The local public transit guide you write, is that for work or for your own knowledge base? I'm curious how you're organizing this while keeping the human touch.
I'm exploring ways to organize my Obsidian vault such that it can be shared with friends, but not the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share with others.
In every single discussion AI-sceptics claim "but AI cannot make a Michelin-star five-course gourmet culinary experience" while completely ignoring the fact that most people are perfectly happy with McDonald's, as evidenced by its tremendous economic and cultural success, and the loudest complaint with the latter is the price, not the quality.
Why shouldn't AI be able to sufficiently model all of this in the not-so-far future? Why shouldn't it have sufficient access to new data and sensors to be able to collect information on its own, or at least the system that feeds it?
Not from a moral perspective of course, but as a technical possibility. And the Overton window has shifted so far already that the moral aspect might align soon, too.
IMO there is an entirely different problem, one that's basically never going to go away but could be solved right now, easily. And whatever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
> Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
> Why shouldn't AI be able to sufficiently model all of this
I call it the banana bread problem.
To curate a list of the best cafés in your city, someone must eventually go out and try a few of them. A human being with taste honed by years of sensory experiences will have to order a coffee, sit down, appreciate the vibe, and taste the banana bread.
At some point, you need someone to go out in the world and feel things. A machine that cannot feel will never be a good curator of human experiences.
See also: librarians, archivists, historians, film critics, doctors, lawyers, docents. The déformation professionnelle of our industry is to see the world in terms of information storage, processing, and retrieval. For these fields and many others, this is like confusing a nailgun for a roofer. It misses the essence of the work.
The hard part is the slow, human work of noticing confusion, earning trust, asking the right follow-up questions, and realizing that what users say they need and what they actually struggle with are often different things.
As a counterpoint, the very worst "documentation" (scare quotes intended) I've ever seen was when I worked at IBM. We were all required to participate in a corporate training about IBM's Watson coding assistant. (We weren't allowed to use external AIs in our work.)
As an exercise, one of my colleagues asked the coding assistant to write documentation for a Python source file I'd written for the QA team. This code implemented a concept of a "test suite", which was a CSV file listing a collection of "test sets". Each test set was a CSV file listing any number of individual tests.
The code was straightforward, easy to read and well-commented. There was an outer loop to read each line of the test suite and get the filename of a test set, and an inner loop to read each line of the test set and run the test.
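To give a sense of how simple it was, the structure was roughly this (a from-memory sketch with made-up names, not the actual code):

    import csv
    from pathlib import Path

    def read_rows(path: Path) -> list[list[str]]:
        """Return the non-empty rows of a CSV file."""
        with path.open(newline="") as f:
            return [row for row in csv.reader(f) if row]

    def run_test(test_row: list[str]) -> bool:
        """Run one test described by a row of a test set (details omitted here)."""
        ...

    def run_suite(suite_path: Path) -> None:
        """Run every test in every test set listed in the test suite CSV."""
        # Outer loop: each row of the test suite names a test-set CSV file.
        for suite_row in read_rows(suite_path):
            test_set_path = suite_path.parent / suite_row[0]
            # Inner loop: each row of the test set describes one test to run.
            for test_row in read_rows(test_set_path):
                run_test(test_row)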
The coding assistant hallucinated away the nested loop and just described the outer loop as going through a test suite and running each test.
There were a number of small helper functions with docstrings and comments and type hints. (We type hinted everything and used mypy and other tools to enforce this.)
The assistant wrote its own "documentation" for each of these functions in this form:
"The 'foo' function takes a 'bar' parameter as input and returns a 'baz'"
Dude, anyone reading the code could have told you that!
All of this "documentation" was lumped together in a massive wall of text at the top of the source file. So:
When you're reading the docs, you're not reading the code.
When you're reading the code, you're not reading the docs.
Even worse, whenever someone updates the actual code and its internal documentation, they are unlikely to update the generated "documentation". So it started out bad and would get worse over time.
Note that this Python source file didn't implement an API where an external user might want a concise summary of each API function. It was an internal module where anyone working on it would go to the actual code to understand it.
The map is not the territory! Documentation is a helpful, curated simplification of the real thing. What to include and what to leave out depends on the audience.
But if you treat "write documentation" as a box-ticking exercise, a line that needs to turn green on your compliance report, then it can just be whatever.
I don't write for a living, but I do consider communication / communicating a hobby of sorts. My observations - that perhaps you can confirm or refute - are:
- Most people don't communicate as thoroughly and completely - in writing and verbally - as they think they do. Very often there is what I call "assumptive communication". That is, the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - as it's done all the time - but not always. The resolution doesn't change the fact that there was ambiguity at the root.
Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, how often you assume by using the default of "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at a level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms' heuristics is from Frank Luntz*:
"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that, and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but I read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
LLMs are good at writing long pages of meaningless words. If you have a number of pages to turn in with your writing assignment and you've only written 3 sentences they will help you produce a low quality result that will pass the requirements.
Spot on! I think LLMs can help greatly in quickly putting that knowledge in writing, including reviewing written materials for hidden prerequisite assumptions that readers might not be aware of. They can also help newer hires learn to write more clearly. LLMs are clearly useful for increasing productivity, but managers who think they are even close to ready to replace large sections of practically any workforce are delusional.
I think you fundamentally misunderstand how the technology can be used well.
If you are in charge of a herd of bots that are following a prompt scaffolding in order to automate a work product that meets 90% of the quality of the pure human output you produce, that gives you a starting point with only 10% of the work to be done. I'd hazard a guess that if you spent 6 months crafting a prompt scaffold you could reach 99% of your own quality, with the odd outliers here and there.
The first person or company to do that well then has an automation framework, and they can suddenly achieve 10x or 100x the output with a nominal cost in operating the AI. They can ensure that each and every work product is lovingly finished and artisanally handcrafted, go the extra mile, and maybe reach 8x to 80x output with a QA loss.
In order to do 8-80x one expert's output, you might need to hire a bunch of people to do segmented tasks - some to do interviews, build relationships, the other things that require in person socialization. Or, maybe AI can identify commonalities and do good enough at predicting a plausible enough model that anyone paying for what you do will be satisfied with the 90% as good AI product but without that personal touch, and as soon as an AI centric firm decides to eat your lunch, your human oriented edge is gone. If it comes down to beancounting, AI is going to win.
I don't think there's anything that doesn't require physically interacting with the world that isn't susceptible to significant disruption, from augmentation to outright replacement, depending on the cost of tailoring a model to the tasks.
For valuable enough work, companies will pay the millions to fine-tune frontier models, either through OpenAI or open source options like Kimi or DeepSeek, and those models will give those companies an edge over the competition.
I love human customer service, especially when it's someone who's competent, enjoys what they do, and actually gives a shit. Those people are awesome - but they're not necessary, and the cost of not having them is less than the cost of maintaining a big team of customer service agents. If a vendor tells a big company that they can replace 40k service agents being paid ~$3.2 billion a year with a few datacenters, custom AI models, AI IT and support staff, and a totally automated customer service system for $100 million a year, that might well be worth the reputation hit and savings. None of the AI will be able to match the top 20% of human service agents in the edge cases, and there will be a new set of problems that come from customer and AI conflict, etc.
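(For scale, taking those figures at face value: $3.2 billion across 40k agents works out to about $80,000 per agent per year, and the hypothetical $100 million system would cost roughly 3% of that.)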
Even so. If your job depends on processing information - even information in a deeply human, emotional, psychologically nuanced and complex context - it's susceptible to automation, because the ones with the money are happy with "good enough." AI just has to be good enough to make more money than the human work it supplants, and frontier models are far past that threshold.
I could say I make my living with various forms of communication.
I am not worried about AI replacing me 1:1 communication wise. What is going to happen is the structure around me that gives my current communication skill set value is going to change so much and be optimized to a degree that my current skill set is no longer going to have much value in the new structures that arise. I will have to figure out how to fit into these new structures and it surely won't be doing the same thing I am doing now. That seems so obvious.
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which end up being inspiration rather than copy-paste).
…says every charlatan who wanted to keep their position. I’m not saying you’re a charlatan but you are likely overestimating your own contributions at work. Your comment about feeding on data - AI can read faster than you can by orders of magnitude. You cannot compete.
"you are likely overestimating your own contributions at work"
Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?
>you are likely overestimating your own contributions at work
That's the logical fallacy anyone is going to be pushed toward as soon as their individual worth in an intrinsically collective endeavor is being judged.
People on the lowest incomes, who would not be able to integrate into society without direct social funds, will be seen as parasites by some who are wealthier, just as the ultra-rich will be considered parasites by less wealthy people.
The kind of documentation no one reads, that is just there to please some manager or meet some compliance requirement. These are, unfortunately, the most common kind I see, by volume. Usually, they are named something like QQF-FFT-44388-IssueD.doc and they are completely outdated with regard to the thing they document, despite having seen several revisions, as evidenced by the inconsistent style.
Common features are:
- A glossary that describes terms that don't need describing, such as CPU or RAM, but not the ambiguous and domain-specific terms, of which there are many
- References to documents you don't have access to
- UML diagrams, not matching the code of course
- Signatures by people who left the project long ago and are nowhere to be seen
- A bunch of screenshots, all with different UIs taken at different stages of development, which would be of great value to archeologists
- Wildly inconsistent formatting; some people realize that Word has styles and can generate a table of contents, others don't, and few care
Of course, no one reads them, besides maybe a depressive QA manager.
Not sure why the /s here, it feels like documentation being read by LLMs is an important part of AI assisted dev, and it's entirely valid for that documentation to be in part generated by the LLM too.
The best tech writers I have worked with don’t merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and to extract what they need from 1-1 sit down interviews with engineering SMEs. I don’t see AI doing either of those things well.
> They act as stand-ins for actual users and will flag all sorts of usability problems.
I think everyone on the team should get involved in this kind of feedback because raw first impressions on new content (which you can only experience once, and will be somewhat similar to impatient new users) is super valuable.
I remember as a dev flagging some tech marketing copy aimed at non-devs as confusing and being told by a manager not to give any more feedback like that because I wasn't in marketing... If your own team that's familiar with your product is a little confused, you can probably x10 that confusion for outside users, and multiply that again if a dev is confused by tech content aimed at non-devs.
I find it really common as well that you get non-tech people writing about tech topics for marketing and landing pages, and because they only have a surface-level understanding of the tech, the text becomes really vague with little meaning.
And you'll get lots of devs and other people on the team agreeing in secret that e.g. the product homepage content isn't great, but they're scared to say anything because they feel they have to stay inside their bubble and there isn't a culture of sharing feedback like that.
AI may never be able to replace the best tech writers, or even pretty good tech writers.
But today's AI might do better than the average tech writer. AI might be able to generate reasonably usable, if mediocre, technical documentation based on a halfheartedly updated wiki and the README files and comments scattered in the developers' code base. A lot of projects don't just have poor technical documentation, they have no technical documentation.
Exactly. My team's technical documentation is written (in English) by people who don't speak English natively, and it's awful, barely comprehensible many times because these people don't understand articles ("the" and "a") very well and constantly omit them or use the wrong ones. And aside from the poor English, the documentation itself is just bad.
AI would do a great job of fixing their writing, but they don't want to use it, because it's not an official part of "the process".
>and comments scattered in the developers' code base
I'm not so sure about this one. Most devs I've worked with don't use comments.
In my experience, great tech writers quietly function as a kind of usability radar. They're often the first people to notice that a workflow is confusing.
Realistically, PMs incentives are often aligned elsewhere.
But even if a PM cares about UX, they are often not in a good position to spot problems with designs and flows they are closely involved in and intimately familiar with.
Having someone else with a special perspective can be very useful, even if their job provides other beneficial functions, too. Using this "resource" is the job of the PM.
Yes, product managers and product owners should also be looking for usability problems. That said, the docs people are often going through procedures step by step, double-checking things, and they will often hit something that the others missed.
I take your point, but a good PM will have been inside the decision-making process and carry embedded assumptions about how things should work, so they'll miss things. An outside eye - whether it's QA, user-testing, (as here) the technical writer, or even asking someone from a different team to take an informal look - is an essential part of designing anything to be used by humans.
> I don’t see AI doing either of those things well.
I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.
Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or might possibly never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful.
I don’t know that it can be well-defined. It might be asking something akin to “What makes something human?” For usability, one needs a sense of what defines “user pain” and what defines “reasonableness.” No product is perfect. They all have usability problems at some level. The best usability experts, and tech writers who do this well, have an intuition for user priorities and an ability to identify and differentiate large usability problems from small ones.
A good tech writer knows why something matters in context: who is using this under time pressure, what they're afraid of breaking, what happens if they get it wrong.
> but can't quite put my finger on what exactly it's missing.
We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.
Current AI writing is slightly incoherent. It's subtle, but the high level flow/direction of the writing meanders so things will sometimes seem a bit non-sequitur or contradictory.
It has no sense of truth or value. You need to check what it wrote and you need to tell it what’s important to a human. It’ll give you the average, but misses the insight.
Also true that most tech writers are bad. And companies aren't going to spend >$200k/year on a tech writer until they hit tens of millions in revenue. So AI fills the gap.
As a horror story, our docs team didn't understand that having correct installation links should be one of their top priorities. Obviously, if a potential customer can't install the product, they'll assume it's bs and try to find an alternative. It's so much more important than, e.g., grammar in the middle of some guide.
Yeah. AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement. The companies with the best docs will absolutely still have tech writers, just with some AI assistance.
Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)
In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.
I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.
>AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement.
That's fine, though: as long as the AI's output is better than "completely and utterly useless", or even "nonexistent", it'll be an improvement in many places.
The best tech writers I've known have been more like anthropologists, bridging communication between product management, engineers, and users. With this perspective they often give feedback that makes the product better.
> bridging communication between product management, engineers, and users.
Thank you for putting this so eloquently into words. At my work (FAANG) tech writers are being let go and their responsibilities are being pushed on developers, who are now supposed to “use AI” to maintain customer facing documentation.
Is this the promised land? It sure doesn't feel like it.
AI can help with synthesis once those insights exist, but it doesn't naturally occupy that liminal space between groups, or sense the cultural and organizational gaps.
And here I am, 2026, and one of my purposes for this year is to learn to write better, communicate more fluently, and convey my ideas in a more attractive way.
I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.
In my humble opinion we will be losing that from people: the upscaling of skills will be lost for sure, but the human upscaling is the real loss.
It is such a challenge! As English is not my first language, I have to do some mental gymnastics to really convey my thoughts. 'On Writing Well' is on my list to read; it is supposed to help.
I thought it was saying "a letter to those who fired tech writers because they were caught using AI," not "a letter to those who fired tech writers to replace them with AI."
The whole article felt imprecise with language. To be honest, it made me feel LESS confident in human writers, not more.
I was having flashbacks to all of the confusing docs I've encountered over the years, tightly controlled by teams of bad writers promoted from random positions within the company, or coming from outside but having a poor understanding of our tech or how to write well.
I'm writing this as someone who majored in English Lit and CS, taught writing to PhD candidates for several years, and maintains most of my own company's documentation.
Given the steady parade of headlines on HN about workers supposedly being replaced by AI, it seems fairly self-evident that the first interpretation is the less likely of the two.
Nicely written (which, I guess, is sort of the point).
See Duolingo :)
You know, just like the human it'd replace.
Deleted Comment
You may enjoy this story about her work:
https://www.folklore.org/Inside_Macintosh.html
Nonetheless, I live from that work. If you are correct, there's a fair bit of money on the table for you.
And LLMs are really good at reading your docs to help someone, so I make sure to add more concrete examples to them.
True, but it raises another question: what were your product managers doing in the first place if the tech writer is the one finding usability problems?
See my other comment - I'm afraid quality only matters if there is healthy competition which isn't the case for many verticals: https://news.ycombinator.com/item?id=46631038
[insert Pawn Stars meme]: "GOOD docs? Sorry, best I can do is 'slightly better than useless.'"
Yep, and reading you will feel less boring.
The uniform style of LLMs gets old fast and I wouldn't be surprised if it were a fundamental flaw due to how they work.
And it's not even certain that the speed gains from using LLMs make up for the skill loss in the long term.
They have AI finding reasons to reject totally valid requests.
They are arguing in court that this is a software bug and that they should not be liable.
That will be the standard excuse. I hope it does not work.