I write about German bureaucracy, and I wholeheartedly agree with your approach.
Most of my guides start with: what this is, who needs to do this, why you need to do this. If you don’t confirm that people are on the right page doing the right thing for the right reasons, they can go really far in the wrong direction.
Most government websites don’t explain any of this. They just tell you what they want from you to complete the part of the task that concerns them. They don’t bother to treat the task as part of a bigger decision. They just assume that you are here because you know what you are doing.
Re your last paragraph, the UK gov website excels at this. There are landing pages which describe the process, then warnings and caveats, then you can actually fill in the forms.
> If you don’t confirm that people are on the right page doing the right thing for the right reasons
Based on this comment I think you would appreciate Every Page Is Page One a lot. The basic idea is that people can and will land on any random page of your docs site, so every page needs to quickly ground them and make it super easy for them to decide whether they're on the right path or not. That's where the book title comes from: literally any page of your site might be page one for a user.
That’s exactly how I design my content! I work really hard on making sure that every entry point leads to the right path. It’s surprisingly challenging.
Thank you for the book recommendation. I will give it a look.
This is a really good approach I wish more technical docs had. My mind is always questioning why things are done a certain way, which leads me to side quests that slow me down.
This fits the way I like to use LLMs: I always ask them for options, then I decide myself which of those options makes the most sense.
Essentially I'm using them as weird magical documentation that can spit out (incomplete but still useful) available options to guide my decision making at any turn.
I like to think of it as the apprentices working for famous artists like Leonardo. The master would draw the outline/sketch, and then the students would fill in the blanks under supervision. Sometimes, the master would steal ideas from the students.
Smells like reinforcement learning in real life. The master sets up the task environment, collects samples from students, picks the best and maybe even augments them. Students watch the master and learn... and the cycle continues.
And then the master becomes a grandmaster (unless entropy explosion occurs).
Not exactly the same… But recently I wanted to pick a library in Python or Julia for simulating differential equations on a GPU. So I asked ChatGPT which libraries exist for this (JAX, CuPy, etc.), then asked it to generate code to solve e.g. the 2D heat equation on a 1000x1000 grid for 100 time steps using each of those frameworks. Then I stepped in, verified that each implementation appeared to do the same thing, and benchmarked their performance on my hardware. Afterwards I had an informed choice of which framework to use for my project, even though ChatGPT gave me the benchmark code instead of the answer.
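For concreteness, here's a minimal CPU-only sketch of the kind of benchmark code described above (my own illustration in plain NumPy, not the actual ChatGPT output): an explicit finite-difference step for the 2D heat equation with periodic boundaries, timed over a fixed number of steps.

```python
import time
import numpy as np

def heat_step(u, alpha=0.1):
    # One explicit finite-difference step of the 2D heat equation
    # du/dt = alpha * (d2u/dx2 + d2u/dy2), unit grid spacing,
    # periodic boundaries via np.roll. Stable for alpha <= 0.25.
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
           np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4 * u)
    return u + alpha * lap

def benchmark(n=1000, steps=100):
    # Point heat source in the centre of an n x n grid.
    u = np.zeros((n, n))
    u[n // 2, n // 2] = 1.0
    t0 = time.perf_counter()
    for _ in range(steps):
        u = heat_step(u)
    return u, time.perf_counter() - t0

u, elapsed = benchmark()
```

Swapping `numpy` for `cupy` or `jax.numpy` is roughly how you'd compare frameworks here, since both expose a largely NumPy-compatible array API, which is what makes this kind of "generate the same code for each option" comparison cheap to do.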
Here's one: if you want it to rewrite something or show you a better way to say something you've already written, just ask for different options. I usually find that mixing what I wanted to write with parts of its suggestions gives me a great result.
I love this. Short, to the point, and insightful. I’m one of those ‘big picture’ type of thinkers. It’s really important to me to not just know the how behind something, but also the why and how it relates to the larger context. I encourage our developers to make liberal use of the Description field in Jira stories and tasks and provide an overview of why we are doing something and how it relates to the bigger picture. Some of them don’t like it, I guess, but some are really digging it. I’m happy to provide the big picture behind the project, and they are happy with the added independence that understanding the big picture gives them.
It seems like it's a lot harder to measure whether your docs are helping people make good decisions than it is to measure whether they are helping people successfully accomplish a task. I think we optimize for task-based/procedural docs because the business needs us to prove our value, and there is a need for this type of documentation, and there are lots of ways to measure and report on it over short timelines. But answering the question of, "Did this docset help someone build the right thing in the right way", I mean...organizations struggle to answer this question about their own products, abstracting that to try and measure the effectiveness of your docs seems super fuzzy.
Which is not to say you can't write docs that do this, just that it seems very hard to use numbers to prove that you have done so. I definitely think I could rank how well different docsets support users who need to make decisions, and I could offer up explanations to support my reasoning, but I don't know how to quantify that for the business.
I wonder how the structure of a docset that is designed to support decisions differs from that of a docset that supports tasks. I expect you'll have the same main categories (conceptual, reference, guides) but maybe a lot more conceptual docs, and more space dedicated to contextualizing the concepts. I would expect to see topics become more interdependent, more cross-references, etc.
Interesting that your first thought here is not, oh, how can I use this to improve the docs I am writing, but rather, how can I prove that this improves the docs I am writing. You seem to live in a tough environment.
You're getting a taste of the world that a lot of professional technical writers live in. Everyone seems to intuitively understand that you need docs, and that if you don't invest in docs it probably will be bad for the business, yet at the same time it's hard to concretely show business value. So technical writers are incessantly asked to prove their value, even though the managers subconsciously know that they're important for some reason. Over the years I have come to believe that docs are important simply because it's a primary mechanism for sharing knowledge across the company and to customers. Michelle Irvine has been doing great work quantifying this: https://cloud.google.com/blog/products/devops-sre/deep-dive-...
You're not wrong. Business is a tough environment.
At a gut level the post seems sensible to me, and it does generate a lot of ideas about how I can make my own docs better. That's not enough, though, if I want the folks who think about docs at my org to change their approach.
As the OP states in several other comments, most writers and organizations learn to prioritize task-based documentation. If we want to adopt a better way of doing things, we need to be able to communicate why it's better. It's no different in other disciplines.
In my own personal projects, where I'm free to do whatever I think is best for my users, I will probably adopt "support decisions" as the foundation of my docs strategy.
In the spirit of working with integrity, if I feel that "support decisions" is the best approach for my own projects, then I probably have a duty to bring this strategy into the docs I do for work.
Luckily I don't have short-sighted managers breathing down my neck, but if I had to convince people at work I would go about it like this:
* Explain the logic of the strategy. Supporting decisions just seems to make sense and ring true. The tasks will still get documented, but tasks are just a subset of decision support.
* And then I would provide a long list of examples from support tickets, chat room discussions, etc. where lack of decision support seemed to be the problem. For intellectual honesty I would show the complete list of docs-related support tickets (for example) and then the subset that were related to supporting decisions. If it's a non-trivial percent (maybe 25%) then we should really look into "decision support" more.
* Last, I might provide examples that the stakeholders themselves have faced in their own work. "Remember how difficult it was to decide what CMS to switch to??"
Thinking in Bets has been one of the most useful books to how I approach software engineering. It’s not even about code, just how to make decisions effectively in limited information environments.
Love that book. Such a powerful idea to phrase your predictions in terms of percentages rather than absolutes. Apparently the Super Bowl anecdote is controversial though? I.e. the conclusions to draw from that anecdote are very debatable.
Well “fuzzy logic” is kind of a dated term. I don’t think it has been used in software development for twenty years.
TL;DR: it basically means not having “hard and fast” boundaries, and instead having ranges of target values and “rules” for determining target states, as opposed to “milestones,” so targets are determined on a “one at a time” basis.
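For anyone who hasn't met the term: classical fuzzy logic replaces hard true/false boundaries with degrees of membership between 0 and 1, which maps well onto "ranges of target values" rather than milestones. A toy sketch of the idea (my own illustration, not from the comment above; the response-time numbers are made up):

```python
def triangular(x, lo, mid, hi):
    """Degree (0..1) to which x belongs to a fuzzy set whose
    membership ramps up from lo, peaks at mid, and falls to hi."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= mid:
        return (x - lo) / (mid - lo)
    return (hi - x) / (hi - mid)

# A target is a range with a preference, not a hard milestone:
# e.g. a response-time goal fully met at 100 ms, completely
# missed beyond 300 ms. A measurement of 180 ms is partly met.
goal_met = triangular(180, 0, 100, 300)  # 0.6, i.e. 60% satisfied
```

The point of the analogy is that a target state is then a rule over these degrees ("ship when all goals are at least 0.7 satisfied") rather than a binary checklist.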
I've advocated something similar. Don't just describe the tool at a high level (people often seem to go into marketing mode) - tell the story of what problem it was designed to solve and the trade-offs you made along the way. That makes it much easier to place the tool and its available options/modes/etc. in context and quickly decide whether it's a good fit for you.
> Most government websites don’t explain any of this. They just tell you what they want from you to complete the part of the task that concerns them.
E.g. this is the Google result for renewing your driving licence: https://www.gov.uk/renew-driving-licence - click Start Now and you'll see what I mean.
As a "power user" this can sometimes feel like it gets in the way, but I understand they need to consider everyone.
> Essentially I'm using them as weird magical documentation that can spit out (incomplete but still useful) available options to guide my decision making at any turn.
"Options for JavaScript to turn a JPEG into a vector SVG"
Result: https://gist.github.com/simonw/d2e724c357786371d7cc4b5b5bb87...
I ended up building this: https://tools.simonwillison.net/svg-render
More details here: https://simonwillison.net/2024/Oct/6/svg-to-jpg-png/
I feel that we need to have a "fuzzy logic" approach to our work.
However, that works best, when the engineer is somewhat experienced.
If they are inexperienced (even if very skilled and intelligent), we need to be a lot more dictatorial.
The talk is called "Design in Practice" but it's really about making decisions.