- https://www.industrialempathy.com/posts/design-docs-at-googl...
- https://github.com/rust-lang/rfcs
- https://github.com/kubernetes/enhancements/blob/master/keps/...
- https://blog.pragmaticengineer.com/rfcs-and-design-docs/
Hint: tailor the process and template structure to your org's size, maturity, and needs. Don't try to blindly mimic others.
> ... sketching out that API is usually a good idea. In most cases, however, one should withstand the temptation to copy-paste formal interface or data definitions into the doc as these are often verbose, contain unnecessary detail and quickly get out of date.
Using R Markdown (or any Turing-complete documentation system), it's possible to introduce demarcations that let the source code snippets be the literal source of truth:
// DOCGEN-BEGIN:API_CLASS_NAME
/**
* <description>
*
* @param arg <description>
* @return <description>
*/
uint8_t method( type arg );
// DOCGEN-ENDED:API_CLASS_NAME
Use a GPT to implement a parser for the snippets in a few minutes; a rough sketch follows the example below. Then invoke the function from the living document for a given source file, such as: `r#
snippets <- parse.snippets( "relative/path/to/ClassName.hpp" );
docs <- parse.api( snippets[[ "API_CLASS_NAME" ]] );
export.api( docs );
`
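As a minimal sketch (not the actual implementation), and assuming the DOCGEN marker format shown above with matched BEGIN/ENDED pairs, parse.snippets might look something like:

# Returns a named list mapping each snippet name to the lines
# found between its BEGIN and ENDED markers.
parse.snippets <- function( path ) {
  lines <- readLines( path )

  # Locate the paired demarcation comments; assumes they appear
  # in matched order with no nesting.
  begins <- grep( "DOCGEN-BEGIN:", lines, fixed = TRUE )
  ends <- grep( "DOCGEN-ENDED:", lines, fixed = TRUE )

  snippets <- list()

  for( i in seq_along( begins ) ) {
    # The snippet name follows the colon in the BEGIN marker.
    name <- sub( ".*DOCGEN-BEGIN:", "", lines[ begins[ i ] ] )
    snippets[[ name ]] <- lines[ ( begins[ i ] + 1 ):( ends[ i ] - 1 ) ]
  }

  snippets
}

Each snippet's name keys into the returned list, which is how snippets[[ "API_CLASS_NAME" ]] resolves above.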
The documentation now can never go stale with respect to the source code. If the comments are too verbose, simplify them and capture implementation details elsewhere (e.g., as inline comments).

In one system I helped develop, we were asked to document which messages of a standard protocol were supported. The only place this knowledge existed was in a map in the code base. So instead of copy/pasting that knowledge, we have:
MessageMap MESSAGE_MAP = {
// DOCGEN-BEGIN:SUPPORTED_MESSAGES
{ MessageType1, create<MessageClassName1>() },
{ MessageType2, create<MessageClassName2>() },
...
// DOCGEN-ENDED:SUPPORTED_MESSAGES
};
And something like: `r#
snippets <- parse.snippets( "relative/path/to/MessageMap.hpp" );
df <- parse.messages( snippets[[ "SUPPORTED_MESSAGES" ]] );
export.table( df );
`
This snippet is parsed into an R data frame. Another function converts data frames into Markdown tables. Changing the map triggers a pipeline that rebuilds the documentation, ensuring that the documentation is always correct with respect to the code.

If a future developer introduces an unparseable change, or files are moved, or the R code breaks, the documentation build pipeline fails and someone must investigate before the change lands on main.
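As a rough sketch (the function names are taken from the example above; everything else is an assumption), parse.messages and export.table could look something like this, with knitr::kable handling the data-frame-to-Markdown conversion:

parse.messages <- function( snippet ) {
  # Keep only the map entry lines, skipping the DOCGEN markers.
  entries <- grep( "create<", snippet, value = TRUE, fixed = TRUE )

  # Extract the message type and handler class from each
  # "{ Type, create<Class>() }," entry.
  types <- sub( ".*\\{\\s*(\\w+),.*", "\\1", entries )
  classes <- sub( ".*create<(\\w+)>.*", "\\1", entries )

  data.frame( Message = types, Handler = classes )
}

export.table <- function( df ) {
  # knitr renders a data frame as a pipe-style Markdown table.
  knitr::kable( df, format = "markdown" )
}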
Shameless self-plug: The R Markdown documentation system we use is my FOSS application, KeenWrite; however, pandoc and knitr are equally capable.
But this style of custom elements requires successful JavaScript execution to produce that "HTML" document. Just like Markdown requires some parser program to turn it into HTML. It's not really a full HTML document.
It's a good idea. It would just be a better one to write the custom elements as wrappers around actual HTML elements, as https://blog.jim-nielsen.com/2023/html-web-components-an-exa... shows, instead of doing it SPA-style and requiring perfect JS execution for anything to render properly.
HTML markup really isn't that heavy. The avoidance of it seems mostly to be because it's considered "old", and "old" is bad, or at least not useful on a resume. But it's old because it's so good it has stuck around for a long time. Only machine-generated HTML is bulky; hand-written HTML can be just as neat and readable as any Markdown.
pandoc has an extension for this:
https://pandoc.org/demo/example33/8.18-divs-and-spans.html
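For example, with pandoc's fenced_divs and bracketed_spans extensions enabled:

::: {.warning}
This paragraph becomes a div with class "warning".
:::

And [this phrase]{.keyword} becomes a span with class "keyword".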
KeenWrite, my (R) Markdown editor, supports pandoc annotations:
https://youtu.be/7icc4oZB2I4?list=PLB-WIt1cZYLm1MMx2FBG9KWzP...
> Just like Markdown requires some parser program to turn it into HTML.
Or XHTML, which is XML, which can then be transformed into TeX macros and typeset into a PDF file with a theme (much as CSS styles HTML).
https://youtu.be/3QpX70O5S30?list=PLB-WIt1cZYLm1MMx2FBG9KWzP...
This separates content from presentation, allowing each to vary independently.