- splitting up a package into smaller compilation units to speed up builds, since LLVM doesn't handle large compilation units well, e.g. I'm assuming this is why uv is split up
- splitting semver concerns (easiest if different parts of the API are versioned independently), e.g. one hope for splitting out `serde_core` is to allow breaking changes to `serde_derive`
- splitting out optional parts to improve build time (`features` can also be used but usability isn't as great), e.g. `clap` (cli parser) and `clap_complete` (completion support)
- local development tools, e.g. https://github.com/matklad/cargo-xtask
- less commonly run tests or benches that have expensive dependencies, e.g. `toml` splits out a benchmark package for comparing performance of `toml`, `toml_edit`, an old version of `toml`, and `serde_json`
- proc-macros generally have a regular library they generate code against
- lower development overhead from developing them together, e.g. anstyle has a lot of related packages and I wouldn't want to maintain a repo per package
Packages also let you mix a library with bins and examples. We talked about multi-purpose packages vs libraries at https://blog.rust-lang.org/inside-rust/2024/02/13/this-devel... (this was before workspace publishing was supported).
Something this doesn't mention is that Cargo workspaces allow for sharing parts of the manifest through "workspace inheritance", including package metadata like the repo, dependency version requirements, lint configuration, or compilation profiles.
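As a sketch of what that inheritance looks like (the names and values here are made up for illustration, not taken from any particular project):

```toml
# workspace root Cargo.toml
[workspace]
members = ["crates/*"]

[workspace.package]
edition = "2021"
repository = "https://example.com/example-repo"  # hypothetical

[workspace.dependencies]
serde = "1"

[workspace.lints.rust]
unsafe_code = "deny"
```

```toml
# a member crate's Cargo.toml
[package]
name = "member-crate"  # hypothetical
edition.workspace = true
repository.workspace = true

[dependencies]
serde.workspace = true

[lints]
workspace = true
```

Each member opts in with `field.workspace = true`, so bumping an edition or a shared dependency version happens in one place.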
Unfortunately, some markdown implementations don't handle this well. We were looking at using code-fence-like syntax in Rust and we were worried about people knowing how to embed it in markdown code fences, but bad implementations were the ultimate deal breaker. We switched to `---` instead, making basic cases look like the YAML stream separators used for frontmatter.
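For context, a rough sketch of what that `---` frontmatter looks like in a cargo script (from memory; see the cargo docs for the exact rules):

```rust
#!/usr/bin/env cargo
---
[dependencies]
clap = "4"
---

fn main() {
    println!("hello from a cargo script");
}
```

The manifest lives between the `---` markers at the top of the file, and the rest is an ordinary Rust program.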
Maintainer of Winnow here. I wish there were more details on this. I switched `toml` / `toml_edit` to being hand-written and got some performance boost, but I feel like the other changes listed would have dwarfed the gains I got. I wonder if there were suboptimal patterns they employed that we could find ways to better guide people away from.
For anyone going on about "hand written is always better", I disagree. Parser combinators offer a great way to map things back to grammar definitions, which makes them much easier to maintain. Only in extreme circumstances of features and/or performance does going hand-written seem worth it to me.
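To illustrate the "maps back to the grammar" point, here's a minimal sketch of the parser-combinator idea in plain Rust. This is not winnow's actual API, just the underlying shape: a parser is a function from input to an optional (rest, value) pair, and combinators compose them the way grammar rules compose.

```rust
// A parser returns the unconsumed input plus the parsed value, or None on failure.
type PResult<'a, T> = Option<(&'a str, T)>;

// Terminal rule: `digit+`, parsed as an integer.
fn digits(input: &str) -> PResult<'_, u64> {
    let end = input
        .find(|c: char| !c.is_ascii_digit())
        .unwrap_or(input.len());
    if end == 0 {
        return None;
    }
    Some((&input[end..], input[..end].parse().ok()?))
}

// Terminal rule: a literal token.
fn tag<'a>(t: &'static str) -> impl Fn(&'a str) -> PResult<'a, &'a str> {
    move |input| input.strip_prefix(t).map(|rest| (rest, t))
}

// Sequencing combinator: the grammar juxtaposition `a b`.
fn pair<'a, A, B>(
    a: impl Fn(&'a str) -> PResult<'a, A>,
    b: impl Fn(&'a str) -> PResult<'a, B>,
) -> impl Fn(&'a str) -> PResult<'a, (A, B)> {
    move |input| {
        let (rest, va) = a(input)?;
        let (rest, vb) = b(rest)?;
        Some((rest, (va, vb)))
    }
}

// Grammar rule: int_pair := digits "," digits
// The code reads almost exactly like the production.
fn int_pair(input: &str) -> PResult<'_, (u64, u64)> {
    let (rest, (a, (_, b))) = pair(digits, pair(tag(","), digits))(input)?;
    Some((rest, (a, b)))
}
```

When the grammar changes, you edit the rule that changed; a hand-written parser tends to smear each rule across control flow, which is where the maintenance cost shows up.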
IME, zsh has better autocompletion (which, at the time at least, was a separate install).
- completions being aware of the subcommand
- dynamic look ups for specific values
- completions being aware of previous options, flags, and values
A lot of completions have the first. Some have the second. The last is rare. The completer needs to know when flags, options, and values can be repeated, and how earlier arguments change which future options and values are suggested.
This isn't strictly accurate: when we designed Trusted Publishing for PyPI, we designed it to be generic across OIDC IdPs (typically CI providers), and explicitly included an accommodation for creating new projects via Trusted Publishing (we called it "pending" publishers[1]). The latter is something that not all subsequent adopters of the Trusted Publishing technique have adopted, which is IMO both unfortunate and understandable (since it's a complication over the data model/assumptions around package existence).
I think a lot of the pains here are self-inflicted on GitHub's part: deciding to remove normal API credentials entirely strikes me as extremely aggressive, and is completely unrelated to implementing Trusted Publishing. Combining the two together in the same campaign has made things unnecessarily confusing for users and integrators, it seems.
[1]: https://docs.pypi.org/trusted-publishers/creating-a-project-...
If that is correct: when Trusted Publishing was proposed for Rust, I thought it was discussed that it was not meant to replace local publishing, only to harden CI publishing.
https://c3-lang.org/language-overview/examples/#enum-and-swi...
But the whole point of thiserror style errors is to make the errors part of your public API. This is no different from having a normal struct (not error related) as part of your public API, is it?
> Struct variants need non_exhaustive
Can't you just add that tag? I dunno, I've never actually used thiserror.
Your third point makes sense though.
> But the whole point of thiserror style errors is to make the errors part of your public API. This is no different to having a normal struct (not error related) as part of your public API is it?
Likely you should use private fields with public accessors so you can evolve it.
I mean I don't see the difference between having the non-exhaustive enum at the top level vs in a subordinate 'kind'.
- Struct variant fields are public, limiting how you evolve the fields and types
- Struct variants need non_exhaustive
- It shows using `from` on an error. What happens if you want to include more context? Or change your impl, which can change the source error type?
None of this is syntactically unique to errors, but this becomes people's first thought of what to do because libraries like thiserror make it easy and showcase it in their docs.
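To make the contrast concrete, here's a rough sketch of the "opaque struct with a `kind` accessor" alternative being discussed. The names are illustrative, not from any particular crate:

```rust
use std::fmt;

// Fields are private, so callers can only go through accessors.
// The fields and the boxed source type can change without a semver break.
#[derive(Debug)]
pub struct Error {
    kind: ErrorKind,
    source: Option<Box<dyn std::error::Error + Send + Sync>>,
}

// The kind enum carries no data, so `non_exhaustive` is the only
// evolution concern, and adding a kind is not a breaking change.
#[non_exhaustive]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ErrorKind {
    Io,
    Parse,
}

impl Error {
    pub fn new(kind: ErrorKind) -> Self {
        Self { kind, source: None }
    }

    // Context can be attached without changing the public shape.
    pub fn with_source(
        kind: ErrorKind,
        source: impl std::error::Error + Send + Sync + 'static,
    ) -> Self {
        Self { kind, source: Some(Box::new(source)) }
    }

    pub fn kind(&self) -> ErrorKind {
        self.kind
    }
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self.kind {
            ErrorKind::Io => write!(f, "I/O error"),
            ErrorKind::Parse => write!(f, "parse error"),
        }
    }
}

impl std::error::Error for Error {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        let e = self.source.as_ref()?;
        Some(&**e)
    }
}
```

The difference from the top-level non-exhaustive enum is in what's public: callers match on `kind()`, not on variants with public data fields, so the three concerns above (public fields, `non_exhaustive` on every variant, the source type leaking through `from`) stay internal.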