When working on art projects, my trick is to give all feedback constructively, carefully avoiding framing it in terms of the inverse or of parts to remove.
> Delegate work with an "@ mini-cli" tag and the agent can complete a range of tasks, from writing bugs to fixing bugs
I never really advertised it, but what it does is take all the objects inside your static library and tell the linker to merge them, producing a static library that contains a single merged object.
https://github.com/tux3/armerge
The huge advantage is that with a single object, everything works just like it would for a dynamic library. You can keep a set of public symbols and hide your private symbols, so you don't have pollution issues.
Objects that aren't needed by any public symbol (recursively) are discarded properly, so unlike --whole-archive you still get the size benefits of static linking.
And your users don't need to handle anything new or know about a new format: at the end of the day you still just ship a regular .a static library. It just happens to contain a single object.
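A sketch of a typical invocation (flag names as in the README at the time of writing; check armerge --help for your version):

  # Merge two static libraries into one single-object archive,
  # exporting only the symbols that match the regex.
  armerge --keep-symbols '^mylib_' --output libmylib.a libfoo.a libbar.a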
I think the article's suggestion of a new ET_STAT is a good idea, actually. But in the meantime the closest to that is probably to use ET_REL, a single relocatable object in a traditional ar archive.
binutils implemented this with `libdep`; it's just that it's done poorly. You can put a few flags like `-L /foo -lbar` in a file `__.LIBDEP` as part of your static library, and the linker will use this to resolve dependencies of static archives when linking (i.e. extend the link line). This is much like DT_RPATH and DT_NEEDED in shared libraries.
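A rough sketch of the workflow (assuming binutils 2.35+, where ar gained --record-libdeps; check your ar man page):

  # Record "-L/foo -lbar" in a __.LIBDEP member of the archive.
  ar cr --record-libdeps '-L/foo -lbar' libbaz.a baz.o
  # With the libdep linker plugin enabled, ld reads __.LIBDEP and
  # appends those flags when resolving libbaz.a at link time.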
It's just that it feels a bit half-baked. With dynamic linking, symbols are resolved and dependencies recorded as you create the shared object. That's not the case when creating static libraries.
But even if tooling for static libraries with the equivalent of DT_RPATH and DT_NEEDED were improved, there are still the limitations of static archives mentioned in the article, in particular around symbol visibility.
Basically it's considered too hard, if not impossible, to statically define the target's dependencies. This is now done dynamically with tools like `clang-scan-deps` [2].
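As a rough illustration (the -format=p1689 mode is what CMake's module support consumes; exact flags may differ across LLVM versions):

  # Emit the module dependencies of one translation unit as P1689 JSON,
  # given the full compile command after "--".
  clang-scan-deps -format=p1689 -- clang++ -std=c++20 -c foo.cppm -o foo.o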
[1] https://cmake.org/cmake/help/latest/manual/cmake-cxxmodules....
[2] https://llvm.org/devmtg/2019-04/slides/TechTalk-Lorenz-clang...
Output synchronization, which makes `make` print stdout/stderr only once a target finishes. Otherwise output is typically interleaved and hard to follow:
make --output-sync=recurse -j10
On busy / multi-user systems, the `-j` flag for jobs may not be best. Instead, you can limit parallelism based on load average:

make -j10 --load-average=10
Randomizing the order in which targets are scheduled. This is useful in CI to harden your Makefiles and catch missing dependencies between targets:

make --shuffle  # or --shuffle=seed/reverse
In probabilistic programming you (deterministically) define variables and formulas. It's just that the variables aren't instances of floats, but represent stochastic variables over floats.
This is similar to libraries for linear algebra where writing A * B * C does not immediately evaluate, but instead builds an expression tree that represents the computation; you need to call, say, `eval(A * B * C)` to obtain the actual value, which gives the library room to compute it in the most efficient way.
It's more related to symbolic programming and lazy evaluation than (non-)determinism.
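A minimal sketch of that deferred-evaluation idea in Python (toy code, not any particular library's API): defining y below only builds the expression tree, and sampling is a separate, explicit step.

  import random

  class RV:
      # A node in an expression tree over random variables.
      # sample is a zero-argument function that draws one float.
      def __init__(self, sample):
          self.sample = sample

      def __add__(self, other):
          return RV(lambda: self.sample() + _draw(other))

      def __mul__(self, other):
          return RV(lambda: self.sample() * _draw(other))

  def _draw(x):
      # Evaluate RV operands by sampling; pass plain numbers through.
      return x.sample() if isinstance(x, RV) else x

  def normal(mu, sigma):
      return RV(lambda: random.gauss(mu, sigma))

  # Deterministic definition: no sampling happens here, we only
  # build the tree for y = 2x + noise.
  x = normal(0.0, 1.0)
  y = x * 2.0 + normal(0.0, 0.1)

  # Evaluation is explicit, like eval(A * B * C) above.
  print([y.sample() for _ in range(3)])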
I really like a lot of what Google produces, but they can't seem to keep a product around without eventually shutting it down, and they can be pretty ham-fisted, both with corporate control (Chrome and corrupt practices) and with censorship.