But what's the pro? Tail alignment is worse than head alignment, since you read head to tail, not the other way around.
Sometimes I'm grepping in /nix/store, where you have (as shown earlier) a list of derivation paths like this:
$ ls /nix/store | grep nodejs-2 | head | sed 's/^/ /'
0a9kkw6mh0f80jfq1nf9767hvg5gr71k-nodejs-22.18.0.drv
0pmximcv91ilgxcf9n11mmxivcwrczaa-nodejs-22.14.0-source.drv
0zzxnv3kap4r4c401micrsr3nrhf87pa-nodejs-20.18.1-fish-completions.drv
2a7y7d38x8kwa8hdj6p93izvrcl9bfga-nodejs-22.11.0-source.drv
2gcjb0dibjw8c1pp45593ykjqzq5sknm-nodejs-20.18.1-source.drv
and thus, as designed, your eyes skip over the block of hashes and land on the "nodejs-..." part.

You might ask: why grep at all? Because it's fast and familiar, and I don't know the native tooling as well (possibly a UX problem).
Then in Spack (see https://spack.readthedocs.io/en/latest/package_fundamentals....) they have:
$ spack find --paths
==> 74 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
ImageMagick@6.8.9-10 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/ImageMagick@6.8.9-10-4df950dd
adept-utils@1.0 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/adept-utils@1.0-5adef8da
atk@2.14.0 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/atk@2.14.0-3d09ac09
and

$ spack find --format "{name}-{version}-{hash}"
autoconf-2.69-icynozk7ti6h4ezzgonqe6jgw5f3ulx4
automake-1.16.1-o5v3tc77kesgonxjbmeqlwfmb5qzj7zy
bzip2-1.0.6-syohzw57v2jfag5du2x4bowziw3m5p67
bzip2-1.0.8-zjny4jwfyvzbx6vii3uuekoxmtu6eyuj
cmake-3.15.1-7cf6onn52gywnddbmgp7qkil4hdoxpcb
you get the package name immediately on the left, which is nice, and you can pipe that straight to `sort`, but where the hash starts is more jagged on the right, so there's a bit more noise when you're scanning the hashes. In the end the information is identical; it's a UX difference.

Tradeoff-wise, I think they both made the right choice. For Nix, the packages are almost always in /nix/store, so the path length including the hash is almost always the same.
For Spack you can place your packages anywhere, so the base directories can be highly variable, and it's sensible to have the package names come immediately after the package directory.
Or, I'm just trying to rationalize the choices each designer made post hoc. But after using both, I appreciate the design considerations that went in. In the end, humans are inefficient. When I make name / version / hash identifiers in my own applications I end up using one or the other design.
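To make that concrete, here's a toy sketch (Python, purely illustrative; it's not how Nix or Spack actually build their store paths) of the two layouts:

# Purely illustrative: the two identifier layouts discussed above.
import hashlib

def hash_first(name: str, version: str, src: bytes) -> str:
    # Nix-style: fixed-width hash up front, so the name column aligns
    # and your eyes can skip the hash block when scanning a listing.
    return f"{hashlib.sha256(src).hexdigest()[:32]}-{name}-{version}"

def name_first(name: str, version: str, src: bytes) -> str:
    # Spack-style: name up front, so a plain `sort` groups packages
    # semantically, at the cost of a jagged hash column on the right.
    return f"{name}-{version}-{hashlib.sha256(src).hexdigest()[:32]}"

for pkg in [("nodejs", "22.18.0", b"src-a"), ("bzip2", "1.0.8", b"src-b")]:
    print(hash_first(*pkg))
    print(name_first(*pkg))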
A different type of madness, but if ugly names are so common, why not start with the name, like ruby-3.3.9, so any list of files is semantically sorted/readable?
0009flr197p89fz2vg032g556014z7v1-libass-0.17.3.drv
000ghm78048kh2prsfzkf93xm3803m0r-default.md
001f6fysrshkq7gaki4lv8qkl38vjr6a-python-runtime-deps-check-hook.sh.drv
001gp43bjqzx60cg345n2slzg7131za8-nix-nss-open-files.patch
001im7qm8achbyh0ywil6hif6rqf284z-bootstrap-stage0-binutils-wrapper-boot.drv
001pc0cpvpqix4hy9z296qnp0yj00f4n-zbmath-review-template.r59693.tar.xz.drv
Spack, another deterministic builder / package manager, IIRC uses the reversed order so the hash is at the tail. Pros/cons under different search / inspection conditions.

> How does this handle data updating / fixing?
In the advanced import settings, you can customize what makes an item unique or a duplicate. You can also configure how to handle duplicates. By default, duplicates are skipped. But they can also be updated, and you can customize what gets updated and which of the two values to keep.
But yes, updates do run an UPDATE query, so they're irreversible. I explored schemas that were purely additive, so that you could traverse through mutations of the timeline, but this got messy real fast and made exploring (reading) the timeline more complex, slow, and error-prone. I do think it would be cool, though, and I may still revisit it, because I think it could be quite beneficial.
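For the curious, here's a minimal sketch of the purely additive idea (made-up table and column names, not Timelinize's actual schema): every update is a new row, and every read has to pick the latest revision per item, which is exactly where the extra complexity and slowness creep in.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE item_revisions (
        item_id TEXT NOT NULL,  -- stable identity of the timeline item
        revised TEXT NOT NULL,  -- when this revision was recorded
        payload TEXT NOT NULL   -- the item's data as of this revision
    )
""")
db.execute("INSERT INTO item_revisions VALUES ('run-1', '2024-01-01', 'jog')")
db.execute("INSERT INTO item_revisions VALUES ('run-1', '2024-06-01', 'light run')")

# Reading "current" state means finding the latest revision per item:
print(db.execute("""
    SELECT payload FROM item_revisions WHERE item_id = 'run-1'
    ORDER BY revised DESC LIMIT 1
""").fetchone()[0])  # -> light run

# Time traveling is the same query with a cutoff on `revised`:
print(db.execute("""
    SELECT payload FROM item_revisions
    WHERE item_id = 'run-1' AND revised <= '2024-03-01'
    ORDER BY revised DESC LIMIT 1
""").fetchone()[0])  # -> jog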
One interesting scenario re: time traveling is if we use an LLM somewhere in data derivation. Say there's a secondary processor of e.g. journal notes that yields one kind of feature extraction; if the model gets updated at some point, the output possibilities expand very quickly. We might also allow human intervention/correction, which should take priority and resist overwrites. Assuming we're caching these data, they'll also land somewhere in the database, and unless provenance is first class, they'll look just as much like ground truth as everything else.
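A rough sketch of what first-class provenance could look like (the columns and the priority rule are my assumptions, not anything Timelinize actually does): derived rows carry their source, and human rows outrank model rows, so re-running a newer model can't silently clobber a correction.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE derived (
        note_id TEXT NOT NULL,
        feature TEXT NOT NULL,
        value   TEXT NOT NULL,
        source  TEXT NOT NULL,  -- 'llm' or 'human'
        model   TEXT,           -- model version when source = 'llm'
        PRIMARY KEY (note_id, feature, source)
    )
""")
db.execute("INSERT INTO derived VALUES ('n1', 'activity', 'jog', 'llm', 'v1')")
# A human correction is its own row, not an overwrite of the model's row:
db.execute("INSERT INTO derived VALUES ('n1', 'activity', 'light run', 'human', NULL)")

# Reads prefer human rows; a model upgrade only ever touches 'llm' rows.
print(db.execute("""
    SELECT value, source FROM derived
    WHERE note_id = 'n1' AND feature = 'activity'
    ORDER BY CASE source WHEN 'human' THEN 0 ELSE 1 END
    LIMIT 1
""").fetchone())  # -> ('light run', 'human')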
Bitemporal databases look interesting, but the amount of scaffolding they require on top of sqlite makes the data harder to manage.
So if I keep ground-truth data as text, it looks like I'm going to have an import pipeline into Timelinize: basically, ensure there's a stable pkey (almost certainly timestamp + qualifier) and always overwrite. Seems feasible, pretty exciting!
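Something like this minimal sqlite sketch (table and column names made up for illustration; the upsert syntax needs sqlite >= 3.24): with the stable (timestamp, qualifier) key, reimporting overlapping dumps just overwrites in place, which handles a retroactive reclassification for free.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE events (
        ts        TEXT NOT NULL,  -- timestamp from the ground-truth text
        qualifier TEXT NOT NULL,  -- disambiguates items sharing a timestamp
        kind      TEXT NOT NULL,
        PRIMARY KEY (ts, qualifier)
    )
""")

def import_row(ts, qualifier, kind):
    # Re-imports hit the stable key and simply overwrite:
    db.execute("""
        INSERT INTO events (ts, qualifier, kind) VALUES (?, ?, ?)
        ON CONFLICT (ts, qualifier) DO UPDATE SET kind = excluded.kind
    """, (ts, qualifier, kind))

import_row("2024-01-01T07:00", "morning", "jog")        # first dump
import_row("2024-01-01T07:00", "morning", "light run")  # retroactive fix
print(db.execute("SELECT kind FROM events").fetchone()[0])  # -> light run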
How does this handle data updating / fixing? My use case is importing semi-structured data. Say you get data from a 3rd-party provider in one dump, and it's for an event called "jog". Then they update their dump format so "jog" is subdivided into "light run" vs. "intense walk", and they also apply it retroactively. In that case you'd have to reimport a load of overlapping data.
I saw the FAQ, and it only talks about imports; it doesn't cover changes that aren't strictly additive.
I'm dealing with similar use cases of evolving data, and since I don't want to deal with SQL updates, I end up working entirely in plain text. One advantage is that you can use git for time traveling (for a single user it still works reasonably well).
I pulled some org-babel code to make a code-fence evaluator for markdown a while back [2], but I haven't found myself needing it that much. So without a need to wrangle reports or run exports, I think it's a 1% feature for text-file connoisseurs.
[1] https://ess.r-project.org/ with babel is an experience I find superior to Jupyter tools even after all these years
For example, here's a snippet pulled from my dotfiles that does this for multiple dotfiles at once:
home.file =
builtins.mapAttrs
(key: value: {
# symlink ~/dotfiles/configs/{value} to ~/{key}
source = config.lib.file.mkOutOfStoreSymlink "${config.home.homeDirectory}/dotfiles/configs/${value}";
})
{
".zshrc" = "zsh/zshrc";
".p10k.zsh" = "zsh/p10k.zsh";
".config/sway/config" = "sway/config";
".config/nvim/init.lua" = "nvim/init.lua";
};

At the same time it often feels like a veneer of control: you can decide exactly where to place the door, but what's in the messy room behind it (like Emacs profiles, if you do that) might be hidden behind the very nice and solid door.
It's like how in Python projects I lock python3 and uv, and beyond that it's the wild west. Still beats everything else, still feels a bit incomplete, and still feels somewhat unresolvable.
It's simultaneously awesome, but also "can I really recommend this to <colleague>?"