- create your frozen ducklake
- run whatever "normal" mutation query you want (DELETE, UPDATE, MERGE INTO)
- use `ducklake_rewrite_data_files` to make new files with the mutations applied, then optionally run `ducklake_merge_adjacent_files` to compact the files as well (though this might cause all files to change).
- call `ducklake_list_files` to get the new set of active files.
- update your upstream "source of truth" with this new list, optionally deleting any files no longer referenced.
The net result should be that any files "touched" by your updates will have new, updated versions alongside them, while any that were unchanged should just come back as-is from the list-files call; a rough sketch of the whole loop is below.
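For concreteness, here’s roughly what that loop could look like from the DuckDB Python client. This is only a sketch: the attach string, the catalog alias `lake`, the table name `events`, and the exact arguments to the `ducklake_*` calls are assumptions on my part, so check the DuckLake docs for the signatures your version actually ships.

```python
import duckdb

# Sketch only: attach string, catalog/table names, and the exact arguments
# to the ducklake_* functions below are assumptions; consult the DuckLake
# documentation for the real signatures in your version.
con = duckdb.connect()
con.execute("INSTALL ducklake")
con.execute("LOAD ducklake")
con.execute("ATTACH 'ducklake:metadata.ducklake' AS lake")

# 1. Run the normal mutation against the frozen table.
con.execute("DELETE FROM lake.events WHERE user_id = 42")

# 2. Rewrite data files so the mutation is baked into new files, then
#    optionally compact adjacent files (which may rewrite even more files).
con.execute("CALL ducklake_rewrite_data_files('lake', 'events')")
con.execute("CALL ducklake_merge_adjacent_files('lake')")

# 3. Fetch the new set of active data files.
active_files = con.execute(
    "SELECT * FROM ducklake_list_files('lake', 'events')"
).fetchall()

# 4. Push `active_files` to your upstream source of truth, and optionally
#    delete any files it no longer references.
print(active_files)
```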
Or we could do the same for an adventure game which has more story than Doom.
- rendering Doom then using an LLM to get a text description of the scene
- asking the user what to do
- converting their text action into a couple of seconds of Doom input
- re-rendering and repeating until dead or the stage is clear (toy sketch below).
I was wrong but still fun to think about!
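Purely as a toy, here’s roughly what that loop could look like in Python. Everything in it is a stand-in (no real Doom, no real LLM): `describe_frame` and `action_to_inputs` fake the two LLM calls and the "game" is just a dict, but the shape of the render / describe / act loop is the same.

```python
import random

# Toy stand-ins: in the real idea these would be an LLM describing a rendered
# frame and an LLM translating free text into a few seconds of key presses.
def describe_frame(frame):
    return f"A grey corridor. {frame['monsters']} imps ahead. Health {frame['health']}."

def action_to_inputs(text):
    return ["forward", "fire"] if "shoot" in text.lower() else ["forward"]

def step(frame, inputs):
    # advance the "game" by a couple of seconds of simulated input
    frame["monsters"] = max(0, frame["monsters"] - inputs.count("fire"))
    frame["health"] -= random.randint(0, 10)
    return frame

frame = {"monsters": 3, "health": 100}
while frame["health"] > 0 and frame["monsters"] > 0:
    print(describe_frame(frame))
    frame = step(frame, action_to_inputs(input("> ")))

print("You died." if frame["health"] <= 0 else "Stage clear.")
```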
I think one of the big differences between AI and most previous technologies is that the potential impact different people envision has very high variance, anywhere from extremely negative to extremely positive, with almost every point in between as well.
So it’s not just different risk tolerances; it’s also that different people see the risks and rewards very differently.
This looks very cool to me, though. If you have used it, can you share how you’ve used the generated diagrams? I can imagine a screenshot of the diagram working, but the raw text would probably be too big for, e.g., a terminal.
Making readable diagrams within an 80-character width can be a challenge.
I bought it back in late 2017 or early 2018 and used it a fair amount at first, but I’ll admit it’s been a couple of years since then, and I haven’t tried reinstalling it since my last clean OS wipe.
> Nevertheless, re-identification risk in the wild does not appear to be especially high. While we observe a success rate as high as 25%, this is only achieved when the genomic dataset is extremely small, on the order of 10 individuals. In contrast, success rate for top 1 matching drops quickly and is negligible for populations of more than 100 individuals. Moreover, it should be kept in mind that this result assumes that we can predict the phenotypes perfectly.
This sort of “HN coincidence” has happened to me several times this week; is there a term for it that I don’t know?