1. Are you ordering the jobs by any parameter? I don't see an ORDER BY in this clause: https://github.com/oneapplab/lq/blob/8c9f8af577f9e0112767eef...
2. I see you're using a UUID for the primary key on the jobs table; I think you'd be better served by an auto-incrementing primary key (`bigserial` or an identity column in Postgres), which is slightly more performant since the values are smaller and insert in order. This won't matter for small datasets.
3. I see you have an index on `queue`, which is good, but no index on the rest of the columns in the processor query, which might become a problem once you have many reserved jobs.
4. Since this is an in-process queue, it would be awesome to allow the tx to be passed to the `Create` method here: https://github.com/oneapplab/lq/blob/8c9f8af577f9e0112767eef... -- so you can create the job in the same tx when you're performing a data write.
And the way you word it makes it seem like you think CoCs are bad or “forced”; I don’t particularly want to engage there, but I’d encourage you to reflect on why you think that.
```
gcc -o sqldiff sqldiff.c ../sqlite3.c -I.. -ldl -lpthread
```

Which was enough for me to figure out that I needed to build the sqlite3.c amalgamation first and then run gcc in the tool/ directory. After a bit more poking I landed on:

```
gcc -o sqlite3-rsync sqlite3-rsync.c ../sqlite3.c -DSQLITE_ENABLE_DBPAGE_VTAB
```
I did have a much more useful interaction with an LLM later on: I was curious about the protocol used over SSH, so I copied the whole of this 1,800-line C file: https://github.com/sqlite/sqlite/blob/sqlite3-rsync/tool/sql...
And pasted it into OpenAI o1-preview with the prompt "Explain the protocol over SSH part of this" - and got back a genuinely excellent high-level explanation of how that worked: https://chatgpt.com/share/6701450c-bc9c-8006-8c9e-468ab6f67e...
Maybe "don't have thousands of servers" is just a bad take :)