mildbyte commented on Solving Wordle with uv's dependency resolver · mildbyte.xyz/blog/solving... · Posted by u/mildbyte
simonw · 6 months ago
Here's my favorite of the Sudoku attempts at this (easier to get your head around than Wordle since it's a much simpler problem): https://github.com/konstin/sudoku-in-python-packaging

Here's the same Sudoku trick from 2008 using Debian packages: https://web.archive.org/web/20080823224640/https://algebraic...

mildbyte · 6 months ago
Funnily enough, I did a Sudoku one too (albeit with Poetry) a few years ago: https://github.com/mildbyte/poetry-sudoku-solver
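
The trick is roughly the same in all of these (my own illustrative sketch below, not necessarily how either repo encodes it): each cell becomes a package with nine candidate versions, picking version v for a cell rules out v for every peer cell via != constraints, the clues pin exact versions, and the resolver is left to find a consistent install.

    # Sketch: encode a Sudoku grid as package dependency metadata so that a
    # resolver (uv, Poetry, apt, ...) has to find a consistent assignment.
    # Package names like "sudoku-r-c" are hypothetical; the repos differ in detail.

    def peers(r, c):
        """All cells sharing a row, column or 3x3 box with (r, c)."""
        same_row = {(r, j) for j in range(9)}
        same_col = {(i, c) for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        same_box = {(br + i, bc + j) for i in range(3) for j in range(3)}
        return (same_row | same_col | same_box) - {(r, c)}

    def cell_requirements(r, c, value):
        """Dependencies of package sudoku-{r}-{c} at version {value}:
        no peer cell may take the same value."""
        return [f"sudoku-{i}-{j} != {value}" for (i, j) in sorted(peers(r, c))]

    def clue_requirements(grid):
        """Top-level requirements: pin the given clues to exact versions."""
        return [
            f"sudoku-{r}-{c} == {grid[r][c]}"
            for r in range(9) for c in range(9) if grid[r][c]
        ]

    if __name__ == "__main__":
        grid = [[0] * 9 for _ in range(9)]      # 0 means an empty cell
        grid[0][0], grid[1][3], grid[4][4] = 5, 7, 3
        print(clue_requirements(grid))
        print(cell_requirements(0, 0, 5)[:5], "...")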
mildbyte commented on Show HN: Pg_analytica – Speed up queries by exporting tables to columnar format · github.com/sushrut141/pg_... · Posted by u/wanderinglight
wanderinglight · 2 years ago
This is definitely something I intend to fix.

My initial intent was to use DuckDB for fast vectorized query execution, but I wasn't able to create a planner / execution hook that uses DuckDB internally. Will definitely check out pg_analytics / DataFusion to see if the same can be integrated here as well. Thanks for the pointers.

mildbyte · 2 years ago
Have you seen duckdb_fdw (https://github.com/alitrack/duckdb_fdw)? IIRC it's based on sqlite_fdw, but points the outbound queries at DuckDB instead of SQLite, and it does handle running aggregations inside DuckDB. Could be useful.
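
Rough setup sketch from Python (the server option names are an assumption carried over from sqlite_fdw conventions, and the orders table is made up; check the duckdb_fdw README for the exact syntax):

    # Sketch: wire up duckdb_fdw and push an aggregation down to DuckDB.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
    conn.autocommit = True

    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS duckdb_fdw")
        # The 'database' option mirrors sqlite_fdw; assumed, not verified here.
        cur.execute("""
            CREATE SERVER IF NOT EXISTS duckdb_srv
            FOREIGN DATA WRAPPER duckdb_fdw
            OPTIONS (database '/tmp/analytics.duckdb')
        """)
        # Expose the DuckDB tables ('main' is DuckDB's default schema).
        cur.execute("IMPORT FOREIGN SCHEMA main FROM SERVER duckdb_srv INTO public")
        # With aggregate pushdown, this GROUP BY runs inside DuckDB rather than
        # row by row in the Postgres executor.
        cur.execute("SELECT region, sum(amount) FROM orders GROUP BY region")
        print(cur.fetchall())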
mildbyte commented on Show HN: Pg_analytica – Speed up queries by exporting tables to columnar format · github.com/sushrut141/pg_... · Posted by u/wanderinglight
xiasongh · 2 years ago
How does this compare to pg_analytics?

https://github.com/paradedb/pg_analytics

mildbyte · 2 years ago
Another difference is that this solution uses parquet_fdw, which handles fast scans through Parquet files and filter pushdown via row group pruning, but doesn't vectorize the groupby / join operations above the table scan in the query tree (so you're still using the row-by-row PG query executor in the end).

pg_analytics uses DataFusion (a dedicated analytical query engine) to run the entire query, which can achieve orders-of-magnitude speedups over vanilla PG with indexes on analytical benchmarks like TPC-H. We use the same approach at EDB for our Postgres Lakehouse (I'm part of the team that works on it).
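
To get a feel for the difference, here's a minimal sketch of running a whole aggregation inside DataFusion via its Python bindings (the file and query are made up for illustration; pg_analytics wires this into Postgres hooks rather than going through Python):

    # Sketch: the entire query (scan, filter, aggregation) runs in DataFusion's
    # vectorized engine instead of the row-by-row Postgres executor.
    from datafusion import SessionContext

    ctx = SessionContext()
    # Register a Parquet file as a table; scans get filter pushdown.
    ctx.register_parquet("lineitem", "lineitem.parquet")

    df = ctx.sql("""
        SELECT l_returnflag, sum(l_extendedprice) AS revenue
        FROM lineitem
        WHERE l_shipdate <= DATE '1998-09-02'
        GROUP BY l_returnflag
    """)
    print(df.collect())  # list of Arrow record batches with the result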

mildbyte commented on LLaVA-1.6: Improved reasoning, OCR, and world knowledge · llava-vl.github.io/blog/2... · Posted by u/tosh
mildbyte · 2 years ago
Damn, literally a day after I wrote up my experiments[0] with LLaVA 1.5 and computing image embeddings. Interesting to see the fine-tuned Mistral-7B variant performing pretty close to the Vicuna-13B one; using Mistral-7B is what BakLLaVA did back with LLaVA 1.5.

[0] https://mildbyte.xyz/blog/llama-cpp-python-llava-gpu-embeddi...
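
(For the curious, the basic llama-cpp-python LLaVA setup looks roughly like the sketch below; the model paths and image URL are placeholders, and the embedding-specific bits are in the post itself.)

    # Sketch: querying a local LLaVA model through llama-cpp-python's
    # multimodal chat handler. Paths and the image URL are placeholders.
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
    llm = Llama(
        model_path="llava-v1.5-7b.Q4_K_M.gguf",
        chat_handler=chat_handler,
        n_ctx=2048,        # image tokens take a big chunk of the context
        n_gpu_layers=-1,   # offload everything to the GPU if possible
        logits_all=True,   # some versions need this for the LLaVA handler
    )

    response = llm.create_chat_completion(
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
                {"type": "text", "text": "What is in this image?"},
            ],
        }]
    )
    print(response["choices"][0]["message"]["content"])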


u/mildbyte

Karma: 838 · Cake day: November 28, 2017
About
Co-founder of Splitgraph (www.splitgraph.com), acquired by EnterpriseDB.

Can otherwise be found at https://mildbyte.xyz.
