Readit News
farkin88 · 22 days ago
Big takeaway for me: the win isn’t better prompts, it’s semantic guarantees. By proving at the bytecode level that the pixel loop is side-effect-free, you can safely split it into long-lived workers and use an order-preserving queue. It's an aggressive transform copilots won’t attempt because they can’t verify invariants. That difference in guarantees (deterministic analysis vs. probabilistic suggestion) explains the 2× gap more than anything else.
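A minimal sketch of the transform described above, in Rust rather than the post's Java, with a hypothetical `escape_time` kernel standing in for the pixel loop (all names here are illustrative, not the author's code): because each row's computation is pure, rows can be farmed out to long-lived workers and reassembled in order by tagging each result with its row index.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for the pure per-pixel computation: Mandelbrot
// escape-time iteration count for one point. No side effects, so rows
// can be computed on any worker in any order.
fn escape_time(cx: f64, cy: f64, max_iter: u32) -> u32 {
    let (mut x, mut y) = (0.0f64, 0.0f64);
    let mut i = 0;
    while i < max_iter && x * x + y * y <= 4.0 {
        let xt = x * x - y * y + cx;
        y = 2.0 * x * y + cy;
        x = xt;
        i += 1;
    }
    i
}

fn render(width: usize, height: usize, workers: usize) -> Vec<u32> {
    let (tx, rx) = mpsc::channel();
    // Each worker takes every `workers`-th row (a strided split).
    for w in 0..workers {
        let tx = tx.clone();
        thread::spawn(move || {
            for row in (w..height).step_by(workers) {
                let line: Vec<u32> = (0..width)
                    .map(|col| {
                        let cx = -2.0 + 3.0 * col as f64 / width as f64;
                        let cy = -1.5 + 3.0 * row as f64 / height as f64;
                        escape_time(cx, cy, 100)
                    })
                    .collect();
                // Tag each result with its row index so output order can
                // be restored regardless of completion order.
                tx.send((row, line)).unwrap();
            }
        });
    }
    drop(tx);
    // Reassemble in row order: the "order-preserving" step.
    let mut rows: Vec<(usize, Vec<u32>)> = rx.into_iter().collect();
    rows.sort_by_key(|(r, _)| *r);
    rows.into_iter().flat_map(|(_, line)| line).collect()
}
```

The point of the side-effect-free proof is exactly what makes this safe: the parallel render is bit-identical to the sequential one for any worker count.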
top256 · 22 days ago
Yes, exactly, and that was the hard part (extracting and verifying the invariants). Still, it's surprising, because an LLM would need to be able to do that for any complex code.

What you wrote is great; can I copy/paste it into the blog post? (Crediting you, of course.)

farkin88 · 22 days ago
For sure. Feel free to copy/paste it. Great blog, by the way. Will keep an eye out for more of your posts.
dhosek · 22 days ago
Anything built to purpose (by a competent dev) will usually beat out a general-purpose tool. I remember burntsushi being surprised that my purpose-built Unicode segmentation code so dramatically outperformed the Unicode segmentation he had in bytestring, which was based on regular expressions, but personally I would be surprised if it were any different.
burntsushi · 22 days ago
Do you have a link to my surprise? I would be surprised if I were surprised by a purpose-built thing beating something more general-purpose. :P
dhosek · 22 days ago
It was on reddit. Maybe I misremembered your reaction.
amelius · 22 days ago
I'm totally not surprised by this. It would be strange if, at this point, we couldn't find anything that a specialized tool could do better.

But rest assured that the LLM folks are watching, and learning from this, so the issue will probably be resolved in the next version. Of course without thanking/crediting the author of the article.

sriram_malhar · 22 days ago
Isn't the original reason for LLMs, language translation, the classic example where LLMs handily beat out bespoke translation tools?
44za12 · 22 days ago
I have a breach parser that I wrote to parse through over 3 billion rows of compressed data (by parsing I simply mean searching for a particular substring). I've tried multiple LLMs to make it faster (currently it does so in <45 seconds on an M3 Pro Mac); none have been able to do that yet.

https://github.com/44za12/breach-parse-rs

Feel free to drop ideas if any.

rented_mule · 22 days ago
For simple string search (i.e., not regular expressions) ripgrep is quite fast. I just generated a simple 20 GB file with 10 random words per line (from /usr/share/dict/words). `rg --count-matches funny` takes about 6 seconds on my M2 Pro. Compressing it using `zstd -0` and then searching with `zstdcat lines_with_words.txt.zstd | rg --count-matches funny` takes about 25 seconds. Both timings start with the file not cached in memory.
44za12 · 22 days ago
Tried that; it takes exactly as much time as my program.
justinsaccount · 22 days ago
I have an older breach data set that I loaded into clickhouse:

  SELECT *
  FROM passwords
  WHERE (password LIKE '%password%') AND (password LIKE '%123456%')
  ORDER BY user ASC
  INTO OUTFILE '/tmp/res.txt'

  Query id: 9cafdd86-2258-47b2-9ba3-2c59069d7b85

  12209 rows in set. Elapsed: 2.401 sec. Processed 1.40 billion rows, 25.24 GB (583.02 million rows/s., 10.51 GB/s.)
  Peak memory usage: 62.99 MiB.

And this is on a Xeon W-2265 from 2020.

If you don't want to use clickhouse you could try duckdb or datafusion (which is also rust).

In general, the way I'd make your program faster is to not read the data line by line... You probably want to do something like read much bigger chunks, ensure they are still on a line boundary, then search those larger chunks for your strings. Or look into using mmap and search for your strings without even reading the files.
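A rough sketch of that chunked approach, assuming a made-up `count_matches` helper rather than anything from breach-parse-rs: read in large blocks, cut each block back to the last newline so no line straddles a boundary, and carry the tail into the next read.

```rust
use std::io::Read;

// Hypothetical chunked substring search: instead of per-line reads,
// pull 1 MiB at a time, search only up to the last complete line, and
// carry the ragged tail into the next block.
fn count_matches<R: Read>(mut input: R, needle: &[u8]) -> std::io::Result<usize> {
    const CHUNK: usize = 1 << 20; // arbitrary block size
    let mut buf = vec![0u8; CHUNK];
    let mut carry: Vec<u8> = Vec::new();
    let mut count = 0;
    loop {
        let n = input.read(&mut buf)?;
        if n == 0 {
            // EOF: whatever is left is the final (unterminated) line.
            count += count_in(&carry, needle);
            return Ok(count);
        }
        carry.extend_from_slice(&buf[..n]);
        // Search up to and including the last newline; keep the tail.
        if let Some(pos) = carry.iter().rposition(|&b| b == b'\n') {
            count += count_in(&carry[..=pos], needle);
            carry.drain(..=pos);
        }
    }
}

// Naive substring count; a real parser would use memchr/SIMD here.
fn count_in(haystack: &[u8], needle: &[u8]) -> usize {
    if needle.is_empty() || haystack.len() < needle.len() {
        return 0;
    }
    haystack.windows(needle.len()).filter(|w| *w == needle).count()
}
```

The same boundary-carrying trick applies if you emit matching lines instead of counting them, and the mmap variant just replaces the read loop with one big slice.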

brunocvcunha · 22 days ago
What about AlphaEvolve / OpenEvolve (https://github.com/codelion/openevolve)? It has a more structured way of improving/evolving code, as long as you set up the correct evaluator.
top256 · 22 days ago
It's a great idea, but yeah, the evaluator (especially in this case) seems hard to build. I'll think about it.
Someone · 22 days ago
I would start by figuring out where there is room for improvement. Experiments to do:

- how long does it take to just iterate over all bytes in the file?

- how long does it take to decompress the file and iterate over all bytes in the file?

To ensure the compiler doesn’t outsmart you, you may have to do something with the data read. Maybe XOR all 64-bit longs in the data and print the result?

You don’t mention file size but I guess the first takes significantly less time than 45 seconds, and the second about 45 seconds. If so, any gains should be sought in improving the decompression.

Other tests that can help locate the bottleneck are possible. For example, instead of processing a huge N megabyte file once, you may process a 1 MB file N times, removing disk speed from the equation.
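The XOR trick above might look something like this sketch (the `xor_fold` name and 1 MiB buffer are arbitrary choices, not from the repo):

```rust
use std::fs::File;
use std::io::Read;

// Baseline experiment: how fast can we just stream the file's bytes?
// XOR-folding every 64-bit word, and printing or asserting on the result
// afterwards, keeps the compiler from eliding the reads as dead code.
fn xor_fold(path: &str) -> std::io::Result<u64> {
    let mut f = File::open(path)?;
    let mut buf = vec![0u8; 1 << 20]; // arbitrary 1 MiB read size
    let mut acc = 0u64;
    loop {
        let n = f.read(&mut buf)?;
        if n == 0 {
            return Ok(acc);
        }
        // Fold 8-byte words; a ragged tail is zero-padded.
        for chunk in buf[..n].chunks(8) {
            let mut word = [0u8; 8];
            word[..chunk.len()].copy_from_slice(chunk);
            acc ^= u64::from_le_bytes(word);
        }
    }
}
```

Time this against both the compressed and the decompressed file; if it already accounts for most of the 45 seconds, the substring search itself isn't the bottleneck.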

varispeed · 22 days ago
You can't just tell an LLM "make it faster, no mistakes or else". You may need to nudge it to use specific techniques (a good idea is to ask it first what techniques it is aware of), then give it a before/after comparison, maybe with assembly. You can even give the assembly output to another LLM session and ask it to count cycles, then feed the result back. You can also look yourself for what seems excessive, consult CPU datasheets, and nudge the LLM to work on that area. This workflow isn't much faster than just optimising by hand, but if you are bored with typing code it's a bit refreshing: you focus on the "high level" and the LLM does the rest.
lawlessone · 22 days ago
>You can't just tell an LLM "make it faster, no mistakes or else".

Just told the LLM to create a GUI in visual basic. I am a hacker now.

lawlessone · 22 days ago
Regex.
cr125rider · 22 days ago
Whew, compilers are still better than LLMs.
Lockal · 22 days ago
It is very likely that an LLM will be able to plagiarize https://ispc.github.io/example.html and steal ready-to-use optimal code for Mandelbrot, while specialized optimizers are locked within a domain. Not to mention that the author is producing graphics: the task should be solved on the GPU in the first place.

furyofantares · 22 days ago
I certainly expect a human to do better here, but if you wanna show it, giving a one-line prompt to second-best LLMs to one-shot it isn't really the way to do it. Use Opus and o3, and give it to an agent that can measure things and try more than once.
top256 · 22 days ago
Great idea. Which agent to use?

I tried with Opus and o3, but I had to copy/paste the code, and I wasn't sure that was the best way.

I tried 10 prompts and the simplest was the best (probably due to the code being simplistic)

top256 · 22 days ago
Also, it wasn't done by a human but by my tool (the code in the repo is decompiled bytecode).
furyofantares · 22 days ago
After reading another comment I'm not sure my suggestion is any good; it may not test looking at code and improving it, but instead test "writing optimized Mandelbrot in Java", which it has probably seen some great examples of.