Readit News
cat_plus_plus commented on A better zip bomb (2019)   bamsoftware.com/hacks/zip... · Posted by u/kekqqq
arjie · 16 hours ago
The fact that ZIP files include the catalog/directory at the end is such nostalgia fever. Back in the day it meant that if you naïvely downloaded the file, a partial download would be totally useless. Fortunately, in the early 2000s, we got HTTP's Range and a bunch of zip-aware downloaders that would fetch the catalog first so that you could preview a zip you were downloading and even extract part of a file! Good times. Well, not as good as now, but amusing to think of today.
cat_plus_plus · 13 hours ago
Well, what do you want it to do? It doesn't know the full directory with offsets until it's done compressing, and a dispersed directory would have a lousy access pattern for quick listing. And if you're compressing, you probably want the smallest file, so duplicate directories are not ideal.
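The layout the two comments above describe is easy to verify: the End of Central Directory (EOCD) record sits at the very tail of the archive and stores the offset of the central directory, which is why zip-aware downloaders could fetch just the tail with an HTTP Range request and list the contents. A minimal sketch using only the Python standard library (the file names are made up for illustration, and it assumes a small archive with no zip64 records and no trailing comment):

```python
import io
import struct
import zipfile

# Build a small ZIP archive in memory (file names are arbitrary).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("a.txt", "hello " * 100)
    zf.writestr("b.txt", "world " * 100)
data = buf.getvalue()

# The EOCD record lives at the tail of the archive: readers scan
# backwards for its signature, then jump to the central directory
# at the offset it records.
EOCD_SIG = b"PK\x05\x06"
eocd_pos = data.rfind(EOCD_SIG)

# EOCD layout: signature, disk numbers, entry counts, directory size
# and offset, comment length (22 bytes total when there is no comment).
(sig, _disk, _cd_disk, _n_disk, n_total,
 cd_size, cd_offset, _comment_len) = struct.unpack_from(
    "<IHHHHIIH", data, eocd_pos)

# The central directory sits right before the EOCD record.
print(n_total, cd_offset, cd_size, eocd_pos)
```

A downloader that fetches only the last few kilobytes of a remote archive can parse the EOCD and central directory the same way, then issue further Range requests for just the compressed bodies of the entries it wants.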
cat_plus_plus commented on Ask HN: How can I get better at using AI for programming?    · Posted by u/lemonlime227
cat_plus_plus · 7 days ago
AI is great at pattern matching. Set up project instructions that give several examples of old code, new code and detailed explanations of choices made. Also add a negative prompt, a list of things you do not want AI to do based on past frustrations.
cat_plus_plus commented on Sycophancy is the first LLM "dark pattern"   seangoedecke.com/ai-sycop... · Posted by u/jxmorris12
cat_plus_plus · 19 days ago
It's just a matter of system prompt. Create a nagging-spouse Gemini Gem / Grok project. Give good step-by-step instructions about dampening your joy, latching on to small inaccuracies, and scrutinizing your choices and habits. Emphasize catching signs of intoxication like typos. Give half a dozen examples of stellar nags in different conversations. There is enough Reddit data in the model's training that it will follow along well, given a good pattern to latch on to.

Then see how many takers you find. There are already plenty of nagging spouses and critical managers out there; people want AI to do something they are not getting elsewhere.

cat_plus_plus commented on When did people favor composition over inheritance?   sicpers.info/2025/11/when... · Posted by u/ingve
cat_plus_plus · a month ago
Is your code simple? Then use whatever helps you finish it fast and rewrite later if needed. Or is it complicated? Then don't rely on any canned advice. If you are implementing a virtual machine on an embedded chip, maybe parallel arrays and gotos are the way to go, nobody except you knows. Everything else is just overpaid senior architects trying to justify their own existence by not allowing working code to be merged.
cat_plus_plus commented on Ask HN: How to deal with long vibe-coded PRs?    · Posted by u/philippta
cat_plus_plus · 2 months ago
Vibe review with all the reasons it should not be merged obviously.
cat_plus_plus commented on When O3 is 2x slower than O2   cat-solstice.github.io/te... · Posted by u/keyle
cat_plus_plus · 2 months ago
As a denser gas, Ozone would have greater friction getting through small pores, so that would be one example?
cat_plus_plus commented on Getting DeepSeek-OCR working on an Nvidia Spark via brute force with Claude Code   simonwillison.net/2025/Oc... · Posted by u/simonw
htrp · 2 months ago
serious q, why grok vs another frontier model?
cat_plus_plus · 2 months ago
Grok browses a large number of websites for queries that need recent information, which is super handy for new hardware like Thor.
cat_plus_plus commented on Getting DeepSeek-OCR working on an Nvidia Spark via brute force with Claude Code   simonwillison.net/2025/Oc... · Posted by u/simonw
syntaxing · 2 months ago
Ehh, is it cool and a time saver that it figured it out? Yes. But the solution was to get a “better” prebuilt wheel of PyTorch. This is a relatively “easy” problem to solve (though figuring out that this was the problem does take time). But it's (probably; I can't afford one) going to be painful when you want to upgrade the CUDA version or pin a specific one. Unlike a typical PC, you're going to need to build a new image and flash it. I'd be more impressed when an LLM can do this end to end for you.
cat_plus_plus · 2 months ago
You can still upgrade CUDA within forward compatibility range and install new packages without reflashing.
cat_plus_plus commented on Getting DeepSeek-OCR working on an Nvidia Spark via brute force with Claude Code   simonwillison.net/2025/Oc... · Posted by u/simonw
cat_plus_plus · 2 months ago
No idea why Nvidia has such crusty torch prebuilds on their own hardware. Just finished installing unsloth on a Thor box for some fine-tuning; it's a lengthy build marathon, thankfully aided by Grok supplying the commands and environment variables for the most part (one finishing touch is to install the latest CUDA from the Nvidia website and then replace the compiler executables in the triton package with the newer ones from CUDA).
cat_plus_plus commented on NanoChat – The best ChatGPT that $100 can buy   github.com/karpathy/nanoc... · Posted by u/huseyinkeles
cat_plus_plus · 2 months ago
End-to-end training is a different beast, but fine-tuning and inference of impressive LLMs like Qwen3 can be done on pretty run-of-the-mill hardware like Apple Silicon Macs and gaming PCs, if anyone wants a personalized assistant with character. Just ask AI how to fine-tune AI using unsloth (on Nvidia) or MLX (on Apple) and it will give you ready-to-run Python scripts.

u/cat_plus_plus

Karma: 1470 · Cake day: May 31, 2016