diggan commented on FFmpeg 8.0   ffmpeg.org/index.html#pr8... · Posted by u/gyan
beala · a day ago
Tough crowd.

fwiw, `tar xzf foobar.tgz` = "_x_tract _z_e _f_iles!" has been burned into my brain. It's "extract the files" spoken in a Dr. Strangelove German accent

Better still, I recently discovered `dtrx` (https://github.com/dtrx-py/dtrx) and it's great if you have the ability to install it on the host. It calls the right commands and also always extracts into a subdir, so no more tar-bombs.

If you want to create a tar, I'm sorry but you're on your own.
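Since the comment leaves tarball creation as an exercise, a minimal sketch (directory and file names are placeholders):

```shell
# Create a sample directory to archive.
mkdir -p foobar
echo "hello" > foobar/file.txt

# "czf" = create, gzip, file. Archiving the directory itself (rather
# than its contents) means extraction yields a subdir: no tar-bomb.
tar czf foobar.tgz foobar/

# List the contents without extracting:
tar tzf foobar.tgz
```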

diggan · a day ago
I used tar/unzip for decades, I think, before moving to 7z, which handles every format I throw at it and uses the same switch when you want to decompress into a specific directory, instead of having to remember which of tar and unzip uses -d and which uses -C.
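For the record, the asymmetry looks like this (a sketch with placeholder names; note that 7z's `-o` switch takes the path with no space after it):

```shell
# Make a sample archive to extract.
mkdir -p src && echo "hi" > src/a.txt
tar czf demo.tgz src

# tar uses -C for the destination directory...
mkdir -p dest
tar xzf demo.tgz -C dest

# ...while unzip uses -d instead:
#   unzip demo.zip -d dest
# and 7z uses -o (no space before the path), the same for every format:
#   7z x demo.zip -odest
```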

"also always extracts into a subdir" sounds like a nice feature though, thanks for sharing another alternative!

diggan commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
cambaceres · 2 days ago
> “I think the skills that should be emphasized are how do you think for yourself? How do you develop critical reasoning for solving problems? How do you develop creativity? How do you develop a learning mindset that you're going to go learn to do the next thing?”

In the Swedish school system, the idea for the past 20 years has been exactly this: trying to teach critical thinking, reasoning, problem solving etc. rather than hard facts. The results have been... not great. We discovered that reasoning and critical thinking are impossible without foundational knowledge about the thing you're supposed to be critical about. I think the same can be said about software development.

diggan · a day ago
> In the Swedish school system, the idea for the past 20 years has been exactly this: trying to teach critical thinking, reasoning, problem solving etc. rather than hard facts. The results have been... not great.

I'm not sure I'd agree that it's been outright "not great". I myself am a product of that precise school system, having been born in Sweden in 1992 (though I now live outside the country). I have vivid memories of classes where we talked about how to learn, how to solve problems, critical thinking, reasoning, being skeptical of anything you read in newspapers, the difference between opinions and facts, how propaganda works, and so on. This was probably in years/classes 7-9, if I remember correctly; both I and others picked it up relatively quickly, and I'm not sure I'd have the same mindset today if it weren't for those classes.

Maybe I was just lucky with good teachers, but surely there are others out there who also had a very different experience from what you outline? To be fair, I don't know how things work today, but at least at that time it actually felt like I got use out of what I was taught in those classes, compared to most other stuff.

diggan commented on DeepSeek-v3.1   api-docs.deepseek.com/new... · Posted by u/wertyk
guerrilla · a day ago
So, is the output price there why most models are extremely verbose? Is it just a ploy to make extra cash? It's super annoying that I have to constantly tell it to be more and more concise.
diggan · a day ago
> It's super annoying that I have to constantly tell it to be more and more concise.

While system prompting is the easy way of limiting the output in a somewhat predictable manner, have you tried setting `max_tokens` when doing inference? For me that works very well for constraining the output: if you set it to 100 you get very short answers, while if you set it to 10,000 you can get very long responses.
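A sketch of where that parameter goes, assuming an OpenAI-compatible chat-completions API (the model name and prompt are placeholders):

```python
import json

# Hypothetical request body for an OpenAI-compatible /chat/completions
# endpoint; the model name and message content are placeholders.
payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Summarize the release notes."}],
    # Hard cap on generated tokens: the server stops emitting once the
    # limit is hit, regardless of how verbose the model wants to be.
    "max_tokens": 100,
}

body = json.dumps(payload)
```

Unlike a "be concise" instruction in the system prompt, the cap is enforced server-side, so the response can never exceed it (though it may get cut off mid-sentence).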

diggan commented on DeepSeek-v3.1   api-docs.deepseek.com/new... · Posted by u/wertyk
danielhanchen · 2 days ago
For local runs, I made some GGUFs! You need around RAM + VRAM >= 250GB for good perf for dynamic 2bit (2bit MoE, 6-8bit rest) - can also do SSD offloading but it'll be slow.

./llama.cpp/llama-cli -hf unsloth/DeepSeek-V3.1-GGUF:UD-Q2_K_XL -ngl 99 --jinja -ot ".ffn_.*_exps.=CPU"

More details on running + optimal params here: https://docs.unsloth.ai/basics/deepseek-v3.1

diggan · a day ago
> More details on running + optimal params here: https://docs.unsloth.ai/basics/deepseek-v3.1

Was that document almost exclusively written with LLMs? I looked at it last night (~8 hours ago) and it was riddled with mistakes, the most egregious being that the "Run with Ollama" section had instructions for installing Ollama, but the shell commands that followed were actually running llama.cpp, a mistake probably no human would make.

Do you have any plans on disclosing how much of these docs are written by humans vs not?

Regardless, thanks for the continued release of quants and weights :)

diggan commented on Mid-Year Online Safety Update – The UK Online Safety Act Deadline Has Arrived   newgrounds.com/bbs/topic/... · Posted by u/diggan
diggan · a day ago
Sounds like a much better approach than what many others are resorting to, by just using some basic heuristics to automatically validate a ton of users:

> Regarding age verification, here is our current plan for UK users: If your account is more than ten years old, we will assume you are currently over 18 [...] If your account ever bought Supporter status with a credit card and we can confirm that with the payment processor, we will assume you are over 18 [...] If your account ever bought Supporter status more than two years ago, we will assume you are over 18 [...] If none of the above applies, you will have the opportunity to pay a small one-time fee

> We are not planning to offer things like ID checks or facial recognition because these require us to pay a third party to confirm each person

> One positive is that charging small verification fees will hopefully get Newgrounds closer to break-even, assuming we don’t run into trouble with payment processors
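The quoted rules can be sketched as a small decision function (a paraphrase for illustration only; the thresholds come from the quote, while the function and parameter names are hypothetical):

```python
from datetime import date
from typing import Optional

def assume_over_18(account_created: date,
                   card_verified_supporter: bool,
                   supporter_purchase: Optional[date],
                   today: date) -> bool:
    """Paraphrase of the quoted Newgrounds heuristics (not their real
    code): a 10+ year old account, a card-verified Supporter purchase,
    or any Supporter purchase 2+ years old each count as verification."""
    years_old = (today - account_created).days / 365.25
    if years_old >= 10:
        return True
    if card_verified_supporter:
        return True
    if supporter_purchase and (today - supporter_purchase).days / 365.25 >= 2:
        return True
    return False  # fall through to the one-time verification fee
```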

diggan commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
bithive123 · 2 days ago
Strictly speaking, yes, but there is so much variability introduced by prompting that even keeping the seed value static doesn't change the "slot machine" feeling, IMHO. While prompting is something one can get better at, you're still just rolling the dice and waiting to see whether the output is delightful or dismaying.
diggan · 2 days ago
> IMHO. While prompting is something one can get better at, you're still just rolling the dice and waiting to see whether the output is delightful or dismaying.

You yourself acknowledge that someone can be better than another at getting good results from Stable Diffusion, so how is that in any way like a slot machine or rolling dice? The point of those analogies is precisely that no matter what skill/knowledge you have, you'll get a random outcome. The same is very much not true for Stable Diffusion usage, something you seem to know yourself too.

diggan commented on AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders   theregister.com/2025/08/2... · Posted by u/rntn
BrenBarn · 2 days ago
Okay, but are OpenAI and Meta straight up buying botnets on the black market?
diggan · 2 days ago
OpenAI I'm not so sure about, but since Meta has already been caught downloading copyrighted material to train LLMs, I don't think it's far-fetched for them to also use borderline-illegal methods for acquiring IPs.
diggan commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
bithive123 · 2 days ago
It became evident to me while playing with Stable Diffusion that it's basically a slot machine. A Skinner box with a variable reinforcement schedule.

Harmless enough if you are just making images for fun. But probably not an ideal workflow for real work.

diggan · 2 days ago
> It became evident to me while playing with Stable Diffusion that it's basically a slot machine.

It can be, and usually is by default. But if you set the seed to a fixed number and everything else remains the same, you'll get deterministic output. A slot machine implies you keep putting in the same thing and get random good/bad outcomes; that's not really true of Stable Diffusion.
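The seed point can be illustrated with a stdlib stand-in (this is not actual Stable Diffusion code; with e.g. the diffusers library you would pass a seeded `torch.Generator` to the pipeline for the same effect):

```python
import random

def sample(seed: int) -> list:
    # Stand-in for a diffusion sampler: all "noise" is drawn from one
    # seeded generator, so the entire trajectory is reproducible.
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

assert sample(42) == sample(42)  # same seed, same everything else: identical output
assert sample(42) != sample(43)  # change the seed: different output
```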

u/diggan

Karma: 23799 · Cake day: March 2, 2012
About
https://notes.victor.earth

https://bsky.app/profile/victor.earth

hn@victor.earth

aspe:keyoxide.org:Q6B7ZBQITV7IE2RG4EMVKWT4VI
