In the Swedish school system, the idea for the past 20 years has been exactly this: to teach critical thinking, reasoning, problem solving, etc. rather than hard facts. The results have been... not great. We discovered that reasoning and critical thinking are impossible without foundational knowledge of the thing you're supposed to be critical about. I think the same can be said about software development.
I'm not sure I'd agree that it's been outright "not great". I myself am a product of that precise school system, born in 1992 in Sweden (though now living outside the country). I have vivid memories of classes where we talked about how to learn, how to solve problems, critical thinking, reasoning, being critical of anything you read in newspapers, the difference between opinions and facts, how propaganda works, and so on. This was probably through year/class 7-9 if I remember correctly, and both I and others picked up on it relatively quickly. I'm not sure I'd have the same mindset today if it weren't for those classes.
Maybe I was just lucky with good teachers, but surely there are others out there who also had a very different experience than what you outline? To be fair, I don't know how things work today, but at least back then it actually felt like I had use for what I was taught in those classes, compared to most other subjects.
While system prompting is the easy way of limiting the output in a somewhat predictable manner, have you tried setting `max_tokens` when doing inference? For me that works very well for constraining the output: set it to 100 and you get very short answers, while at 10,000 you can get very long responses.
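To sketch what I mean, here's roughly what the request body looks like against an OpenAI-compatible endpoint (like the one llama.cpp's server exposes); the model name is a placeholder and will depend on your setup:

```python
import json

def build_request(prompt: str, max_tokens: int) -> str:
    """Build the JSON body for a chat completions request.

    `max_tokens` is a hard cap on how many tokens the model may
    generate, independent of any system-prompt instructions.
    """
    payload = {
        "model": "local-model",  # placeholder; depends on your server
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

short = build_request("Summarize this thread.", 100)     # terse replies
long = build_request("Summarize this thread.", 10_000)   # room to ramble
```

The nice part is that the cap is enforced by the server, so the model can't blow past it the way it sometimes ignores a "be brief" instruction.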
./llama.cpp/llama-cli -hf unsloth/DeepSeek-V3.1-GGUF:UD-Q2_K_XL -ngl 99 --jinja -ot ".ffn_.*_exps.=CPU"
More details on running + optimal params here: https://docs.unsloth.ai/basics/deepseek-v3.1
Was that document almost exclusively written with LLMs? I looked at it last night (~8 hours ago) and it was riddled with mistakes; the most egregious was that the "Run with Ollama" section had instructions for how to install Ollama, but the shell commands were actually running llama.cpp, a mistake probably no human would make.
Do you have any plans on disclosing how much of these docs are written by humans vs not?
Regardless, thanks for the continued release of quants and weights :)
> Regarding age verification, here is our current plan for UK users: If your account is more than ten years old, we will assume you are currently over 18 [...] If your account ever bought Supporter status with a credit card and we can confirm that with the payment processor, we will assume you are over 18 [...] If your account ever bought Supporter status more than two years ago, we will assume you are over 18 [...] If none of the above applies, you will have the opportunity to pay a small one-time fee
> We are not planning to offer things like ID checks or facial recognition because these require us to pay a third party to confirm each person
> One positive is that charging small verification fees will hopefully get Newgrounds closer to break-even, assuming we don’t run into trouble with payment processors
You yourself acknowledge that one person can be better than another at getting good results from Stable Diffusion, so how is that in any way similar to a slot machine or rolling dice? The point of those analogies is precisely that it doesn't matter what skill/knowledge you have: you'll get a random outcome. The same is very much not true for Stable Diffusion usage, something you seem to know yourself.
Harmless enough if you are just making images for fun. But probably not an ideal workflow for real work.
It can be, and usually is by default. If you pin the seeds to fixed values, and everything else remains the same, you'll get deterministic output. A slot machine implies you keep putting in the same thing and get random good/bad outcomes; that's not really true for Stable Diffusion.
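To illustrate with a toy stand-in (plain Python `random` instead of an actual diffusion sampler, so this is not real Stable Diffusion code): fixing the seed fixes the initial noise, and with an identical prompt and settings the rest of the pipeline follows deterministically from that noise.

```python
import random

def sample_latent(seed: int, n: int = 4) -> list:
    # Stand-in for the initial noise a diffusion sampler draws;
    # the same seed always yields the same draw.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Same seed -> same "image"; new seed -> new roll of the dice.
assert sample_latent(123) == sample_latent(123)
assert sample_latent(123) != sample_latent(124)
```

The slot-machine feel only appears when people leave the seed randomized on every run, which is a choice, not an inherent property.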
fwiw, `tar xzf foobar.tgz` = "_x_tract _z_e _f_iles!" has been burned into my brain. It's "extract the files" spoken in a Dr. Strangelove German accent
Better still, I recently discovered `dtrx` (https://github.com/dtrx-py/dtrx) and it's great if you have the ability to install it on the host. It calls the right commands and also always extracts into a subdir, so no more tar-bombs.
If you want to create a tar, I'm sorry but you're on your own.
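That said, creating one follows the same mnemonic, just with `c` (create) in place of `x` (extract); a quick round-trip sketch:

```shell
# "czf" = create ze file, "xzf" = extract ze files
mkdir -p demo && echo hello > demo/a.txt
tar czf demo.tgz demo     # pack the demo/ dir into a gzipped tarball
rm -rf demo
tar xzf demo.tgz          # round-trip: extract it again
cat demo/a.txt
```

Passing the directory (rather than `demo/*`) also avoids creating a tar-bomb, since everything lands back inside `demo/` on extraction.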
"also always extracts into a subdir" sounds like a nice feature though, thanks for sharing another alternative!