However, in the EU there is a legal limit of 1% duty cycle on the 868 MHz band, plus a collision-avoidance mechanism, meaning on average you can send a packet (up to 255 bytes) roughly once a minute.
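The once-a-minute figure follows from packet airtime: a minimal sketch using the standard LoRa time-on-air formula, assuming SF7, 125 kHz bandwidth, coding rate 4/5, and a full 255-byte payload (all assumed parameters, not from the comment):

```python
import math

def lora_airtime_s(payload_bytes, sf=7, bw_hz=125_000, cr=1, preamble=8,
                   header=True, low_dr_opt=False):
    """LoRa time-on-air in seconds (Semtech SX127x datasheet formula)."""
    t_sym = (2 ** sf) / bw_hz                      # symbol duration
    de = 1 if low_dr_opt else 0                    # low data rate optimization
    h = 0 if header else 1                         # explicit header flag
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 - 20 * h)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

airtime = lora_airtime_s(255)        # ~0.40 s on air at SF7 / 125 kHz
min_interval = airtime / 0.01        # 1% duty cycle -> ~40 s between packets
print(f"{airtime:.2f} s on air, one packet every {min_interval:.0f} s")
```

At slower spreading factors (SF10-SF12, which give the long range) airtime grows several-fold, so the allowed interval stretches to minutes.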
In mountainous areas, LoRa on the 868 MHz band reaches over 100 km. Last month we had a stratospheric balloon with a Meshtastic node attached; it established a direct (albeit intermittent) connection between Warsaw and Berlin.
I'm a C# dev with nearly 20 years of experience, and I finally got the shits with advertising in the Start menu. Arch Linux, because I figured: why not do it properly?
I game a fair bit, and find most things on Steam just work.
Which IDE do you use? JetBrains Rider?
just install LM Studio and run the Q8_0 version of it, e.g. here https://huggingface.co/bartowski/Qwen_Qwen3-4B-Instruct-2507....
you can even run it on a 4 GB Raspberry Pi with Qwen_Qwen3-4B-Instruct-2507-Q4_K_L.gguf https://lmstudio.ai/
Keep in mind that if you run it at the full 262,144 tokens of context you'll need ~65 GB of RAM.
Anyway, if you're on a Mac you can search for "qwen3 4b 2507 mlx 4bit" and run the MLX version, which is often faster on M-series chips. Crazy impressive what you get from a 2 GB file, in my opinion.
It's pretty good for summaries etc., and can even make simple index.html sites if you're teaching students, but it can't really vibe-code in my opinion. For local automation tasks like summarizing your emails or home automation, however, it is excellent.
It's crazy that we're at this point now.
What is the relationship between context size and RAM required? Isn't the amount of RAM needed related only to the number of parameters and the quantization?
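No: besides the weights, the runtime keeps a key/value (KV) cache that grows linearly with context length, since every past token's K and V vectors are stored for every layer. A rough sketch of the arithmetic, with config values assumed from the Qwen3-4B model card (36 layers, 8 KV heads, head dim 128) and fp16 cache entries:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per=2):
    """KV cache size: 2 tensors (K and V) per layer, one vector per token,
    bytes_per=2 assumes fp16 entries."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per

# Assumed Qwen3-4B config: 36 layers, 8 KV heads, head dim 128.
kv = kv_cache_bytes(36, 8, 128, 262_144)
print(f"KV cache at full 262k context: {kv / 2**30:.0f} GiB")  # 36 GiB, on top of the ~4 GB of Q8 weights
```

Actual runtimes add activation buffers and other overhead on top of weights + KV cache, which is why quoted totals for long-context runs come out much higher than the parameter count alone would suggest.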
> We’re open sourcing the model weights so the community can build fast, privacy-preserving autocomplete for every IDE - VSCode, Neovim, Emacs, and beyond.
https://blog.sweep.dev/posts/oss-next-edit