Readit News
tgtweak commented on I got hacked: My Hetzner server started mining Monero   blog.jakesaunders.dev/my-... · Posted by u/jakelsaunders94
fragmede · 10 hours ago
The other thing to note is that Docker is, for the most part, stateless. So if you're running something that has to deal with questionable user input (images, video, or more importantly PDFs), stick it on its own VM, cycle the Docker container every hour and the VM every 12, and still assume it could get hacked and leak secrets.
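Cycling the container on a schedule can be as simple as a cron entry that force-recreates it (a sketch; the path, compose file, and service name here are hypothetical):

```
# crontab fragment: recreate the untrusted-input worker at the top of every hour
0 * * * *  cd /srv/app && docker compose up -d --force-recreate pdf-worker
```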
tgtweak · 4 hours ago
Most of this is mitigated by running Docker inside an LXC container (as Proxmox does), which grants a lot more isolation than Docker on its own - closer in nature to running separate VMs.
tgtweak commented on I got hacked: My Hetzner server started mining Monero   blog.jakesaunders.dev/my-... · Posted by u/jakelsaunders94
meisel · 11 hours ago
Is mining via CPU even worthwhile for the hackers? I thought ASICs dominated mining
tgtweak · 11 hours ago
Monero's proof of work (RandomX) is very ASIC-resistant, and although it generates a very small amount of earnings, if you exploit a vulnerability like this across thousands or tens of thousands of nodes, it can add up (8 modern cores running 24/7 on Monero would be in the 10-20 cents/day per node range). OP's VPS probably generated about $1 for those script kiddies.
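The per-node figure is easy to sanity-check with a back-of-envelope calculation: your share of the network hashrate times the daily coin emission. All the inputs below are rough assumptions (hashrate, network size, and price all fluctuate), not measured values:

```python
# Back-of-envelope Monero (RandomX) mining revenue estimate.
def daily_revenue_usd(my_hs: float, network_hs: float, xmr_price_usd: float,
                      blocks_per_day: int = 720,        # one block every ~2 minutes
                      block_reward_xmr: float = 0.6) -> float:   # tail emission
    """Expected USD/day = (your share of network hashrate) * daily emission * price."""
    share = my_hs / network_hs
    return share * blocks_per_day * block_reward_xmr * xmr_price_usd

# Assumed: 8 modern cores ~5 kH/s, ~3 GH/s network hashrate, ~$150/XMR
est = daily_revenue_usd(5_000, 3e9, 150)
print(f"~${est:.2f}/day per node")  # lands in the 10-20 cent range
```

With those assumptions the estimate comes out to roughly $0.11/day, consistent with the 10-20 cents/day figure above.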
tgtweak commented on I got hacked: My Hetzner server started mining Monero   blog.jakesaunders.dev/my-... · Posted by u/jakelsaunders94
qingcharles · 11 hours ago
As an aside, if you're using a Hetzner VPS for Umami you might be over-specced. I just cut my Hetzner bill by $4/mo by moving my Umami box to one of the free Oracle Cloud VPS after someone on here pointed out the option to me. Depends whether this is a hobby thing or something more serious, but that option is there.
tgtweak · 11 hours ago
The manageability of having everything on one host is kind of nice at that scale, but yeah you can stack free tiers on various providers for less.
tgtweak commented on I got hacked: My Hetzner server started mining Monero   blog.jakesaunders.dev/my-... · Posted by u/jakelsaunders94
tgtweak · 11 hours ago
Just a note - you can very much limit CPU usage on Docker containers by setting --cpus="0.5" (or cpus: "0.5" in Docker Compose) if you expect it to be a very lightweight container. This isolation can help prevent one rowdy container from hitting the rest of the system, regardless of whether it's crypto-mining malware, a DDoS attempt, or a misbehaving service.
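In Compose that limit looks like the fragment below (the service name and image are hypothetical; the service-level `cpus` key is the Compose equivalent of `docker run --cpus`):

```yaml
services:
  thumbnailer:              # hypothetical lightweight service
    image: myorg/thumbnailer:latest
    cpus: "0.5"             # cap at half a CPU core
    mem_limit: 256m         # optionally cap memory as well
```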
tgtweak commented on Elevated errors across many models   status.claude.com/inciden... · Posted by u/pablo24602
tgtweak · 3 days ago
I trust companies that immediately and regularly update their status/issues page and follow up any outages with proper and comprehensive post-mortems. Sadly this is becoming the exception these days and not the norm.
tgtweak commented on Elevated errors across many models   status.claude.com/inciden... · Posted by u/pablo24602
tgtweak · 3 days ago
There really should be an HTTP header dedicated to "outage status" with a link to the service outage details page... clients (for example, in this case, your code IDE) could intercept this and notify users.

503 is cool, and yes, there's the "well, if it's down, how are they going to put that up?" objection - but in reality most downtime you see is on the backend, not on the reverse proxies/gateways/CDNs, where it would be pretty trivial to add an issues/status header with a link to the service status page and a note.
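As a sketch, a reverse proxy could inject such a header at the edge even when the backend is unreachable. The header name and domains below are hypothetical (no such standard header exists today); `add_header ... always` is the standard nginx mechanism for emitting a header on error responses too:

```nginx
# nginx: advertise a status page on every response, including 5xx errors
server {
    listen 443 ssl;
    server_name api.example.com;

    location / {
        proxy_pass http://backend;
        # hypothetical header; "always" keeps it on 5xx responses
        add_header X-Status-Page "https://status.example.com" always;
    }
}
```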

tgtweak commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
patates · 6 days ago
I don't have that experience with gemini. Up to 90% full, it's just fine.
tgtweak · 3 days ago
If the models are designed around it, and not resorting to compression to get to higher input token lengths, they don't 'fall off' as they get near the context window limit. When working with large codebases, exhausting or compressing the context actually causes more issues, since the agent forgets what was in the other libraries and files. Google realized this internally and was among the first to get to a 2M-token context length (internally at first, then released publicly).
tgtweak commented on The tiniest yet real telescope I've built   lucassifoni.info/blog/min... · Posted by u/chantepierre
tgtweak · 6 days ago
So what are these tiny portable ones? I always assumed they were digitally augmented or virtual even - is there a minimum size for it to be a "real" telescope?
tgtweak commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
mmaunder · 6 days ago
Weirdly, the blog announcement completely omits the actual new context window size which is 400,000: https://platform.openai.com/docs/models/gpt-5.2

Can I just say !!!!!!!! Hell yeah! Blog post indicates it's also much better at using the full context.

Congrats OpenAI team. Huge day for you folks!!

Started on Claude Code and like many of you, had that omg CC moment we all had. Then got greedy.

Switched over to Codex when 5.1 came out. WOW. Really nice acceleration in my Rust/CUDA project which is a gnarly one.

Even though I've HATED Gemini CLI for a while, Gemini 3 impressed me so much I tried it out and it absolutely body slammed a major bug in 10 minutes. Started using it to consult on commits. Was so impressed it became my daily driver. Huge mistake. I almost lost my mind after a week of fighting it. Insane bias towards action. Ignoring user instructions. Garbage characters in output. Absolutely no observability into its thought process. And on and on.

Switched back to Codex just in time for 5.1 codex max xhigh which I've been using for a week, and it was like a breath of fresh air. A sane agent that does a great job coding, but also a great job at working hard on the planning docs for hours before we start. Listens to user feedback. Observability on chain of thought. Moves reasonably quickly. And also makes it easy to pay them more when I need more capacity.

And then today GPT-5.2 with an xhigh mode. I feel like Xmas has come early. Right as I'm doing a huge Rust/CUDA/math-heavy refactor. THANK YOU!!

tgtweak · 6 days ago
I've been on the 1M context window with Claude since 4.0 - it gets pretty expensive when you run 1M context on a long-running project (mostly using it in Cline for coding). I think they've realized more context length = more $ for most agentic coding workflows on the API.
tgtweak commented on Auto-grading decade-old Hacker News discussions with hindsight   karpathy.bearblog.dev/aut... · Posted by u/__rito__
tgtweak · 7 days ago
Cool - now make it analyze all of those and come up with the 10 commandments of commenting factually and insightfully on HN posts...
