Readit News
oofbey commented on Testing a cheaper laminar flow hood   chillphysicsenjoyer.subst... · Posted by u/surprisetalk
ProAm · a day ago
lol I wanted hood of a car flow. This is cool, but not that cool
oofbey · 18 hours ago
I always thought these hoods were for sucking away germs or fumes so they didn’t get out into the room. Nope. That’s backwards. Trying to keep the sample clean here.
oofbey commented on AIsbom – open-source CLI to detect "Pickle Bombs" in PyTorch models   github.com/Lab700xOrg/ais... · Posted by u/lab700xdev
rafram · a day ago
> It looks for GLOBAL or STACK_GLOBAL instructions referencing dangerous modules like os.system, subprocess, or socket.

This seems like a doomed approach. You can’t make a list of every “dangerous” function in every library.

oofbey · a day ago
Agree an explicit block list is not very robust. I imagine the vast majority of legit ML models use only a very limited set of math functions and basically no system interaction. Would be good to fingerprint a big set of assumed-safe models and flag anything which diverges from that.
oofbey commented on AIsbom – open-source CLI to detect "Pickle Bombs" in PyTorch models   github.com/Lab700xOrg/ais... · Posted by u/lab700xdev
lab700xdev · a day ago
Hi HN,

I’ve been working with ML infrastructure for a while and realized there’s a gap in the security posture: we scan our requirements.txt for vulnerabilities, but blindly trust the 5GB binary model files (.pt) we download from Hugging Face.

Most developers don't realize that standard PyTorch files are just Zip archives containing Python Pickle bytecode. When you run torch.load(), the unpickler executes that bytecode. This allows for arbitrary code execution (RCE) inside the model file itself - what security researchers call a "Pickle Bomb."
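A minimal stand-alone illustration of that mechanism (mine, not from the AIsbom repo): `__reduce__` lets a pickled object name any importable callable plus arguments, and the unpickler invokes that callable at load time. A real payload would name `os.system` or `subprocess`; here the callable is a harmless `print` so the example is safe to run.

```python
import pickle

class Payload:
    def __reduce__(self):
        # (callable, args): the unpickler calls print(...) during loading
        return (print, ("code ran during unpickling",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # prints the message as a side effect of loading
```

`torch.load()` on a `.pt` file ultimately drives the same unpickler, which is why scanning the opcode stream before loading matters.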

I built AIsbom (AI Software Bill of Materials) to solve this without needing a full sandbox.

How it works:

1. It inspects the binary structure of artifacts (PyTorch, Pickle, Safetensors) without loading weights into RAM.
2. For PyTorch/Pickles, it uses static analysis (via pickletools) to disassemble the opcode stream.
3. It looks for GLOBAL or STACK_GLOBAL instructions referencing dangerous modules like os.system, subprocess, or socket.
4. It outputs a CycloneDX v1.6 JSON SBOM compatible with enterprise tools like Dependency-Track.
5. It also parses .safetensors headers to flag "Non-Commercial" (CC-BY-NC) licenses, which often slip into production undetected.

It’s open source (Apache 2.0) and written in Python/Typer.
Repo: https://github.com/Lab700xOrg/aisbom
Live Demo (Web Viewer): https://aisbom.io

Why I built a scanner: https://dev.to/labdev_c81554ba3d4ae28317/pytorch-models-are-...

I’d love feedback on the detection logic (specifically safety.py) or if anyone has edge cases of weird Pickle protocols that break the disassembler.

oofbey · a day ago
Thanks for starting to address the gap. When would this tool be best used? As a post commit hook? In the CI/CD chain? At runtime?
oofbey commented on Linux Sandboxes and Fil-C   fil-c.org/seccomp... · Posted by u/pizlonator
oofbey · 4 days ago
Nit: The word “orthogonal” should not mean merely “different”. It should mean “completely unrelated” if we are drawing a proper analogy from linear algebra: orthogonal vectors have a dot product of zero, no correlation whatsoever. As ML and linear algebra terms spread into more common language, of course the terms will change their meaning. Just as “literally” now often means “figuratively”, I’m not going to die on this hill. But I will try to resist the degradation of terms that have a specific technical meaning.

So I would very much disagree with the statement that memory safety and sandboxing are orthogonal. They are certainly different. Linearly independent even. But with a fair amount of overlap.
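The distinction can be checked numerically: orthogonality requires a dot product of exactly zero, while linear independence only rules out one vector being a scalar multiple of the other, so some overlap can remain.

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

u, v = (1, 0), (0, 1)   # orthogonal: dot is 0, no overlap at all
w, x = (1, 0), (1, 1)   # linearly independent but not orthogonal: dot is 1
```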

oofbey commented on The highest quality codebase   gricha.dev/blog/the-highe... · Posted by u/Gricha
maddmann · 6 days ago
Ah true, that also can happen — in aggregate I think models will tend to expand codebases versus contract. Though, this is anecdotal and probably is something ai labs and coding agent companies are looking at now.
oofbey · 6 days ago
It’s the same bias for action which makes them code up a change when you genuinely are just asking a question about something. They really want to write code.
oofbey commented on An Orbital House of Cards: Frequent Megaconstellation Close Conjunctions   arxiv.org/abs/2512.09643... · Posted by u/rapnie
emtel · 6 days ago
There actually is one idea for cleaning up debris in high orbit: you launch tons of very fine powder into the orbits you wish to clear. These orbiting particles create drag on anything up there, so its orbit degrades much faster. But because the particles themselves are so tiny, they have a very low ballistic coefficient and will deorbit quickly.

More: https://caseyhandmer.wordpress.com/2019/10/25/space-debris-p...
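A back-of-envelope check of the ballistic-coefficient claim (my numbers, not from the linked post): BC = m / (Cd * A), and for a solid sphere of density rho and radius r, m = rho * (4/3) * pi * r^3 while A = pi * r^2, so BC grows linearly with r. That is why fine powder feels drag so strongly and deorbits fast, while a satellite-sized body of the same density does not.

```python
import math

def sphere_bc(radius_m, density=2500.0, drag_coeff=2.2):
    """Ballistic coefficient (kg/m^2) of a solid sphere."""
    mass = density * (4.0 / 3.0) * math.pi * radius_m**3
    area = math.pi * radius_m**2
    return mass / (drag_coeff * area)

grain_bc = sphere_bc(10e-6)  # 10-micron dust grain
body_bc = sphere_bc(0.5)     # half-meter solid body, same density
ratio = body_bc / grain_bc   # BC scales with radius: 0.5 / 1e-5 = 50,000x
```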

oofbey · 6 days ago
That’s a solid idea. Never heard that before. And it really seems like it would solve an otherwise extremely difficult problem.

It would not discriminate though. Everything in that orbit would be taken down - debris and any functional satellites.

oofbey commented on The highest quality codebase   gricha.dev/blog/the-highe... · Posted by u/Gricha
maddmann · 6 days ago
lol 5000 tests. Agentic code tools have a significant bias to add versus remove/condense. This leads to a lot of bloat and orphaned code. Definitely something that still needs to be solved for by agentic tools.
oofbey · 6 days ago
Oh I’ve had agents remove tests plenty of times. Or cripple the tests so they pass but are useless - more common and harder to prompt against.
oofbey commented on Amazon EC2 M9g Instances   aws.amazon.com/ec2/instan... · Posted by u/AlexClickHouse
diath · 7 days ago
No benchmarks. No FLOPs. No comparison to commodity hardware. I hate the cloud servers. "9 is faster than 8 which is faster than 7 which is faster than 6, ..., which is faster than 1, which has unknown performance".
oofbey · 7 days ago
As soon as they're publicly usable, people benchmark them carefully. All currently available instance types have clear metrics.
oofbey commented on Perl's decline was cultural   beatworm.co.uk/blog/compu... · Posted by u/todsacerdoti
wlonkly · 11 days ago
Larry was (and presumably is, but I'm out of that loop) a gem. The Weird Al of programming languages. Hilarious and kind.

But those who remember the regulars of, say, efnet #perl (THIS ISN'T A HELP CHANNEL), there was a dearth of kindness for sure. I was probably part of it too, because that was the culture! This is where the wizards live, why are you here asking us questions?

Like cms, I'm also hesitant to name names, but the folks I'm thinking of were definitely perl-famous in their day.

There were also a bunch of great people in the community, and they helped me launch my career in tech in the 90s, and I have close internet friends from that community to this day (and great memories of some who have passed on). But there were definitely also jerks.

oofbey · 10 days ago
Maybe in retrospect, instead of “this isn’t a help channel,” it should have said “go to #perl-help for questions,” or made #perl an open forum and moved the wizard discussion to #perl-experts?
oofbey commented on IBM CEO says there is 'no way' spending on AI data centers will pay off   businessinsider.com/ibm-c... · Posted by u/nabla9
zeckalpha · 15 days ago
Reminds me of all the dark fiber laid in the 1990s before DWDM made much of the laid fiber redundant.

If there is an AI bust, we will have a glut of surplus hardware.

oofbey · 15 days ago
For the analogy to fiber & DWDM to hold, we'd need some algorithmic breakthrough that makes current GPUs much faster / more efficient at running AI models. Something that makes the existing investment in hardware unneeded, even though the projected demand is real and continues to grow. IMNSHO that's not going to happen here. The foreseeable efficiency innovations are generally around reduced precision, which almost always require newer hardware to take advantage of. Impossible to rule out brilliant innovation, but I doubt it will happen like that.

And of course we might see an economic bubble burst for other reasons. That's possible again even if the demand continues to go up.

u/oofbey

Karma: 601 · Cake day: December 15, 2020