Readit News
Veserv commented on I couldn't find a logging library that worked for my library, so I made one   hackers.pub/@hongminhee/2... · Posted by u/todsacerdoti
mashepp · 5 days ago
So you don’t use any software that has had a security vulnerability?

What operating system and browser did you use to write your post?

Veserv · a day ago
Unary thinking has no place when considering software quality or security. Just because things have vulnerabilities does not mean that their category, severity, and frequency are irrelevant considerations.

The Log4j vulnerability was effectively calling eval() on user input strings. That is incompetence in the extreme, with immediately obvious, catastrophic consequences to anybody with any knowledge whatsoever of software security. That should be immediately disqualifying, like a construction company delivering a house without a roof. "Oh yeah, anybody could forget to put a roof on a house. We cannot hold them responsible for every little mistake." is nonsense. Basic, egregious errors are disqualifying.
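To make the eval() comparison concrete, here is a minimal sketch, in Python purely as an analogy (Log4Shell itself was a JNDI lookup triggered by ${...} directives in Java log messages, not a literal eval; naive_log is a made-up function): a logger that expands directives found anywhere in the message, including in the part that came straight from the user, turns "log this string" into "run this code".

```python
import re

def naive_log(message: str) -> None:
    # DANGEROUS sketch: expand ${...} directives found anywhere in the message,
    # including inside substrings that came straight from the user.
    def expand(match):
        directive = match.group(1)
        # Stand-in for Log4j's lookup mechanism (jndi:, env:, ...): evaluating
        # the directive turns "log this string" into "run this code".
        return str(eval(directive))
    print(re.sub(r"\$\{([^}]*)\}", expand, message))

# Attacker-controlled text; a safe design would treat it strictly as data.
user_input = "${__import__('os').getcwd()}"
naive_log("login failed for user: " + user_input)   # the directive executes
```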

Now, it could be the case that everything is horribly defective and inadequate and everybody is grossly incompetent. That does not somehow magically make inadequacy adequate and appropriate for use. It is just that in software people get away with using systems unfit for purpose because they had "no choice but to use substandard components and harm their users so they could make money".

Veserv commented on Tesla reports another Robotaxi crash   electrek.co/2025/12/15/te... · Posted by u/hjouneau
93po · a day ago
This is the state of accepted journalism now? Fabricate ridiculous claims and then make the target of your hit piece responsible for refuting it?
Veserv · a day ago
Wait, are you talking about Tesla? They are the ones who fabricated ridiculous claims, like the claim that old versions of FSD, using old hardware, on arbitrary roads, with untrained customers as safety drivers, average ~5 million miles per collision and are thus ~2-7x safer than human drivers. Given that they present no credible, auditable evidence for that claim, by your logic it should be unnecessary for anybody to refute it, and their systems cannot be demonstrated to be safe despite billions of miles.

Despite that, the article and the public (the actual targets of the hit piece, which encourages people to endanger themselves with a system that has not been demonstrated to be safe, with the direct intent of enriching the owners of Tesla) directly refute Tesla's ridiculous claims, demonstrating that they are off by multiple orders of magnitude, using the basic mandatory data reporting for the Robotaxi program. That program uses systems that are more advanced, fine-tuned, and geofenced, with professional safety drivers (so we can only reasonably assume that the normal system is worse), but it actually has scrutinized reporting requirements.

And yet now you argue that the entity fabricating ridiculous claims for its own enrichment, Tesla, is not only not responsible, but that the targets of the hit piece, the ones who clearly debunked Tesla's claims as deceptive, are not only responsible for refuting them but are responsible for demonstrating a level of rigor that is unimpeachable, when the original fabricated claim lacks even the elements of rigor we expect out of your average middle school science fair, let alone a literal trillion-dollar company.

Talk about double standards.

Veserv commented on Tesla reports another Robotaxi crash   electrek.co/2025/12/15/te... · Posted by u/hjouneau
Veserv · 2 days ago
The most damning thing is that the most advanced version, with the most modern hardware, with perfectly maintained vehicles, running in a pre-trained geofence that is pre-selected to work well [1], with trained, professional safety drivers, with scrutinized data and reporting, averages an upper bound of 40,000 miles per collision (assuming the mileage numbers were not puffery [3]).

Yet they claim that old versions, using old hardware, on arbitrary roads, using untrained customers as safety drivers, somehow average 2.9 million miles per collision in non-highway environments [2], a ~72.5x difference in collision frequency, and 5.1 million miles per collision in all environments, a ~127.5x(!) difference in collision frequency, when their reporting and data are not scrutinized.

I guess their most advanced software and hardware and professional safety drivers just make it ~127x more dangerous.
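For reference, the arithmetic behind those ratios, using only the figures cited above (40,000 miles per collision as the Robotaxi upper bound, 2.9M and 5.1M miles per collision as Tesla's claims):

```python
# Ratio check using the figures cited in this comment.
robotaxi_upper_bound = 40_000    # miles per collision, Robotaxi reporting (upper bound)
fsd_non_highway = 2_900_000      # miles per collision, Tesla's claim, non-highway [2]
fsd_all = 5_100_000              # miles per collision, Tesla's claim, all environments [2]

print(fsd_non_highway / robotaxi_upper_bound)  # 72.5  -> ~72.5x difference
print(fsd_all / robotaxi_upper_bound)          # 127.5 -> ~127.5x difference
```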

[1] https://techcrunch.com/2025/05/20/musk-says-teslas-self-driv...

[2] https://www.tesla.com/fsd/safety

[3] https://www.forbes.com/sites/alanohnsman/2025/08/20/elon-mus...

[3.a] Tesla's own attorneys have argued that statements by Tesla executives are such nonsense that no reasonable person would believe them.

Veserv commented on High Performance SSH/SCP   psc.edu/hpn-ssh-home/... · Posted by u/gslin
adolph · 2 days ago
> The fastest protocols are the TLS/HTTP based ones which stream data.

I think maybe you are referring to QUIC [0]? It would be interesting to see some userspace clients/servers for QUIC that compete with Aspera's FASP [1] and operate on a point-to-point basis like scp. Both use UDP to avoid the overhead of TCP.

0. https://en.wikipedia.org/wiki/QUIC

1. https://en.wikipedia.org/wiki/Fast_and_Secure_Protocol

Veserv · 2 days ago
Available QUIC implementations are very slow. MsQUIC is one of the fastest and can only reach a meager ~7 Gb/s [1]. Most commercial implementations sit in the 2-4 Gb/s range.

To be fair, that is not really a problem of the protocol, just the implementations. You can comfortably drive 10x that bandwidth with a reasonable design.

[1] https://microsoft.github.io/msquic/

Veserv commented on High Performance SSH/SCP   psc.edu/hpn-ssh-home/... · Posted by u/gslin
riobard · 2 days ago
"(sftp) packetizes the data and waits for responses, effectively re-implementing the TCP window inside a TCP stream."

Why is it designed this way? What problems is it supposed to solve?

Veserv · 2 days ago
Because that is a poor characterization of the problem.

It just has an in-flight message/queue limit, like basically every other communication protocol. You can only buffer so many messages, and reserve so much space for responses, before you run out of space. The problem is just that the default amount of buffering is very low and is not adaptive to the available space/bandwidth.
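As a rough illustration (a Python/asyncio sketch with made-up names, not SFTP's actual wire protocol): with a cap on outstanding requests, throughput is limited to roughly (in-flight limit x request size) / round-trip time no matter how fast the link is, which is why a small, non-adaptive default hurts on high-latency paths.

```python
import asyncio

# Hypothetical names: fetch_chunk stands in for one request/response round
# trip; download shows the in-flight cap that SFTP-style protocols have.
async def fetch_chunk(offset: int, size: int) -> bytes:
    await asyncio.sleep(0.05)          # pretend this is one network round trip
    return b"\0" * size

async def download(total: int, chunk: int = 32_768, max_in_flight: int = 16) -> bytes:
    # At most max_in_flight requests are outstanding at any time, so throughput
    # is capped near (max_in_flight * chunk) / RTT regardless of link speed.
    sem = asyncio.Semaphore(max_in_flight)

    async def bounded(offset: int) -> bytes:
        async with sem:
            return await fetch_chunk(offset, min(chunk, total - offset))

    parts = await asyncio.gather(*(bounded(o) for o in range(0, total, chunk)))
    return b"".join(parts)

print(len(asyncio.run(download(1_000_000))))   # 1000000
```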

Veserv commented on Capsudo: Rethinking sudo with object capabilities   ariadne.space/2025/12/12/... · Posted by u/fanf2
kccqzy · 6 days ago
I am the owner and only user of the computer. Does that mean I should run everything with root? Of course not. It’s simply better to start with little privileges and then elevate when needed. Using any additional privileges should be an intentional act. I also do it the other way: reduce my privileges via sudo -u nobody.
Veserv · 6 days ago
No, you should run every program with only the privileges it needs. The very concept of running your programs with all of your privileges as a user by default is wrong-headed to begin with. To stretch the "user" model, you would have a distinct "user" for every single program, which has only the resources and privileges needed by or allocated to that program. The actual user can then allocate their resources to these "users" as needed. This is a fairly primitive version of the idea, since it tortures fundamentally incompatible, insecure building blocks to fit, but it points in the direction of the correct idea.
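A crude approximation of that on a stock Unix system, as a sketch only (the program path and the "app-backup" account are hypothetical; the account would have to be created and given exactly the files that one program needs):

```python
import subprocess

# Run one specific program as its own dedicated, minimally privileged account.
# The launching user delegates exactly the resources that account owns; the
# program never sees the launching user's full set of files and privileges.
# Requires Python 3.9+ and sufficient privilege to switch users.
subprocess.run(
    ["/usr/bin/some-backup-tool", "--target", "/srv/backups"],
    user="app-backup",      # dedicated account owning only what this tool needs
    group="app-backup",
    check=True,
)
```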
Veserv commented on Async DNS   flak.tedunangst.com/post/... · Posted by u/todsacerdoti
AndyKelley · 6 days ago
There's a second problem here that musl also solves. If the signal is delivered in between checking for cancelation and the syscall machine code instruction, the interrupt is missed. This can cause a deadlock if the syscall was going to wait indefinitely and the application relies on cancelation for interruption.

Musl solves this problem by inspecting the program counter in the interrupt handler and checking if it falls specifically in that range, and if so, modifying registers such that when it returns from the signal, it returns to instructions that cause ECANCELED to be returned.

Blew my mind when I learned this last month.

Veserv · 6 days ago
Introspection windows from an interrupting context are a neat technique. You can use it to implement “atomic transaction” guarantees for the interruptee as long as you control all potential interrupters. You can also implement “non-interruption” sections and bailout logic.
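A toy version of the same shape is expressible even in Python, where a signal handler receives the interrupted frame and can inspect it before deciding whether to act (names here are illustrative; musl does the analogous check on the machine program counter and rewrites registers instead):

```python
import signal

pending_cancel = False

def critical_section() -> None:
    # Work that must not be torn by the interrupter.
    pass

def handler(signum, frame):
    global pending_cancel
    # Introspect the interrupted context: walk the interrupted call stack and
    # only raise immediately if we are NOT inside critical_section.
    f = frame
    while f is not None:
        if f.f_code is critical_section.__code__:
            pending_cancel = True   # defer: bail out at the next safe point
            return
        f = f.f_back
    raise KeyboardInterrupt         # safe to interrupt right here

# Main-loop code would check pending_cancel at its safe points and exit cleanly.
signal.signal(signal.SIGINT, handler)
```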
Veserv commented on Show HN: Wirebrowser – A JavaScript debugger with breakpoint-driven heap search   github.com/fcavallarin/wi... · Posted by u/fcavallarin
Veserv · 8 days ago
BDHS seems strictly less powerful than a time travel debugger. You can just set a hardware breakpoint and run backwards until the value is set.

Why not just do proper time travel? Is that absent for JavaScript?

Veserv commented on Latency Profiling in Python: From Code Bottlenecks to Observability   quant.engineering/latency... · Posted by u/rundef
ajb · 10 days ago
Interesting, but "FunctionTrace is opensourced under the Prosperity Public License 3.0 license."

"This license allows you to use and share this software for noncommercial purposes for free and to try this software for commercial purposes for thirty days."

This is not an open source license. "Open Source" is a trademarked term meaning without restrictions of this kind; it is not a generic term meaning "source accessible".

You can also just use perf, but it does require an extra package from the python build (which uv frustratingly doesn't supply)

Veserv · 9 days ago
perf is a sampling profiler, not a function tracing profiler, so that fails the criteria I presented.

I used FunctionTrace as an example and as evidence for my position that tracing Python is low overhead with proper design, to preempt claims like: “You can not make it that low overhead or someone would have done it already, thus proving the negative.” I am not the author or in any way related to it, so you can bring that up with them.

Veserv commented on Latency Profiling in Python: From Code Bottlenecks to Observability   quant.engineering/latency... · Posted by u/rundef
hansvm · 10 days ago
That depends on the code you're profiling. Even good line profilers can add 2-5x overhead on programs not optimized for them, and you're in a bit of a pickle because the programs least optimized for line profiling are those which are already "optimized" (fast results for a given task when written in Python).
Veserv · 10 days ago
It does not; those are just very inefficient tracing profilers. You can literally trace C programs with 10-30% overhead. For Python you should only accept low single-digit overhead on average, with 10% overhead only in degenerate cases with large numbers of tiny functions [1]. Anything more means your tracer is inefficient.

[1] https://functiontrace.com/
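For context on the distinction: a tracing profiler hooks every function call and return instead of periodically sampling the stack. A minimal sketch of that hook in CPython, using the standard sys.setprofile API (a sketch only, not FunctionTrace's implementation):

```python
import sys
import time

timings = {}   # function name -> cumulative seconds
starts = []    # stack of (name, start time)

def profiler(frame, event, arg):
    # Called on every Python-level call and return: tracing, not sampling,
    # so no call is ever missed. The hook being pure Python is exactly why a
    # naive tracer like this one is slow on tiny, frequently called functions.
    if event == "call":
        starts.append((frame.f_code.co_name, time.perf_counter()))
    elif event == "return" and starts:
        name, t0 = starts.pop()
        timings[name] = timings.get(name, 0.0) + (time.perf_counter() - t0)

def work():
    total = 0
    for i in range(10_000):
        total += i * i
    return total

sys.setprofile(profiler)
work()
sys.setprofile(None)
print(timings)   # e.g. {'work': 0.0009...}
```

Doing the same bookkeeping in native code, rather than in a Python-level hook, is how a tracer keeps the per-call cost small enough for single-digit overall overhead.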
