Readit News
vasilvv commented on Banned C++ features in Chromium   chromium.googlesource.com... · Posted by u/szmarczak
nomel · 2 months ago
Yeap. Forgetting to propagate or handle an error returned as a value is very, very easy. If you fail to handle an exception, you halt.
vasilvv · 2 months ago
For what it's worth, C++17 added [[nodiscard]] to address this issue.
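As a sketch of how this works (the `Status`/`WriteRecord` names below are invented for illustration, not from any real API), marking a status-returning function `[[nodiscard]]` makes the compiler warn whenever a caller silently drops the result:

```cpp
#include <cstdio>

// Hypothetical status-returning API. With [[nodiscard]], compilers emit a
// warning (e.g. GCC/Clang -Wunused-result) if the return value is ignored.
enum class Status { kOk, kIoError };

[[nodiscard]] Status WriteRecord(const char* data) {
  return data != nullptr ? Status::kOk : Status::kIoError;
}

void Caller() {
  // WriteRecord("x");        // warning: ignoring return value of function
  //                          // declared with 'nodiscard' attribute
  if (WriteRecord("x") != Status::kOk) {
    std::fputs("write failed\n", stderr);
  }
}
```

C++20 additionally allows a message, e.g. `[[nodiscard("check the status")]]`, and the attribute can also be placed on the enum or class type itself so every function returning it is covered.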
vasilvv commented on A P2P Vision for QUIC (2024)   seemann.io/posts/2024-10-... · Posted by u/mooreds
embedding-shape · 4 months ago
Someone correct me if I'm wrong, but I think p2p-webtransport was superseded by "webtransport" (https://github.com/w3c/webtransport). Supposedly, the WebTransport design should be flexible enough to support p2p, even though the focus is the traditional server<>client case.
vasilvv · 4 months ago
The story here is a bit complicated. WebTransport is, in some sense, an evolution of RTCQuicTransport API, which was originally meant to solve the issues people had with SCTP/DTLS stack used by RTCDataChannel. At some point, the focus switched to client-server use cases, with an agreement that we can come back to the P2P scenario after we solve the client-server one.
vasilvv commented on A safe, non-owning C++ pointer class   techblog.rosemanlabs.com/... · Posted by u/niekb
vasilvv · 5 months ago
This sounds very similar to how base::WeakPtr works in Chromium [0]. It's a reasonable design, but it only works as long as the pointer is only accessed from the same thread it was created on.

[0] https://chromium.googlesource.com/chromium/src/+/HEAD/base/m...
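A minimal single-threaded sketch of the same idea (not Chromium's actual implementation; all names here are invented): the owner's factory holds a shared validity flag, and each weak reference checks that flag before handing out the raw pointer.

```cpp
#include <memory>

// WeakRef observes an object without owning it; get() returns nullptr once
// the owner's WeakFactory has been destroyed. Single-threaded only: nothing
// here synchronizes the flag check against destruction on another thread.
template <typename T>
class WeakRef {
 public:
  WeakRef(T* ptr, std::shared_ptr<const bool> alive)
      : ptr_(ptr), alive_(std::move(alive)) {}

  T* get() const { return (alive_ && *alive_) ? ptr_ : nullptr; }

 private:
  T* ptr_;
  std::shared_ptr<const bool> alive_;
};

// Owned by the object being pointed at; its destructor invalidates every
// outstanding WeakRef by flipping the shared flag.
template <typename T>
class WeakFactory {
 public:
  explicit WeakFactory(T* owner)
      : owner_(owner), alive_(std::make_shared<bool>(true)) {}
  ~WeakFactory() { *alive_ = false; }

  WeakRef<T> GetWeakRef() { return WeakRef<T>(owner_, alive_); }

 private:
  T* owner_;
  std::shared_ptr<bool> alive_;
};
```

The thread-affinity caveat is visible in the sketch: between `get()` returning non-null and the caller using the pointer, another thread could destroy the owner, so the check is only sound when creation, dereference, and destruction all happen on one thread.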

vasilvv commented on There isn't much point to HTTP/2 past the load balancer   byroot.github.io/ruby/per... · Posted by u/ciconia
vasilvv · a year ago
The article seems to make an assumption that the application backend is in the same datacenter as the load balancer, which is not necessarily true: people often put their load balancers at the network edge (which helps reduce latency when the response is cached), or just outsource those to a CDN vendor.

> In addition to the low roundtrip time, the connections between your load balancer and application server likely have a very long lifetime, hence don’t suffer from TCP slow start as much, and that’s assuming your operating system hasn’t been tuned to disable slow start entirely, which is very common on servers.

A single HTTP/1.1 connection can only process one request at a time (unless you attempt HTTP pipelining), so if you have N persistent TCP connections to the backend, you can only handle N concurrent requests. Since all of those connections are long-lived and are sending at the same time, if you make N very large, you will eventually run into TCP congestion control convergence issues.

Also, I don't understand why the author believes HTTP/2 is less debuggable than HTTP/1; curl and Wireshark work equally well with both.

vasilvv commented on JSON parsers that can accept comments   douglascrockfordisnotyour... · Posted by u/todsacerdoti
vasilvv · a year ago
Isn't this the problem that JSON5 (and probably other similar projects) is supposed to solve?

Both JSON (as defined in the RFC) and JSON5 have a nice property of being well-defined, meaning that you can use different libraries in different languages on different platforms to parse them, and expect the same result. "JSON but parser behaves reasonably (as defined by the speaker)" does not have this property.

vasilvv commented on Stop Requiring CRLF Line Endings   fossil-scm.org/home/ext/s... · Posted by u/smartmic
vasilvv · a year ago
> HTTP → RFC-2616 says in section 19.3 says "we recommend that applications ... recognize a single LF as a line terminator...." In other words it is perfectly OK for an HTTP client or server to accept CR-less HTTP requests or replies. It is not a violation of the HTTP standard to do so. Therefore they should.

The most up-to-date version of HTTP/1.1 spec is RFC 9112, which says:

> Although the line terminator for the start-line and fields is the sequence CRLF, a recipient MAY recognize a single LF as a line terminator and ignore any preceding CR.

"MAY", of course, is different from "MUST" or "SHOULD", so I feel like the author's claim that implementations rejecting bare NLs are broken is at odds with the specification.
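For illustration, the leniency that RFC 9112 permits (but does not require) can be as small as ignoring an optional CR when stripping the line terminator; this helper is a hypothetical sketch, not code from any real server:

```cpp
#include <string>

// Lenient line-terminator handling per RFC 9112's MAY clause: accept a bare
// LF as the terminator and ignore any single CR immediately preceding it.
// A strict implementation would instead reject lines not ending in CRLF.
std::string StripLineTerminator(const std::string& line) {
  std::string out = line;
  if (!out.empty() && out.back() == '\n') out.pop_back();
  if (!out.empty() && out.back() == '\r') out.pop_back();
  return out;
}
```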

vasilvv commented on What's Next for WebGPU   developer.chrome.com/blog... · Posted by u/mariuz
worik · a year ago
They say:

"This is the next step in the standardization process, and it comes with stronger guarantees of stability and intellectual property protection."

I understand stability, and in the general sense I see that people feel they need to protect their IP, but in this specific case what is meant by "intellectual property protection"?

vasilvv · a year ago
W3C generally requires Working Group participants to provide IPR licensing commitments for the spec in question [0]. As far as I understand, a higher level of specification maturity implies a stronger level of obligations, though the specifics of what changes when were never clear to me.

[0] https://www.w3.org/policies/patent-policy/#sec-Requirements

vasilvv commented on QUIC is not quick enough over fast internet   arxiv.org/abs/2310.09423... · Posted by u/carlos-menezes
kachapopopow · a year ago
This is actually very well known: the current QUIC implementations in browsers are *not stable* and are built on either rustls or in another similarly hacky way.
vasilvv · a year ago
I'm not sure where rustls comes from -- Chrome uses BoringSSL, and last time I checked, Mozilla implementation used NSS.
vasilvv commented on Nyxpsi – A Next-Gen Network Protocol for Extreme Packet Loss   github.com/nyxpsi/nyxpsi... · Posted by u/nyxpsi
ggm · a year ago
What are the conditions leading to extreme packet loss in layers 2&3 in the first place?

I can imagine noisy RF, industrial settings, congested links, queueing at the extremes in densely loaded switches, but the thing is: usually there are strategies out there to reduce the congestion. External noise, factory/industrial/adversarial, sure. This is going to exist.

vasilvv · a year ago
Generally, L2 networks are engineered with the assumption that they will carry TCP, and TCP performs really poorly with high loss rates (depends on the specific congestion control used, but the boundary can be anywhere between 1% and 25%), so they try to make sure on L2 level that losses are minimal. There are some scenarios in which a network can be engineered around high loss rates (e.g. some data center networks), but those don't use TCP, at least with traditional loss recovery.
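To get a rough sense of those numbers, the classic Mathis et al. approximation for Reno-style loss-based congestion control, throughput ≈ (MSS/RTT) · 1.22/√p, shows how quickly throughput collapses as the loss rate p grows. This is strictly a back-of-the-envelope model (it does not describe BBR-style congestion control), and the parameter values below are arbitrary examples:

```cpp
#include <cmath>

// Mathis et al. steady-state model for Reno-style TCP:
//   throughput (bits/s) ~= (MSS / RTT) * 1.22 / sqrt(p)
// where p is the packet loss rate. Throughput falls with 1/sqrt(p), so
// going from 0.01% loss to 1% loss costs a factor of 10 in throughput.
double MathisThroughputBps(double mss_bytes, double rtt_s, double loss_rate) {
  return (mss_bytes * 8.0 / rtt_s) * 1.22 / std::sqrt(loss_rate);
}

// Example: MSS = 1460 bytes, RTT = 50 ms.
//   p = 0.0001 -> roughly tens of Mbps
//   p = 0.25   -> well under 1 Mbps
```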

Error correction codes at the L4 level are generally only useful in very low latency situations, since if you can wait one RTT, you can just have the original sender retransmit the exact packets that got lost, which is inherently more efficient than any ECC.
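As a toy illustration of that trade-off (sketch only; all names invented), a single-parity FEC group spends one extra packet per group so that any one lost packet can be rebuilt immediately, without waiting an RTT for a retransmission. The parity cost is paid whether or not anything is actually lost, which is why retransmission wins whenever the latency budget allows it:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Toy single-parity FEC: the parity packet is the XOR of all data packets
// in the group, so any ONE lost packet can be reconstructed from the
// survivors plus the parity, with zero extra round trips.
constexpr std::size_t kPacketSize = 4;
using Packet = std::array<uint8_t, kPacketSize>;

Packet MakeParity(const Packet* pkts, std::size_t n) {
  Packet parity{};
  for (std::size_t i = 0; i < n; ++i)
    for (std::size_t b = 0; b < kPacketSize; ++b) parity[b] ^= pkts[i][b];
  return parity;
}

// Rebuild packet `lost` by XOR-ing the parity with every surviving packet.
Packet Recover(const Packet* pkts, std::size_t n, std::size_t lost,
               const Packet& parity) {
  Packet out = parity;
  for (std::size_t i = 0; i < n; ++i) {
    if (i == lost) continue;
    for (std::size_t b = 0; b < kPacketSize; ++b) out[b] ^= pkts[i][b];
  }
  return out;
}
```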

vasilvv commented on HTTP/2 and HTTP/3 explained   alexandrehtrb.github.io/p... · Posted by u/thunderbong
golebiewsky · 2 years ago
I was actually curious why SACKs don't resolve the issue, but according to https://stackoverflow.com/questions/67773211/why-do-tcp-sele...:

> Even with selective ACK it is still necessary to get the missing data before forwarding the data stream to the application.
vasilvv · 2 years ago
It's possible to build something similar on top of TCP, see Minion [0] for an example. There are multiple reasons why this is less practical than building on top of UDP, the main two being, from my perspective: (1) it requires cooperation from the OS (either in the form of an advanced API, or a privilege level high enough to write TCP manually), and (2) it falls apart in the presence of TCP middleboxes.

[0] https://dedis.cs.yale.edu/2009/tng/papers/nsdi12-abs/

u/vasilvv

Karma: 119 · Cake day: January 4, 2020
About
https://github.com/vasilvv

Work on QUIC protocol and other networking things at Google. Long time ago I was quite active in various open source projects, notably MediaWiki.

All opinions are my own, and do not reflect the official position of my employer or any standards organization I'm involved in.
