exDM69 commented on Iran's internet blackout may become permanent, with access for elites only   restofworld.org/2026/iran... · Posted by u/siev
edg5000 · 15 days ago
My wild guess is that jamming is local. Major cities may be fully jammed. To get an idea about GNSS jamming range (different signal of course, probably much easier to jam), there are maps online where you can see which parts of Europe are currently GNSS-jammed. But I have the same question as you.
exDM69 · 15 days ago
The GPS jamming maps are based on commercial air traffic flying in the area.

While that gives some idea of how widespread the jamming is, it won't give accurate information about the range of the interference (air traffic avoids areas with jamming), or any information from places where there is no commercial air traffic (war zones, etc.).

exDM69 commented on Iran's internet blackout may become permanent, with access for elites only   restofworld.org/2026/iran... · Posted by u/siev
nroets · 15 days ago
Here's a crazy idea: instead of the US spending all this money on restraining the Iranian government through military build-ups and sanctions, drop hundreds of thousands of Starlink kits by drone.

Firstly, the protesters will be able to communicate in private.

And secondly, Iranians will continue to be reminded of the freedoms most other Muslims enjoy, such as free speech and free trade.

One of the reasons the Berlin Wall fell was that Eastern Europeans saw on TV how prosperous Western Europe had become.

exDM69 · 15 days ago
Starlink was also blocked by radio frequency interference.

Granted, that can't possibly cover the entire area of the country.

exDM69 commented on An Experimental Approach to Printf in HLSL   abolishcrlf.org//2025/12/... · Posted by u/ibobev
pjmlp · a month ago
Maybe because Pix is quite good?
exDM69 · a month ago
Using printf and using a debugger are complementary approaches.

Finding which pixel to debug, or just dumping some info from the pixel under the mouse cursor (for example), is better done with a simple printf. Then you can pick up the offending pixel/vertex/mesh/compute in the debugger if you still need it.

You get both a debugger and printf-related tooling in RenderDoc, and the combination is better than either alone.

I've been writing a lot of GPU code over the past few years (and the few decades before it) and shader printf has been a huge productivity booster.

exDM69 commented on An Experimental Approach to Printf in HLSL   abolishcrlf.org//2025/12/... · Posted by u/ibobev
exDM69 · a month ago
Using printf in shaders is awesome, it makes a huge difference when writing and debugging shaders. Vulkan and GLSL (and Slang) have a usable printf out of the box, but HLSL and D3D do not.

AFAIK the way it works in Vulkan is that all the string formatting is actually done on the CPU. The GPU only writes the argument data to a buffer, in a layout derived from the format string.

All the shader prints are captured by tools such as RenderDoc, so you can easily find the vertex or pixel that printed something and then replay the shader execution in a debugger.

I only wish we'd had this 20 years ago; it would have saved me so much time, effort and frustration.

exDM69 commented on Lessons from Hash Table Merging   gist.github.com/attractiv... · Posted by u/attractivechaos
exDM69 · a month ago
Maybe this would be a suitable application for "Fibonacci hashing" [0][1], which is a trick to assign a hash table bucket from a hash value. Instead of just taking the modulo with the hash table size, it first multiplies the hash by the constant 2^64/phi, where phi is the golden ratio, and then takes the top bits of the product as the bucket index.

There may be better constants than 2^64/phi, perhaps some large prime number with roughly equal number of one and zero bits could also work.

This will prevent bucket collisions on hash table resizing that may lead to "accidentally quadratic" behavior [2], while not requiring rehashing with a different salt.

I didn't do detailed analysis on whether it helps on hash table merging too, but I think it would.

[0] https://probablydance.com/2018/06/16/fibonacci-hashing-the-o...
[1] https://news.ycombinator.com/item?id=43677122
[2] https://accidentallyquadratic.tumblr.com/post/153545455987/r...

exDM69 commented on Vector graphics on GPU   gasiulis.name/vector-grap... · Posted by u/gsf_emergency_6
jayd16 · a month ago
So without blowing up the traditional shader pipeline, why is it not trivial to add a path stage as an alternative to the vertex stage? It seems like GPUs and shader languages could implement a standard way to turn vector paths into fragments and keep the rest of the pipeline.

In fact, you could likely use the geometry stage to create arbitrarily dense vertices based on path data passed to the shader without needing any new GPU features.

Why is this not done? Is the CPU render still faster than these options?

exDM69 · a month ago
> why is it not trivial to add a path stage as an alternative to the vertex stage?

Because paths, unlike triangles, are not fixed-size and don't have screen-space locality. A path consists of multiple contours of segments, typically cubic bezier curves, plus a winding rule.

You can't draw one segment of a contour on the screen and then move on to the next one, let alone process them in parallel. A vertical line segment on the left-hand side of your screen, going bottom to top, makes every pixel to its right "inside" the path; but if there's another segment going top to bottom between that segment and a pixel, that pixel is outside again.

You need to evaluate the winding rule for every curve segment on every pixel and sum it up.

By contrast, all the pixels inside the triangle are also inside the bounding box of the triangle and the inside/outside test for a pixel is trivially simple.

There are at least four popular approaches to GPU vector graphics:

1) Loop-Blinn: Use the CPU to tessellate the path into triangles covering the inside and the edges of the path. Use a special shader with some tricks to evaluate a bezier curve for the triangles on the edges.

2) Stencil then cover: For each line segment in a tessellated curve, draw a rectangle that extends to the left edge of the contour and use a two-sided stencil function to add +1 or -1 to the stencil buffer. Draw another rectangle on top of the whole path and set the stencil test to draw only where the stencil buffer is non-zero (or odd, for the even-odd winding rule).

3) Draw a rectangle with a special shader that evaluates all the curves in a path, and use a spatial data structure to skip some. Useful for fonts and quadratic bezier curves, not full vector graphics. Much faster than the other methods for simple and small (pixel size) filled paths. Example: Lengyel's method / Slug library.

4) Compute based methods such as the one in this article or Raph Levien's work: use a grid based system with tessellated line segments to limit the number of curves that have to be evaluated per pixel.

Now this is only filling paths, which is the easy part. Stroking paths is much more difficult. Full SVG support has both and much more.

> In fact, you could likely use the geometry stage to create arbitrarily dense vertices based on path data passed to the shader without needing any new GPU features.

Geometry shaders are commonly used with stencil-then-cover to avoid a CPU preprocessing step.

But none of the GPU geometry stages (geometry, tessellation or mesh shaders) are powerful enough to deal with all the corner cases of tessellating vector graphics paths: self-intersections, cusps, holes, degenerate curves, etc. It's not a very parallel-friendly problem.

> Why is this not done?

As I've described here: all of these ideas have been done with varying degrees of success.

> Is the CPU render still faster than these options?

No, the fastest methods are a combination of CPU preprocessing for the difficult geometry problems and GPU for blasting out the pixels.

exDM69 commented on Vector graphics on GPU   gasiulis.name/vector-grap... · Posted by u/gsf_emergency_6
badlibrarian · a month ago
Author uses a lot of odd, confusing terminology and brings CPU baggage to the GPU creating the worst of both worlds. Shader hacks and CPU-bound partitioning and choosing the Greek letter alpha to be your accumulator in a graphics article? Oh my.

NV_path_rendering solved this in 2011. https://developer.nvidia.com/nv-path-rendering

It never became a standard but was a compile-time option in Skia for a long time. Skia of course solved this the right way.

https://skia.org/

exDM69 · a month ago
> NV_path_rendering solved this in 2011.

By no means is this a solved problem.

NV_path_rendering is an implementation of "stencil then cover" method with a lot of CPU preprocessing.

It's also only available on OpenGL, not on any other graphics API.

The STC method scales very badly with increasing resolutions as it is using a lot of fill rate and memory bandwidth.

It's mostly using GPU fixed function units (rasterizer and stencil test), leaving the "shader cores" practically idle.

There's a lot of room for improvement to get more performance and better GPU utilization.

exDM69 commented on Rust's Block Pattern   notgull.net/block-pattern... · Posted by u/zdw
andrepd · 2 months ago
It's a differentiator wrt C and C++, is what I said.
exDM69 · 2 months ago
Yes, sadly this isn't a part of standard C or C++.

It is available as a language extension in Clang and GCC and widely used (e.g. by the Linux kernel).

Unfortunately it is not supported by the third major compiler out there so many projects can't or don't want to use it.

exDM69 commented on No Graphics API   sebastianaaltonen.com/blo... · Posted by u/ryandrake
vblanco · 2 months ago
This is a fantastic article that demonstrates how many parts of Vulkan and DX12 are no longer needed.

I hope the IHVs have a look at it, because current DX12 seems semi-abandoned: it doesn't support buffer pointers even though every GPU made in the last 10 (or more!) years can do pointers just fine. Meanwhile, Vulkan doesn't do a 2.0 release that cleans things up, so it carries a lot of baggage and, especially, tons of drivers that don't implement the extensions that really improve things.

If this API existed, you could emulate OpenGL on top of it faster than current OpenGL-to-Vulkan layers, and something like SDL3 GPU would get a 3x/4x boost too.

exDM69 · 2 months ago
> tons of drivers that don't implement the extensions that really improve things.

This isn't really the case, at least on desktop side.

All three desktop GPU vendors support Vulkan 1.4 (or most of the features via extensions) on all major platforms even on really old hardware (e.g. Intel Skylake is 10+ years old and has all the latest Vulkan features). Even Apple + MoltenVK is pretty good.

Even mobile GPU vendors have pretty good support in their latest drivers.

The biggest issue is that Android consumer devices don't get GPU driver updates so they're not available to the general public.

exDM69 commented on Young journalists expose Russian-linked vessels off the Dutch and German coast   digitaldigging.org/p/they... · Posted by u/harshreality
zelphirkalt · 2 months ago
I guess the only solution then is to already have people in places where it counts. I would suspect military bases have more than enough people. But then again, the drones can just fly too high, at which point it becomes a cost/benefit tradeoff, or a job for futuristic laser weapons.
exDM69 · 2 months ago
Anything involving people on the ground is just too slow.

It takes radars, interceptor drones, sensor networks, etc. Stuff like this is in active development but not widely deployed yet.
