Readit News
mfabbri77 commented on Drawing Text Isn't Simple: Benchmarking Console vs. Graphical Rendering   cv.co.hu/csabi/drawing-te... · Posted by u/PaulHoule
jayd16 · 2 months ago
SDF is awesome but even then it's not a silver bullet. It's a raster technique and some people want vector fonts or subpixel rendering.
mfabbri77 · 2 months ago
I don't know how common it is in fonts, but for generic 2D vector graphics, problems arise from the management of self-intersections, i.e., the pixels where they fall. With an SDF rasterizer, how do you handle the pixel where two Bezier curves intersect in a fish-shaped path? For this reason, more conventional rasterizers with multisampling are often used, or rasterizers that calculate pixel coverage analytically, also finding intersections (sweepline, Bentley-Ottmann).
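In practice, Bézier curves are usually flattened to line segments first, and a sweepline like Bentley-Ottmann then repeatedly applies a segment-segment intersection test to neighboring segments. A minimal sketch of that primitive (types and names are mine, for illustration only):

```cpp
#include <cassert>
#include <cmath>
#include <optional>

struct Pt { double x, y; };

// Parametric intersection of segments a0->a1 and b0->b1.
// Returns the intersection point if the segments cross within
// both parameter ranges; Bentley-Ottmann runs this test only on
// segments that become neighbors on the sweepline.
std::optional<Pt> segIntersect(Pt a0, Pt a1, Pt b0, Pt b1) {
    double dax = a1.x - a0.x, day = a1.y - a0.y;
    double dbx = b1.x - b0.x, dby = b1.y - b0.y;
    double denom = dax * dby - day * dbx;        // cross(da, db)
    if (denom == 0.0) return std::nullopt;       // parallel or collinear
    double t = ((b0.x - a0.x) * dby - (b0.y - a0.y) * dbx) / denom;
    double u = ((b0.x - a0.x) * day - (b0.y - a0.y) * dax) / denom;
    if (t < 0.0 || t > 1.0 || u < 0.0 || u > 1.0) return std::nullopt;
    return Pt{a0.x + t * dax, a0.y + t * day};
}
```

The sweepline machinery (event queue, ordered status structure) sits on top of this; the test itself is the part the pixel-level robustness problems come from.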
mfabbri77 commented on Rasterizer: A GPU-accelerated 2D vector graphics engine in ~4k LOC   github.com/mindbrix/Raste... · Posted by u/mindbrix
mfabbri77 · 4 months ago
I'm always interested in new 2D vector rendering algorithms, so if you make a blog post explaining your approach, with enough detail, I'd be happy to read it!
mfabbri77 commented on GPU Prefix Sums: A nearly complete collection   github.com/b0nes164/GPUPr... · Posted by u/coffeeaddict1
m-schuetz · 4 months ago
That and https://github.com/b0nes164/GPUSorting have been a tremendous help for me, since CUB does not nicely work with the Cuda Driver Api. The author is doing amazing work.
mfabbri77 · 4 months ago
At what order of magnitude in the number of elements to be sorted (I'm thinking of the overhead of the GPU setup cost) is the break-even point reached, compared to a pure CPU sort?
mfabbri77 commented on Ask HN: Need C code for drawing graphic primitives to a framebuffer    · Posted by u/Surac
Surac · 6 months ago
Thanks. These libraries are so full of features that they literally bury the real drawing part. I had hoped for something more basic: no anti-aliasing, etc.
mfabbri77 · 6 months ago
You have to look for "stroking": there are several ways to do it. On the CPU you usually first perform a piecewise linear approximation of the curve, then offset each segment along its normals, add the caps, obtain a polygon, and draw it.
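The flatten-then-offset pipeline described above might be sketched like this (a bare-bones illustration with names of my own; joins, caps, and adaptive flattening are omitted):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Flatten a quadratic Bezier into n line segments (uniform t steps;
// real strokers subdivide adaptively based on flatness).
std::vector<Pt> flattenQuad(Pt p0, Pt p1, Pt p2, int n) {
    std::vector<Pt> pts;
    for (int i = 0; i <= n; ++i) {
        double t = double(i) / n, s = 1.0 - t;
        pts.push_back({s*s*p0.x + 2*s*t*p1.x + t*t*p2.x,
                       s*s*p0.y + 2*s*t*p1.y + t*t*p2.y});
    }
    return pts;
}

// Offset each segment of a polyline by halfWidth along its left
// normal: one side of the stroke outline. A real stroker also offsets
// the other side, joins consecutive segments, and adds the caps.
std::vector<Pt> offsetPolyline(const std::vector<Pt>& pts, double halfWidth) {
    std::vector<Pt> out;
    for (size_t i = 0; i + 1 < pts.size(); ++i) {
        double dx = pts[i+1].x - pts[i].x, dy = pts[i+1].y - pts[i].y;
        double len = std::hypot(dx, dy);
        double nx = -dy / len, ny = dx / len;   // unit left normal
        out.push_back({pts[i].x   + nx*halfWidth, pts[i].y   + ny*halfWidth});
        out.push_back({pts[i+1].x + nx*halfWidth, pts[i+1].y + ny*halfWidth});
    }
    return out;
}
```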
mfabbri77 commented on Ask HN: Need C code for drawing graphic primitives to a framebuffer    · Posted by u/Surac
mfabbri77 · 6 months ago
You need to look at a 2D vector graphics library, e.g. Skia, Cairo, AGG, NanoVG, Blend2D, OpenVG (and many others, in no particular order).
mfabbri77 commented on Faster, easier 2D vector rendering [video]   youtube.com/watch?v=_sv8K... · Posted by u/raphlinus
morio · 7 months ago
You've got a long way to go. Writing a rasterizer from scratch is a huge undertaking.

What's the internal color space? I assume it is linear sRGB. It looks like you are going straight to RGBA FP32, which is good. Think about how you will deal with denormals, as the CPU handles those differently than the GPU. Rendering artifacts galore once you do real-world testing.

And of course Inf and NaN need to be handled everywhere. Just checking for F::ZERO is not enough in many cases; you will need epsilon values. In C++, doing if (value == 0.0f) {} or if (value == 1.0f) {} is considered a code smell.
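A common alternative to the exact comparison called out above is a tolerance test mixing an absolute and a relative epsilon. A hedged sketch (the thresholds here are illustrative, not from any particular codebase):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>

// Tolerance-based comparison instead of if (value == 0.0f).
// The relative term scales with the magnitudes of the operands; the
// absolute term covers values near zero, where a purely relative
// epsilon collapses to nothing.
bool nearlyEqual(float a, float b,
                 float absTol = 1e-6f,
                 float relTol = 4 * std::numeric_limits<float>::epsilon()) {
    float diff = std::fabs(a - b);
    if (diff <= absTol) return true;
    return diff <= relTol * std::max(std::fabs(a), std::fabs(b));
}
```

The right tolerances depend on the units in play (pixel coordinates vs. normalized coverage), which is exactly why a single global epsilon tends to cause trouble.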

Just browsing the source I see Porter-Duff blend modes. Really, in 2025? Have fun dealing with alpha compositing issues on this one. Also, most of the 'regular' blend modes are not alpha-compositing safe: you need special handling of alpha values in many cases if you do not want to get artifacts. The W3C spec is completely underspecified in this regard. I spent many months dealing with this myself.

If I were to redo a rasterizer from scratch I would push the boundaries a little more. For instance, I would target full FP32 dynamic range support and a better internal color space, maybe something like OKLab, to improve color blending and compositing quality. And I would come up with innovative ways to use this gained dynamic range.

mfabbri77 · 7 months ago
You didn't mention one of the biggest sources of 2D vector graphics artifacts: mapping polygon coverage to the alpha channel. That is what virtually all engines do, and it is the main reason why we at Mazatech are writing a new version of our engine, AmanithVG, based on a simple idea: draw all the paths (polygons) at once. Well, the idea is simple, the implementation... not so much ;)
mfabbri77 commented on Exact Polygonal Filtering: Using Green's Theorem and Clipping for Anti-Aliasing   jonathanolson.net/exact-p... · Posted by u/muyyatin2
dahart · a year ago
That isn’t true. Again, please look more closely at the first example in the article, and take the time to understand it. It demonstrates there’s a better method than what you’re suggesting, proving that clipping to pixels and summing the area is not the best visual quality you can get.
mfabbri77 · a year ago
As pointed out by raphlinus, the moiré pattern in the Siemens star isn't such a significant quality indicator for the type of content usually encountered in 2D vector graphics. With the analytical coverage calculation you can have perfect font/text rendering, perfect thin lines/shapes and, by solving all the areas at once, no conflating artifacts.
mfabbri77 commented on Exact Polygonal Filtering: Using Green's Theorem and Clipping for Anti-Aliasing   jonathanolson.net/exact-p... · Posted by u/muyyatin2
raphlinus · a year ago
I think the story is a lot more complicated. Talking about "the best possible output quality" is a big claim, and I have no reason to believe it can be achieved by mathematically simple techniques (i.e., linear convolution with a kernel). Quality is ultimately a function of human perception, which is complex and poorly understood, and optimizing for that is similarly not going to be easy.

The Mitchell-Netravali paper[1] correctly describes sampling as a tradeoff space. If you optimize for frequency response (brick wall rejection of aliasing) the impulse response is sinc and you get a lot of ringing. If you optimize for total rejection of aliasing while maintaining positive support, you get something that looks like a Gaussian impulse response, which is very smooth but blurry. And if you optimize for small spatial support and lack of ringing, you get a box filter, which lets some aliasing through.
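The tradeoff space the Mitchell-Netravali paper describes is parameterized by two values (B, C). A direct transcription of their two-parameter cubic kernel, defaulting to the B = C = 1/3 compromise the paper recommends (B=1, C=0 gives the smooth-but-blurry cubic B-spline; B=0, C=0.5 gives the sharper, ringing-prone Catmull-Rom):

```cpp
#include <cassert>
#include <cmath>

// Mitchell-Netravali BC-family reconstruction kernel, support [-2, 2].
// Integer-shifted copies sum to 1 (partition of unity) for any B, C.
double mitchell(double x, double B = 1.0 / 3.0, double C = 1.0 / 3.0) {
    x = std::fabs(x);
    if (x < 1.0)
        return ((12 - 9*B - 6*C) * x*x*x
              + (-18 + 12*B + 6*C) * x*x
              + (6 - 2*B)) / 6.0;
    if (x < 2.0)
        return ((-B - 6*C) * x*x*x
              + (6*B + 30*C) * x*x
              + (-12*B - 48*C) * x
              + (8*B + 24*C)) / 6.0;
    return 0.0;
}
```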

Which is best, I think, depends on what you're filtering. For natural scenes, you can make an argument that the oblique projection approach of Rocha et al[2] is the optimal point in the tradeoff space. I tried it on text, though, and there were noticeable ringing artifacts; box filtering is definitely better quality to my eyes.

I like to think about antialiasing specific test images. The Siemens star is very sensitive in showing aliasing, but it also makes sense to look at a half-plane and a thin line, as they're more accurate models of real 2D scenes that people care about. It's hard to imagine doing better than a box filter for a half-plane; either you get ringing (which has the additional negative impact of clipping when the half-planes are at the gamut boundary of the display; not something you have to worry about with natural images) or blurriness. In particular, a tent filter is going to be softer but your eye won't pick up the reduction in aliasing, though it is certainly present in the frequency domain.

A thin line is a different story. With a box filter, you get basically a non antialiased line of single pixel thickness, just less alpha, and it's clearly possible to do better; a tent filter is going to look better.

But a thin line is just a linear combination of two half-planes. So if you accept that a box filter is better visual quality than a tent filter for a half-plane, and the other way around for a thin line, then the conclusion is that linear filtering is not the correct path to truly highest quality.

With the exception of thin lines, for most 2D scenes a box filter with antialiasing done in the correct color space is very close to the best quality - maybe the midwit meme applies, and it does make sense to model a pixel as a little square in that case. But I am interested in the question of how to truly achieve the best quality, and I don't think we really know the answer yet.

[1] https://www.cs.utexas.edu/~fussell/courses/cs384g-fall2013/l...

[2] https://www.inf.ufrgs.br/~eslgastal/SBS3/Rocha_Oliveira_Gast...

mfabbri77 · a year ago
In my opinion, if you break down all the polygons in your scene into non-overlapping polygons, then clip them to pixels, calculate the color of each polygon piece (applying all paints, blend modes, etc.), and sum it up... in the end that's the best visual quality you can get. And that's the idea I'm working on, but it involves the decomposition/clip step on the CPU, while the sum of paint/blend is done by the GPU.
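The clip-and-sum step can be sketched for a single polygon against a single pixel: Sutherland-Hodgman clipping against the four pixel edges, then the shoelace formula for the exact covered area. A toy illustration with names of my own, not AmanithVG code:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

struct Pt { double x, y; };

// Shoelace formula: signed area of a polygon (CCW positive).
double signedArea(const std::vector<Pt>& p) {
    double a = 0.0;
    for (size_t i = 0; i < p.size(); ++i) {
        const Pt& u = p[i], & v = p[(i + 1) % p.size()];
        a += u.x * v.y - u.y * v.x;
    }
    return 0.5 * a;
}

// One Sutherland-Hodgman pass against the half-plane inside(pt)==true;
// hit(a, b) returns where segment a->b crosses the clip line.
std::vector<Pt> clipEdge(const std::vector<Pt>& poly,
                         std::function<bool(const Pt&)> inside,
                         std::function<Pt(const Pt&, const Pt&)> hit) {
    std::vector<Pt> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        const Pt& a = poly[i], & b = poly[(i + 1) % poly.size()];
        if (inside(a)) {
            out.push_back(a);
            if (!inside(b)) out.push_back(hit(a, b));
        } else if (inside(b)) {
            out.push_back(hit(a, b));
        }
    }
    return out;
}

// Exact fraction of pixel (px,py)..(px+1,py+1) covered by poly.
double pixelCoverage(std::vector<Pt> poly, double px, double py) {
    auto lerpX = [](double x) { return [x](const Pt& a, const Pt& b) {
        double t = (x - a.x) / (b.x - a.x);
        return Pt{x, a.y + t * (b.y - a.y)}; }; };
    auto lerpY = [](double y) { return [y](const Pt& a, const Pt& b) {
        double t = (y - a.y) / (b.y - a.y);
        return Pt{a.x + t * (b.x - a.x), y}; }; };
    poly = clipEdge(poly, [=](const Pt& p){ return p.x >= px;     }, lerpX(px));
    poly = clipEdge(poly, [=](const Pt& p){ return p.x <= px + 1; }, lerpX(px + 1));
    poly = clipEdge(poly, [=](const Pt& p){ return p.y >= py;     }, lerpY(py));
    poly = clipEdge(poly, [=](const Pt& p){ return p.y <= py + 1; }, lerpY(py + 1));
    return poly.size() < 3 ? 0.0 : std::fabs(signedArea(poly));
}
```

With non-overlapping input pieces, these per-pixel areas can then be weighted by each piece's color and summed in any order.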
mfabbri77 commented on Exact Polygonal Filtering: Using Green's Theorem and Clipping for Anti-Aliasing   jonathanolson.net/exact-p... · Posted by u/muyyatin2
muyyatin2 · a year ago
I've been looking into how viable this is as a performant strategy. If you have non-overlapping areas, then contributions to a single pixel can be made independently (since it is just the sum of contributions). The usual approach (computing coverage and blending into the color) is more constrained, where the operations need to be done in back-to-front order.
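The order-independence can be shown in a few lines: with disjoint areas the pixel color is a plain weighted sum, while the coverage-into-alpha path composites with the non-commutative "over" operator and lets the background bleed through at shared edges. A toy sketch (names are mine):

```cpp
#include <cassert>
#include <cmath>

struct Rgb { double r, g, b; };

// Blend src over dst with alpha a (the usual coverage-to-alpha path):
// order matters, so back-to-front traversal is mandatory.
Rgb over(Rgb src, double a, Rgb dst) {
    return {src.r * a + dst.r * (1 - a),
            src.g * a + dst.g * (1 - a),
            src.b * a + dst.b * (1 - a)};
}

// With non-overlapping analytic areas, the pixel is a plain weighted
// sum: contributions commute and can be accumulated in any order.
Rgb disjointSum(Rgb c1, double area1, Rgb c2, double area2, Rgb bg) {
    double rest = 1.0 - area1 - area2;   // background's remaining area
    return {c1.r * area1 + c2.r * area2 + bg.r * rest,
            c1.g * area1 + c2.g * area2 + bg.g * rest,
            c1.b * area1 + c2.b * area2 + bg.b * rest};
}
```

Two shapes that each cover exactly half the pixel over a white background illustrate the difference: the disjoint sum contains no white at all, while sequential "over" blending leaves a visible background contribution — the conflation artifact discussed elsewhere in these threads.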
mfabbri77 · a year ago
I've been researching this field for 20 years (I'm one of the developers of AmanithVG). Unfortunately, no matter how fast they are made, all the algorithms that analytically decompose areas involve a step to find intersections, and therefore sweepline approaches that are difficult to parallelize and must be done on the CPU. However, we are working on it for the next AmanithVG rasterizer, so I'm keeping my eyes open for all possible alternatives.
mfabbri77 commented on Exact Polygonal Filtering: Using Green's Theorem and Clipping for Anti-Aliasing   jonathanolson.net/exact-p... · Posted by u/muyyatin2
mfabbri77 · a year ago
I am quite convinced that if the goal is the best possible output quality, then the best approach is to analytically compute the non-overlapping areas of each polygon within each pixel. Resolving all contributions (areas) together in the same single pass for each pixel.

u/mfabbri77

Karma: 169 · Cake day: November 14, 2012
About
Founder at Mazatech, working on AmanithVG (www.amanithvg.com), a crossplatform OpenVG 1.1 implemantation, and AmanithSVG (www.amanithsvg.com), SVG rendering middleware available as unity plugin and standalone library.