Readit News
mschuetz commented on Bilinear down/upsampling, aligning pixel grids, and that infamous GPU half pixel (2021)   bartwronski.com/2021/02/1... · Posted by u/fanf2
Const-me · a year ago
> Have you never downscaled and upscaled images in a non-3D-rendering context?

Indeed, and I found that leveraging hardware texture samplers is the best approach even for command-line tools that don't render anything.

A simple CPU implementation in C++ is just too slow for large images.

Apart from a few easy cases like the 2x2 downsampling discussed in the article, SIMD-optimized CPU implementations are very complicated for non-integer or non-uniform scaling factors, and they often require dynamic dispatch to run on older computers without AVX2. And despite the SIMD, they are still a couple of orders of magnitude slower than GPU hardware while delivering only a barely observable quality win.
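
The 2x2 case really is trivial even without SIMD; a plain numpy sketch for illustration (grayscale or channel-last images, even dimensions assumed):

```python
import numpy as np

def downsample_2x2(img: np.ndarray) -> np.ndarray:
    """Average each non-overlapping 2x2 block (height and width must be even)."""
    h, w = img.shape[:2]
    assert h % 2 == 0 and w % 2 == 0
    # Split into 2x2 blocks, then average within each block.
    blocks = img.reshape(h // 2, 2, w // 2, 2, *img.shape[2:])
    return blocks.mean(axis=(1, 3))
```

The non-integer and non-uniform cases are where the per-pixel gather patterns stop being this regular.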

mschuetz · a year ago
You're skillfully dodging the point: Trilinear filtering is in no way an "ideal" alternative.

It has worse quality than something like a Lanczos filter, and it requires computing an image pyramid first, i.e., it is also slower for the very common use case of rescaling an image just once. And that article isn't really about projected/distorted textures, where trilinear filtering actually makes sense.
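
For reference, a one-shot Lanczos downscale needs no pyramid at all. A minimal 1D numpy sketch (function names are mine; a real 2D resize applies this separably along each axis):

```python
import numpy as np

def lanczos_kernel(x: np.ndarray, a: int = 3) -> np.ndarray:
    """Windowed sinc: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def resize_1d(signal: np.ndarray, new_len: int, a: int = 3) -> np.ndarray:
    """Lanczos-resample a 1D signal; the kernel is widened when downscaling."""
    old_len = len(signal)
    scale = old_len / new_len
    width = max(scale, 1.0)            # stretch the kernel when minifying
    support = a * width
    out = np.empty(new_len)
    for i in range(new_len):
        center = (i + 0.5) * scale - 0.5   # the half-pixel convention from the article
        taps = np.arange(int(np.floor(center - support)),
                         int(np.ceil(center + support)) + 1)
        w = lanczos_kernel((taps - center) / width, a)
        samples = signal[np.clip(taps, 0, old_len - 1)]  # clamp at the border
        out[i] = np.dot(w, samples) / w.sum()            # normalize the weights
    return out
```

This is exactly the "rescale once" path: one weighted gather per output pixel, no pyramid construction.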

mschuetz commented on Bilinear down/upsampling, aligning pixel grids, and that infamous GPU half pixel (2021)   bartwronski.com/2021/02/1... · Posted by u/fanf2
Const-me · a year ago
Good article, but IMO none of the discussed downsampling methods are actually ideal.

The ideal is trilinear sampling instead of bilinear. It's easy to do with GPU APIs: in D3D11, call ID3D11DeviceContext.GenerateMips to generate the full set of mip levels for the input texture, then generate each output pixel with a trilinear sampler. When doing non-uniform downsampling, use an anisotropic sampler instead of a trilinear one.

Not sure high level image processing libraries like the mentioned numpy, PIL or TensorFlow are doing anything like that, though.
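
A CPU sketch of that GenerateMips-then-trilinear idea, in numpy for illustration (grayscale, power-of-two input, uniform scale assumed; the function names are mine, not a D3D API):

```python
import numpy as np

def build_pyramid(img: np.ndarray) -> list:
    """Mip chain via repeated 2x2 box downsampling (roughly what GenerateMips does)."""
    levels = [img.astype(np.float64)]
    while min(levels[-1].shape) > 1:
        h, w = levels[-1].shape
        levels.append(levels[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels

def bilinear(img: np.ndarray, ys: np.ndarray, xs: np.ndarray) -> np.ndarray:
    """Sample img at fractional pixel coordinates, clamping at the border."""
    h, w = img.shape
    ys = np.clip(ys, 0, h - 1)
    xs = np.clip(xs, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    fy, fx = ys - y0, xs - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def trilinear_downsample(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Blend bilinear taps from the two mip levels bracketing the scale factor."""
    pyramid = build_pyramid(img)
    level = max(np.log2(img.shape[0] / out_h), 0.0)  # uniform scale assumed
    l0 = min(int(level), len(pyramid) - 1)
    l1 = min(l0 + 1, len(pyramid) - 1)
    frac = level - l0
    ys = (np.arange(out_h, dtype=np.float64)[:, None] + 0.5) / out_h
    xs = (np.arange(out_w, dtype=np.float64)[None, :] + 0.5) / out_w
    def tap(lvl):
        h, w = pyramid[lvl].shape
        return bilinear(pyramid[lvl], ys * h - 0.5, xs * w - 0.5)
    return tap(l0) * (1 - frac) + tap(l1) * frac
```

On a GPU, the pyramid build and the per-pixel blend are both done by fixed-function hardware, which is where the speed advantage comes from.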

mschuetz · a year ago
Trilinear requires an image pyramid. Without downsampling to create that image pyramid, you can't even do trilinear sampling, so your argument strikes me as odd and circular. Like telling developers of APIs such as ID3D11DeviceContext.GenerateMips to simply use ID3D11DeviceContext.GenerateMips instead of developing ID3D11DeviceContext.GenerateMips. Also, I never took this article to be about 3D rendering and utilizing mip maps for trilinear interpolation. More about 2D image scaling.

Have you never downscaled and upscaled images in a non-3D-rendering context?

mschuetz commented on Run CUDA, unmodified, on AMD GPUs   docs.scale-lang.com/... · Posted by u/Straw
spfd · a year ago
Very impressive!

But I can't help wondering, if something like this can be done to this extent, what went wrong / why it has been such a struggle for OpenCL to unify the two fragmented communities. While this is very practical and has a significant impact for people who develop GPGPU/AI applications, for the heterogeneous computing community as a whole, relying on / promoting a proprietary interface/API/language to become THE interface for working with different GPUs sounds like bad news.

Can someone educate me on why OpenCL seems to be out of scene in the comments/any of the recent discussions related to this topic?

mschuetz · a year ago
OpenCL isn't nice to use and lacks tons of quality-of-life features. I wouldn't use it even if it were twice as fast as CUDA.
mschuetz commented on U.S. clears way for antitrust inquiries of Nvidia, Microsoft and OpenAI   nytimes.com/2024/06/05/te... · Posted by u/okdood64
shmerl · 2 years ago
CUDA is pure lock-in, so it only makes sense for antitrust regulators to evaluate if it's causing anti-competitive damage to the market. Hint - it is. Lock-in is never good.
mschuetz · 2 years ago
Sure is, but there is nothing stopping AMD or Intel from building a working alternative to CUDA, so how is it anti-competitive? The problem with OpenCL, SYCL, ROCm, etc. is that the developer experience is terrible: cumbersome to set up, difficult to get working across platforms, lacking major quality-of-life features, and so on.
mschuetz commented on U.S. clears way for antitrust inquiries of Nvidia, Microsoft and OpenAI   nytimes.com/2024/06/05/te... · Posted by u/okdood64
roenxi · 2 years ago
There probably isn't anything in CUDA that makes it special. They are well-optimised math libraries, and the math for most of the important stuff is somewhat trivial. AI seems to be >80% matrix multiplication; a well-optimised BLAS is tricky to implement, but even a bad implementation would be enough for all the major libraries to support AMD.

The vendor "lock-in" exists because it takes a few years for decisions to be expressed in marketable silicon, and literally only Nvidia was trying to be in this market 5 years ago. I've seen a lot of AMD cards that just crashed when used for anything outside OpenGL. I had a bunch of AI-related projects die back in 2019 because initialising OpenCL crashed the drivers. If you believed the official docs, everything should have worked fine. Great card, except that compute didn't work.

At the time I thought it was maybe just me. After seeing geohot's saga trying to make tinygrad work on AMD cards, and getting a feel for how poorly AMD hardware is supported by the machine learning community, it makes a lot of sense to me that it is a systemic issue and that AMD didn't have any corporate sense of urgency about fixing those problems.

Maybe there is something magic in CUDA, but if there is it is probably either their memory management model or something quite technical like that. Not the API.
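
To make the ">80% matrix multiplication" point concrete: the operation itself is a textbook triple loop, and everything hard about a fast BLAS (tiling, vectorization, cache behavior) is invisible in it. A deliberately naive sketch for contrast:

```python
import numpy as np

def matmul_naive(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Textbook O(n*m*k) triple loop; correct, but far slower than a tuned BLAS."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                acc += a[i, p] * b[p, j]
            out[i, j] = acc
    return out
```

The gap between this and `a @ b` (which dispatches to a tuned BLAS) is typically orders of magnitude on large matrices, and closing that gap per-architecture is the part vendors have to get right.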

mschuetz · 2 years ago
The magic is that CUDA actually works well. There is no reason to pick OpenCL, ROCm, SYCL or others if you get a 10x better developer experience with CUDA.
mschuetz commented on U.S. clears way for antitrust inquiries of Nvidia, Microsoft and OpenAI   nytimes.com/2024/06/05/te... · Posted by u/okdood64
oytis · 2 years ago
The alternative to CUDA is called OpenCL. There has been speculation that the poor performance of NVIDIA GPUs with OpenCL compared to CUDA is an intentional anticompetitive practice, but I don't feel confident enough to say for sure.
mschuetz · 2 years ago
OpenCL is an alternative to CUDA just like Legos are an alternative to bricks. The problem with OpenCL isn't even the performance, it's everything. If OpenCL were any good, people could use it to build similarly powerful applications on cheaper AMD GPUs.
mschuetz commented on CityGaussian: Real-time high-quality large-scale scene rendering with Gaussians   dekuliutesla.github.io/ci... · Posted by u/smusamashah
forrestthewoods · 2 years ago
Can someone convince me that 3D Gaussian splatting isn't a dead end? It's an order of magnitude too slow to render and an order of magnitude too much data. It's like raster vs. raytrace all over again: raster will always be faster than raytracing, so even if raytracing gets 10x faster, so too will raster.

I think generating traditional geometry and materials from Gaussian point clouds is maybe interesting. But photogrammetry has already been a thing for quite a while. Trying to render a giant city in real time via splats doesn't feel like "the right thing".

It's definitely cool and fun and exciting. I'm just not sure that it will ever be useful in practice? Maybe! I'm definitely not an expert so my question is genuine.

mschuetz · 2 years ago
It's currently unparalleled when it comes to realism, as in realistic 3D reconstruction of the real world. Photogrammetry only really works for nice surface-like data, whereas Gaussian splats also work for semi-volumetric data such as fur, vegetation, particles, and rough surfaces, as well as for glossy/specular surfaces and volumes with strong subsurface scattering, or generally materials that are strongly view-dependent.
mschuetz commented on Surgeons transplant pig kidney into a patient   nytimes.com/2024/03/21/he... · Posted by u/jonwachob91
ejolto · 2 years ago
I'm curious, when did you think patients had to take immunosuppressants?
mschuetz · 2 years ago
Immunosuppressants are very commonly used for various diseases caused by overreactions or undesired reactions of your immune system. Glucocorticoids, for example, are very widespread for all sorts of things (rashes, asthma, allergies, inflammations, ...). Monoclonal antibodies are also getting popular as a way to treat allergic reactions, e.g. by "killing" your IgE antibodies (they're basically antibodies that, in this instance, are used against your own body's antibodies).
mschuetz commented on WebGPU is now available on Android   developer.chrome.com/blog... · Posted by u/astlouis44
kevingadd · 2 years ago
Historically any time an attack surface as big as WebGPU has been exposed, "the worst you can do is crash the tab" has not ever been true.

Also note that for an unstable graphics driver, the way you usually crash the system is by touching memory you shouldn't (through the rendering API), which is definitely something that could be exploited by an attacker. It could also corrupt pages that later get flushed to disk and destroy data instead of just annoy you.

Though I am skeptical as to whether it would happen, security researchers have come up with some truly incredible browser exploit chains in the past, so I'm not writing it off.

mschuetz · 2 years ago
WebGL has been around for more than a decade and didn't turn out to be a security issue beyond occasionally crashing tabs. Neither will WebGPU.
mschuetz commented on WebGPU is now available on Android   developer.chrome.com/blog... · Posted by u/astlouis44
hutzlibu · 2 years ago
"trying to replace native code with web pages? "

No one wants that. But many like to write their apps for only one platform and still have them run almost everywhere.

The web is the best we have to achieve this. And this will greatly improve the possibilities.

Edit: My app will soon finally use no HTML elements at all. It is not a "webpage".

mschuetz · 2 years ago
> No one wants that.

I very much do want that, since the WebGPU API is far easier and nicer to use than Vulkan or OpenGL. It also makes apps much easier to distribute over the web, and web apps are much more secure to use than native apps. Unfortunately, WebGPU is way too limited compared to desktop APIs.
