Readit News
fleabitdev commented on Unexpected productivity boost of Rust   lubeno.dev/blog/rusts-pro... · Posted by u/bkolobara
mcherm · 3 days ago
> If setting href worked like that, I think it would be less confusing.

How do you imagine this would interact with try-finally being used to clean up resources, release locks, close files, and so forth?

fleabitdev · 3 days ago
try-finally is leaky in JavaScript, in any case. If your `try` block contains an `await` point, its finaliser may never run. The browser also has the right to stop running your tab’s process partway through a JavaScript callback without running any finalisers (for example, because the computer running the browser has been struck by lightning).

For this reason, try-finally is at best a tool for enforcing local invariants in your code. When a function like process.exit() completely retires the current JavaScript environment, there’s no harm in skipping `finally` blocks.
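A minimal sketch of that leak, runnable in Node or a browser console. The never-settling promise stands in for a request the environment abandons, e.g. because the tab was closed mid-await:

```javascript
let cleanedUp = false;

async function withLock() {
  try {
    // An await point: if this promise never settles (tab torn down,
    // process killed, request abandoned), execution never returns
    // to this function...
    await new Promise(() => {});
  } finally {
    // ...so this finaliser never runs.
    cleanedUp = true;
  }
}

withLock();
queueMicrotask(() => console.log(cleanedUp)); // still false
```

Even after the microtask queue drains, `cleanedUp` remains false: the `finally` block is only reached if control ever flows back past the `await`.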

fleabitdev commented on FFmpeg 8.0   ffmpeg.org/index.html#pr8... · Posted by u/gyan
gmueckl · 9 days ago
I haven't had even a cursory look at the state of the art in decoders for 10+ years. But my intuition says that decoding for display could benefit a lot from GPU acceleration in the later parts of the process, once there is already pixel data of some sort involved. I imagine the initial decompression steps could stay on the CPU, with the decompressed but still (partially) encoded data streamed to the GPU for the final transformation steps and application to whatever I-frames and other base images there are. Steps like applying motion vectors and the iDCT look embarrassingly parallel at a pixel level to me.

Once the resulting frame is already in a GPU texture, displaying it has fairly low overhead.

My question is: how wrong am I?

fleabitdev · 9 days ago
I'm not an expert, but in the worst case, you might need to decode dense 4x4-pixel blocks which each depend on fully-decoded neighbouring blocks to their west, northwest, north and northeast. This would limit you to processing `frame_height * 4` pixels in parallel, which seems bad, especially for memory-intensive work. (GPUs rely on massive parallelism to hide the latency of memory accesses.)
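A rough way to see that ceiling (a toy wavefront schedule, not any real decoder): with W, NW, N and NE dependencies, block (row, col) becomes ready at step col + 2·row, so the number of blocks decodable at the same step is capped at roughly min(rows, cols / 2):

```javascript
// Blocks depending on their W, NW, N and NE neighbours can be decoded
// in a wavefront: block (row, col) becomes ready at step col + 2 * row.
// Counting how many blocks share a step gives the parallelism ceiling.
function maxWavefrontParallelism(rows, cols) {
  const counts = new Map();
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const step = c + 2 * r;
      counts.set(step, (counts.get(step) ?? 0) + 1);
    }
  }
  return Math.max(...counts.values());
}

// A 1920x1080 frame in 4x4 blocks: 270 rows of 480 columns.
const blocks = maxWavefrontParallelism(270, 480);
console.log(blocks, blocks * 16); // 240 blocks, 3840 pixels in flight
```

A few thousand pixels in flight is nowhere near the occupancy a GPU needs to hide memory latency.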

Motion vectors can be large (for example, 256 pixels for VP8), so you wouldn't get much extra parallelism by decoding multiple frames together.

However, even if the worst-case performance is bad, you might see good performance in the average case. For example, you might be able to decode all of a frame's inter blocks in parallel, and that might unlock better parallel processing for intra blocks. It looks like deblocking might be highly parallel. VP9, H.265 and AV1 can optionally split each frame into independently-coded tiles, although I don't know how common that is in practice.

fleabitdev commented on FFmpeg 8.0   ffmpeg.org/index.html#pr8... · Posted by u/gyan
fleabitdev · 9 days ago
Happy to hear that they've introduced video encoders and decoders based on compute shaders. The only video codecs widely supported in hardware are H.264, H.265 and AV1, so cross-platform acceleration for other codecs will be very nice to have, even if it's less efficient than fixed-function hardware. The new ProRes encoder already looks useful for a project I'm working on.

> Only codecs specifically designed for parallelised decoding can be implemented in such a way, with more mainstream codecs not being planned for support.

It makes sense that most video codecs aren't amenable to compute shader decoding. You need tens of thousands of threads to keep a GPU busy, and you'll struggle to get that much parallelism when you have data dependencies between frames and between tiles in the same frame.

I wonder whether encoders might have more flexibility than decoders. Using compute shaders to encode something like VP9 (https://blogs.gnome.org/rbultje/2016/12/13/overview-of-the-v...) would be an interesting challenge.

fleabitdev commented on Events   developer.mozilla.org/en-... · Posted by u/aanthonymax
Waterluvian · 21 days ago
I often imagine state and events as the two impulses that drive an application. I like React a lot, but a common pitfall is that it is 95% focused on state, and so you get odd cases where you end up trying to encode events as state.

You’ll see this anywhere you see a usePrevious-like hook that you then use to determine if something changed and act on it (eg. I hold state that a robot is offline, but I want to do something special when a robot goes offline). This is inferring an event from state.

I’ve had luck adding an event bus as a core driver of a complex react application for events I don’t want to track as state. But it always feels that it’s a bit in conflict with the state-driven nature of the application.

fleabitdev · 21 days ago
Reactivity works by replaying code when its inputs have changed. Events can make this very expensive and impractical, because to properly replay event-driven code, you'd need to replay every event it's ever received.

When we replace an event stream with an observable variable, it's like a performance optimisation: "you can ignore all of the events which came before; here's an accumulated value which summarises the entire event stream". For example, a mouse movement event listener can often be reduced to an "is hovered" flag.
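As a toy sketch of that reduction (hypothetical event objects, not a real DOM listener): the whole history of pointer events folds down to one boolean, and later events only ever update the summary.

```javascript
// Summarising a pointer-event stream as a single "isHovered" flag:
// the accumulated value makes replaying the full stream unnecessary.
function isHovered(events) {
  return events.reduce(
    (hovered, e) =>
      e.type === "pointerenter" ? true :
      e.type === "pointerleave" ? false :
      hovered, // pointermove etc. don't change the summary
    false
  );
}

const stream = [
  { type: "pointerenter" },
  { type: "pointermove" },
  { type: "pointerleave" },
  { type: "pointerenter" },
];
console.log(isHovered(stream)); // true
```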

Serialising program state to plain data isn't always easy or convenient, but it's flexible enough. Reducing all events to state almost solves the problem of impure inputs to reactive functions.

Unfortunately, reactive functions usually have impure outputs, not just impure inputs. UI components might need to play a sound, write to a file, start an animation, perform an HTTP request, or notify a parent component that the "close" button has been clicked. It's really difficult to produce instantaneous side effects if you don't have instantaneous inputs to build on.

I can't see an obvious solution, but until we come up with one, reactive UI toolkits will continue to be ill-formed. For example, a React component <ClickCounter mouseButton> would be broken by default: clicks are delivered by events, so they're invisible to React, so the component will display an incorrect click count when the mouseButton prop changes.

fleabitdev commented on Linear sent me down a local-first rabbit hole   bytemash.net/posts/i-went... · Posted by u/jcusch
layer8 · 23 days ago
150 ms is definitely on the “not instantaneous” side: https://ux.stackexchange.com/a/42688

The stated 500 ms to 1500 ms are unfortunately quite frequent in practice.

fleabitdev · 23 days ago
Interesting fact: the 50ms to 100ms grace period only works at the very beginning of a user interaction. You get that grace period when the user clicks a button, but when they're typing in text, continually scrolling, clicking to interrupt an animation, or moving the mouse to trigger a hover event, it's better to provide a next-frame response.

This means that it's safe for background work to block a web browser's main thread for up to 50ms, as long as you use CSS for all of your animations and hover effects, and stop launching new background tasks while the user is interacting with the document. https://web.dev/articles/optimize-long-tasks
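A minimal sketch of that chunking pattern, using the plain setTimeout trick (`scheduler.yield()` is the newer browser API for the same thing; checking a "user is interacting" flag before each slice is left out here):

```javascript
// Run queued background tasks, yielding to the event loop whenever
// the 50ms budget is spent, so input events are never blocked for long.
async function drainTasks(tasks, budgetMs = 50) {
  let sliceStart = performance.now();
  for (const task of tasks) {
    if (performance.now() - sliceStart > budgetMs) {
      // Give the browser a chance to handle pending input.
      await new Promise((resolve) => setTimeout(resolve, 0));
      sliceStart = performance.now();
    }
    task();
  }
}
```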

fleabitdev commented on Linear sent me down a local-first rabbit hole   bytemash.net/posts/i-went... · Posted by u/jcusch
incorrecthorse · 23 days ago
> For the uninitiated, Linear is a project management tool that feels impossibly fast. Click an issue, it opens instantly. Update a status and watch in a second browser, it updates almost as fast as the source. No loading states, no page refreshes - just instant interactions.

How garbage the web must have become, for a low-latency click action to be qualified as "impossibly fast". This is ridiculous.

fleabitdev · 23 days ago
I was also surprised to read this, because Linear has always felt a little sluggish to me.

I just profiled it to double-check. On an M4 MacBook Pro, clicking between the "Inbox" and "My issues" tabs takes about 100ms to 150ms. Opening an issue, or navigating from an issue back to the list of issues, takes about 80ms. Each navigation includes one function call which blocks the main thread for 50ms - perhaps a React rendering function?

Linear has done very good work to optimise away network activity, but their performance bottleneck has now moved elsewhere. They've already made impressive improvements over the status quo (about 500ms to 1500ms for most dynamic content), so it would be great to see them close that last gap and achieve single-frame responsiveness.

fleabitdev commented on Trying to play an isomorphic piano (2022) [video]   youtube.com/watch?v=j4itL... · Posted by u/surprisetalk
fleabitdev · a month ago
I can see some real advantages to this layout. There are only two key shapes rather than twelve, so transposing at sight would become much easier. A printed stave would span sixteen semitones rather than thirteen. The hand positions for chords and scales look about as comfortable as a normal piano.

I thought this keyboard layout might make the pianist's hand-span one tone wider, but unfortunately, that wouldn't be the case. A normal piano spaces its black keys further apart than its white keys. On my digital piano, an isomorphic layout would bring the raised keys about 4mm closer together, which seems unplayable - but leaving the octave span unchanged would win those 4mm back.

It would be much more difficult to reposition your hands without looking at them, but changing the texture of the white keys and black keys might help.

fleabitdev commented on Games Look Bad: HDR and Tone Mapping (2017)   ventspace.wordpress.com/2... · Posted by u/uncircle
markus_zhang · a month ago
One big issue I never understood is why do we need photorealism in games at all. They seem to benefit card manufacturers and graphic programmers, but other than that I feel it has nothing to do — and in fact may have negative impact on game quality.
fleabitdev · a month ago
I've wondered whether photorealism creates its own demand. Players spend hours in high-realism game worlds, their eyes adjust, and game worlds from ten years ago suddenly feel wrong; not just old-fashioned, but fake.

This is also true for non-photorealistic 3D games. They benefit from high-tech effects like outline shaders, sharp shadows, anti-aliasing and LoD blending - but all of that tech is improving over time, so older efforts don't look quite right any more, and today's efforts won't look quite right in 2045.

When a game developer decides to step off this treadmill, they usually make a retro game. I'd like to see more deliberately low-tech games which aren't retro games. If modern players think your game looks good on downlevel hardware, then it will continue to look good as hardware continues to improve - I think this is one reason why Nintendo games have so much staying power.

This has been the norm in 2D game development for ages, but it's much more difficult in 3D. For example, if the player is ever allowed to step outdoors, you'll struggle to meet modern expectations for draw distance and pop-in - and even if your game manages to have cutting-edge draw distance for 2025, who can say whether future players will still find it convincing? The solution is to only put things in the camera frustum when you know you can draw them with full fidelity; everything in the game needs to look as good as it's ever going to look.

fleabitdev commented on Rendering Crispy Text on the GPU   osor.io/text... · Posted by u/ibobev
meindnoch · 3 months ago
Impressive work!

But subpixel AA is futile in my opinion. It was a nice hack in the aughts when we had 72dpi monitors, but on modern "retina" screens it's imperceptible. And for a teeny tiny improvement, you get many drawbacks:

- it only works over opaque backgrounds

- can't apply any effect on the rasterized results (e.g. resizing, mirroring, blurring, etc.)

- screenshots look bad when viewed on a different display

fleabitdev · 3 months ago
Getting rid of subpixel AA would be a huge simplification, but quite a lot of desktop users are still on low-DPI monitors. The Firefox hardware survey [1] reports that 16% of users have a display resolution of 1366x768.

This isn't just legacy hardware; 96dpi monitors and notebooks are still being produced today.

[1]: https://data.firefox.com/dashboard/hardware

fleabitdev commented on Pope Francis has died   reuters.com/world/pope-fr... · Posted by u/phillipharris
fastball · 4 months ago
Nothing from the Bible indicates that hell is empty, so that is indeed an interesting response from the Pope.
fleabitdev · 4 months ago
Yes - I think it caught my attention because it was such a mystery. It was a welcome thing to hear from one of the most powerful people in the world, but it came like a bolt from the blue. As far as I know, he never revisited the topic.

u/fleabitdev · Karma: 302 · Cake day: June 12, 2020