pfedak commented on I don't like curved displays   blog.danielh.cc/blog/curv... · Posted by u/max__dev
pfedak · 3 months ago
This is nonsense, at least in part because it's mixing two different ideas. The notion that the image "looks exactly the same as how it originally appeared" is only true when one of your eyes is positioned exactly where the camera sensor would have been, which requires a specific distance away from the screen.

Lines in 3D remaining straight in a photo is unrelated and not actually demonstrated by the image. I'm having trouble imagining why this matters - you're trying to find the intersection of two lines in an image without drawing anything?

pfedak commented on The Ski Rental Problem   lesves.github.io/articles... · Posted by u/skywalqer
cwmoore · 5 months ago
Maybe the relatable concept is just a stepladder to the general ongoing scenario, e.g. modeling all consumers from a retailer's perspective. Otherwise, the continuous-to-discrete assumption reads as a hand-wavy fiat.

Could someone who groks this math tell me why not buy the skis once you’ve paid half their price on rentals?

pfedak · 5 months ago
Another aspect of the solution that makes it rather abstract is it effectively assumes we know nothing about the distribution of the number of days.

Buying once you've paid 1/2 will be optimal if the season ends before you buy, very bad (3x optimal) if it ends right after you buy, and slightly better than the solution in the post if it lasts at least twice that long (1.5x optimal vs e/(e-1)).

The metric in the post is just the worst of those ratios. Assuming the unproven statement in the post (that the solution which is a constant factor worse than optimal is best), any solution of the form you suggest is going to have similar tradeoffs. If we had a distribution, we could choose.
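Those worst-case ratios are easy to check by brute force. A minimal sketch (with hypothetical prices: $1/day rental, purchase price b), letting the adversary pick the trip length:

```python
# Worst-case competitive ratio of the "rent for d days, then buy" strategy
# in the ski-rental problem. Assumed prices: rentals cost 1/day, skis cost b.
def worst_case_ratio(d, b):
    ratios = []
    # The adversary picks the trip length n; compare the strategy's cost
    # against the offline optimum min(n, b). The worst n is d + 1 (the trip
    # ends right after you buy), but we scan a wide range to be safe.
    for n in range(1, 4 * b):
        alg = n if n <= d else d + b
        opt = min(n, b)
        ratios.append(alg / opt)
    return max(ratios)

b = 100
print(worst_case_ratio(b // 2, b))   # buy at half price: ~3x optimal
print(worst_case_ratio(b - 1, b))    # classic threshold: 2 - 1/b, ~2x optimal
```

With d = b/2 the worst case is (b/2 + b)/(b/2 + 1) ≈ 3; the classic d = b - 1 threshold gives 2 - 1/b, matching the tradeoff described above.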

pfedak commented on Derivation and Intuition behind Poisson distribution   antaripasaha.notion.site/... · Posted by u/sebg
fc417fc802 · 8 months ago
> What happened here?

You went astray when you declared the expected wait and expected passed.

Draw a number line. Mark it at intervals of 10. Uniformly randomly select a point on that line. The expected average wait and passed (i.e. forward and reverse directions) are both 5, not 10. The range is 0 to 10.

When you randomize the event occurrences but maintain the interval as an average you change the range maximum and the overall distribution across the range but not the expected average values.

pfedak · 8 months ago
If it wasn't clear, their statements are all true when the events follow a Poisson distribution, i.e. have exponentially distributed waiting times.
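A quick simulation (my own sketch, not from the thread) makes the distinction concrete: with exponential, memoryless gaps of mean 10, both the expected forward wait and the expected elapsed time at a random inspection point come out near 10, not 5 — the inspection paradox.

```python
import bisect
import random

# Simulate a Poisson process: exponential inter-arrival times of mean 10.
random.seed(0)
mean_gap = 10.0
times, t = [], 0.0
for _ in range(200_000):
    t += random.expovariate(1.0 / mean_gap)
    times.append(t)

waits, passed = [], []
for _ in range(50_000):
    # Pick a uniformly random inspection time inside the simulated horizon.
    q = random.uniform(times[0], times[-1])
    i = bisect.bisect_left(times, q)  # index of the next event after q
    waits.append(times[i] - q)        # forward wait
    passed.append(q - times[i - 1])   # time since the previous event

print(sum(waits) / len(waits))    # ≈ 10, not 5
print(sum(passed) / len(passed))  # ≈ 10, not 5
```

A random inspection time is more likely to land in a long gap, which is exactly why both averages equal the full mean gap for the exponential case.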
pfedak commented on 108B Pixel Scan of Johannes Vermeer's Girl with a Pearl Earring   hirox-europe.com/gigapixe... · Posted by u/twalichiewicz
bombcar · 8 months ago
Why does "Details 90x" seem to zoom in more than "Details 140x"?
pfedak · 8 months ago
The main image is all at the same 90x level, and those buttons just zoom in (more or less) all the way on the points, while the "140x" are separate scan patches at higher magnification (though the real point is they have 3D/height data, too).
pfedak commented on They Might Be Giants Flood EPK Promo (1990) [video]   youtube.com/watch?v=C-tQS... · Posted by u/CaliforniaKarl
jyounker · 9 months ago
I've been to only one They Might Be Giants concert. Half the audience were little kids, and yet it's the only concert I've ever been to that was shut down by the cops.

It was hilarious to see one of the Johns being hauled off stage by the police as he was playing Edgar Winter's "Frankenstein".

pfedak · 9 months ago
https://tmbw.net/wiki/Shows/1992-07-23

sounds like the concert in question

pfedak commented on Studies correlating IQ to genius are mostly bad science   theseedsofscience.pub/p/y... · Posted by u/paulpauper
nick__m · 10 months ago
I am pretty sure that the central limit theorem applies to a sample size as big as all living humans.
pfedak · 10 months ago
That isn't at all what the central limit theorem says. The whole point is it holds independent of the actual shape of distribution of the population. You could use the same argument to say social security numbers are normally distributed.

One way to explain things like height being normally distributed is that there are a bunch of independent factors which contribute, and the central limit theorem applied to those factors would then suggest the observed variable looking normal-ish.
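That distinction is easy to demonstrate numerically (my own sketch, not from the thread): a trait built as the sum of many independent factors comes out close to normal, which you can check against the one-standard-deviation rule.

```python
import random
import statistics

# Sketch: a trait that is a *sum of many independent factors* looks normal
# by the CLT; it's the factors, not the sample size, doing the work.
random.seed(1)

def trait():
    # 40 independent uniform(0, 1) contributions.
    return sum(random.random() for _ in range(40))

samples = [trait() for _ in range(100_000)]
mu = statistics.mean(samples)   # ≈ 20 (40 * 0.5)
sd = statistics.stdev(samples)  # ≈ sqrt(40/12) ≈ 1.83

# For a normal distribution, ~68% of the mass lies within one standard deviation.
within = sum(abs(x - mu) <= sd for x in samples) / len(samples)
print(round(within, 2))  # ≈ 0.68
```

Sampling more people from a non-normal population (like the social security numbers above) would never produce this; it is the summation of independent factors that does.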

pfedak commented on Visualizing all books of the world in ISBN-Space   phiresky.github.io/blog/2... · Posted by u/phiresky
SrTobi · a year ago
Hi, I was the one nerdsniped :) In the end I don't think blub space is the best way to do the whole zoom thing, but I was intrigued by the idea and had already spent too much time on it, and the result turned out quite good.

The problem is twofold: which path should we take through the zoom levels, x, y, and how fast should we move at any given point (and here "moving" includes zooming in/out as well). That's what the blub space would have been cool for, because it combines speed and path into one. So when you move linearly with constant speed through blub space, you move at different speeds at different zoom levels in normal space, and the path and speed changes are smooth.

Unfortunately that turned out not to work quite as well... even though the flight path was alright (although not perfect), the movement speeds were not what we wanted...

I think that comes from the fact that blub space is a linear combination of speed and the z component. So if you move with speed s at ground level (let's say z=1), you move with speed s·z at zoom level z (higher z means more zoomed out). But as you pointed out, normal zoom behaviour is quadratic, so at zoom level z you should move with speed s·z². But I think there is no way to map this behaviour to a Euclidean 2d/3d space (or at least I didn't find any; I can't really prove right now that it's not possible xD)

So to fix the movement speed we basically sample the flight path and just move along it according to the zoom level at different points on the curve... Basically, even though there are durations in the flight path calculation, they get overwritten by TimeInterpolatingTrajectory, which is doing all the heavy work for the speed.

For the path... maybe a quadratic form with something like x^4 with some tweaking would have been better, but the behaviour we had was good enough :) Maybe the question we should ask is not about the interesting properties of non-euclidean spaces, but what makes a flightpath+speed look good

pfedak · a year ago
The nice thing about deciding on a distance metric is that it gives you both a path (geodesics) and the speed, and if you trust your distance metric it should be perceptually constant velocity. I agree it's non-euclidean, I think the hyperbolic geometry description works pretty well (and has the advantage of well-studied geodesics).

I did finally find the duration logic when I was trying to recreate the path, I made this shader to try to compare: https://www.shadertoy.com/view/l3KBRd

pfedak commented on Visualizing all books of the world in ISBN-Space   phiresky.github.io/blog/2... · Posted by u/phiresky
pfedak · a year ago
I think you can reasonably think about the flight path by modeling the movement on the hyperbolic upper half plane (x would be the position along the linear path between endpoints, y the side length of the viewport).

I considered two metrics that ended up being equivalent. First, minimizing loaded tiles assuming a hierarchical tiled map. The cost of moving x horizontally is just x/y tiles, using y as the side length of the viewport. Zooming from y_0 to y_1 loads abs(log_2(y_1/y_0)) tiles, which is consistent with ds = dy/y. Together this is just ds^2 = (dx^2 + dy^2)/y^2, exactly the upper-half-plane metric.

Alternatively, you could think of minimizing the "optical flow" of the viewport in some sense. This actually works out to the same metric up to scaling - panning by x without zooming, everything is just displaced by x/y (i.e. the shift as a fraction of the viewport). Zooming by a factor k moves a pixel at (u,v) to (k*u,k*v), a displacement of (u,v)*(k-1). If we go from a side length of y to y+dy, this is (u,v)*dy/y, so depending how exactly we average the displacements this is some constant times dy/y.

Then the geodesics you want are just semicircles centered on y=0, although you need to do a little work to compute the motion along the curve. Once you have the arc, from θ_0 to θ_1, the total time should come from integrating r·dθ/y = dθ/sin(θ), so to be exact you'd have to invert t = ln(csc(θ)-cot(θ)), so it's probably better to approximate. edit: this works out to θ = 2·atan(e^t), which is not so bad at all.

Comparing with the "blub space" logic, I think the effective metric there is ds^2 = dz^2 + (z+1)^2 dx^2, polar coordinates where z=1/y is the zoom level, which (using dz=dy/y^2) works out to ds^2 = dy^2/y^4 + dx^2*(1/y^2 + ...). I guess this means the existing implementation spends much more time panning at high zoom levels compared to the hyperbolic model, since zooming from 4x to 2x costs twice as much as 2x to 1x despite being visually the same.
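The constant-speed parametrization can be sanity-checked numerically. A standalone sketch, using the identity csc(θ) − cot(θ) = tan(θ/2), so that t = ln(csc(θ) − cot(θ)) inverts to θ(t) = 2·atan(e^t): moving along a geodesic semicircle with θ(t) so defined should give unit hyperbolic speed at every t.

```python
import math

# Along a geodesic semicircle x = c + r*cos(theta), y = r*sin(theta) in the
# upper-half-plane metric ds^2 = (dx^2 + dy^2)/y^2, arc length satisfies
# ds = r*d(theta)/y = d(theta)/sin(theta); inverting t = ln(csc - cot)
# gives theta(t) = 2*atan(e^t).
def theta(t):
    return 2.0 * math.atan(math.exp(t))

def point(t, c=0.0, r=1.0):
    th = theta(t)
    return c + r * math.cos(th), r * math.sin(th)

# The hyperbolic speed |dp/dt| / y should be 1 for every t
# (checked here by central finite differences).
h = 1e-6
for t in (-2.0, 0.0, 1.5):
    (x0, y0), (x1, y1) = point(t - h), point(t + h)
    speed = math.hypot(x1 - x0, y1 - y0) / (2 * h) / point(t)[1]
    print(round(speed, 6))  # ≈ 1.0
```

This works because θ'(t) = sin(θ(t)) for this parametrization, so the dθ/sin(θ) integrand is identically 1.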

pfedak · a year ago
Actually playing around with it the behavior was very different from what I expected - there was much more zooming. Turns out I missed some parts of the zoom code:

Their zoom actually is my "y" rather than a scale factor, so the metric is ds^2 = dy^2 + (C-y)^2 dx^2 where C is a bit more than the maximal zoom level. There is some special handling for cases where their curve would want to zoom out further.

Normalizing to the same cost to pan all the way zoomed out (zoom=1), their cost for panning is basically flat once you are very zoomed in, and more than the hyperbolic model when relatively zoomed out. I think this contributes to short distances feeling like the viewport is moving very fast (very little advantage to zooming out) vs basically zooming out all the way over larger distances (intermediate zoom levels are penalized, so you might as well go almost all the way).

pfedak commented on Two auto-braking systems can't see people in reflective garb: report   usa.streetsblog.org/2025/... · Posted by u/Kye
thorncorona · a year ago
If you want the unsummarized source and not the chatgpt summarized version:

https://www.iihs.org/news/detail/high-visibility-clothing-ma...

pfedak · a year ago
the chart in the streetsblog article puts some values in the wrong boxes, too. pathetic
