Readit News

alxv commented on The Remarkable Persistence of 24x36   theonlinephotographer.typ... · Posted by u/daxelrod
alxv · 7 years ago
The 35mm format is a rather amazing balance of design trade-offs. No wonder it is so enduring.

It's large enough for the double-Gauss lens (a.k.a. the normal prime lens) to have a nice shallow depth of field wide open. f/1.2 is close to the limit of a typical SLR mount, so a normal 50mm f/1.2 lens gets us 42mm of aperture. This means we can get the same depth of field and angle of view as a 6x7 medium format with a 110mm f/2.8 lens, or a 4x5 large format with a 180mm f/4.5 (but with a much smaller system!). And we get a faster lens as a bonus.

Smaller formats lose some of that versatility of composition. The focal length of the normal lens on APS-C is about 32mm. We would need an f/0.8 lens to get the same DOF. That's not possible on an SLR mount. Mirrorless systems, with their shorter flange distances, could get us there. But even then there are limits to how short the flange can be, because image sensors become much less efficient as the angle of incidence of the light hitting them increases.
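
To put numbers on that, here is a rough sketch (my own arithmetic, using approximate diagonal crop factors relative to 24x36): treat two lenses as equivalent when they match both the angle of view and the physical aperture diameter.

  # Same angle of view + same physical aperture diameter
  # ~= same framing and roughly the same depth of field.
  def equivalent_lens(focal_mm, f_number, crop_factor):
      aperture_mm = focal_mm / f_number        # physical aperture diameter
      eq_focal = focal_mm / crop_factor        # same angle of view
      return eq_focal, eq_focal / aperture_mm  # same aperture diameter

  for fmt, crop in [("6x7", 0.48), ("4x5", 0.28), ("APS-C", 1.5)]:
      f, n = equivalent_lens(50, 1.2, crop)
      print(f"{fmt}: ~{f:.0f}mm f/{n:.1f}")
  # 6x7: ~104mm f/2.5, 4x5: ~179mm f/4.3, APS-C: ~33mm f/0.8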

alxv commented on Compute Engine machine types with up to 96 vCPUs and 624GB of memory   cloudplatform.googleblog.... · Posted by u/ramshanker
iofiiiiiiiii · 8 years ago
I imagine you get to 96 if you stick four 24-core Xeon CPUs together in one server.

https://www.intel.com/content/www/us/en/products/processors/...

alxv · 8 years ago
You only need 2 processors, since each core gives you 2 vCPUs (2 sockets × 24 cores × 2 hyper-threads = 96). "For the n1 series of machine types, a virtual CPU is implemented as a single hardware hyper-thread" -- https://cloud.google.com/compute/docs/machine-types
alxv commented on When Simple Wins: Power of 2 Load Balancing   fly.io/articles/simple-wi... · Posted by u/mattdennewitz
mnutt · 8 years ago
Consistent hashing is a somewhat cleaner way to do it, but gives pretty much the same result as modulo-ing the user id against the number of servers. At least as I understand it, you consistently hash something (a user id, a request URL, etc.) into N buckets, where N is the number of servers, so changing N re-shuffles all of the buckets anyway.

Short of something like Cassandra's ring topology, how would you use consistent hashing to add new servers and assign them requests?

alxv · 8 years ago
You are missing a crucial piece needed for consistent hashing: you also need to hash the names of the servers. With consistent hashing you hash both the request keys and the server names, then you assign each request to the server with the closest hash (wrapping around, under the modulus). With this scheme, you only need to remap 1/n of the keys when the pool changes (where n is the number of servers).
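
A minimal sketch of the idea (no virtual nodes; the hash function and server names are just for illustration):

  import bisect, hashlib

  def h(s):
      # Map a string to a point on the ring.
      return int(hashlib.md5(s.encode()).hexdigest(), 16)

  class Ring:
      def __init__(self, servers):
          self.points = sorted((h(s), s) for s in servers)

      def lookup(self, key):
          # First server clockwise from the key's point, wrapping around.
          i = bisect.bisect(self.points, (h(key),))
          return self.points[i % len(self.points)][1]

  ring = Ring(["server-a", "server-b", "server-c"])
  print(ring.lookup("user:42"))
  # Adding a server only claims the arc between its point and its
  # predecessor on the ring, so on average only 1/n of the keys move.
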
alxv commented on When Simple Wins: Power of 2 Load Balancing   fly.io/articles/simple-wi... · Posted by u/mattdennewitz
euph0ria · 8 years ago
Regarding the math section, could someone please describe it like you were talking to a 5 year old?

1) Θ(log n / log log n)

2) Θ(log log n)

alxv · 8 years ago
There is a proof shown in this handout: https://people.eecs.berkeley.edu/~sinclair/cs271/n15.pdf

It's hard to understand why this technique works so well without digging deep into the math. Roughly speaking, if you throw n balls into n bins at random, the maximum number of balls in any bin will grow surprisingly quickly (because of the birthday paradox). However, if we allow ourselves to choose between two random bins instead of one, and put the ball in the one with the fewest balls in it, the maximum number of balls in any bin grows much more slowly (i.e., O(log log n)). Hence, having that one extra random choice allows us to get surprisingly close to the optimal approach of comparing all bins (which would give us O(1)) without doing all that work.
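
A quick simulation of both regimes (a rough sketch; n is arbitrary):

  import random

  def max_load(n, choices):
      bins = [0] * n
      for _ in range(n):
          # Pick `choices` random bins, place the ball in the emptiest.
          best = min(random.sample(range(n), choices), key=bins.__getitem__)
          bins[best] += 1
      return max(bins)

  n = 100_000
  print(max_load(n, 1))  # one choice: typically around 8
  print(max_load(n, 2))  # two choices: typically around 4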

alxv commented on When Simple Wins: Power of 2 Load Balancing   fly.io/articles/simple-wi... · Posted by u/mattdennewitz
alxv · 8 years ago
The method is called "Power of Two Random Choices" (http://www.eecs.harvard.edu/~michaelm/postscripts/handbook20...). And the two-choices paradigm is widely applicable beyond load balancing. In particular, it applies to hash table design (e.g. cuckoo hashing) and cache eviction schemes (https://danluu.com/2choices-eviction/).
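
As an illustration of the hash-table case, a bare-bones cuckoo hashing sketch (the hash functions and table size here are arbitrary): each key has two candidate slots, and an insert may evict the current occupant and relocate it to its other slot. Lookups probe at most two slots, which is the appeal.

  import hashlib

  SIZE = 16
  table = [None] * SIZE

  def slots(key):
      # Two candidate slots per key (the "two choices").
      d = hashlib.md5(key.encode()).digest()
      return d[0] % SIZE, d[1] % SIZE

  def insert(key, depth=0):
      if depth > SIZE:
          raise RuntimeError("eviction cycle; table needs a rehash")
      a, b = slots(key)
      if table[a] in (None, key):
          table[a] = key
      elif table[b] in (None, key):
          table[b] = key
      else:
          # Both slots taken: evict one occupant and relocate it.
          evicted, table[a] = table[a], key
          insert(evicted, depth + 1)

  def lookup(key):
      a, b = slots(key)
      return key in (table[a], table[b])
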
alxv commented on When Simple Wins: Power of 2 Load Balancing   fly.io/articles/simple-wi... · Posted by u/mattdennewitz
throwaway13337 · 8 years ago
The simplest load balancing I've done is modulo the user ID by the number of servers, then point at that server.

This solves caching too since you are only ever receiving and caching user data on a single server. No cache communication required. You can enforce it on the server side for security as well.

Doesn't require a load-balancer server - just an extra line of code.

Keep it simple.

alxv · 8 years ago
What happens when the number of servers changes? The cache hit rate would likely drop to zero until it warms up again, which is a good way to accidentally overload your systems.

Load balancing based on consistent hashing is the better way to implement this.
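
To see how badly plain modulo behaves when the server count changes (a toy example):

  # Growing the pool from 4 to 5 servers with hash(id) % n:
  moved = sum(1 for uid in range(10_000) if uid % 4 != uid % 5)
  print(moved / 10_000)  # 0.8 -- 80% of users land on a different
                         # server, so most of the cache goes cold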

alxv commented on Americans Are Putting Billions More Than Usual in Their 401(k)s   bloomberg.com/news/articl... · Posted by u/uptown
Zaheer · 9 years ago
Note that the Roth IRA has an income limit. E.g., you can't contribute at all as a single filer if you make more than $131k.

https://en.wikipedia.org/wiki/Roth_IRA#Income_limits

alxv · 9 years ago
If your employer 401k plan allows it, you can roll over after-tax 401k contributions to a Roth IRA (a.k.a. mega-backdoor Roth IRA).
alxv commented on Show HN: Chessboardify – Make the grid a chessboard   chessboardifygame.xyz/... · Posted by u/mapehe
tgb · 9 years ago
This is a very nice exposition of the strategy, but I still don't see how it answers my more specific question, about whether you can solve the problem row by row (in the grid).
alxv · 9 years ago
Sorry, that was a misunderstanding on my part. It turns out we cannot solve a board row by row without revisiting the previous rows. The board below is a counterexample:

  101
  010
  100
The first and second rows are solved. However, we cannot solve the last row without re-doing the first and second rows. The two solutions of this board show this:

  solution #1:
  110
  100
  000

  solution #2:
  110
  110
  000
Actually, seeing my mistake made me question my assumption that a matrix that is singular over the reals might not be singular over the integers modulo 2. That assumption is likely wrong too. I don't know much about abstract algebra (and I am not a mathematician), but Wikipedia (https://en.wikipedia.org/wiki/Determinant#Square_matrices_ov...) states that "the reduction modulo m of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo m."

The move matrix M for all boards of size 3n + 2 appears to be singular. This means some of those boards have no solution, while the solvable ones have a large number of solutions.
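
One way to check is to row-reduce M over GF(2). Below is a sketch; the toggle rule is the classic Lights Out one (pressing a cell flips it and its orthogonal neighbors), which is only a stand-in for this game's actual rule:

  def rank_gf2(rows, nbits):
      # Gaussian elimination over GF(2), rows stored as bitmasks.
      rows = list(rows)
      rank = 0
      for bit in range(nbits):
          pivot = next((r for r in rows if (r >> bit) & 1), None)
          if pivot is None:
              continue              # no pivot for this column
          rows.remove(pivot)
          rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
          rank += 1
      return rank

  def move_matrix(n):
      # Row j = bitmask of the cells flipped by pressing cell j.
      rows = []
      for i in range(n):
          for j in range(n):
              mask = 0
              for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                  if 0 <= i + di < n and 0 <= j + dj < n:
                      mask |= 1 << ((i + di) * n + (j + dj))
              rows.append(mask)
      return rows

  for n in range(2, 12):
      if rank_gf2(move_matrix(n), n * n) < n * n:
          print(n, "is singular")  # with the Lights Out rule: 4, 5, 9, 11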
