Back then DisplayPort ran at 60 Hz and HDMI ran at 30 Hz. My hardware has changed and moved on, but I still default to using a DisplayPort cable. It's rare to find a graphics card without DisplayPort, and it's rare to find a monitor without one, so I've never had a reason to try HDMI. As far as I can tell it's a stubborn format that continues to fight to live on. Frankly, I don't really get why we still have HDMI today.
When using Windows or Linux I don't see much benefit to text rendering from a 4K display.
But since Mac OS X has no subpixel rendering or grid fitting, text looks terrible without a high-PPI display.
Which is a complete rip-off. You could easily get 2% interest on that $100k you're leaving with them, just by parking it at a better bank instead. For that to be worth it, you'd need to get at least $2k cash back on that credit card. At 5.25% (which is only for that one category you chose!) you'd need to burn through $38k/year... i.e., you gotta spend $100 a day on that card.
Except even if you somehow were planning to spend $100/day on that one lucky category, your cash back would be an order of magnitude lower, because they'd only give you that cash back on the first $2,500 in purchases, i.e. you could only earn $131 at most. So you're losing out on at least $2,000 to earn at most $131. Terrific deal!
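To spell out the arithmetic with the numbers above: 2% of $100k is $2,000/year in forgone interest; earning that back at 5.25% would take $2,000 / 0.0525 ≈ $38,000 of spending, or roughly $104 a day; and 5.25% of the $2,500 cap is only $131.25.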
Honestly, I just don't understand why people bother with traditional banks at this point. Pretty much all of the major brokers offer cash management accounts that are the equivalent of online banking, and they come standard with:
- No fees
- No minimum balance
- Free checkwriting (usually free physical checks for that matter!)
- ATM fee reimbursement (at least anywhere in the U.S., if not internationally too)
- Free debit cards
- Direct deposit
- Mobile deposit of paper checks by taking a picture with the phone app, etc. etc. etc.
The only possible downside that I can think of is that I'd have to deal with a little extra hassle if I wanted a cashier's check. But even then, it would simply take an extra day or two. And it's not like I'm closing on a new home purchase all that frequently.
Nevertheless, people get weird and SCARED when I bring it up. Even my wife keeps a separate checking account at a physical bank, because she just likes knowing that a brick-and-mortar building is there. I don't get it myself, but human nature can be odd when it comes to money.
Does it require constant-time concat?
Does it require immutability?
Does it require a balancing scheme based on the Fibonacci sequence?
Does it require that the tree is binary?
My tree is an attempt to be as fast as possible, with O(log n) everything and no pointer/reference stability.
For accesses, you can in constant time determine which circular array contains the element at a given index (it's simply i / sqrt(N), with i % sqrt(N) giving the offset within it) and then in constant time access the element from the underlying array. For inserts and deletions, you find the circular array that contains the location you want to insert into. First you make sure there is space in that array to insert, by moving one element from each circular array to the next one. Since deleting and inserting at the end of a circular array takes O(1) and there are O(sqrt(N)) arrays, this takes a total of O(sqrt(N)) time. Then you insert the new element into the middle of the designated circular array, which is of size sqrt(N), so that takes O(sqrt(N)) in the worst case. This means insertions take a total of O(sqrt(N)) time.
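For a concrete feel, here's a rough C++ sketch of that insert scheme. This is my own toy code, not the post's implementation: std::deque stands in for a hand-rolled circular array, the block size is fixed rather than rebalanced toward sqrt(N) as the container grows, and all names are made up.

```cpp
// Toy sketch of the "circular array of circular arrays" idea described above.
// Assumptions: std::deque models each circular array, the block size B is
// fixed instead of tracking sqrt(N), and only access + insert are shown.
#include <algorithm>
#include <cstddef>
#include <deque>
#include <iostream>
#include <vector>

template <typename T>
class SqrtList {
    std::size_t B;                      // target block size, ~sqrt(N)
    std::vector<std::deque<T>> blocks;  // each deque models one circular array

public:
    explicit SqrtList(std::size_t block_size) : B(block_size), blocks(1) {}

    // Access: block = i / B, offset = i % B. Works because insert keeps every
    // block except the last one at exactly B elements.
    T& operator[](std::size_t i) { return blocks[i / B][i % B]; }

    // Insert before index i: place the element in its block (O(B) shift inside
    // that block), then hand each overflowing block's last element to the
    // front of the next block. Each hand-off is O(1) and there are O(N / B)
    // blocks, so with B ~ sqrt(N) the whole insert is O(sqrt(N)).
    void insert(std::size_t i, const T& value) {
        std::size_t b   = std::min(i / B, blocks.size() - 1);
        std::size_t off = std::min(i - b * B, blocks[b].size());
        blocks[b].insert(blocks[b].begin() + off, value);
        for (std::size_t j = b; blocks[j].size() > B; ++j) {
            if (j + 1 == blocks.size()) blocks.emplace_back();
            blocks[j + 1].push_front(blocks[j].back());
            blocks[j].pop_back();
        }
    }
};

int main() {
    SqrtList<int> xs(4);  // tiny block size just for the demo
    for (int i = 0; i < 10; ++i) xs.insert(i, i * 10);  // 0, 10, ..., 90
    xs.insert(3, 999);           // one in-block shift, then O(#blocks) hand-offs
    std::cout << xs[3] << '\n';  // prints 999
}
```

A real version would also need to grow or shrink the block size as N changes to keep it near sqrt(N), plus deletions; the sketch ignores both.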
As immawizard pointed out, there is a generalized version of this idea called tiered vectors[0] that supports an arbitrary level of nesting. A 1-tiered vector is a circular array. A k-tiered vector is an array of n^(1/k) tiered vectors of tier (k-1). You can show that for a k-tiered vector, access time is O(k), while insertion and deletion have a runtime of O(n^(1/k)). The data structure described in the post can be considered a 2-tiered vector.
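To make those bounds concrete: with n = 10^6 and k = 3, each level fans out into n^(1/3) = 100 sub-vectors, so an access walks 3 levels while an insert or delete shifts on the order of 100 elements, per the formulas above.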
The post includes benchmarks comparing the data structure to std::vector. I would be interested in seeing benchmarks vs a binary search tree. Even though O(sqrt(N)) is much better than std::vector's O(N), it's still a lot slower than O(log(N)). The square root of a million is 1000, while the log base 2 of a million is only ~20.
One nitpick is that the author names the data structure after themselves. Naming things after yourself is typically a faux pas.
Basically, it has O(log n) everything like the binary search tree you suggested, but also very good constant factors.
By default, leaves hold 1024/sizeof(T) elements and branches hold 47 elements, so it can access up to 13,289,344 8-byte elements with only 4 levels.
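For anyone checking that figure (assuming the leaf level counts as one of the four, i.e. three branch levels above the leaves): 1024 / 8 = 128 elements per leaf, and 47^3 × 128 = 103,823 × 128 = 13,289,344.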
Note that the assembly for process_message2 calls each function only once, even though some of them appear twice in its body.