# Requirements

- lz4
- zstd

# Building

```shell
git clone https://github.com/hydradatabase/hydra
cd hydra/columnar
./configure
make
make install
```

# Install

```sql
CREATE EXTENSION IF NOT EXISTS columnar;
```

The actual extension docs are at https://docs.hydra.so/concepts/using-hydra-columnar. This is ridiculous.
Hyperscalers see an immediate ROI from efficiency/reliability improvements and actively invest in TCP alternatives all of the time. It's just really hard.
Networking companies see an ability to differentiate their products from their peers and work on this kind of thing as well. I did a 3 second google for "QUIC acceleration Mellanox" and got a hit on Nvidia's blog right away.
You just can't trivially replace something with an investment totaling 50 years of clock time and thousands of years of engineer time. It will either take a long time or a massive shift in needs/technology. FWIW, I wouldn't be surprised if the high-performance RDMA networks being put together for AI workloads were the thing that grew into the "next" thing.
Maybe we were just early in giving (HFT) customers RDMA back in ~2007[1][2] but I don't see it entering the mainstream anytime soon. And after a relatively short 20 years of adoption, the "next" thing for hyperscalers is not going to be the next thing for everyone else.
[1] https://downloads.openfabrics.org/Media/IB_LowLatencyForum_2...
[2] https://www.thetradenews.com/wombat-and-voltaire-break-milli...
As noted, several app categories seem to have "matured" and/or ossified and are dominated by one or two players. For example, Microsoft and Apple dominate the "office" productivity software market on macOS.
Moreover, apps like Discord, Teams, VS Code, etc. seem to be cross-platform web apps which are clunky and unsatisfying compared to native apps. (Other web apps don't even have desktop versions and only exist in the browser.)
That being said, Adobe's offerings are expensive subscriptions which have opened up a market for Affinity as well as Pixelmator and Acorn.
- Both
- Both
- Yes. I wish that weren't the case, but considering that I can't find a single provider so far who respects end user privacy, I would expect one that does to charge more.
- No. Ideally, the provider wouldn't keep any logs, so they wouldn't be aware that the same client was making a subsequent request.
- I guess it's completely up to the provider. As this would be the first privacy-respecting provider, they'll probably have to go all-in with privacy, if they wish to gain traction and popularity within the community. So no, I'd personally hope that they wouldn't do that. However if this were an existing provider hoping to start becoming more private, yet they also have current customers for whom these features matter, then I guess workarounds like this are better than not being able to transition to better privacy in general. Or, even better, offer features like this for customers who need it, but allow them to be disabled from account settings for those who don't want it.
- Personally, I do not care at all about metrics. If a client is querying DNS, then it's because they're about to connect to one of my services (leaving cyberattacks out of the picture for the moment), at which point I could collect metrics if I wanted to (which I don't). That being said, I don't think that collecting generalized metrics at the country level, for example, would be unreasonable for those who want it. And other metrics, such as DNS routing based on server "health checks" or the number of resolution errors, aren't bad either. It's just imperative that when the company collects these generalized metrics, it has a clear, rigorous process for purging them of all PII, saving only the country from which the request originated, for example.
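The purge step described above could look something like this minimal sketch. The log-entry fields and the `lookup_country` callback are hypothetical, not any real provider's schema:

```python
def scrub_query_log(entry, lookup_country):
    """Reduce a raw DNS query log entry to country-level metrics.

    Hypothetical sketch: the `entry` keys and `lookup_country`
    resolver are illustrative, not an actual provider's API.
    """
    return {
        "qname": entry["qname"],    # queried record name
        "ts": entry["ts"],          # timestamp, for query-volume charts
        "country": lookup_country(entry["client_ip"]),  # coarse origin only
        # client_ip itself is dropped -- no PII survives the purge
    }
```

The key design point is that the raw IP never leaves this function: only the coarse country label is retained, so there is nothing to leak later.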
No problem!
- Is this for personal domains or commercial?
- Are the clients 'sensitive' or do you want to protect PII out of principle?
- Do you expect to pay a premium (compared to larger providers) for client privacy?
- For records that have a distribution strategy like round robin or balanced by load, do you expect a client to receive the same result on subsequent requests?
- Is it acceptable to keep (for a record's TTL) a hash of the client's subnet and the response for the purposes of only returning consistent records, or do you consider this another flavour of tracking client IPs?
- How valuable are metrics/reporting to you? Is reporting query volume at the ASN or country level enough? Too much?
Thanks.
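The subnet-hash idea in the consistency question above could be sketched as follows. This is a hypothetical IPv4-only illustration at /24 granularity; the function and its interface are assumptions, not any real resolver's implementation:

```python
import hashlib

def pick_record(records, client_ip, name):
    # Hypothetical sketch: hash the client's /24 subnet together with
    # the record name, so the same subnet gets a consistent answer
    # without the resolver ever storing the full client IP.
    subnet = ".".join(client_ip.split(".")[:3])
    digest = hashlib.sha256(f"{subnet}|{name}".encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(records)
    return records[index]
```

Because the selection is a pure function of (subnet, name), nothing needs to be logged between requests at all, which sidesteps the "another flavour of tracking" concern for stateless setups; a cached-hash variant would only be needed when the record set itself changes within a TTL.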