https://en.wikipedia.org/wiki/Base_rate_fallacy
https://en.wikipedia.org/wiki/Simpson%27s_paradox
Someone who's done some analysis using the above mental models: https://www.covid-datascience.com/post/israeli-data-how-can-...
If I were starting an IRC channel for a free software project now, I would put it on OFTC, which has a real governance model with elections and - mysteriously - also manages to be drama-free.
I was barely in high school when I came up with the name OFTC and I registered OFTC.net. Very early on in the process of creating OFTC, I agreed with all of the people I was creating OFTC with that I would behave as caretaker rather than owner of OFTC.net while we figured out our governance.
Ultimately we came up with a governance model, and we also managed to convince Software in the Public Interest to take custody of the domain name and have it managed in accordance with the governance model we designed.
We started with a pretty great group of capable, well-intentioned people, and one of the things we figured out was that if OFTC was going to be a sustainable project, it needed more sustainable governance than the project we were leaving.
One of the key people behind the very early push for OFTC to have a stable governance model later became a Member of Parliament here in Canada.
They in turn reference my 2015 take on this: http://dnscookie.com/
With homage to Moxie's Cryptographic Doom Principle, I propose the Cache Doom Principle: if a system's behaviour can be influenced by a cache, eventually someone will figure out a way to use that cache to leak data.
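A minimal sketch of the principle (all of the names here are made up, and a real attacker would measure lookup latency rather than count backend fetches, as in DNS cache snooping or CPU cache attacks):

```python
class SharedCache:
    """A trivially simple shared lookup cache."""
    def __init__(self):
        self.store = {}
        self.backend_fetches = 0  # stands in for the slow, observable path

    def lookup(self, key):
        if key not in self.store:
            self.backend_fetches += 1  # a miss takes the slow path
            self.store[key] = f"value-for-{key}"
        return self.store[key]

cache = SharedCache()

# Victim resolves a name through the shared cache.
cache.lookup("secret-internal-host.example")

# Attacker probes: a lookup that does NOT hit the backend (i.e. one that
# returns fast) reveals that someone already asked for this key.
before = cache.backend_fetches
cache.lookup("secret-internal-host.example")
was_cached = (cache.backend_fetches == before)

before = cache.backend_fetches
cache.lookup("never-seen-host.example")
was_not_cached = (cache.backend_fetches == before + 1)

print(was_cached, was_not_cached)  # True True: the cache leaked the victim's activity
```

The cache never hands the attacker the victim's data directly; the hit/miss distinction alone is the leak.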
(1) Firefox already uses its own root store
(2) App developers can include additional roots in addition to the system root store: https://developer.android.com/training/articles/security-con...
(3) Chrome is migrating to using its own store: "Historically, Chrome has integrated with the Root Store provided by the platform on which it is running. Chrome is in the process of transitioning certificate verification to use a common implementation on all platforms where it's under application control, namely Android, Chrome OS, Linux, Windows, and macOS. Apple policies prevent the Chrome Root Store and verifier from being used on Chrome for iOS."
https://www.chromium.org/Home/chromium-security/root-ca-poli...
Disclaimer - I work for DigitalOcean.
https://www.digitalocean.com/docs/container-registry/ - "In the future, each plan will have a bandwidth allowance and additional outbound data transfer (from the registry to the internet) will be $0.10/GiB."
The fact that somebody could put a caching proxy in front of the container registry -- on a droplet also hosted at DigitalOcean -- and have their bandwidth costs fall 10x for doing that does indeed provide further illustration of the absurdity of DigitalOcean's new approach to bandwidth pricing.
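The arithmetic behind that 10x figure is straightforward. The per-GiB prices below are the ones quoted in this thread ($0.10/GiB registry egress, $0.01/GiB droplet egress, free registry-to-droplet traffic inside DO is an assumption); the image size and pull count are made up for illustration:

```python
REGISTRY_EGRESS_PER_GIB = 0.10  # registry -> internet (quoted in this thread)
DROPLET_EGRESS_PER_GIB = 0.01   # droplet -> internet
INTERNAL_PER_GIB = 0.00         # registry -> droplet inside DO (assumed free)

image_gib = 1.0
pulls = 100  # e.g. CI jobs or external hosts pulling the same image

# Pull straight from the registry every time:
direct_cost = pulls * image_gib * REGISTRY_EGRESS_PER_GIB

# Put a caching proxy on a droplet: the registry is hit once (internal
# traffic), and all external pulls are served as droplet egress instead.
proxy_cost = (1 * image_gib * INTERNAL_PER_GIB
              + pulls * image_gib * DROPLET_EGRESS_PER_GIB)

print(direct_cost, proxy_cost)  # 10.0 vs 1.0: a 10x reduction
```

When routing the same bytes through an extra hop inside the same datacenter is 10x cheaper than serving them directly, the price clearly isn't tracking cost.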
Of course it would be nicer if things were free, but to claim this is evidence of a march into irrelevance? For charging for egress traffic on a Docker registry? That's a bit much, don't you think? Especially considering how easy it is to set up a GitHub Action or some other CI tool that constantly hammers their registries without adding much value. Docker (the company) clearly feels that pain, and DO just wants to prevent the same thing from happening to them.
If I were to guess, the intended use case is to help you with deployments inside the DO cloud, and to actually reduce your ingress traffic when pulling from other, remote Docker registries. It's a win/win for these use cases, and to be honest, it's not expensive.
Besides, DO’s pricing still is very much favorable compared to other cloud vendors.
Indeed, DigitalOcean themselves built their place in the market by charging $0.01/GB for bandwidth. How do we reasonably get to the $0.10/GB being charged here, a 10x increase?
If it were really that expensive for them, they could outsource it to a CDN for well under $0.01/GB at their scale, which would still leave them room for margin. But all of this pricing is in fact completely detached from the underlying physical realities -- they are charging these prices because they think they can get away with it, not because they need to do so to cover costs and still have some margin.
Bandwidth prices shouldn't be going up, indeed they should be going down. 100 gigabit interconnects are a thing now.
First it was the app platform and now this. Gouging us with $0.10/gigabyte bandwidth charges (1) makes us think less of you, and (2) adds a bunch of cognitive complexity and work to developers' lives.
If this is how it's going to be we may as well just use AWS or move on to one of your competitors that isn't trying to pretend that bandwidth is expensive. It isn't, and there isn't any reason we should have to design applications around artificially absurdly inflated costs.
Even Oracle pretends to understand this. _ORACLE_ is the one trying to make the case that they aren't only about having hostages/locked-in customers.
When Oracle is beating you on this metric you've really jumped the shark.
"Israelis who were vaccinated were 6.72 times more likely to get infected after the shot than after natural infection". https://archive.is/RlwBc
But, once the data is broken down into buckets that help address confounding variables (i.e. different vaccination rates among different age groups), things look very different. All of a sudden efficacy numbers are looking better than 90% for a lot of people.
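That reversal is easy to demonstrate with made-up numbers (these are illustrative and NOT the actual Israeli figures; the two age buckets and case counts are invented so the per-group efficacy comes out to exactly 90%):

```python
# age group: (vaccinated, cases_vaccinated, unvaccinated, cases_unvaccinated)
groups = {
    "under_50": (200_000, 20, 800_000, 800),
    "over_50":  (900_000, 900, 100_000, 1_000),
}

def efficacy(vax, vax_cases, unvax, unvax_cases):
    """1 - (attack rate in vaccinated / attack rate in unvaccinated)."""
    return 1 - (vax_cases / vax) / (unvax_cases / unvax)

# Within each age bucket, efficacy is 90%:
per_group = {g: efficacy(*row) for g, row in groups.items()}

# But pool the buckets -- most older people are vaccinated, most younger
# people are not -- and the very same data shows only ~58% efficacy:
vax = sum(r[0] for r in groups.values())
vax_cases = sum(r[1] for r in groups.values())
unvax = sum(r[2] for r in groups.values())
unvax_cases = sum(r[3] for r in groups.values())
pooled = efficacy(vax, vax_cases, unvax, unvax_cases)

print(per_group)          # 0.9 in both groups
print(round(pooled, 3))   # 0.582
```

Nothing about the vaccine changed between the two calculations; the pooled number is dragged down purely because vaccination status is correlated with age, and age is correlated with risk.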
This will similarly matter a great deal as people try to figure out how long vaccines provide protection. The groups that got vaccinated the earliest in many places were older people and health care workers -- groups which start out at higher risk, and also have a higher probability of less effective immune response to vaccines (older people).
As a result, it will be easy for analysts who don't account for this to underestimate how long vaccines remain effective.
The archive.is link you provided isn't working for me at the moment, but to address your statement in the context of the above framework:
The group of people most likely to have been infected with the virus are not the same as the group of people most likely to have antibodies as a result of immunization. In many places, there are a lot more younger people who have gotten infected with the disease than older people. There are other socioeconomic and behavioural differences too.
Given that young people tend to have more effective immune responses to begin with, and given that they have been shown to have better outcomes after being infected with this virus, it's easy to see how one could incorrectly conclude that infection-acquired antibodies confer stronger immunity, even if the opposite may be true.
In short: Apparent differences may be better explained by the fact that it's a different group of people who have been infected vs those who have not been infected.