This made me laugh! But hyperbole aside, this premise would seem (by the nature of efficient laziness) to eventually result in people spam-watching ads with bots - whatever measures are put in place to prevent this, people will find a way.
You cannot retroactively alter a policy, for obvious and good reasons. The main preparation, policy-wise, is that insurance companies do not knowingly write new policies in the affected area when a disaster is ongoing or imminent. Reinsurers will also avoid writing new treaties (which is what a standard reinsurance policy is called) -- for these reasons the Florida cat reinsurance market is typically dominated by policies that incept on June 1st and run to May 31st of the following year.
Internally, the companies will start modeling their potential losses almost immediately, as investors expect a fairly quick turnaround on getting initial loss estimates out the door.
There's a fairly new paper here, albeit not yet peer reviewed, on the promise of maximum entropy models in an actuarial setting. The appendix has references to the earlier papers: https://www.casact.org/pubs/forum/20wforum/07_Evans.pdf
The fluid nature of probability is the center of the insurance universe. Probability is always a moving target in the insurance world; indeed, if it weren't, there wouldn't be much need for actuaries. Much of actuarial training revolves around the idea of credibility -- how credible is your sample set, what alterations should you make to old data to keep it relevant today, and what data should you add as a complement in order to relieve the model of the biases inherent in your sample size. This is inherently Bayesian in its approach.
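To make the credibility idea concrete, here is a minimal sketch (my own illustration, not taken from any particular paper or standard) of a classical limited-fluctuation credibility blend: your own experience gets a weight Z that grows with sample size, and the complement of credibility fills in the rest. The 1,082-claim full-credibility standard and the loss figures below are purely illustrative.

    import numpy as np

    def credibility_estimate(own_losses, complement_mean, full_credibility_n=1082):
        # Classical limited-fluctuation ("square root") rule:
        # Z = min(1, sqrt(n / n_full)). The 1,082-claim full-credibility
        # standard is a commonly cited textbook value, used here for illustration.
        n = len(own_losses)
        z = min(1.0, np.sqrt(n / full_credibility_n))
        return z * np.mean(own_losses) + (1.0 - z) * complement_mean

    # Hypothetical numbers: 200 of your own claims averaging ~4,800,
    # blended with an industry-wide complement of 5,500.
    rng = np.random.default_rng(0)
    own = rng.gamma(shape=2.0, scale=2400.0, size=200)
    print(credibility_estimate(own, complement_mean=5500.0))

With only 200 claims, Z works out to roughly 0.43, so the estimate lands between your own experience and the industry complement -- which is the whole point of the credibility machinery.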
Where it truly gets interesting is that insurance companies are very cognizant of tail risk -- the 1-in-100, 1-in-250, 1-in-500 events that can cause insurer insolvency if not properly accounted for. You can survive a miscalculated loss trend within reasonable bounds, but if you haven't thought about the potential Cat 5 hurricane that hits Miami-Dade then you are going to have some very unhappy investors. When it comes to these types of events, you mostly need to be in the right ballpark. The order of magnitude matters more than the exact number -- although the exact number matters quite a bit for regulatory reasons. This type of calculation for property lines has largely been outsourced to the stochastic models developed by companies such as AIR and RMS. A sudden change in their models, which I think is likely after this record-breaking hurricane season, can inflict capital pressure on the industry almost instantly.[1]
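For a sense of what those return periods mean numerically, here is a toy Monte Carlo sketch -- not how the AIR/RMS catastrophe models actually work internally, just a made-up frequency/severity model -- that simulates many years of aggregate losses and reads off the 1-in-100 / 1-in-250 / 1-in-500 quantiles that drive capital discussions.

    import numpy as np

    rng = np.random.default_rng(42)
    n_years = 100_000  # simulated years

    # Toy annual aggregate loss model: Poisson event counts with heavy-tailed
    # (lognormal) severities. All parameters are made up for illustration.
    event_counts = rng.poisson(lam=3.0, size=n_years)
    annual_loss = np.array([
        rng.lognormal(mean=15.0, sigma=1.5, size=k).sum() for k in event_counts
    ])

    for return_period in (100, 250, 500):
        q = 1.0 - 1.0 / return_period
        print(f"1-in-{return_period} annual loss: {np.quantile(annual_loss, q):,.0f}")

The real models replace those made-up distributions with event catalogs, exposure data, and damage functions, but the capital question at the end is the same: what does the far tail of annual losses look like?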
There are some actuarial papers from around 50 years ago that discuss information entropy as another way to approach the construction of probability models, but they never really caught on -- likely due to the lack of widespread computing power at the time. I'm hoping these ideas can gain some steam now that we can construct these distributions in Python and R.
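As a flavor of what that looks like in code, here is a minimal Python sketch (my own toy example, not the construction used in the Evans paper): the maximum-entropy distribution over a discrete set of loss sizes with a fixed mean takes the exponential-family form p_i proportional to exp(lambda * x_i), so fitting it reduces to a one-dimensional root-find for lambda. The loss sizes and target mean are made up.

    import numpy as np
    from scipy.optimize import brentq

    losses = np.array([1_000, 5_000, 10_000, 50_000, 100_000])  # hypothetical loss sizes
    target_mean = 20_000.0                                       # hypothetical observed mean

    def implied_mean(lam):
        # MaxEnt solution under a mean constraint: p_i proportional to exp(lam * x_i)
        w = np.exp(lam * losses)
        return (w / w.sum()) @ losses

    # Solve for the Lagrange multiplier that reproduces the target mean.
    lam = brentq(lambda l: implied_mean(l) - target_mean, -1e-3, 1e-3)
    p = np.exp(lam * losses)
    p /= p.sum()
    print(dict(zip(losses.tolist(), p.round(4))))

With more constraints (say, a variance or a tail probability), the same idea carries over, but the single root-find becomes a multidimensional optimization over the Lagrange multipliers -- which is exactly the part that was painful before cheap computing.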
[1] There is a fantastic article by Michael Lewis that describes this issue in great detail: https://www.nytimes.com/2007/08/26/magazine/26neworleans-t.h...
To add to your tail risk point - I wonder how many people foresaw the Venezuelan oil crisis way back when, or, even less likely, the Saudi Arabian oil complex attack in 2019. And of course the current situation we're in with COVID, which an entire university of forward-thinking people didn't call until it was a week away. As an aside, do insurance companies significantly alter their policies when a Cat 5 hurricane is imminent? What preparations would they make in the face of that sort of event?
Are you talking about chaos theory in the last paragraph? I'll read the article you linked in a bit and see what more I have to say; from skimming through, it looks as though my question from the previous paragraph may be answered.
Yet some comments make me feel like they expect, with the threat of harsh criticism, uber-deep and profoundly insightful content from PG on a highly consistent basis. Maybe it's the phenomenon where wider audiences give rise to (or amplify) polarizing views.
A couple of well-known things the field mostly agrees on: there are no real synonyms, in the sense that every word carries its own semantic baggage, which manifests as different meanings in different contexts. So e.g. "use" and "utilise" are not simply the same word, one just fancier than the other.
The same goes for syntactic structures. There's a variety of approaches here, but you'd be hard pressed to find even a supporter of transformational generative grammar -- the Chomskyan paradigm which (roughly speaking) holds that active and passive sentences come from the same underlying deep structure -- who'd say active and passive voice sentences are equal or equivalent in discourse.
Similarly, you can't boil down "complexity" to the length of sentences or the number of clauses. I'd be more willing to concede if this were talking about the distance between things that refer to each other, or the number of words that refer to previous discourse or the outer world (i.e. long-distance dependencies & deictic elements). But you can have paragraph-long sentences that read just buttery smooth, and "sentence" itself is a pretty vague term that you can't really pin down. Like, if I split that previous sentence into two around "and", would it really be two sentences, or is it really one sentence to begin with?
All this weird stuff like "don't use the passive voice" or "use 'simpler' words" etc. is the product of the same mentality that in ye olde times wanted you not to split infinitives, not to end your sentences with prepositions, and other nonsense up with which you should not put.
If you think about it, every business interaction before the twentieth century was mediated by 1:1 dealings with humans, who brought their own prejudices and self-interest to it. The Strowger exchange was the start of an era of "mechanical honesty" - machines, businesses, and even government departments that could only act in one way, because any bespoke deviation was too inefficient to exist/be profitable, and so ordinary citizens could rely on them.
We are coming to the end of that era. Computing power has reached the point where bespoke dishonesty and manipulation can be implemented efficiently. The public still retains the expectations of the mechanical honesty era, and is an easy mark. That has to change...
[edited for punctuation]
I'm particularly interested in applications of dynamic open-sourced metrics (ranging from corporate carbon footprinting to labour tokenization) - is this the vision for hmt? The only things I've found online so far are Grafana/Prometheus (etc.) and Uber's m3, which is built on top of the former. Anything else you know of that tackles this topic?