The Mini Cooper SE is also being discontinued.
Looks like the Bolt is still here.
I took 2 courses: "Rafting Trip" and "Write a Compiler". Both were awesome. The Rafting Trip took us through implementing the Raft consensus algorithm from scratch. And the "Write a Compiler" course had us build a small language using LLVM.
Both courses (but especially the Rafting Trip one) were definitely for experienced programmers. In the courses I took, people generally had at least 5 years of professional experience. And even then, there were a few people who really struggled to keep pace in the course.
But at the end, most people had a (kinda) working Raft library or compiler!
Edit:
> Our findings reveal for the first time, that Alzheimer’s symptoms can be transferred to a healthy young organism via the gut microbiota, *confirming a causal role* of gut microbiota in Alzheimer’s disease ...
From: https://faculty.sites.uci.edu/kimgreen/bio/glucocorticoids-a...
Citing many papers
> One early event in AD is an increase in circulating glucocorticoids
You could sum up Alzheimer's as: diet/lifestyle plus a congenital form of Cushing's syndrome. That gives you increasing glucocorticoids, which imply downregulation of the PVN, less progesterone, and low levels of prolactin, reducing oligodendrocytes and therefore myelin sheaths. Add in APOE e4 without choline in the diet and you have accumulation of lipids to round it all out.
There is a reason Omega-3 plus B and D vitamins are talked about as preventative: they all reduce inflammation.
See this for example: https://www.grassrootshealth.net/two-nutrients-proven-stop-b...
Edit2: For the curious here is a larger brain dump on the topic with many more links when I had to untangle the pathway earlier this year while working on something else. https://www.reddit.com/r/DrWillPowers/comments/16ae4zy/alzhe...
That said, I think most coders just can't deal with it. For reasons I won't go into, I came to fdb already fully aware of the compromises that software transactional memories have, and fdb roughly matches the semantics of those: retry on failure, a maximum transaction size, a maximum transaction time, and so on. For those who haven't used it, start here: https://apple.github.io/foundationdb/developer-guide.html ; especially the section on transactions.
These constraints are _very_ inconvenient for many kinds of applications, so, ok, you'd like a wrapper library that handles them gracefully and hides the details (for example, counting a range).
This seems like it should be easy to do - after all, the expectation is that _application developers_ do it directly - but in practice it isn't, and it introduces a layering violation into the data modeling if any part of your application does direct key access. I recommend people try it. It can surely be done, but that layer is now as critical as the DB itself, and that has interesting risks.
At heart, the problem is that the limits are low enough that normal applications can and do run into them, and they are annoying. It would be really nice if the FDB team would build this next layer themselves, with the same degree of testing, but they haven't, and I think it's pretty clear that a small-transaction KV store is not, in practice, enough to build complex layers on.
Emphasis on the tested part - it's all well and good for fdb to be rock solid, but what needs to be rock solid is the actual interface used by 90% of applications, and once you exceed small keys or short transactions, that isn't really true.
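To make the point concrete, here's a minimal sketch (using the Python bindings; the chunk size and the example key prefix are my assumptions) of what a "count of range" wrapper ends up looking like once you respect the transaction size/time limits: you split the work across many transactions, and in doing so you give up a single consistent snapshot - exactly the kind of semantic detail such a layer has to own.

    import fdb

    fdb.api_version(630)
    db = fdb.open()

    # Count up to `limit` keys of [begin, end) in one (automatically retried) transaction.
    # Returns (count, resume_key); resume_key is None when the range is exhausted.
    @fdb.transactional
    def count_chunk(tr, begin, end, limit):
        count, last_key = 0, None
        for kv in tr.get_range(begin, end, limit=limit):
            count += 1
            last_key = kv.key
        if count < limit:
            return count, None
        return count, last_key + b'\x00'   # first key strictly after last_key

    # Count an arbitrarily large range by stitching together many small
    # transactions, so no single transaction hits the size/time limits.
    # Note: the total is NOT a consistent snapshot across chunks.
    def count_range(db, begin, end, chunk=10_000):
        total, cursor = 0, begin
        while cursor is not None:
            n, cursor = count_chunk(db, cursor, end, chunk)
            total += n
        return total

    # Example: count all keys under an assumed 'users/' prefix.
    print(count_range(db, b'users/', b'users0'))

Nothing here is hard, but notice that the wrapper now decides retry behavior, chunking, and consistency semantics - which is why it needs the same level of testing as the database underneath it.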
Just take as given that the analysis is correct, and screening for rare Disease A on net has no effect on life expectancy. Almost no one actually gets Disease A, but everyone is screened for it, and that has some diffuse cost to life expectancy: Screen enough people enough times and someone will die in a car accident on the way to or from the doctor's office. More likely the screening crowds out other more net-beneficial medical testing or is taken as some false comfort to continue an unhealthy lifestyle.
Modern cancer treatment, especially for the most common types (i.e. the ones most likely to be screened for), is very good, even if the cancer is caught later due to lack of screening. So even the folks who catch it early through screening often don't see a benefit, further pushing down the average life-expectancy win.
Still: This is like saying home insurance is a bad deal because on average the insurance companies make money. Screening is an insurance policy (not a free one, to be sure) against a catastrophic outcome.
If you're a public health authority in a utilitarian and budget-constrained mindset, sure, don't encourage screenings by the logic and findings of this analysis. But I don't think individuals should consider on-average-LE-negative screenings as something to avoid.
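If it helps, here's a toy calculation (every number invented, purely to show the shape of the argument) of how a screening can look neutral-to-negative on average while still capping the worst case for the unlucky few - which is all the insurance analogy is claiming:

    # Toy numbers, all invented, just to illustrate mean vs. worst case.
    p_disease      = 0.001   # chance a given person has rare Disease A
    loss_missed    = 20.0    # life-years lost if it goes uncaught
    loss_caught    = 2.0     # life-years lost even with early detection
    screening_cost = 0.02    # diffuse per-person cost of screening, in expected life-years

    mean_without = p_disease * loss_missed                   # 0.020
    mean_with    = p_disease * loss_caught + screening_cost  # 0.022

    print(f"average loss without screening: {mean_without:.3f} life-years")
    print(f"average loss with screening:    {mean_with:.3f} life-years")
    print(f"worst case without screening:   {loss_missed} life-years")
    print(f"worst case with screening:      {loss_caught} life-years")

With these made-up numbers the screened population does slightly worse on average, but the unlucky 0.1% are far better off - the same trade-off you accept when you buy home insurance.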
Is there a specific subject area of application? E.g. big data analytics [a] / data mining and web analytics [b] / streaming applications [c] / other?
Random picks, with a mix of tutorials/examples/explanations/resources, from the search-engine term "probabilistic data structures": [1][2][3][4]; probabilistic DS vs. PostgreSQL [5]
---
[0] : https://xlinux.nist.gov/dads/
[1] : https://www.geeksforgeeks.org/introduction-to-the-probabilis...
[2] : https://iq.opengenus.org/applications-of-different-data-stru...
[3] : https://dzone.com/articles/introduction-probabilistic-0
[4] : https://blog.devgenius.io/probabilistic-data-structures-1-de...
[5] : https://www.postgresql.eu/events/fosdem2020/sessions/session...
----
[a] big data : https://www.researchgate.net/publication/335418069_Probabili...
[b] data mining / web analytics : https://highlyscalable.wordpress.com/2012/05/01/probabilisti...
[c] streaming : https://www.usaclouds.org/blog/streaming-with-probabilistic-...
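If you want a feel for the basic idea before diving into the links, here's a minimal, illustrative Bloom filter sketch (the sizing and the double-hashing scheme are my assumptions, not taken from any of the references above):

    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: set membership with false positives, no false negatives."""

        def __init__(self, num_bits=1 << 20, num_hashes=7):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8)

        def _positions(self, item: bytes):
            # Derive k bit positions from one digest (double-hashing trick).
            digest = hashlib.sha256(item).digest()
            h1 = int.from_bytes(digest[:8], "big")
            h2 = int.from_bytes(digest[8:16], "big") | 1   # force odd stride
            for i in range(self.num_hashes):
                yield (h1 + i * h2) % self.num_bits

        def add(self, item: bytes):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item: bytes):
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    bf = BloomFilter()
    bf.add(b"alice")
    print(b"alice" in bf)    # True
    print(b"mallory" in bf)  # almost certainly False (small false-positive rate)

The whole appeal is that trade-off: a fixed, small amount of memory in exchange for a tunable false-positive rate, which is why these structures show up so often in big data, web analytics, and streaming contexts.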