Just don't elect them. When running for office in most places, being considered wealthy or coming from a wealthy family is a considerable hurdle to overcome (as is being extremely poor).
He doesn't have any power anymore right? Just a pure revenge thing? A powerful group sending a warning message to the current PM?
Well, that's some real dodgy use of numbers right there. In "1 in 4000", "1" is the number of people who died as a result of donating a kidney and "4000" is the number who didn't, counted over some sample of living kidney donors.
These two numbers, "1" and "4000", have no obvious relation to the value one places on one's life compared to the lives of others. For example, "4000" is not the number of other lives saved by donating one kidney. By donating one kidney one can "save" one other person's life at most (and it's not really "saving" so much as delaying the inevitable).
Equally dodgy is the calculation of "1/3,000 risk of death in surgery is like sacrificing yourself to save 3,000 people" earlier in the article.
Where this dodgy statistical thinking (like magical thinking, but with statistics) comes from, I don't know; but anecdotally it's very common in discussions about doing good with numbers, and it seems designed to shut down debate by claiming "science says".
Btw, if I wanted to know how much I value my life over that of a stranger, all I'd have to do is ask myself: how many people would I sacrifice to save my own life? I am guessing that for the majority of people on the planet the answer is "0". Simple question, simple answer, and no dodgy "maths".
To explain with another example: let's say I have a dataset of 100 people's scores at golf (no handicaps), and I know that 5% of them are pro players and the rest are 'advanced amateurs'. Because of this, I might take the top 5 scores, guess that those players are pros, and assign everyone else the guess of 'advanced amateur'.
Now let's say there was actually no correlation between people's golf scores and their 'pro' status - what accuracy would I expect in the above experiment? The answer is actually closer to 90% 'accurate guesses' than 50%! (Although obviously, that's 90% accuracy achieved by random chance.)
Now if someone told me they got 50% of the guesses wrong at this task, that implies that they guessed that the top 50% of those golfers were pro rather than picking the top 5% of scores, and I would question the methodology.
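The ~90% figure above can be checked with a quick simulation (a minimal sketch; scores here are drawn at random, deliberately independent of pro status, and the 100/5 split is taken from the example):

```python
import random

def simulate_once(n=100, n_pro=5):
    # 100 players, 5 of whom are pros; scores are random and carry
    # NO information about pro status.
    players = [(random.random(), i < n_pro) for i in range(n)]
    # Guess "pro" for the top n_pro scores, "amateur" for the rest.
    players.sort(key=lambda p: p[0], reverse=True)
    correct = 0
    for rank, (_, is_pro) in enumerate(players):
        guessed_pro = rank < n_pro
        correct += (guessed_pro == is_pro)
    return correct / n

random.seed(0)
accuracies = [simulate_once() for _ in range(10_000)]
print(sum(accuracies) / len(accuracies))  # ≈ 0.905
```

The expected accuracy works out to 90.5%: each pro who sneaks into the top 5 by chance fixes two guesses at once (one pro labelled correctly, one amateur no longer mislabelled), and the baseline of labelling everyone correctly except the 5 pros and the 5 flagged amateurs is already 90%.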
This percentage is similar to the dataset on the webpage - I downloaded it, filtered out exclusions, and about 4% of the valid responses are 60 or over.
If I am picking out an inherently small population (over-60s are about 4% of this dataset) and I am guessing wrong 50% of the time, it means my cut-off is incorrectly calibrated. A score cut-off at the right percentile should, at worst, be wrongly flagging 4% and missing another 4%.
Am I going crazy? It seems logical to me, but, to be open, maths isn't my strong point. I just know that if I designed the guessing rule, I would be getting more than 50%. My algorithm would be: if the user's average score across the three tests is less than -1.5, assign 'over 60'. That would get around 95% accurate guesses, albeit it would still not prove anything, and I agree with the author's overall premise!
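The proposed rule is simple enough to sketch (a hypothetical illustration only: the -1.5 threshold comes from the comment above, but the records below are invented, not taken from the real dataset):

```python
def guess_age_group(avg_score, threshold=-1.5):
    # Rule from the comment: a low enough average score across the
    # three tests gets the 'over 60' guess; everyone else 'under 60'.
    return "over 60" if avg_score < threshold else "under 60"

# Invented (average_score, actual_group) pairs for illustration.
records = [
    (-2.1, "over 60"),
    (-0.3, "under 60"),
    (0.8,  "under 60"),
    (-1.7, "over 60"),
    (-0.9, "under 60"),
]

correct = sum(guess_age_group(score) == actual for score, actual in records)
print(f"{correct}/{len(records)} correct")
```

The point is that because the threshold only flags a small tail of the score distribution, the rule agrees with the majority class most of the time by construction, which is exactly why raw accuracy is a misleading way to judge it.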
Only 5% of their dataset is above the age of 60, so their claim that they are getting 50% of their guesses wrong suggests they are calculating it wrong. Surely their cut-off should be at the 95th percentile of the data?
They shouldn't be guessing 'under 60' the same proportion of times as 'over 60', because their population is mostly under 60.
And paranoid, really?