tfehring · 2 years ago
The author is claiming that Bayesians vary along two axes: (1) whether they generally try to inform their priors with their knowledge or beliefs about the world, and (2) whether they iterate on the functional form of the model based on its goodness-of-fit and the reasonableness and utility of its outputs. He then labels 3 of the 4 resulting combinations as follows:

    ┌───────────────┬───────────┬──────────────┐
    │               │ iteration │ no iteration │
    ├───────────────┼───────────┼──────────────┤
    │ informative   │ pragmatic │ subjective   │
    │ uninformative │     -     │ objective    │
    └───────────────┴───────────┴──────────────┘
My main disagreement with this model is the empty bottom-left box - in fact, I think that's where most self-labeled Bayesians in industry fall:

- Iterating on the functional form of the model (and therefore the assumed underlying data generating process) is generally considered obviously good and necessary, in my experience.

- Priors are usually uninformative or weakly informative, partly because data is often big enough to overwhelm the prior.

The need for iteration feels so obvious to me that the entire "no iteration" column feels like a straw man. But the author, who knows far more academic statisticians than I do, explicitly says that he had the same belief and "was shocked to learn that statisticians didn’t think this way."

klysm · 2 years ago
The no iteration thing is very real and I don’t think it’s even for particularly bad reasons. We iterate on models to make them better, by some definition of better. It’s no secret that scientific work is subject to rather perverse incentives around thresholds of significance and positive results. Publish or perish. Perverse incentives lead to perverse statistics.

The iteration itself is sometimes viewed directly as a problem. The “garden of forking paths”, where the analysis depends on the data, is viewed as a direct cause of some of the statistical and epistemological crises in science today.

Iteration itself isn’t inherently bad. It’s just that the objective function usually isn’t what we want from a scientific perspective.

To those actually doing scientific work, I suspect iterating on their models feels like they’re doing something unfaithful.

Furthermore, I believe a lot of these issues are strongly related to the flawed epistemological framework on which many scientific fields seem to have converged: p<0.05 means it’s true, otherwise it’s false.

edit:

Perhaps another way to characterize this discomfort is by the number of degrees of freedom that the analyst controls. In a Bayesian context where we are picking priors either by belief or previous data, the analyst has a _lot_ of control over how the results come out the other end.

I think this is why fields have trended towards a set of ‘standard’ tests instead of building good statistical models. These take most of the knobs out of the hands of the analyst, and generally are more conservative.
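To make the degrees-of-freedom point concrete, here is a minimal sketch (a toy Beta-Binomial setup; the data and priors are invented for illustration) of how much the analyst's choice of prior can move the answer when data are scarce:

    # Sketch: how the analyst's choice of prior moves a Beta-Binomial posterior.
    # Hypothetical data: 7 successes in 10 trials.
    successes, trials = 7, 10

    priors = {
        "flat Beta(1, 1)":       (1, 1),
        "skeptical Beta(2, 8)":  (2, 8),
        "optimistic Beta(8, 2)": (8, 2),
    }

    for name, (a, b) in priors.items():
        # Conjugate update: posterior is Beta(a + successes, b + failures).
        post_a = a + successes
        post_b = b + (trials - successes)
        print(f"{name}: posterior mean = {post_a / (post_a + post_b):.2f}")

With only 10 observations the three posterior means range from roughly 0.45 to 0.75; with thousands of observations the same priors would barely matter.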

joeyo · 2 years ago

  > Iteration itself isn’t inherently bad. It’s just that the objective
  > function usually isn’t what we want from a scientific perspective.
I think this is exactly right and touches on a key difference between science and engineering.

Science: Is treatment A better than treatment B?

Engineering: I would like to make a better treatment B.

Iteration is harmful for the first goal yet essential for the second. I work in an applied science/engineering field where both perspectives exist (and are necessary!). Which specific path is taken for any given experiment or analysis will depend on which goal one is trying to achieve. Conflict will sometimes arise when it's not clear which of these two objectives is the important one.

slashdave · 2 years ago
In particle physics, it was quite fashionable (and may still be) to iterate on blinded data (data deliberately altered by a secret random number, and/or relying entirely on Monte Carlo simulation).
j7ake · 2 years ago
Iteration is necessary for any analysis. To safeguard yourself from overfitting, be sure to have a hold out dataset that hasn’t been touched until the end.
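A minimal sketch of that safeguard (using scikit-learn's train_test_split on synthetic data; the names and numbers are purely illustrative):

    # Sketch: set aside a hold-out set first and touch it only once, at the end.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)

    X_work, X_holdout, y_work, y_holdout = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    model = LinearRegression().fit(X_work, y_work)  # iterate freely on this part
    print("one-time final check:", model.score(X_holdout, y_holdout))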
opensandwich · 2 years ago
As someone who isn't particularly well-versed in Bayesian "stuff": do Bayesian non-parametric methods fall under the "uninformative" + "iteration" approach?

I have a feeling I'm just totally barking up the wrong tree, but don't know where my thinking/understanding is just off.

mjburgess · 2 years ago
Non-parametric models can be generically understood as parametric on order statistics.
Onavo · 2 years ago
Interesting, in my experience modern ML runs almost entirely on pragmatic Bayes. You find your ELBO, you choose the latent-variable model du jour that best models your problem domain (these days it's all transformers), and then you start running experiments.
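For readers who haven't met the acronym: the ELBO (evidence lower bound) is the objective maximized in variational inference. A minimal sketch of a Monte Carlo ELBO estimate for a toy model (standard normal prior on a latent z, Gaussian likelihood, Gaussian variational posterior; all numbers are made up):

    # Sketch: Monte Carlo ELBO for a toy latent-variable model.
    #   prior:       z ~ N(0, 1)
    #   likelihood:  x | z ~ N(z, 1)
    #   variational: q(z) = N(mu, sigma^2)
    import numpy as np
    from scipy.stats import norm

    x_obs = 1.3           # one observed data point (invented)
    mu, sigma = 1.0, 0.8  # variational parameters being evaluated

    rng = np.random.default_rng(0)
    z = rng.normal(mu, sigma, size=10_000)  # samples from q(z)

    # ELBO = E_q[ log p(x|z) + log p(z) - log q(z) ]
    elbo = np.mean(
        norm.logpdf(x_obs, loc=z, scale=1.0)
        + norm.logpdf(z, loc=0.0, scale=1.0)
        - norm.logpdf(z, loc=mu, scale=sigma)
    )
    print(f"ELBO estimate: {elbo:.3f}")

In practice the same quantity is optimized with gradients over far richer models, but the bookkeeping is the same.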
tfehring · 2 years ago
I think each category of Bayesian described in the article generally falls under Breiman's [0] "data modeling" culture, while ML practitioners, even when using Bayesian methods, almost invariably fall under the "algorithmic modeling" culture. In particular, the article's definition of pragmatic Bayes says that "the model should be consistent with knowledge about the underlying scientific problem and the data collection process," which I don't consider the norm in ML at all.

I do think ML practitioners in general align with the "iteration" category in my characterization, though you could joke that that miscategorizes people who just use (boosted trees|transformers) for everything.

[0] https://projecteuclid.org/journals/statistical-science/volum...

thegginthesky · 2 years ago
I miss the college days where professors would argue endlessly on Bayesian vs Frequentist.

The article is very succinct and even explains why my Bayesian professors had different approaches to research and analysis. I never knew about the third camp, pragmatic Bayes, but it is definitely in line with one professor's research, which was very thorough on probability fit and the many iterations needed to get the prior and joint PDF just right.

Andrew Gelman has a very cool talk, "Andrew Gelman - Bayes, statistics, and reproducibility (Rutgers, Foundations of Probability)", which I highly recommend to data scientists.

bunderbunder · 2 years ago
mturmon · 2 years ago
Thank you.

In fact, the whole talk series (https://foundationsofprobabilityseminar.com/) and channel (https://www.youtube.com/@foundationsofprobabilitypa2408/vide...) seem interesting.

spootze · 2 years ago
Regarding the frequentist vs bayesian debates, my slightly provocative take on these three cultures is

- subjective Bayes is the strawman that frequentist academics like to attack

- objective Bayes is a naive self-image that many Bayesian academics tend to possess

- pragmatic Bayes is the approach taken by practitioners that actually apply statistics to something (or in Gelman’s terms, do science)

DebtDeflation · 2 years ago
A few things I wish I knew when took Statistics courses at university some 25 or so years ago:

- Statistical significance testing and hypothesis testing are two completely different approaches, with different philosophies behind them, developed by different groups of people. They kinda do the same thing, but not quite, and textbooks tend to completely blur this distinction.

- The above approaches were developed in the early 1900s in the context of farms and breweries where 3 things were true - 1) data was extremely limited, often there were only 5 or 6 data points available, 2) there were no electronic computers, so computation was limited to pen and paper and slide rules, and 3) the cost in terms of time and money of running experiments (e.g., planting a crop differently and waiting for harvest) was enormous.

- The majority of classical statistics was focused on two simple questions - 1) what can I reliably say about a population based on a sample taken from it, and 2) what can I reliably say about the differences between two populations based on the samples taken from each? That's it. An enormous mathematical apparatus was built around answering those two questions under the constraints described in the previous point.
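For a sense of how compact that second question has become, here is a sketch using scipy's two-sample t-test on made-up yield numbers, in the spirit of those small agricultural samples:

    # Sketch: the classic two-sample question with a handful of observations per group.
    from scipy import stats

    plot_a = [4.8, 5.1, 4.9, 5.3, 5.0]  # made-up yields under treatment A
    plot_b = [5.4, 5.6, 5.2, 5.8, 5.5]  # made-up yields under treatment B

    t_stat, p_value = stats.ttest_ind(plot_a, plot_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")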

refulgentis · 2 years ago
I see, so academics are frequentists (attackers) or objective Bayes (naive), and the people Doing Science are pragmatic (correct).

The article gave me the same vibe: a nice, short set of labels for me to apply as a heuristic.

I never really understood this particular war, I'm a simpleton, A in Stats 101, that's it. I guess I need to bone up on Wikipedia to understand what's going on here more.

skissane · 2 years ago
> - subjective Bayes is the strawman that frequentist academics like to attack

I don’t get what all the hate for subjective Bayesianism is. It seems the most philosophically defensible approach, in that all it assumes is our own subjective judgements of likelihood, the idea that we can quantify them (however inexactly), and the idea that we want to be consistent, i.e. avoid Dutch books (most people do).

Whereas, objective Bayes is basically subjective Bayes from the viewpoint of an idealised perfectly rational agent - and “perfectly rational” seems philosophically a lot more expensive than anything subjective Bayes relies on.

3abiton · 2 years ago
Funnily enough, I also heard recently about fiducial statistics as a third camp, via an intriguing podcast (episode 581 of Super Data Science) with the EiC of Harvard Business Review.
RandomThoughts3 · 2 years ago
I’m always puzzled by this, because while I come from a country where the frequentist approach generally dominates, the fight with Bayesians basically doesn’t exist. It’s just a bunch of mathematical theories and tools. Just use what’s useful.

I’m still convinced that Americans tend to dislike the frequentist view because it requires a stronger background in mathematics.

parpfish · 2 years ago
I don’t think mathematical ability has much to do with it.

I think it’s useful to break down the anti-Bayesians into statisticians and non-statistician scientists.

The former are mathematically savvy enough to understand Bayes but object on philosophical grounds; the latter don’t care about the philosophy so much as they feel that an attack on frequentism is an attack on their previous research, and they take it personally.

runarberg · 2 years ago
I think the distaste Americans have for frequentists has much more to do with the history of science. The Eugenics movement had a massive influence on science in America, and it used frequentist methods to justify (or rather validate) its scientific racism. Authors like Gould brought this up in the 1980s, particularly in relation to factor analysis and intelligence testing, and were kind of proven right when Herrnstein and Murray published The Bell Curve in 1994.

The p-hacking exposures of the 1990s only cemented the notion that it is very easy to get away with junk science by using frequentist methods to unjustly validate your claims.

That said, frequentist methods are still the default statistics in the social sciences, which ironically is where the damage was the worst.

ordu · 2 years ago
I'd suggest you read "The Book of Why"[1]. It is mostly about Judea Pearl's next creation, causality, but he also covers the Bayesian approach, the history of statistics, his motivation behind Bayesian statistics, and some success stories.

Reading this book will serve you much better than applying "Hanlon's Razor"[2] just because you see no other explanation.

[1] https://en.wikipedia.org/wiki/The_Book_of_Why

[2] https://en.wikipedia.org/wiki/Hanlon's_razor

gnulinux · 2 years ago
This statement is correct only in a very basic, fundamental sense, but it disregards research practice. Let's say you're a mathematician who studies analysis or algebra. Sure, technically there is no fundamental reason for constructive logic and classical logic to "compete": you can simply choose whichever one is useful for the problem you're solving. In fact, {constructive + LEM + choice axioms} is equivalent to classical math, so why not just study constructive math, since it's a higher level of abstraction and you can always add those axioms "later" when you have a particular application?

In reality, on a human level, it doesn't work like that because, when you have disagreements on the very foundations of your field, although both camps can agree that their results do follow, the fact that their results (and thus terminology) are incompatible makes it too difficult to research both at the same time. This basically means, practically speaking, you need to be familiar with both, but definitely specialize in one. Which creates hubs of different sorts of math/stats/cs departments etc.

If you're, for example, working on constructive analysis, you'll have to spend a tremendous amount of energy on understanding contemporary techniques like localization etc just to work around a basic logical axiom, which is likely irrelevant to a lot of applications. Really, this is like trying to understand the mathematical properties of binary arithmetic (Z/2Z) while day-to-day studying group theory in general. Well, sure, Z/2Z is a group, but really you're simply interested in a single, tiny, finite abelian group, yet now you need to do a whole bunch of work on non-abelian groups, infinite groups, non-cyclic groups etc just to ignore all those facts.

thegginthesky · 2 years ago
It's because practitioners of one camp say that the other camp is wrong and question its methodology. And in academia, questioning one's methodology is akin to saying one is dumb.

To understand both camps, I summarize them like this:

Frequentist statistics has very sound theory but is misapplied by using many heuristics, rules of thumb, and prepared tables. It's very easy to use any method and hack the p-value away to get statistically significant results.

Bayesian statistics has an interesting premise and inference methods, but until recently, with the advancements of computing power, it was near impossible to run the simulations needed to validate the complex distributions used, the goodness of fit, and so on. And even in the current year, some Bayesian statisticians don't question their priors or iterate on their research.

I recommend using both methods, whichever is convenient and fits the problem at hand.
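On the p-value-hacking point above, a small simulation sketch (pure null data, but the analyst measures 20 outcomes and keeps the best p-value; all numbers are illustrative):

    # Sketch: testing many outcomes and reporting the best one inflates false positives.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments, n_outcomes, n = 1_000, 20, 30

    false_positives = 0
    for _ in range(n_experiments):
        # No true effect anywhere: both groups come from the same distribution.
        p_values = [
            stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
            for _ in range(n_outcomes)
        ]
        if min(p_values) < 0.05:
            false_positives += 1

    # Per-test rate is 5%; "best of 20" pushes it toward 1 - 0.95**20, about 64%.
    print(f"null experiments with a 'significant' finding: "
          f"{false_positives / n_experiments:.0%}")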

bb86754 · 2 years ago
I can attest that the frequentist view is still very much the mainstream here too and fills almost every college curriculum across the United States. You may get one or two Bayesian classes if you're a stats major, but generally it's hypothesis testing, point estimates, etc.

Regardless, the idea that frequentist stats requires a stronger background in mathematics is just flat-out silly; I'm not even sure what you mean by that.

ants_everywhere · 2 years ago
> I’m still convinced that Americans tend to dislike the frequentist view because it requires a stronger background in mathematics.

The opposite is true. Bayesian approaches require more mathematics. The Bayesian approach is perhaps more similar to PDEs, where problems are so difficult that the only way we can currently solve them is with numerical methods.

derbOac · 2 years ago
I never liked the clubs you were expected to put yourself in, what "side" you were on, or the idea that problems in science that we see today could somehow be reduced to the inferential philosophy you adopt. In a lot of ways I see myself as information-theoretic in orientation, so maybe objective Bayesian, although it's really neither frequentist nor Bayesian.

This three cultures idea is a bit of sleight of hand in my opinion, as the "pragmatic" culture isn't really exclusive of subjective or objective Bayesianism, and in that sense says nothing about how you should approach prior specification or interpretation or anything. Maybe Gelman would say a better term is "flexibility" or something, but then that leaves the question of when you go objective and when you go subjective and why. It seems better to formalize that than leave it as a bit of smoke and mirrors. I'm not saying some flexibility about prior interpretation and specification isn't a good idea, just that I'm not sure that approaching theoretical basics with the answer "we'll just ignore the issues and pretend we're doing something different" is quite the right answer.

Playing a bit of devil's advocate too, the "pragmatic" culture reveals a bit about why Bayesianism is looked at with a bit of skepticism and doubt. "Choosing a prior" followed by "seeing how well everything fits" and then "repeating" looks a lot like model tweaking or p-hacking. I know that's not the intent, and it's impossible to do modeling without tweaking, but if you approach things that way, the prior just looks like one more degree of freedom to nudge things around and fish with.

I've published and edited papers on Bayesian inference, and my feeling is that the problems with it have never been in the theory, which is solid. It's in how people use and abuse it in practice.

bayesian_trout · 2 years ago
If you want to get an informed opinion on modern Frequentist methods, check out the book "In All Likelihood" by Yudi Pawitan.

In an early chapter it outlines, rather eloquently, the distinctions between the Frequentist and Bayesian paradigms and in particular the power of well-designed Frequentist or likelihood-based models. With few exceptions, an analyst should get the same answer using a Bayesian vs. Frequentist model if the Bayesian is actually using uninformative priors. In the worlds I work in, 99% of the time I see researchers using Bayesian methods they are also claiming to use uninformative priors, which makes me wonder if they are just using Bayesian methods to sound cool and skip through peer review.
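A minimal sketch of that agreement in the simplest conjugate case (binomial data with a flat Beta(1, 1) prior; the counts are invented):

    # Sketch: with a flat prior, the Bayesian answer sits on top of the MLE.
    successes, trials = 14, 20

    mle = successes / trials  # frequentist / likelihood answer

    # Flat Beta(1, 1) prior -> posterior is Beta(1 + successes, 1 + failures).
    post_a, post_b = 1 + successes, 1 + (trials - successes)
    posterior_mode = (post_a - 1) / (post_a + post_b - 2)  # equals the MLE
    posterior_mean = post_a / (post_a + post_b)            # close, not identical

    print(mle, posterior_mode, posterior_mean)  # 0.70, 0.70, ~0.68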

One potential problem with Bayesian statistics lies in the fact that for complicated models (100s or even 1000s of parameters) it can be extremely difficult to know if the priors are truly uninformative in the context of a particular dataset. One has to wait for models to run, and when systematically changing priors this can take an extraordinary amount of time, even when using high powered computing resources. Additionally, in the Bayesian setting it becomes easy to accidentally "glue" a model together with a prior or set of priors that would simply bomb out and give a non-positive definite hessian in the Frequentist world (read: a diagnostic telling you that your model is likely bogus and/or too complex for a given dataset). One might scoff at models of this complexity, but that is the reality in many applied settings, for example spatio-temporal models facing the "big n" problem or for stuff like integrated fisheries assessment models used to assess status and provide information on stock sustainability.

So my primary beef with Bayesian statistics (and I say this as someone who teaches graduate-level courses on Bayesian inference) is that it can very easily be misused by non-statisticians and beginners, particularly given the extremely flexible software programs currently available to non-statisticians like biologists etc. In general, though, both paradigms are subjective, and Gelman's argument that it is turtles (i.e., subjectivity) all the way down is spot on and really resonates with me.


usgroup · 2 years ago
+1 for "In All Likelihood", but it should be stated that the book explains a third approach, which doesn't lean on either subjective or objective probability.
bayesian_trout · 2 years ago
fair :)
kgwgk · 2 years ago
> So my primary beef with Bayesian statistics (...) is that it can very easily be misused by non-statisticians and beginners

Unlike frequentist statistics? :-)

bayesian_trout · 2 years ago
hard to accidentally glue a frequentist model together with a prior ;)
prmph · 2 years ago
So my theory is that probability is an ill-defined, unfalsifiable concept. And yet, it _seems_ to model aspects of the world pretty well, empirically. However, might it be leading us astray?

Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean? Is it a proposition? If so, is it falsifiable? And how?

If it is not a proposition, what does it actually mean? If someone with more knowledge can chime in here, I'd be grateful. I've got much more to say on this, but only after I hear from those with a rigorous grounding in the theory.

enasterosophes · 2 years ago
As a mathematical theory, probability is well-defined. It is an application of a larger topic called measure theory, which also gives us the theoretical underpinnings for calculus.

Every probability is defined in terms of three things: a set, a set of subsets of that set (in plain language: a way of grouping things together), and a function which maps the subsets to numbers between 0 and 1. To be valid, the set of subsets, aka the events, needs to satisfy additional rules.

All your example p(X) = 0.5 says is that some function assigns the value of 0.5 to some subset which you've called X.

That it seems to be good at modelling the real world can be attributed to the origins of the theory: it didn't arise ex nihilo, it was constructed exactly because it was desirable to formalize a model for seemingly random events in the real world.
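In symbols, the three ingredients form a probability space (Ω, F, P) satisfying the standard Kolmogorov axioms (sketched here):

    P : \mathcal{F} \to [0, 1], \qquad P(\Omega) = 1, \qquad
    P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
    \quad \text{for pairwise disjoint } A_i \in \mathcal{F}.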

mppm · 2 years ago
> So my theory is that probability is an ill-defined, unfalsifiable concept. And yet, it seems to model aspects of the world pretty well, empirically.

I have privately come to the conclusion that probability is a well-defined and testable concept only in settings where we can argue from certain exact symmetries. This is the case in coin tosses, games of chance and many problems in statistical physics. On the other hand, in real-world inference, prediction and estimation, probability is subjective and much less quantifiable than statisticians (Bayesians included) would like it to be.

> However, might it be leading us astray?

Yes, I think so. I increasingly feel that all sciences that rely on statistical hypothesis testing as their primary empirical method are basically giant heaps of garbage, and the Reproduciblity Crisis is only the tip of the iceberg. This includes economics, social psychology, large swathes of medical science, data science, etc.

> Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean? It it a proposition? If so, is it falsifiable? And how?

I'd say it is an unfalsifiable proposition in most cases. Even if you can run lots of cheap experiments, like with coin tosses, a million runs will "confirm" the calculated probability only with ~1% precision. This is just lousy by the standards of the exact sciences, and it only goes downhill if your assumptions are less solid, the sample space more complex, or reproducibility more expensive.

skissane · 2 years ago
> So my theory is that probability is an ill-defined, unfalsifiable concept

Probability isn’t a single concept, it is a family of related concepts - epistemic probability (as in subjective Bayesianism) is a different concept from frequentist probability - albeit obviously related in some ways. It is unsurprising that a term looks like an “ill-defined, unfalsifiable concept” if you are mushing together mutually incompatible definitions of it.

> Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean?

From a subjective Bayesian perspective, p(X) is a measure of how much confidence I - or any other specified person - have in the truth of a proposition, or my own judgement of the weight of evidence for or against it, or my judgement of the degree of my own knowledge of its truth or falsehood. And 0.5 means I have zero confidence either way, I have zero evidence either way (or else, the evidence on each side perfectly cancels each other out), I have a complete lack of knowledge as to whether the proposition is true.

> It it a proposition?

It is a proposition just in the same sense that “the Pope believes that God exists” is a proposition. Whether or not God actually exists, it seems very likely true that the Pope believes he does.

> If so, is it falsifiable? And how?

And obviously that’s falsifiable, in the same sense that claims about my own beliefs are trivially falsifiable by me, using my introspection. And claims about other people’s beliefs are also falsifiable, if we ask them, assuming they are happy to answer and we have no good reason to think they are being untruthful.

prmph · 2 years ago
So your response actually strengthens my point, rather than rebuts it.

> From a subjective Bayesian perspective, p(X) is a measure of how much confidence I - or any other specified person - have in the truth of a proposition, or my own judgement of the weight of evidence for or against it, or my judgement of the degree of my own knowledge of its truth or falsehood.

See how inexact and vague all these measures are. How do you know your confidence is (or should be) 0.5 (and not 0.49), for example? Or how do you know you have correctly judged the weight of evidence? Or how do you know the transition you make in your mind from "knowledge about this event" to "what it indicates about its probability" is valid? You cannot disprove these things, can you?

Unless you want to say that the actual values do not matter, but the way the probabilities are updated in the face of new information does. But in any case, the significance of new evidence still has to be interpreted; there is no objective interpretation, is there?

canjobear · 2 years ago
You’re right that a particular claim like p(X=x)=a can’t be falsified in general. But whole functions p can be compared and we can say one fits the data better than another.

For example, say Nate Silver and Andrew Gelman both publish probabilities for the outcomes of all the races in the election in November. After the election results are in, we can’t say any individual probability was right or wrong. But we will be able to say whether Nate Silver or Andrew Gelman was more accurate.
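A sketch of how such a comparison is typically scored (log loss over invented forecasts and outcomes; these are not Silver's or Gelman's actual numbers):

    # Sketch: scoring two sets of probabilistic forecasts once outcomes are known.
    import numpy as np

    outcomes     = np.array([1, 0, 1, 1, 0])            # observed results (made up)
    forecaster_a = np.array([0.7, 0.4, 0.6, 0.8, 0.3])  # P(outcome = 1), made up
    forecaster_b = np.array([0.9, 0.2, 0.5, 0.6, 0.1])

    def log_loss(p, y):
        # Lower is better; confident forecasts that turn out wrong are punished hard.
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    print("A:", log_loss(forecaster_a, outcomes))
    print("B:", log_loss(forecaster_b, outcomes))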

enugu · 2 years ago
> What does this actually mean? It it a proposition? If so, is it falsifiable? And how?

If you saw a sequence of 1000 coin tosses with, say, 99% heads and 1% tails, you were convinced that the same process was used for all the tosses, and you had an opportunity to bet on tails at even (50/50) stakes, would you do it?

This is a pragmatic answer which rejects P(X) = 0.5. We can try to make sense of this pragmatic decision with some theory. (Incidentally, being exactly 0.5 is almost impossible; it makes more sense to check whether it lies in an interval like (0.49, 0.51).)

The law of large numbers says that the probability of X can be obtained by conducting independent trials: in the limit, the fraction of trials in which X occurs will approach p(X).

However, 'limit' implies an infinite number of trials, so any initial sequence doesn't determine the limit. You would have to choose a large N as a cutoff and then take the average.
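A sketch of that cutoff in practice (assuming a fair coin and an arbitrary choice of N; the seed and sizes are illustrative):

    # Sketch: the running proportion of heads settles near p as trials accumulate.
    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.5
    flips = rng.random(100_000) < p  # True = heads
    running_mean = np.cumsum(flips) / np.arange(1, flips.size + 1)

    for n in (10, 1_000, 100_000):
        print(n, running_mean[n - 1])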

But is this unique to probability? If you take any statement about the world, "There is a tree in place G", and you have a process to check the statement ("go to G and look for a tree"), can you definitely say that the process will successfully determine whether the statement is true? There will always be obstacles ("false appearances of a tree", etc.). To rule out all such obstacles, you would have to posit an idealized observation process.

For probability checking, an idealization which works is infinite independent observations which gives us p(X).

PS: I am not trying to favour frequentism as such, just noting that the requirement of an idealized observation process shouldn't be considered an overwhelming obstacle. (Sometimes the obstacles become 'obstacles in principle', like simultaneous position/momentum observation in QM, and if you have such obstacles, then indeed one can abandon the concept of probability.)

meroes · 2 years ago
This is the truly enlightened answer. Pick some reasonably defined concept of it if forced. Mainly though, you notice it works and apply the conventions.
kgwgk · 2 years ago
> If it is not a proposition, what does it actually mean?

It's a measure of plausibility - enabling plausible reasoning.

https://www.lesswrong.com/posts/KN3BYDkWei9ADXnBy/e-t-jaynes...

https://en.wikipedia.org/wiki/Cox%27s_theorem

ants_everywhere · 2 years ago
So here's a sort of hard-nosed answer: probability is just as well-defined as any other mathematics.

> Consider the statement p(X) = 0.5 (probability of event X is 0.5). What does this actually mean?

It means X is a random variable from some sample space to a measurable space and P is a probability function.

> If so, is it falsifiable? And how?

Yes, by calculating P(X) in the given sample space. For example, if X is the event "you get 100 heads in a row when flipping a fair coin" then it is false that P(X) = 0.5.
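Concretely, assuming independent fair flips:

    P(X) = \left(\tfrac{1}{2}\right)^{100} \approx 7.9 \times 10^{-31} \neq 0.5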

It's a bit like asking whether 2^2 = 4 is falsifiable.

There are definitely meaningful questions to ask about whether you've modeled the problem correctly, just as it's meaningful to ask what "2" and "4" mean. But those are separate questions from whether the statements of probability are falsifiable. If you can show that the probability axioms hold for your problem, then you can use probability theory on it.

There's a Wikipedia article on interpretations of probability here: https://en.wikipedia.org/wiki/Probability_interpretations. But it is pretty short and doesn't seem quite so complete.

prmph · 2 years ago
> For example, if X is the event "you get 100 heads in a row when flipping a fair coin" then it is false that P(X) = 0.5

I think you haven't thought about this deeply enough yet. You take it as self evident that P(X) = 0.5 is false for that event, but how do you prove that? Assuming you flip a coin and you indeed get 100 heads in a row, does that invalidate the calculated probability? If not, then what would?

I guess what I'm driving at is this notion (already noted by others) that probability is recursive. If we say p(X) = 0.7, we mean the probability is high that in a large number of trials, X occurs 70% of the time. Or that the proportion of times that X occurs tends to 70% with high probability as the number of trials increases. Note that this second-order probability can be expressed with yet another probability, ad infinitum.

usgroup · 2 years ago
Bare in mind that Breiman's polemic was about generative vs discriminative methods. I.e. that we should not start an analysis by thinking about how the data generation can be modelled, but instead we should start with prediction. From that vein came boosted trees, bagging, random forests, xgboost and so on: non generative black box methods.

Still today most of the classical machine learning toolbox is not generative.

gwd · 2 years ago
Nit: "Bear in mind". "Bare" means "to make bare" (i.e., to uncover); "bear" means "to carry": "As you evaluate this discussion, carry it in your mind that..."
mjhay · 2 years ago
The great thing about Bayesian statistics is that it's subjective. You don't have to be in the subjectivist school. You can choose your own interpretation based on your (subjective) judgment.

I think this is a strength of Bayesianism. Any statistical work is infused with the subjective judgement of individual humans. I think it is more objective to not shy away from this immutable fact.

klysm · 2 years ago
The appropriateness of each approach is very much a function of what is being modeled and the corresponding consequences for error.
mjhay · 2 years ago
Of course. The best approach for a particular problem depends on your best judgment.

I guess that means I'm in the pragmatist school in this article's nomenclature (I'm a big fan of Gelman and all the other stats folks there), but what one thinks is pragmatic is also subjective.
