The only thing I can easily find is that they have saturated fat - but it takes 4.5 eggs to have as much saturated fat as 1tbsp of butter.
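Quick back-of-envelope behind that ratio, using approximate nutrient figures (roughly 7.2g saturated fat per tbsp of butter and 1.6g per large egg - treat the exact values as assumptions):

    # Rough saturated-fat comparison; the nutrient values are approximate assumptions.
    sat_fat_per_tbsp_butter_g = 7.2   # ~14g butter at roughly half saturated fat by weight
    sat_fat_per_large_egg_g = 1.6

    eggs_per_tbsp_butter = sat_fat_per_tbsp_butter_g / sat_fat_per_large_egg_g
    print(f"~{eggs_per_tbsp_butter:.1f} eggs have the saturated fat of 1 tbsp butter")  # ~4.5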
Aside from 1 year of strict vegan diet, I've eaten an average of 4 eggs with 1tbsp butter mostly daily for my entire adult life (I'm 34), and also ate eggs regularly in childhood, and I seem to be in excellent health with no known issues. But I'm curious what I should be watching for.
Generally when we take this into consideration, we see a linear increase in risk from increased egg consumption. For example this paper suggests that the higher your genetic risk for CVD, the higher the increase in risk from egg consumption, but even those with lower CVD risk see a ~6% increase in CVD events per 3 eggs/week increase: https://www.sciencedirect.com/science/article/pii/S000291652....
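For a rough sense of scale at 4 eggs/day, here's a back-of-envelope extrapolation - assuming the ~6% per 3 eggs/week figure is log-linear and extends well beyond the intakes actually studied, which is a big assumption:

    import math

    # Hypothetical extrapolation of a ~6% increase in CVD events per 3 eggs/week,
    # assuming a log-linear dose-response (my assumption, not a claim from the paper).
    hr_per_3_eggs_per_week = 1.06
    eggs_per_week = 4 * 7             # 4 eggs/day
    increments = eggs_per_week / 3    # ~9.3 increments of 3 eggs/week

    relative_risk = math.exp(increments * math.log(hr_per_3_eggs_per_week))
    print(f"~{(relative_risk - 1) * 100:.0f}% higher relative risk vs. zero eggs")  # ~72%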
Additionally replacing the 1tbsp butter with plant oils would likely reduce ACM risk by ~17% (http://doi.org/10.1001/jamainternmed.2025.0205), but I'll be the first to admit that there's nothing that really replaces butter on a taste basis :D. Got to find the happy balance between health and hedonism IMO!
In the same way, increased salt intake causes increased water retention, and thus increased weight, but this extra water weight goes away after a few days if you start consuming less salt, and there's no evidence of it causing long term harm.
I could see where if one already had severe CVD, maybe eating more salt could be the straw that breaks the camel's back, and thus until they heal the CVD it could be wise to limit salt. But this would be no indication that the salt is the cause of the CVD or causes any long term chronic problem. And it is the long term chronic CVD that is by far the most important to address IMHO. If salt is not causing that, this whole discussion is largely misdirected energy.
---
> IMO Replacing butter with refined oils and whole grain flours/carbs - sure, solid move.
We disagree here but that's a separate issue from salt so I'll leave it. :)
> I’m very wary of trying to find “the culprit” for public health problems [etc]
I agree entirely with this. It's very complex: many factors, no single culprit or silver bullet, and recognizing that is extremely important. It's all the things. So it's important to try to tease out which things are having which kind and degree of effect. And this is where I think salt has been scapegoated in a way that probably just distracts from the root problem as I describe above.
We've already gone over data showing that when we pool the data from RCTs on salt consumption, reduced salt intake leads to reduced cardiovascular disease events, so it's demonstrably not the case that there's no evidence of this causing long term harm.
Additionally we have strong evidence of a dose-response curve regarding blood pressure and atherosclerosis, so that additional 5mmHg is contributing to additional plaque burden. Even after you reduce your salt intake, that plaque is still going to be there, increasing your risk of a CVD event.
Additionally when we look at the results of INTERSALT, age-related increases in blood pressure only seemed to occur in populations consuming more than 2-3g salt per day, which suggests that in addition to acute rises in BP, higher salt consumption than this may also be responsible for much larger rises in the long term that are not reversed when salt consumption is dropped.
Taking that whole body of evidence in totality, I think it's hard to argue that the effects of salt on the risk of adverse health outcomes are akin to water weight.
> I could see where if one already had severe CVD, maybe eating more salt could be the straw that breaks the camel's back, and thus until they heal the CVD it could be wise to limit salt. But this would be no indication that the salt is the cause of the CVD or causes any long term chronic problem. And it is the long term chronic CVD that is by far the most important to address IMHO. If salt is not causing that, this whole discussion is largely misdirected energy.
Again, even small increases in BP over the normal range (and even slightly below it - we tend to see increases in risk once systolic BP rises above about 110) are associated with increases in CVD, so the raised blood pressure is one of the forces driving that long term chronic CVD.
> We disagree here but that's a separate issue from salt so I'll leave it. :)
Well if you're open-minded about the topic but think refined oils are a health risk, you're the same as I was a few years ago. I ended up changing my view on the topic. If you think there's a health concern not addressed by https://uprootnutrition.com/blog/seedoils I'd be genuinely interested to know.
> And this is where I think salt has been scapegoated in a way that probably just distracts from the root problem as I describe above.
I think the evidence very strongly suggests that sodium consumption is one of the root problems driving chronic health issues in the West.
In that case, no, "we" don't, because I am reading this and I do not agree with this "standard" or with your characterization of what I wrote.
> Mechanisms are inferred from empirical evidence, I don't see how you can treat them as two separate categories. For example, in your crash test dummy analogy, verification through crash tests (with dummies) is empirical evidence.
This is not how it works. Crash tests are used for validation, but the data from crash tests is generally not used to infer mechanism. Physicists don't come up with a new theory of mechanics every time a crash test has an unexpected outcome.
> If you name proxy experiments that support your views (crash tests) as mechanisms and ones that don't (SSB replacement with NNS RCTs) as "empirical evidence is valid only for the situation in which it was obtained" then sure, everything you want to believe is supported by sound science and everything you don't isn't.
I don't think you understand. If you want to support a public health intervention, you either have empirical data with a relevant endpoint, or you can point to a mechanism which bridges the gap between the data that you have and the outcome which you want to achieve.
When it comes to pharmaceutics and food additives, our mechanistic understanding is insufficient so we often have to resort to empirical studies on humans, including RCTs (Pfizer's Covid vaccine trial had tens of thousands of participants) and also observational studies at population level. And it is the last part where artificial sweeteners fail to show benefit so far.
When it comes to seat belts, our mechanistic understanding is sufficient, so we don't need to resort to empiricism. Yes, we perform validation, but only to check that there are no design oversights in the vehicle and no shortcomings in the simulation software, typically in a low triple-digit number of crash tests. But no humans are involved, and especially no control arm with humans.
(Well, if you ignore the one study on the efficacy of parachutes, which was done as an RCT: https://doi.org/10.1136/bmj.k5094 )
> So it doesn't meet your own goalpost
It does, because again, a mechanism is provided. You could say that the study has weak evidence for the mechanism and that it works like that perhaps only for sugar - just because some mechanism is found in mice does not mean it is also found in humans - and that would be a fair point. This is why many species are tested, and so far the results have held up (testing humans takes too long for obvious reasons).
I’m going to pass on the crash test dummies bit. You’ve misunderstood the point I was making, but it could be poor communication by me and I think the point is becoming increasingly tangential.
> When it comes to pharmaceutics and food additives, our mechanistic understanding is insufficient so we often have to resort to empirical studies on humans, including RCTs
> It does, because again, a mechanism is provided. You could say that the study has weak evidence for the mechanism and that it works like that perhaps only for sugar - just because some mechanism is found in mice does not mean it is also found in humans - and that would be a fair point. This is why many species are tested, and so far the results have held up (testing humans takes too long for obvious reasons).
So that you don’t feel you’re being straw-manned again, can I get a clear answer to this: is your argument that if we stack together enough mechanistic animal studies, we can be sufficiently confident in the translation rate of such studies to humans that we can roll out public health interventions without any evidence of efficacy in human populations?
Meta-analysis is only as strong as the studies it's based on. I looked at quite a few studies before that purport to show sodium causing CVD, and none of them strongly support their conclusion - they all had significant flaws. Not that they're not useful research, just that they don't show what they're claimed to show.
For example, there were studies showing that increased salt increased blood pressure by ~5 mm Hg over the long term. I understand that blood pressure can be affected very slightly by salt intake, I would guess because the body is holding more water or some other normal mechanism like that, but this does not suggest it's the long term cause of blood pressures going up from a normal 120 to a chronic 160 or 200 as we're seeing in tons of people. There could be any number of adjustments that would increase blood pressure slightly WHILE the change is in effect and then go back to baseline afterward. The chronic high blood pressure is a disease that doesn't just go back to normal immediately after a change.
Anyway, I don't have time at the moment to look through the 11 studies cited in that meta analysis, but if you pick the one or two that give the strongest evidence for salt causing CVD I'd look at them.
I'm genuinely trying to figure this out myself as best I can, because I know way too many people close to me dealing with early stage CVD and diabetes. And a lot of them say they're working on it by avoiding meat and dairy and eggs and salt, and instead of that they end up eating more refined oils and refined flour and sugar. It doesn't seem to be helping them any after years of this, and I think this is backwards advice. I'm not saying we need to eat tons of salt, maybe it does have a minor effect, just that it's not the real culprit.
> if you'd picked a slightly different set of CVD-correlated biomarkers you would have got exactly the opposite result.
So the evidence I’m looking for is empirics showing CVD correlated biomarkers that suggest a beneficial effect from consuming levels of sodium above recommended levels. Without that evidence then I don’t see why we should believe they would have got the opposite result if they picked other CVD biomarkers.
> but this does not suggest it's the long term cause of blood pressures going up from a normal 120 to a chronic 160 or 200 as we're seeing in tons of people
I’m not claiming that salt is the single cause of hypertension, but that doesn’t mean that the kind of reductions you see from salt reduction aren’t meaningful or contribute to those very high figures. It’s easy to dismiss 5mmHg as insignificant, but we generally see a 5mmHg reduction in sysBP translate to a ~10% reduction in CVD events. Considering how prevalent CVD is, that’s a pretty large effect size.
Chronic diseases are often overdetermined and stack - people have a poor diet which means they consume too much salt, they’re overweight or obese, have T2DM or prediabetes, a sedentary lifestyle, etc etc. The fact that we can’t point to a single one of these and say “this is the thing causing your 160/100 BP” doesn’t mean we shouldn’t try to fix the individual factors. So sure, salt reduction seems to offer a 5-6mmHg reduction, exercise 4-8, antihypertensive drugs 10. But put them all together and that’s a massive change.
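As a rough illustration of how those stack - treating the reductions as additive and applying the ~10% fewer events per 5mmHg relationship multiplicatively, both of which are simplifying assumptions:

    # Hypothetical stacking of systolic BP reductions (mmHg); additivity is an assumption.
    reductions_mmhg = {"salt reduction": 5, "exercise": 6, "antihypertensive drug": 10}
    total_drop = sum(reductions_mmhg.values())          # 21 mmHg

    # ~10% fewer CVD events per 5 mmHg, compounded multiplicatively (also an assumption).
    risk_multiplier = 0.9 ** (total_drop / 5)
    print(f"{total_drop} mmHg drop -> ~{(1 - risk_multiplier) * 100:.0f}% fewer CVD events")  # ~36%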
> Anyway, I don't have time at the moment to look through the 11 studies cited in that meta analysis, but if you pick the one or two that give the strongest evidence for salt causing CVD I'd look at them.
It’s just three trials (or four depending on how you count TOHP I & II). I think I’ve met my burden in terms of showing there’s evidence of high salt intake having adverse effects, I have no interest in forcing you to read them. Just trying to provide evidence if that’s something you’re seeking.
> I'm genuinely trying to figure this out myself as best I can, because I know way too many people close to me dealing with early stage CVD and diabetes.
I’m sorry to hear that. We certainly seem to be struggling with chronic lifestyle-related disease these days, though with GLP-1 RAs I’m a lot more optimistic than I was a few years ago.
> And a lot of them say they're working on it by avoiding meat and dairy and eggs and salt, and instead of that they end up eating more refined oils and refined flour and sugar.
Yeah, depending on the composition of what they’re eating that doesn’t sound great. IMO Replacing butter with refined oils and whole grain flours/carbs - sure, solid move. Replacing meat with plant proteins is also a worthwhile step, and eggs don’t seem to be great for health in many respects. But fermented dairy seems to be a positive as far as CVD risk goes, so ethics aside, it seems like a backwards step if they’re replacing yoghurt and cheese with sugar and white flour!
> I'm not saying we need to eat tons of salt, maybe it does have a minor effect, just that it's not the real culprit.
I’m very wary of trying to find “the culprit” for public health problems. It’s so rarely the case that a disease has a single aetiology, and in my experience the people who’ll tell you “it was the sugar all along”, “it was the seed oils all along”, “it was the glyphosate all along” have a book to sell. The reality is probably closer to it being a combination of several things. Not as sexy though, no publisher or influencer is interested in that view!
I think they should just label things more explicitly like this - it would accelerate veganism 100x if people in the supermarket had to choose between “pressed soybeans” and “mammary gland secretions”.
I think the idea just bothers me, “vegetables aren’t good enough, what you really want is fake meat.”
Who is "we"?
I don't know what you are going on about. Empirical evidence is one thing, mechanism is another source of knowledge by which we can shape public health policy. While empirical evidence is valid only for the situation in which it was obtained, mechanism is universal.
For example, we mandate wearing seatbelts in cars in the name of public health. It is however not necessary to do seatbelt on/off RCTs with actual people. How we know that this is beneficial: Because physics, verification through crash tests (with dummies), and because we know that seatbelt mandates increase the frequency of people wearing them.
Going back to the original question, it was clearly shown in observational studies that giving children sweetened food is bad: Childhood dietary habits shape lifelong food preferences, and preference for sweet food leads to worse outcomes regarding chronic diseases later in life. This has been shown in lots of research, both in humans and in animal models:
https://doi.org/10.1126/science.adn5421
https://doi.org/10.3390/nu16030428
https://doi.org/10.1093/chemse/bjr050
With randomly adding artificial sweeteners or substituting them for sugar, there is however no empirical evidence nor any known mechanism which supports a public health benefit. In fact, the mechanisms we know from animal farming suggest a detrimental effect.
Me and anyone else reading this.
> While empirical evidence is valid only for the situation in which it was obtained, mechanism is universal.
Mechanisms are inferred from empirical evidence, I don't see how you can treat them as two separate categories. For example, in your crash test dummy analogy, verification through crash tests (with dummies) is empirical evidence. Yet under your framework, should we assume that it is valid only for the situation in which it was obtained - only for dummies, not people; in cars pushed towards walls in controlled situations, rather than on public roads?
If you name proxy experiments that support your views (crash tests) as mechanisms and ones that don't (SSB replacement with NNS RCTs) as "empirical evidence is valid only for the situation in which it was obtained" then sure, everything you want to believe is supported by sound science and everything you don't isn't. But the view itself seems to contain a logical contradiction, so you're dead before you've even got off the ground.
I would understand mechanistic evidence in the domain of health science to be in vitro and animal studies. Even if we were to grant that mechanism is universal in this field (which I wouldn't; we frequently see heterogeneous results even for the same exposures on the same mouse models, for example), there are thousands of mechanisms that come together to influence the outcomes we actually care about. This is why, when we look at translation rates of mechanisms to outcomes in humans, we typically see rates below 5% (which is also why pharmaceuticals that work perfectly in animal models barely ever make it to market in humans).
Going back to the evidence you've cited in support of your intervention - the first two (the only ones in humans) are neither looking at NNSs nor at an intervention banning them. So it doesn't meet your own goalpost for "if we introduce a public health policy, then we need to take human behavior and adherence rates into account". In the rationing example, you have an entirely different context - one in which people literally cannot purchase large amounts of sugar. This would not be the case if we were to ban NNS today.
Your third study was in mice which, as discussed, has an incredibly low chance of actually translating into human outcomes. I don’t find “we have evidence in RCTs that NNSs are beneficial but there’s this mouse study that says otherwise so let’s ban them” a convincing argument.
So again, any actual evidence in support of your proposed intervention? How do we know, for example, that banning NNSs won't just lead to higher sugar consumption and adverse outcomes, since we know from RCTs that substituting SSBs for NNSs improves health outcomes? If all those consuming your banned substance now switch to SSBs instead of their NNSs, congratulations, you've just worsened health outcomes.
> In general, hard sciences are much more reliable than social sciences because standards are higher and topics are less emotional.
Having read lots of both, I'm not sure that's true. There's no way to prove it because nobody has clear definitions for what the words hard, social, science, standards or reliable mean. But the extreme political bias doesn't go away, researcher degrees of freedom are just as large, the topics are sometimes much more emotional, and a lot of fields you'd expect to be hard are methodologically no different to any social science.
For example, a guy in Wales recently claimed a big payout from a fraud suit he won against the Dana Farber cancer center at Harvard. It'd been publishing fraudulent papers for years, yet either nobody noticed or nobody cared. Climatology tells people that the end is nigh, a message sufficiently distressing to make some psychologically vulnerable people commit suicide. Do any social sciences have an emotional effect that extreme? A lot of the COVID pseudo-science was specifically designed to manipulate people's emotions (e.g. journals rejecting correct papers because fewer people might take vaccines as a result [1]). And epidemiology isn't based on any empirical understanding of viruses or disease. It's just modelling no different to the type described in the article.
Unfortunately, ideology and bad incentives are the same no matter what field you look at. There is a hard/soft distinction to be made, but it's more about how close the field is to engineering. Engineering fields have a lot of movement between public and private sector, which keeps the universities a bit more honest. Maybe other fields like law and finance are the same, I don't know, I never read papers in those.
[1] https://x.com/mgmgomes1/status/1291162360657453056
1. What Covid pseudo science are you referring to?
2. How does the x thread you referenced show “journals rejecting correct papers because fewer people might take vaccines as a result”? As I read it, the rejection was because the journal had a higher bar for evidence on claims around herd immunity than the researchers were able to meet. That is, they’d happily publish papers suggesting lower levels of herd immunity, but the papers would require more rigour than that which was provided.
It’s not clear to me that this bar existed just to increase vaccine uptake, but to generally avoid moves towards relaxing interventions based on insufficiently strong evidence (social distancing, mask wearing, etc).
It’s not clear to me what objection one would have to having a high threshold for evidence when publishing research with such high potential to impact public health. That seems like a good thing to me.