> study including 22.7 million vaccinated individuals and 5.9 million unvaccinated individuals
These are the important bits for the non-medical folks.
Also significantly: "vaccinated individuals consistently had a lower risk of death, regardless of the cause."
> At the heart of the problem is the tendency for AI language models to confabulate, which means they may confidently generate a false output that is stated as being factual.
"Confabulate" is precisely the correct term; I don't know how we ended up settling on "hallucinate".
More open-source projects should move off GitHub. I moved off it myself.
What's this about?
Certain aspects of human nature, as they apply to the corporate world, can be acknowledged and understood, even if they don't excuse the downfall of a prominent organization. When you give someone a big title, a dump truck full of cash, and a mandate to innovate, human nature dictates that most people will internalize the idea that "because I was given all this, I must be competent", even when they very obviously are not. The typical outcome is a "bold plan forward" notable for lacking any actual, clear solution to the company's main problems.

In one example I know of, the CEO decided to pivot from an unrelated field toward launching a cryptocurrency, and cooked up a cartoonishly dangerous marketing scheme to support the idea. One person ended up dying as a result, and the company then purged every mention of crypto from its website. (And yes, the company collapsed soon afterwards.)
While it's easy to blame the CEO with their oversized salary, the blame for such disasters doesn't lie with them alone. After all, arguably the board's most important roles are to hire a good CEO, ensure the CEO is actually performing as they should, and fire them if they're not. When politics, cronyism, or, again, simple incompetence leads the board to fail at its job too, you get the long, slow decline into obscurity we've seen so often in the tech world.
But Mozilla had a good run.
First, your business model isn't really clear, as what you've described so far sounds more like a research project than a go-to-market premise. Computational pathology is a crowded market, and the main players all have two things in common: access to huge numbers of labeled whole-slide images, and workflows designed to handle such images. Without the former, your project sounds like a non-starter, and given the latter, the idea you've pitched doesn't seem like an advantage. Notably, some of the existing models even have open weights (e.g. Prov-GigaPath, CTransPath).
Second, you've talked about using this approach to make diagnoses, but it's not clear exactly how this would be pitched as a market solution. The range of possible diagnoses is almost unlimited, so a useful model would need training data for everything (not possible). My understanding is that foundation models solve this problem by focusing on one or a few diagnoses in a restricted scope, e.g. prostate cancer in prostate core biopsies. The other approach is to screen for normal in clearly-defined settings, e.g. Pap smears, so that anything that isn't "normal" is flagged for manual review. Either approach, as you can see, demands a very different training and market positioning strategy.
Finally, do you have pathologists advising you, and have you done any sort of market analysis? Unless you're already a pathologist (and probably even if you are), I suspect that having both would be of immense value in deciding on a go-forward plan.
All the best!
-- Exactly 400 study participants recruited.
-- Exactly 193 of 200 participants completing the study in each group (which, for a study administered in a community setting, is an essentially impossibly high completion rate; see the back-of-envelope sketch after this list).
-- No author disclosures -- in fact, no information about the authors whatsoever, other than their names.
-- No information on exposures, lifestyles, or other factors which invariably influence infection rates.
-- Inappropriate statistical methods, which focus very heavily on p values.
-- Only 3 authors, which, for a randomized controlled trial involving hundreds of people in different settings with regular follow-up, seems rather unlikely.
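To put rough numbers on the completion-rate point above, here's a minimal back-of-envelope sketch, assuming dropout in each arm is independent and binomial. The 85% "realistic" community completion rate below is my assumption for illustration, not a figure from the paper. Even granting the paper its own observed 96.5% rate, both arms landing on exactly 193/200 is only about a 2% coincidence; at a more plausible community rate, reaching 193 completers in either arm is essentially impossible.

```python
# Back-of-envelope check on the reported completion numbers.
# Assumes independent binomial dropout; the 0.85 "realistic" rate below
# is an assumption for illustration, not a figure from the paper.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k completers out of n, each completing with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_sf(k: int, n: int, p: float) -> float:
    """Probability of k or more completers out of n (upper tail)."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

n, k = 200, 193  # per-arm enrollment and reported completers

# Most generous case: assume the true completion rate equals the observed 193/200.
p_hat = k / n
one_arm = binom_pmf(k, n, p_hat)
print(f"P(one arm lands on exactly {k}/{n}):  {one_arm:.3f}")     # ~0.15
print(f"P(both arms land on exactly {k}/{n}): {one_arm**2:.3f}")  # ~0.02

# Assumed, more realistic community-trial completion rate of 85%.
p_real = 0.85
print(f"P(an arm retains >= {k}/{n} at p=0.85): {binom_sf(k, n, p_real):.1e}")
```

None of these numbers is damning on its own, but together they illustrate why perfectly symmetric, near-perfect retention is exactly the kind of detail reviewers flag.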