I'm not sure what you're implying here. Are you saying both types of structures are useful, but not as useful as the hype suggests, or that an X-Ray Crystal (XRC) and low confidence structures are both very useful with the XRC being marginally more so?
An XRC structure is great, but it's a very (very) long way from getting me to a drug. Observe the long history of fully crystallized proteins that still lack a good drug, or this piece on the general failure of purely structure-guided efforts in drug discovery for COVID (https://www.science.org/content/blog-post/virtual-screening-...). I think this tech will certainly be helpful, but for most problems I don't see it delivering more than a slightly-better-than-marginal gain in our ability to find medicines.
Edit: To clarify, if the current state of the field is "given a well understood structure, I often still can't find a good medicine without doing a ton of screening experiments" then it's hard to see how much this helps us. I can also see several ways in which a less than accurate structure could be very misleading.
FWIW I can see a few ways in which it could be very useful for hypothesis generation too, but we're still talking pretty early stage basic science work with lots of caveats.
Source: PhD Biochemist and CEO of a biotech.
But I'm pretty close to launching "something" in the next few weeks. I'm doing a hardware startup and already have multiple production-ready prototypes. Now I'm working on documentation and some basic marketing material... but again, all I can see is a hundred different things I still have left to do.
There's also the fact that the business would need to be quite successful to match my current, and rather hefty, paycheck. Less money, less safety, but doing something I love... I hope it's worth it.
Always remember, no decision is a decision. Usually that's the worst choice because almost any decision, even a wrong one, at least moves the ball in some direction, allowing you to gather more information. The only guarantee in this game is that stasis will kill you, so bias towards action. When in doubt, try to evaluate "most probable bad outcome" which is different from "worst possible outcome".
Good luck!
I personally have a clue, and the entire field of organic chemistry has a clue: given enough time and money, most reasonable structures can be synthesized (and QED + SAScore + etc., followed by a human filter, is often enough to weed out the problem compounds that will be unstable or hard to make). Even some state-of-the-art synthesis prediction models can predict decent routes if the compounds are relatively simple.[0]

The issue is that in silico activity/property prediction is often not reliable enough for the effort of designing and executing a synthesis to be worth it, especially because, as molecules become more dissimilar to known compounds with the given activity, the predictions typically become less reliable. In the end, you'd just spend 3 months of your master's student's time on a pharmacological dead end. Conversely, some of the "novel predictions" of ML pipelines including de novo structure generation can be very close to known molecules, which makes the measured activity somewhat of a triviality.[1]

For this reason, it makes sense to spend the budget on building-block-based "make on demand" structures that will have ~90% fulfillment, take 1-2 months from placed order to compound in hand, and cost significantly less per compound, because you can iterate faster. Recent work on large-scale docking has shown that this approach seems to work decently for well-behaved systems.[2] On the other hand, some truly novel frameworks are not available via the building-block approach, which can also matter for IP.
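As a minimal sketch of the triage step mentioned above: in practice the QED and SAScore values would come from a cheminformatics toolkit (e.g. RDKit), but here the scores, candidate names, and thresholds are all made up for illustration.

```python
# Hypothetical post-generation filter: keep candidates whose drug-likeness
# (QED, 0-1, higher is better) and synthetic-accessibility score (SAScore,
# 1-10, lower = easier to make) pass crude thresholds before human review.
# Scores below are illustrative numbers, not computed from real structures.

def passes_filter(qed, sa_score, qed_min=0.5, sa_max=6.0):
    """First-pass triage of generated structures (thresholds are assumptions)."""
    return qed >= qed_min and sa_score <= sa_max

candidates = {
    "cand_A": (0.82, 2.9),   # drug-like, easy to make -> keep
    "cand_B": (0.31, 3.5),   # poor drug-likeness -> drop
    "cand_C": (0.67, 8.1),   # likely hard or unstable synthesis -> drop
}

kept = [name for name, (qed, sa) in candidates.items()
        if passes_filter(qed, sa)]
print(kept)  # ['cand_A']
```

The survivors of a filter like this are what would then go to a human chemist, and only then to synthesis planning.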
More fundamentally, of course you are correct, and I agree with you: having a lot of structures is in itself not that useful. Getting closer to physically meaningful, fundamental processes, and speeding them up where possible, could generate far more transparent and reliable activity predictions, and real novelty.
[0] https://www.sciencedirect.com/science/article/pii/S245192941...
[1] http://www.drugdiscovery.net/2019/09/03/so-did-ai-just-disco...
[2] https://www.nature.com/articles/s41586-021-04175-x.pdf
We've done work in this area and will be publishing some results later in the year.
A key challenge: very few labs have enough data.
Something I view as a key insight: a lot of labs are doing absurdly labor intensive exploratory synthesis without clear hypotheses guiding their work. One of our more useful tasks turned out to be interactively helping scientists refine their experiments before running them.
Another was helping scientists develop hypotheses for _why_ reactions were occurring, because they hadn't been able to build principled models of which properties predicted reaction formation.
Going all the way to synthesis is nice, but there's a lot of lower hanging fruit involved in making scientists more effective.
Also shameless plug: I started a company to do just that, anchored to generating custom million-to-billion point datasets and using ML to interpret and design new experiments at scale.
Are there really "chipheads" who predict the ARM ISA will buck this trend and start pulling ahead at equivalent technology nodes? By what mechanism do they believe this will happen, do you know?
"The theory goes that arm64’s fixed instruction length and relatively simple instructions make implementing extremely wide decoding and execution far more practical for Apple, compared with what Intel and AMD have to do in order to decode x86-64’s variable length, often complex compound instructions."
Not sure it's true; I'm not an expert. But it doesn't sound wrong!
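The decode-width argument in the quote can be illustrated with a toy sketch (my own simplification, not how any real decoder is built): with fixed 4-byte instructions, every decoder slot knows its start offset in advance, whereas with variable lengths each slot's offset depends on all the previous instructions' lengths.

```python
# Toy illustration of why fixed-length ISAs make wide decode easier.
# With 4-byte instructions (arm64-style), N decoders can each grab
# bytes [4*i, 4*i+4) independently -> trivially parallel.
# With variable lengths (x86-64-style), decoder i can't know its start
# offset until the lengths of instructions 0..i-1 have been determined.

def decode_fixed(buf, width=4):
    # Every slot is independent of the others.
    return [buf[i:i + width] for i in range(0, len(buf), width)]

def decode_variable(buf, length_of):
    # Inherently sequential: each start depends on the previous length.
    out, i = [], 0
    while i < len(buf):
        n = length_of(buf[i])
        out.append(buf[i:i + n])
        i += n
    return out

fixed = decode_fixed(b"\x01\x02\x03\x04\x05\x06\x07\x08")
# Pretend the first byte encodes the instruction's own length
# (real x86-64 instructions run 1-15 bytes and need far more work to size).
var = decode_variable(b"\x02\xaa\x03\xbb\xcc\x01", length_of=lambda b: b)
print(fixed)  # [b'\x01\x02\x03\x04', b'\x05\x06\x07\x08']
print(var)    # [b'\x02\xaa', b'\x03\xbb\xcc', b'\x01']
```

Real x86 decoders mitigate this with length-predecode and micro-op caches, which is part of why it's an argument about practicality, not impossibility.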
Then the methane is consumed by genetically modified bacteria (probably E. coli), first to convert it to an intermediate product, and then a second organism converts that into starch.
In the article they talk about genetically modified enzymes; I've jumped ahead to using genetically modified bacteria here[2]. I think that's a safe assumption, as a number of bio-reactors do exactly this.
What is somewhat interesting to me is that livestock generate a lot of methane; if you could harvest it and convert it back into starch to feed the livestock, you could increase the efficiency of livestock farming.
[1] https://phys.org/news/2020-02-method-carbon-dioxide-methane-...
[2] The article hasn't appeared in sci-hub yet :-)
https://climate.nasa.gov/faq/33/which-is-a-bigger-methane-so...
"carbon dioxide is reduced to methanol by an inorganic catalyst and then converted by enzymes first to three and six carbon sugar units and then to polymeric starch. This artificial starch anabolic pathway relies on engineered recombinant enzymes from many different source organisms and can be tuned to produce amylose or amylopectin at excellent rates and efficiencies relative to other synthetic carbon fixation systems—and, depending on the metric …"
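For a rough sense of scale on the livestock idea above, here is a back-of-envelope carbon mass balance (my own arithmetic, generously assuming every carbon atom ends up in starch, which no real process approaches):

```python
# Back-of-envelope carbon balance: one starch monomer (C6H10O5) holds
# 6 carbon atoms, so it takes 6 mol of CH4 or CO2 per mol of monomer.
M_CH4 = 16.04            # g/mol, methane
M_CO2 = 44.01            # g/mol, carbon dioxide
M_STARCH_UNIT = 162.14   # g/mol, C6H10O5 (anhydroglucose unit of starch)

# kg of starch per kg of feedstock at (unrealistic) 100% carbon conversion
starch_per_kg_ch4 = M_STARCH_UNIT / (6 * M_CH4)
starch_per_kg_co2 = M_STARCH_UNIT / (6 * M_CO2)

print(f"{starch_per_kg_ch4:.2f} kg starch / kg CH4")  # ~1.68
print(f"{starch_per_kg_co2:.2f} kg starch / kg CO2")  # ~0.61
```

So even the theoretical ceiling is under 2 kg of starch per kg of captured methane; actual yields would be far lower once energy input and conversion losses are counted.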
I don't know of any other driver-assist system that gets anywhere close to this. Every time I've tried fancy rental cars (BMWs, Audis, Mercedes) with the latest and greatest driver assist, it feels like a joke.
What am I missing?
Tesla is uniquely risk-tolerant, for better or worse. You also don't hear about people getting into accidents in a BMW on self-driving because BMW doesn't make the same claims and has tons of safeguards.