crypto420 · 6 months ago
I'm not sure if people here even read the entirety of the article. From the article:

> We applied the AI co-scientist to assist with the prediction of drug repurposing opportunities and, with our partners, validated predictions through computational biology, expert clinician feedback, and in vitro experiments.

> Notably, the AI co-scientist proposed novel repurposing candidates for acute myeloid leukemia (AML). Subsequent experiments validated these proposals, confirming that the suggested drugs inhibit tumor viability at clinically relevant concentrations in multiple AML cell lines.

and,

> For this test, expert researchers instructed the AI co-scientist to explore a topic that had already been subject to novel discovery in their group, but had not yet been revealed in the public domain, namely, to explain how capsid-forming phage-inducible chromosomal islands (cf-PICIs) exist across multiple bacterial species. The AI co-scientist system independently proposed that cf-PICIs interact with diverse phage tails to expand their host range. This in silico discovery, which had been experimentally validated in the original novel laboratory experiments performed prior to use of the AI co-scientist system, are described in co-timed manuscripts (1, 2) with our collaborators at the Fleming Initiative and Imperial College London. This illustrates the value of the AI co-scientist system as an assistive technology, as it was able to leverage decades of research comprising all prior open access literature on this topic.

The model was able to come up with new scientific hypotheses that were tested to be correct in the lab, which is quite significant.

dekhn · 6 months ago
So, I've been reading Google research papers for decades now and also worked there for a decade and wrote a few papers of my own.

When Google publishes papers, they tend to juice the significance of the results (Google is not the only group that does this, but they are pretty egregious). You need to be skilled in the field of the paper to pare away the exceptional claims. A really good example is https://spectrum.ieee.org/chip-design-controversy: while I think Google did some interesting work there, and it's true they included some of the results in their chip designs, their comparison claims are definitely over-hyped, and they did not react well when they got called out on it.

warbaker · 6 months ago
The article you linked is not an example of this happening. Google open-sourced the chip design method, and uses it in production for TPU and other chips.

https://github.com/google-research/circuit_training

https://deepmind.google/discover/blog/how-alphachip-transfor...

tsumnia · 6 months ago
Remember Google is a publicly traded company, so everything must be reviewed to "ensure shareholder value". Like dekhn said, it's impressive, but marketing wants more than "impressive".
killjoywashere · 6 months ago
I have worked with Google teams as well, and they taught me a fair bit about how to be rigorously skeptical. It takes domain knowledge, statistical knowledge, data, time and the computational resources to challenge them. I've done it, but it took real resources.

That said, it's a useful exercise to figure out the plan of attack. My experience is the "juice" was mainly in "easy true negative" subclasses. They weren't oversampled, but the human brain wouldn't even consider most of that data. Once you ablate those subclasses from the dataset, (which takes a lot of additional labelling effort), you can start challenging their assertions. But it's hard.

And that said I also review a number of articles in that domain, and I haven't seen a group with stronger datasets overall.

ein0p · 6 months ago
That applies to absolutely everyone. Convenient results are highlighted; inconvenient ones are either not mentioned or de-emphasized. You do have to be well read in the field to see what the authors _aren't_ saying; that's one of the purposes of being well-read in the first place. That is also why 100% of science reporting is basically disinformation - journalists are not equipped with this level of nuanced understanding.
shpongled · 6 months ago
That a UPR inhibitor would inhibit viability of AML cell lines is not exactly a novel scientific hypothesis. They took a previously published inhibitor known to be active in other cell lines and tried it in a new one. It's a cool, undergrad-level experiment. I would be impressed if a sophomore in high school proposed it, but not a sophomore in college.
CaptainOfCoit · 6 months ago
> I would be impressed if a sophomore in high school proposed it

That sounds good enough for a start, considering you can massively parallelize the AI co-scientist workflow, compared to the timescale and physical scale it would take to do the same thing with human high school sophomores.

And every now and then, you get something exciting and really beneficial coming from even inexperienced people, so if you can increase the frequency of that, that sounds good too.

klipt · 6 months ago
Only two years since ChatGPT was released, and AI at the level of "impressive high school sophomore" is already passé.
hinkley · 6 months ago
I have a less generous recollection of the wisdom of sophomores.
kuhewa · 6 months ago
I'm sure the scientists involved had a wish list of dozens of drug candidates to repurpose to test based on various hypotheses. Ideas are cheap, time is not.

In this case they actually tested a drug probably because Google is paying for them to test whatever the AI came up with.

elicksaur · 6 months ago
I’m not familiar with the subject matter, but given your description, I wouldn’t really be impressed by anyone suggesting it. It just sounds like a very plausible “What if” alternative.

On the level of suggesting suitable alternative ingredients in fruit salad.

We should really stop insulting the intelligence of people to sell AI.

tomrod · 6 months ago
Incremental progress is incremental progress.

[0] https://matt.might.net/articles/phd-school-in-pictures/

dekhn · 6 months ago
(to be Shpongled is to be kippered, mashed, smashed, destroyed...completely geschtonkenflopped)
hirenj · 6 months ago
I read the cf-PICI paper (abstract) and the hypothesis from the AI co-scientist. While the mechanism from the actual paper is pretty cool (if I'm understanding it correctly), I'm not particularly impressed with the hypothesis from the co-scientist.

It's quite a natural next step to take to consider the tails and binding partners to them, so much so that it's probably what I would have done and I have a background of about 20 minutes in this particular area. If the co-scientist had hypothesised the novel mechanism to start with, then I would be impressed at the intelligence of it. I would bet that there were enough hints towards these next steps in the discussion sections of the referenced papers anyway.

What's a bit suspicious is in the Supplementary Information, around where the hypothesis is laid out, it says "In addition, our own preliminary data indicate that cf-PICI capsids can indeed interact with tails from multiple phage types, providing further impetus for this research direction." (Page 35). A bit weird that it uses "our own preliminary data".

TrainedMonkey · 6 months ago
> A bit weird that it uses "our own preliminary data"

I think the potential of LLM-based analysis is sky-high, given the amount of concurrent research happening and the high context load required to understand the papers. However, there is a lot of pressure to show how amazing AI is, and we should be vigilant. So my first thought was: could it be that the training data / context / RAG had access to a file it should not have, and contaminated the result? This is indirect evidence that maybe something was leaked.

preston4tw · 6 months ago
This is one thing I've been wondering about AI: will its broad training enable it to uncover previously unnoticed connections between areas, the way multi-disciplinary people tend to, or will it still miss them because it's limited to its training corpus and can't really infer?

If it ends up being more the case that AI can help us discover new stuff, that's very optimistic.

semi-extrinsic · 6 months ago
In some sense, AI should be most capable of doing this within math. The entire domain can be tokenized. There are no experiments required to verify anything, just theorem-lemma-proof ad nauseam.

Even so, in tests like this one it's very tricky to rule out the hypothesis that the AI is just combining statements from the Discussion / Future Outlook sections of previous work in the field.

rlyshw · 6 months ago
This is kinda getting at a core question of epistemology. I’ve been working on an epistemological engine by which LLMs would interact with a large knowledge graph and be able to identify “gaps” or infer new discoveries. Crucial to this workflow is a method for feedback of real world data. The engine could produce endless hypotheses but they’re just noise without some real world validation metric.
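One toy version of that "gap finding" step is classic link prediction over a concept graph: score unlinked concept pairs by how many neighbors they share, and treat high-scoring pairs as candidate hypotheses to validate. This is only a sketch of the idea; the graph and all node names below are made up for illustration.

```python
from itertools import combinations

# Hypothetical co-occurrence graph: nodes are concepts, an edge means the
# two concepts have appeared together in the literature.
edges = {
    ("CRISPR", "Cas9"), ("Cas9", "gene-editing"), ("CRISPR", "gene-editing"),
    ("Cas9", "off-target"), ("gene-editing", "off-target"),
    ("UPR", "apoptosis"), ("apoptosis", "AML"), ("UPR", "proteostasis"),
}

def neighbors(graph, node):
    return {b for a, b in graph if a == node} | {a for a, b in graph if b == node}

def propose_gaps(graph, min_shared=1):
    """Rank unlinked concept pairs by common-neighbor count. High-scoring
    pairs are candidate "gaps": concepts that plausibly relate but have no
    direct edge yet. Real validation still has to happen in the lab."""
    nodes = {n for e in graph for n in e}
    scores = {}
    for u, v in combinations(sorted(nodes), 2):
        if (u, v) in graph or (v, u) in graph:
            continue
        shared = len(neighbors(graph, u) & neighbors(graph, v))
        if shared >= min_shared:
            scores[(u, v)] = shared
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(propose_gaps(edges))
```

Even this trivial heuristic produces endless "plausible" pairs, which is exactly the noise problem: without a real-world validation metric, the ranking is just a suggestion list.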
xbmcuser · 6 months ago
Similar work is being done in materials science, where AI suggests different combinations to find new properties. So when people say AI (machine learning, LLMs) is just for show, I am a bit shocked, as today's AI has already accelerated discoveries in many different fields of science, and this is just the start. Anna's Archive will probably play a huge role in this, as no human, or even group of humans, will have all the knowledge of so many fields that an AI will have.

https://www.independent.co.uk/news/science/super-diamond-b26...

fhd2 · 6 months ago
It's a matter of perspective and expectations.

The automobile was a useful invention. I don't know if back then there was a lot of hype around how it can do anything a horse can do, but better. People might have complained about how it can't come to you when called, can't traverse stairs, or whatever.

It could do _one_ thing a horse could do better: Pull stuff on a straight surface. Doing just one thing better is evidently valuable.

I think AI is valuable from that perspective, and you provide a good example there. I might well be disappointed if I expected it to be better than humans at everything humans can do. It doesn't have to be. But with wording like "co-scientist", I see where that expectation comes from.

bjarlsson · 6 months ago
What does the cited article have to do with AI? Unless I'm missing something, the researchers devised a novel method to create a material that has been known since 1967.
YeGoblynQueenne · 6 months ago
It's cool, no doubt. But keep in mind this is 20 years late:

  As a prototype for a "robot scientist", Adam is able to perform independent
  experiments to test hypotheses and interpret findings without human guidance,
  removing some of the drudgery of laboratory experimentation.[11][12] Adam is
  capable of:
  
      * hypothesizing to explain observations
      * devising experiments to test these hypotheses
      * physically running the experiments using laboratory robotics
      * interpreting the results from the experiments
      * repeating the cycle as required[10][13][14][15][16]
  
  While researching yeast-based functional genomics, Adam became the first
  machine in history to have discovered new scientific knowledge independently of
  its human creators.[5][17][18] 
https://en.wikipedia.org/wiki/Robot_Scientist

Mekoloto · 6 months ago
I also think people underestimate how much benefit a current LLM already provides to researchers.

A lot of them have to do things on computers which have nothing to do with their expertise: coding a small tool for working with their data, small tools for crunching results, formatting text data, searching for and finding the right materials.

An LLM which helps a scientist code something in an hour instead of a week makes this research A LOT faster.

And we know from another paper that we now have so much data that you need systems to find the right information for you. That study estimated how much additional critical information a research paper missed.

Workaccount2 · 6 months ago
Does this qualify as an answer to Dwarkesh's question?[1][2]

[1]https://marginalrevolution.com/marginalrevolution/2025/02/dw... [2]https://x.com/dwarkesh_sp/status/1888164523984470055

I don't know his @ but I'm sure he is on here somewhere

hinkley · 6 months ago
> in silico discovery

Oh I don’t like that. I don’t like that at all.

j_timberlake · 6 months ago
Don't worry: it takes about 10 years for drugs to get approved, so AIs will be superintelligent long before the government gives you permission to buy a dose of an AI-developed drug.
terminalbraid · 6 months ago
I expect it's going to be reasonably useful with the "stamp collecting" part of science and not so much with the rest.
blacksmith_tb · 6 months ago
Not that I don't think there's a lot of potential in this approach, but the leukemia example seemed at least poorly worded: "the suggested drugs inhibit tumor viability" reads oddly, given that blood cancers don't form tumors?
drgo · 6 months ago
Lots of blood cancers form solid tumors (e.g., in lymph nodes)
klipt · 6 months ago
Health professionals often refer to leukemia and lymphoma as "liquid tumors"
celltalk · 6 months ago
“Drug repurposing for AML” lol

As a person who is literally doing his PhD on AML, implementing molecular subtyping and ex-vivo drug prediction, I find this super random.

I would truly suggest our pipeline instead of random drug repurposing :)

https://celvox.co/solutions/seAMLess

edit: Btw we’re looking for ways to fund/commercialize our pipeline. You could contact us through the site if you’re interested!

heyoni · 6 months ago
Can you explain what you mean by subtyping and if/how it negates the usefulness of repurposing (if that’s what you meant to say). Wouldn’t subtyping complement a drug repurposing screen by allowing the scientist to test compounds against a subset of a disease?

And drug repurposing is also used for conditions with no known molecular basis like autism. You’re not suggesting its usefulness is limited in those cases right?

celltalk · 6 months ago
Sure. There are studies like BEAT AML which test selected drugs' responses on primary AML material; so, not on a cell line but on true patient samples. Combining this information with molecular measurements, you can actually say something about which drugs would be useful for a subset of patients.

However, this is still not how you treat a patient. There are standard practices in the clinic. Usually the first-line treatment is induction chemo with hypomethylating agents (except for the elderly, who might not be eligible for such a treatment). Otherwise the options are still very limited; the "best" drug in the field so far is Venetoclax, but more things are coming, such as immunotherapy. It's a very complex domain, so drug repurposing on an AML cell line is not a wow moment for me.

ncfausti · 6 months ago
Thank you for your work on this, truly.
celltalk · 6 months ago
Thanks :)
ttpphd · 6 months ago
It's almost like scientists are doing something more than a random search over language.
celltalk · 6 months ago
I do hallucinate a better future as well.
nazgul17 · 6 months ago
This search is random in the same way that AlphaGo's move selection was random.

In the Monte Carlo Tree Search part, the outcome distribution at the leaves is informed by a neural network trained on data instead of a so-called playout. Sure, part of the algorithm does invoke a random() function, but by no means is the result akin to the flip of a coin.

There is indeed randomness in the process, but making it sound like a random walk does a disservice to nuance.
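For the curious, the prior-guided selection rule in AlphaGo-style MCTS (PUCT) can be sketched in a few lines. The constant and the toy statistics below are illustrative, not AlphaGo's actual values; the point is only that the network prior, not a coin flip, drives which branch gets explored.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick a child index with a PUCT-style rule: exploit the running value
    estimate Q, explore in proportion to the policy prior P.
    Each child is a tuple (prior, visit_count, total_value)."""
    total_visits = sum(n for _, n, _ in children)
    def score(child):
        prior, n, w = child
        q = w / n if n else 0.0                                  # exploitation
        u = c_puct * prior * math.sqrt(total_visits + 1) / (1 + n)  # exploration
        return q + u
    return max(range(len(children)), key=lambda i: score(children[i]))

# Three candidate moves with equal (zero) visit statistics: the network
# prior strongly favors move 0, so move 0 is selected first.
print(puct_select([(0.7, 0, 0.0), (0.2, 0, 0.0), (0.1, 0, 0.0)]))  # 0
```

Once move 0 has been visited repeatedly without paying off, the exploration term shifts selection to the next-best prior, which is the sense in which the search is guided rather than random.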

I feel many people are too ready to dismiss the results of LLMs as "random", and I'm afraid there is some element of seeing what one wants to see (i.e. believing LLMs are toys, because if they are not, we will lose our jobs).

mnky9800n · 6 months ago
Tbh I don’t see why I would use this. I don’t need an AI to connect ideas or come up with new hypotheses.

I need it to write lots of data pipeline code to take data that is organized by project, each in a unique way, each with its own set of multimodal data plus metadata, all stored in long-form documents with no regular formatting, and normalize it all into a giant database. I need it to write and test a data pipeline to detect events in both amplitude space and frequency space in acoustic data. I need it to test out front ends for these data analysis backends so I can play with the data.

I think this is domain specific. Drug discovery probably requires testing tons of variables, iterating through the available values one by one. But that’s not true for my research. Not everything is for everybody, and that’s okay.
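The amplitude/frequency event detection described above can be caricatured in pure Python. This is only a sketch under invented assumptions: the thresholds, window size, target frequency, and the Goertzel-based band check are all arbitrary choices for illustration, not anyone's actual pipeline.

```python
import math
import random

def goertzel_power(chunk, fs, freq):
    """Signal power near a single frequency, via the Goertzel algorithm
    (a cheap one-bin DFT)."""
    n = len(chunk)
    k = int(0.5 + n * freq / fs)          # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in chunk:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def detect_events(signal, fs, amp_thresh, freq, win=256):
    """Flag fixed windows that are loud in amplitude space OR carry energy
    at `freq` in frequency space. Returns start indices of flagged windows."""
    events = []
    for start in range(0, len(signal) - win + 1, win):
        chunk = signal[start:start + win]
        loud = max(abs(x) for x in chunk) > amp_thresh
        tonal = goertzel_power(chunk, fs, freq) > (amp_thresh * win / 4) ** 2
        if loud or tonal:
            events.append(start)
    return events

# Synthetic trace: quiet noise with a 100 Hz burst in the middle.
random.seed(0)
fs = 1000.0
sig = [0.01 * random.gauss(0, 1) for _ in range(4096)]
for i in range(2000, 2300):
    sig[i] += math.sin(2 * math.pi * 100.0 * i / fs)
print(detect_events(sig, fs, amp_thresh=0.5, freq=100.0))
```

Only the two windows overlapping the burst are flagged; the rest of the work in a real pipeline (normalization, metadata wrangling, validation) is exactly the part the comment says still needs doing.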
tippytippytango · 6 months ago
Exactly, they want to automate the most rewarding part that we don’t need help with… plus I don’t believe they’ve solved the problem of LLMs generating trite ideas.
trilobyte · 6 months ago
Sounds like the message artists were giving when generative AI started blowing up.
SubiculumCode · 6 months ago
The doing tends to be the hard part. Every scientist has 1,000 ideas for every one they get a chance to pursue.

That said, I requested early access.

Deleted Comment

eamag · 6 months ago
I think you're just not the target audience. If AI can come up with some good ideas and then split them into tasks, some of which an undergrad can do, it can speed up global research by involving more people in useful science.
coliveira · 6 months ago
In science, having ideas is not the limiting factor. They're just automating the wrong thing. I want to have ideas and ask the machine to test for me, not the other way around.
not_kurt_godel · 6 months ago
Agreed - AI that could take care of this sort of cross-system complexity and automation in a reliable way would be actually useful. Unfortunately I've yet to use an AI that can reliably handle even moderately complex text parsing in a single file more easily than if I'd just done it myself from the start.
mnky9800n · 6 months ago
Yes. It’s very frustrating. There is a great need for a kind of data pipeline test suite where you can iterate through lots of different options and play around with different data manipulations, so a single person can do it, because it’s not worth really building a pipeline if it doesn’t work. There needs to be one of these Astronomer/Dagster/Apache Airflow/Azure ML-style tools that is quick and dirty for trying things out. Maybe I’m just naive and they exist and I’ve had my nose in Jupyter notebooks. But I really feel hindered these days in my ability to prototype complex data pipelines myself while also handling all of the other parts of the science.
knowaveragejoe · 6 months ago
This reminds me of a paper: "The ALCHEmist: Automated Labeling 500x CHEaper Than LLM Data Annotators"

https://arxiv.org/abs/2407.11004

In essence, LLMs are quite good at writing the code to properly parse large amounts of unstructured text, rather than what a lot of people seem to be doing, which is shoveling data into an LLM's API and asking for transformations back.
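To make the distinction concrete: below is the kind of small deterministic parser an LLM can write once, which you then run locally for free over millions of records, instead of paying a model API per record. The log format is invented for the example.

```python
import re

# A made-up semi-structured log format: "<date> <level> <message>".
LINE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2})\s+"
    r"(?P<level>INFO|WARN|ERROR)\s+"
    r"(?P<msg>.*)"
)

def parse(lines):
    """Extract structured records, silently skipping lines that don't match."""
    out = []
    for line in lines:
        m = LINE.match(line.strip())
        if m:
            out.append(m.groupdict())
    return out

records = parse([
    "2025-02-19 ERROR disk quota exceeded",
    "2025-02-19 INFO  job 42 finished",
    "not a log line at all",
])
print(records)
```

Once written, this costs nothing per record and is fully auditable, which is the ALCHEmist paper's point about generating the labeling program rather than the labels.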

parineum · 6 months ago
> I don’t need an ai to connect across ideas or come up with new hypothesis.

This feels like hubris to me. The idea here isn't to assist you with menial tasks; the idea is to give you an AI generalist that might be able to alert you to things outside of your field that may be related to your work. It's not going to reduce your workload; in fact, it'll probably increase it, but the result should be better science.

I have a lot more faith in this use of LLMs than I do for it to do actual work. This would just guide you to speak with another expert in a different field and then you take it from there.

> In many fields, this presents a breadth and depth conundrum, since it is challenging to navigate the rapid growth in the rate of scientific publications while integrating insights from unfamiliar domains.

coliveira · 6 months ago
> This feels like hubris to me.

No, any scientist has hundreds of ideas they would like to test. It's just part of the job. The hard thing is to do the rigorous testing itself.

mnky9800n · 6 months ago
I have a billion ideas. Being able to automate the testing of those ideas, in some kind of Star Trek "talk to the computer and it just knows what you want" way, would be perfect. This is the promise of AI. This is the promise of a personal computer: it is a bicycle for your mind. It is not hubris to want to iterate more quickly on your own ideas. It is a natural part of being a tool-building species.
iak8god · 6 months ago
> the idea is to give you an AI generalist that might ne able to alert you to things outside of your field that may be related to your work

That might be a good goal. It doesn't seem to be the goal of this project.

ttpphd · 6 months ago
Are you a scientist?
anothermathbozo · 6 months ago
Imagine someone can do the things you can’t do and needs help doing the things you can.
quinnjh · 6 months ago
The market seems excited to charge in whatever direction the weathervane last pointed, regardless of the real outcomes of running in that direction. Hopefully I'm wrong, but it reminds me very much of this study (I'll quote a paraphrase):

“A groundbreaking new study of over 1,000 scientists at a major U.S. materials science firm reveals a disturbing paradox: When paired with AI systems, top researchers become extraordinarily more productive – and extraordinarily less satisfied with their work. The numbers tell a stark story: AI assistance helped scientists discover 44% more materials and increased patent filings by 39%. But here's the twist: 82% of these same scientists reported feeling less fulfilled in their jobs.”

Quote from https://futureofbeinghuman.com/p/is-ai-poised-to-suck-the-so...

Referencing this study https://aidantr.github.io/files/AI_innovation.pdf

yodon · 6 months ago
As a dev, I have the same experience.

AI chat is a massive productivity enhancer, but, when coding via prompts, I'm not able to hit the super satisfying developer flow state that I get into via normal coding.

Copilot is less of a productivity boost, but also less of a flow state blocker.

sanderjd · 6 months ago
Yep! I think these tools are incredibly useful, but I think they're basically changing all our jobs to be more like what product managers do, having ideas for what we want to achieve, but farming out a significant chunk of the work rather than doing it ourselves. And that's fine, I find it very hard to argue that it's a bad thing. But there's a reason that we aren't all product managers already. Programming is fun, and I do experience it as a loss to find myself doing less of it myself.
pradn · 6 months ago
There is some queasy feeling of fakeness when auto-completing so much code. It feels like you're doing something wrong. But that's all based on my experience of coding for half my life. AI-native devs will probably feel differently.
radioactivist · 6 months ago
I'm a bit skeptical of this study given that it is unpublished, from a single (fairly junior) author, and all of the underlying details of the subject are redacted. Is there any information anywhere about what the company in the study was actually doing? (The description in the article is very vague; basically something to do with materials.)
captainclam · 6 months ago
Definitely interesting, but I'm not so sure that such a study can yet make strong claims about AI-based work in general.

These are scientists that have cultivated a particular workflow/work habits over years, even decades. To a significant extent, I'm sure their workflow is shaped by what they find fulfilling.

That they report less fulfillment when tasked with working under a new methodology, especially one that they feel little to no mastery over, is not terribly surprising.

BeetleB · 6 months ago
The feeling of dissatisfaction is something I can relate to. My story:

I only recently started using aider[1].

My experience with it can be described in 3 words.

Wow!

Oh wow!

It was amazing. I was writing a throwaway script for one time use (not for work). It wrote it for me in under 15 minutes (this includes my time getting familiar with the tool!) No bugs.

So I decided to see how far I could take it. I added command line arguments, logging, and a whole bunch of other things. After a full hour, I had a production ready script - complete with logs, etc. I had to debug code only once.

I may write high quality code for work, but for personal throwaway scripts, I'm sloppy. I would not put a command line parser, nor any logging. This did it all for me for very cheap!

There's no going back. For simple scripts like this, I will definitely use aider.

And yeah, there was definitely no satisfaction one would derive from coding. It was truly addictive. I want to use it more and more. And no matter how much I use it and like the results, it doesn't scratch my programmer's itch. It's nowhere near the fun/satisfaction of SW development.

[1] https://aider.chat/

geewee · 6 months ago
I tried Aider recently to modify a quite small Python + HTML project, and it consistently got `uv` commands wrong, and ended up changing my entire build system because it didn't think the thing I wanted to do was supported in the current one (it was).

They're very effective at making changes for the most part, but boy you need to keep them on a leash if you care about what those changes are.

azinman2 · 6 months ago
It seems in general we’re heading toward’s Minsky’s society of minds concept. I know OpenAI is wanting to collapse all their models into a single omni model that can do it all, but I wonder if under the hood it’d just be about routing. It’d make sense to me for agents to specialize in certain tool calls, ways of thinking, etc that as a conceptual framework/scaffolding provides a useful direction.
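The routing idea can be caricatured in a few lines. In a real system the router would itself be a learned model (and the specialist names here are invented); this sketch only shows the dispatch shape: one front model classifies the query, specialists do the work.

```python
# Hypothetical specialists; in practice each would be a separate model call.
SPECIALISTS = {
    "code": lambda q: f"[code-model] {q}",
    "math": lambda q: f"[math-model] {q}",
    "general": lambda q: f"[general-model] {q}",
}

def route(query):
    """Toy keyword router standing in for a learned classifier."""
    ql = query.lower()
    if any(w in ql for w in ("bug", "function", "compile")):
        kind = "code"
    elif any(w in ql for w in ("prove", "integral", "theorem")):
        kind = "math"
    else:
        kind = "general"
    return SPECIALISTS[kind](query)

print(route("why does my function not compile?"))
print(route("prove this theorem"))
```

The appeal of the split is operational: each specialist can be tuned, replaced, or scaled out independently of the router.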
mythrwy · 6 months ago
I wonder if OpenAI might be routing already based on speed of some "O1" responses I receive. It does make sense.
willy_k · 6 months ago
Also, for some more complex questions I’ve noticed that it doesn’t expose its reasoning. Specifically, yesterday I asked it to perform a search algorithm given a picture of a grid, and it reasoned for 1-2 minutes but didn’t show any of it (neither in real time nor afterwards), whereas for simpler questions the reasoning is shown. Not sure what this means, but it suggests some kind of different treatment based on complexity.
yjftsjthsd-h · 6 months ago
Isn't that kinda the idea of Mixture of Experts?
funnyAI · 6 months ago
"conceptual framework" can actually be another generalist model. Splitting model also comes with some advantages. Like easy separate tuning and replacements. Easy scaling by simply duplicating heavily used model on new hardware.
hinkley · 6 months ago
I am generally down on AI these days but I still remember using Eliza for the first time.

I think I could accept an AI prompting me instead of the other way around. Something to ask you a checklist of problems and how you will address them.

I’d also love to have someone apply AI techniques to property based testing. The process of narrowing down from 2^32 inputs to six interesting ones works better if it’s faster.
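The "narrowing 2^32 inputs down to a few interesting ones" step is essentially what property-based testing frameworks (e.g. Hypothesis or QuickCheck) call shrinking. A minimal hand-rolled sketch, with a contrived buggy function and cutoff purely for illustration:

```python
import random

def buggy(x):
    # Contrived stand-in for code with a bug on large inputs.
    return x if x < 100_000 else -1

def prop(x):
    # The property under test: output should be non-negative.
    return buggy(x) >= 0

def find_and_shrink(prop, tries=1000, seed=0):
    """Property-based testing in miniature: sample random 32-bit values,
    then greedily shrink the first counterexample toward a minimal one."""
    rng = random.Random(seed)
    bad = next((x for x in (rng.randrange(2**32) for _ in range(tries))
                if not prop(x)), None)
    if bad is None:
        return None
    changed = True
    while changed:
        changed = False
        for cand in (bad // 2, bad - 1):   # try smaller inputs that still fail
            if cand >= 0 and not prop(cand):
                bad = cand
                changed = True
                break
    return bad

print(find_and_shrink(prop))  # 100000, the minimal failing input
```

The shrink loop is the part where a smarter (possibly learned) search could plausibly help: halving and decrementing is crude, and the speed of convergence to the interesting inputs is exactly what makes the workflow pleasant or painful.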

confused_boner · 6 months ago
AI prompting us sounds interesting
lorepieri · 6 months ago
Check Manna.
stanford_labrat · 6 months ago
So I'm a biomedical scientist (in training, I suppose; I'm in my 3rd year of a Genetics PhD), and I have seen this trend a couple of times now where AI developers tout that AI will accelerate biomedical discovery, through a very specific argument that AI will be smarter and generate better hypotheses than humans.

For example, in this Google essay they claim that CRISPR was a transdisciplinary endeavor, "which combined expertise ranging from microbiology to genetics to molecular biology", and this is the basis of their argument that an AI co-scientist will be better able to integrate multiple fields at once to generate novel and better hypotheses. For one, what they fail to understand as computer scientists (I suspect due to not being intimately familiar with biomedical research) is that microbio/genetics/mol bio are more closely linked than you might expect as a layperson. There is no large leap between microbiology and genetics that would slow down someone like Doudna, or even myself - I use techniques from multiple domains in my daily work. These all fall under the general broad domain of what I'll call "cellular/micro biology". As another example, Dario Amodei of Anthropic wrote something similar in his essay Machines of Loving Grace: that the limiting factor in biomedical research is a lack of "talented, creative researchers", for which AI could fill the gap[1].

The problem with both of these ideas is that they misunderstand the rate-limiting factor in biomedical research, which to them is a lack of good ideas. And this is very much not the case. Biologists have tons of good ideas. The rate-limiting step is testing all these good ideas with sufficient rigor to decide whether to continue exploring a particular hypothesis or to abandon the project for something else. From my own work: the hypothesis driving my thesis I came up with over the course of a month or two. The actual amount of work prescribed by my thesis committee to fully explore whether or not it was correct? About three years' worth. Good ideas are cheap in this field.

Overall I think these views stem from field-specific nuances that don't necessarily translate. I'm not a computer scientist, but I imagine that in computer science the rate-limiting factor is not actually testing hypotheses but generating good ones. It's not like the code you write will take multiple months to run before you get an answer to your question (maybe it will? I'm not educated enough about this to make a hard claim; in biology, it is very common for one experiment to take multiple months before you know the answer to your question, or even whether the experiment failed and you have to do it again). But happy to hear from a CS PhD or researcher about this.

All this being said, I am a big fan of AI. I use ChatGPT all the time: I ask it research questions, ask it to search the literature and summarize findings, etc. I even used it literally yesterday to make a deep dive into a somewhat unfamiliar branch of developmental biology easier (and I was very satisfied with the result). But for study design or hypothesis generation? At the moment, useless. AI and other LLMs at this point are a very powerful version of Google plus a code writer. And it's not even correct 30% of the time, to boot, so you have to be extremely careful when using it. I do think that wasting less time exploring hypotheses that are incorrect or bad is a good thing. But the problem here is that we can already identify good and bad hypotheses pretty easily. We don't need AI for that; what takes time is the actual testing of these hypotheses. Oh, and politics, which I doubt AI can magic away for us.

[1] https://darioamodei.com/machines-of-loving-grace#1-biology-a...

colingauvin · 6 months ago
It's pretty painful watching CS try to turn biology into an engineering problem.

It's generally very easy to marginally move the needle in drug discovery. It's very hard to move the needle enough to justify the cost.

What is challenging is culling ideas, and having enough SNR in your readouts to really trust them.

warkdarrior · 6 months ago
> It's generally very easy to marginally move the needle in drug discovery. It's very hard to move the needle enough to justify the cost.

Maybe this kind of AI-based exploration would lower the costs. The more something is automated, the cheaper it should be to test many concepts in parallel.

bjarlsson · 6 months ago
This is marketing material from Google and people are accepting the premises uncritically.
anothermathbozo · 6 months ago
Almost this entire thread is criticism