This feels like hubris to me. The idea here isn't to assist you with menial tasks; the idea is to give you an AI generalist that might be able to alert you to things outside of your field that may be related to your work. It's not going to reduce your workload, in fact, it'll probably increase it, but the result should be better science.
I have a lot more faith in this use of LLMs than in having them do actual work. This would just guide you to speak with another expert in a different field, and then you take it from there.
> In many fields, this presents a breadth and depth conundrum, since it is challenging to navigate the rapid growth in the rate of scientific publications while integrating insights from unfamiliar domains.
That might be a good goal. It doesn't seem to be the goal of this project.
So for those people, the LLM is replacing having nothing, not a therapist.