Also agree that using a PD-L1 mAb feels like it's for show, especially considering the cancer model they're using (Colon-26) was shown to be substantially less responsive to PD-L1 inhibitors…
Not the world’s best paper imo
Why screenshots instead of copying the source?
If I understand this right, this would now be allowed to be sold without the GMO label, even in the EU.
I am sure I can back them up to my PC somehow, but having them only on the server is not my favourite solution.
Immich, Ente and PhotoPrism all compete in a similar space?
Immich seems to have the most polished webpage, but which solution will become the next cloud for photos remains to be seen. Surely it's not Nextcloud anymore, considering the comments here.
The video on Reddit: https://www.reddit.com/r/3Dprinting/comments/1olyzn6/i_made_...
nvcc from the CUDA toolkit has a compatibility range with the underlying host compiler, e.g. gcc. If you install a newer CUDA toolkit on an older machine, you'll likely need to upgrade your compiler toolchain as well and fix the paths.
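A minimal sketch of that coupling, assuming a Linux box with nvcc and gcc on PATH (the version bounds below are illustrative assumptions; the authoritative table is in NVIDIA's installation guide):

```python
# nvcc delegates host-side code to gcc/g++, and each CUDA release only
# supports host compilers up to some maximum major version.
import re
import subprocess

# Assumed bounds for illustration only; check NVIDIA's installation guide
# for the real supported-compiler table of your CUDA release.
MAX_GCC_FOR_CUDA = {11: 11, 12: 13}

def major_version(cmd: list[str], pattern: str) -> int:
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return int(re.search(pattern, out).group(1))

cuda = major_version(["nvcc", "--version"], r"release (\d+)\.")
gcc = major_version(["gcc", "--version"], r"\) (\d+)\.")

if gcc > MAX_GCC_FOR_CUDA.get(cuda, gcc):
    # Common fix: install an older gcc side by side and point nvcc at it,
    # e.g. `nvcc -ccbin g++-12 ...` (or set CUDAHOSTCXX for CMake builds).
    print(f"gcc {gcc} is likely too new for CUDA {cuda}; pass -ccbin to nvcc")
```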
While orchestration in many (research) projects happens from Python, some depend on building CUDA extensions. An innocent-looking Python project may not ship the compiled kernels and may require a CUDA toolkit to work correctly. Some package management solutions can install CUDA toolkits (conda/mamba, pixi); the pure-Python ones cannot (pip, uv). That leaves you to match the correct CUDA toolkit to your Python environment for a project yourself.

conda specifically provides different channels (default/nvidia/pytorch/conda-forge) and, since conda 4.6, defaults to strict channel priority, meaning "if a name exists in a higher-priority channel, lower ones aren't considered". This default can make your requirements unsatisfiable even though a suitable version of each required package exists somewhere in the collection of channels. uv is neat and fast and awesome, but leaves you alone in dealing with the CUDA toolkit.
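To make that failure mode concrete, here's a toy model of strict priority in Python (the channel contents are made up for illustration, not real index data):

```python
# Toy model of conda's strict channel priority: once a package name appears
# in a higher-priority channel, lower channels are never consulted for that
# name, even if only a lower channel carries a compatible version.
channels = [
    ("nvidia",      {"cuda-toolkit": {"12.4"}}),
    ("conda-forge", {"cuda-toolkit": {"11.8", "12.4"}}),
]

def visible_versions(name: str) -> set[str]:
    for _channel, packages in channels:
        if name in packages:
            return packages[name]  # first hit wins; stop looking
    return set()

# A project pinning cuda-toolkit==11.8 is unsatisfiable under strict
# priority: conda-forge has 11.8, but the nvidia channel "owns" the name,
# so only {"12.4"} is ever considered and resolution fails.
print("11.8" in visible_versions("cuda-toolkit"))  # False
```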
Also, code that compiles with older CUDA toolkit versions may not compile with newer ones, and newer hardware may require a CUDA toolkit newer than what the project maintainer intended. PyTorch ships with a specific CUDA runtime version; any additional CUDA extensions in your project need to match the runtime version of your installed PyTorch to work. Trying to bring a project from a couple of years ago up on the latest hardware may thus blow up on multiple fronts.
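A minimal sanity check I'd run before building extra extensions, assuming nvcc is on PATH (torch.version.cuda is the runtime bundled with the installed wheel):

```python
import re
import subprocess

import torch

def nvcc_release() -> str:
    # `nvcc --version` prints e.g. "Cuda compilation tools, release 12.1, V12.1.105"
    out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
    return re.search(r"release (\d+\.\d+)", out).group(1)

torch_cuda = torch.version.cuda  # e.g. "12.1", or None for a CPU-only build
if torch_cuda is None:
    raise RuntimeError("this PyTorch build is CPU-only; CUDA extensions won't load")

toolkit_cuda = nvcc_release()
if torch_cuda.split(".")[0] != toolkit_cuda.split(".")[0]:
    raise RuntimeError(
        f"PyTorch bundles CUDA {torch_cuda} but nvcc on PATH is {toolkit_cuda}; "
        "extensions compiled now will likely fail to load against this PyTorch"
    )
```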
I wonder if that heat could be stored in a more sensible way, e.g. as heated water in a tank near the bubble. This could improve the efficiency figures for short, repeating cycles (charging at high noon, discharging through the night).
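Rough back-of-envelope (my own assumed numbers, not from the article) for how much such a tank could buffer per cycle:

```python
# Energy stored in a water tank: E = m * c * dT
WATER_HEAT_CAPACITY = 4186  # J/(kg*K)
tank_volume_m3 = 10         # assumed tank size
delta_t_k = 50              # assumed temperature swing, e.g. 40 C -> 90 C

mass_kg = tank_volume_m3 * 1000  # ~1000 kg of water per m^3
energy_kwh = mass_kg * WATER_HEAT_CAPACITY * delta_t_k / 3.6e6
print(f"~{energy_kwh:.0f} kWh per charge/discharge cycle")  # ~581 kWh
```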