Readit News
janeway commented on Blood test boosts Alzheimer's diagnosis accuracy to 94.5%, clinical study shows   medicalxpress.com/news/20... · Posted by u/wglb
janeway · 19 days ago
“We have no cure. I don’t want to know.”

If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.” We would immediately build better telescopes to track it precisely, refine its trajectory models, and begin developing propulsion systems capable of interception. You do not wait for the cure before improving the measurement. You improve the measurement so that a cure becomes possible, targeted, and effective.

Medicine is no different. Refusing to improve early, probabilistic diagnosis because today’s treatments are modest confuses sequence with outcome. Breakthroughs do not emerge from vague labels and mixed populations. They emerge from precise, quantitative stratification that allows real effects to be seen. The danger is not that we measure too early. It is that we continue making irreversible clinical and research decisions using imprecise, binary classifications while biological insight and therapeutic tools are advancing rapidly. Building the probabilistic layer now is not premature. It is how we make future intervention feasible.

janeway commented on Design Thinking Books (2024)   designorate.com/design-th... · Posted by u/rrm1977
kaizenb · 2 months ago
Noted a couple of books.

I've been curating (mostly design) books on a digital library: https://links.1984.design/books

janeway · 2 months ago
Wow, excellent, thank you!
janeway commented on A minimal standard for evidence availability in black-box systems   switzerlandomics.ch/blog/... · Posted by u/janeway
janeway · 3 months ago
A sign says: "Dogs must be carried on the escalator."

At first glance it seems clear. On a second read, it becomes obvious that what matters is not the dogs, but whether they are being carried.

Grandma calls out: "The chicken is ready to eat."

Many system outputs have the same problem. They look definitive, but they silently hide whether the required conditions were ever met.

When systems consume outputs from black-box algorithms, the usual options are to trust the conclusion or ignore it entirely.

In clinical genomics, the latter is traditional. For example, the British Society for Genetic Medicine advises clinicians not to act on results from external genomic services https://bsgm.org.uk/media/12844/direct-to-consumer-genomic-t...

This post describes a third approach, grounded in computer science. Before any interpretation, systems should record whether verifiable evidence is actually available.

The standard adds a small but strict step. Each rule first reports whether it could be checked at all: yes, no, or not evaluable. Then the evidence is used in reverse, not to confirm the result, but to try to rule it out. If removing or negating that evidence would change the outcome, it counts as real evidence. If not, it does not.

Crucially, this forces a simple question: could the same result have appeared even if the evidence were absent or different? Only when the answer is no does the result actually count as evidence.
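A minimal sketch of the two steps, assuming a simple dict-of-evidence model. The function names (`availability`, `counterfactual_check`) and the toy frequency rule are illustrative, not taken from the actual standard:

```python
def availability(evidence, key):
    """First step: report whether a rule could be checked at all."""
    if key not in evidence:
        return "not evaluable"
    return "yes" if evidence[key] is not None else "no"

def counterfactual_check(classify, evidence, key):
    """Second step: use the evidence in reverse. It only counts as real
    evidence if removing it would change the outcome."""
    baseline = classify(evidence)
    ablated = {k: v for k, v in evidence.items() if k != key}
    return classify(ablated) != baseline

# Toy black-box classifier: calls a variant pathogenic only when a
# population-frequency check passed (purely illustrative).
def classify(evidence):
    return "pathogenic" if evidence.get("low_population_frequency") else "uncertain"

evidence = {"low_population_frequency": True, "unused_annotation": "x"}

print(availability(evidence, "low_population_frequency"))                    # yes
print(counterfactual_check(classify, evidence, "low_population_frequency"))  # True: real evidence
print(counterfactual_check(classify, evidence, "unused_annotation"))         # False: decorative
```

The point of the ablation is that "unused_annotation" looks like evidence in the record, but since removing it leaves the outcome unchanged, it never actually supported the conclusion.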

The idea comes from genomics, where hospitals, companies, and research groups need to share results without exposing proprietary methods, but it applies anywhere systems reason over incomplete or black-box data.

janeway commented on Show HN: Virtual SLURM HPC cluster in a Docker Compose   github.com/exactlab/vhpc... · Posted by u/ciclotrone
janeway · 4 months ago
Cool!

I have worked 100% in 3 comparable systems over the past 10 years. Can you access it with ssh?

I find it super fluid to work on the HPC directly to develop methods for huge datasets, using vim to code and tmux for sessions. I focus on constantly printing detailed log files with lots of debug output, plus an automated monitoring script that prints those logs in realtime; a mixture of .out, .err, and log.txt files.
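A minimal sketch of that kind of monitoring loop (file names and the grep pattern are my own illustrative guesses, not the actual script):

```shell
#!/bin/sh
# Follow all job logs as lines arrive; -F keeps following across file
# rotation/recreation between job runs (paths illustrative):
#   tail -F logs/*.out logs/*.err logs/log.txt | grep --line-buffered -E 'DEBUG|WARN|ERROR'
# One-shot demo version that terminates:
mkdir -p logs
printf 'DEBUG loading chunk 1\nINFO heartbeat\nERROR out of memory\n' > logs/log.txt
tail -n +1 logs/log.txt | grep -E 'DEBUG|WARN|ERROR'
```

`--line-buffered` matters in the live version so filtered lines appear immediately instead of waiting for grep's output buffer to fill.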

janeway commented on The 'Toy Story' You Remember   animationobsessive.substa... · Posted by u/ani_obsessive
diskzero · 4 months ago
I worked at DreamWorks Animation on the pipeline, lighting and animation tools for almost ten years. All of this information is captured in our pipeline process tools, although I am sure there are edits and modifications that escape documentation. We were able to pull complete shows out of deep storage, render scenes using the toolchain that produced them, and produce the same output. If the renders weren't reproducible, madness would ensue.

Even with complete attention to detail, the final renders would be color graded using Flame, or Inferno, or some other tool and all of those edits would also be stored and reproducible in the pipeline.

Pixar must have a very similar system, and maybe a Pixar engineer can comment. My somewhat educated assumption is that these DVD releases were created outside of the Pixar toolchain by grabbing some version of a render that was never intended as a direct-to-digital release. This may have happened as a result of ignorance, indifference, a lack of a proper budget, or some other extenuating circumstance. It isn't likely that John Lasseter or some other Pixar creative really wanted the final output to look like this.

janeway · 4 months ago
Amazing. Your final point seems to make the most sense: it wasn't the original team itself having any problems.
janeway commented on The 'Toy Story' You Remember   animationobsessive.substa... · Posted by u/ani_obsessive
janeway · 4 months ago
This topic is fascinating to me. The Toy Story film workflow is a perfect illustration of intentional compensation: artists pushed greens in the digital master because 35 mm film would darken and desaturate them. The aim was never neon greens on screen, it was colour calibration for a later step. Only later, when digital masters were reused without the film stage, did those compensating choices start to look like creative ones.

I run into this same failure mode often. We introduce purposeful scaffolding in the workflow that isn’t meant to stand alone, but exists solely to ensure the final output behaves as intended. Months later, someone is pitching how we should “lean into the bold saturated greens,” not realising the topic only exists because we specifically wanted neutral greens in the final output. The scaffold becomes the building.

In our work this kind of nuance isn’t optional, it is the project. If we lose track of which decisions are compensations and which are targets, outcomes drift badly and quietly, and everything built after is optimised for the wrong goal.

I’d genuinely value advice on preventing this. Is there a good name or framework for this pattern? Something concise that distinguishes a process artefact from product intent, and helps teams course-correct early without sounding like a semantics debate?

janeway commented on People still use our old-fashioned Unix login servers   utcc.utoronto.ca/~cks/spa... · Posted by u/sugarpimpdorsey
teekert · 7 months ago
Perhaps it is worth noting that all super computers I know (like the Dutch Snellius and the Finnish Lumi) are Slurm clusters with login nodes.

Bioinformaticians (among others) in (for example) University Medical Centers won’t get much more bang for the buck than on a well managed Slurm cluster (ie with GPU and Fat nodes etc to distinguish between compute loads). You buy the machines, they are utilized close to 100% over their life time.

janeway · 7 months ago
Yes, I spend the majority of my professional life on similar systems, writing code in vim and running massive jobs via Slurm. It's required for processing TBs of data in secured environments with seamless command-line access. I hate web-based connections or VS Code-type systems. Although I'm open to any improvements, this works best for me. It's like a world inside one's head with a text-based interface.
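For anyone unfamiliar, a submission script for this kind of workflow looks roughly like the sketch below. The job name, resources, `module load`, and the R entry point are all illustrative assumptions about a generic site, not details from this comment:

```shell
#!/bin/bash
# Hypothetical minimal Slurm batch script.
#SBATCH --job-name=bigdata
#SBATCH --output=logs/%x_%j.out   # %x = job name, %j = job ID
#SBATCH --error=logs/%x_%j.err
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=24:00:00

module load R            # site-specific; assumes an environment-modules setup
Rscript analyze.R        # illustrative analysis entry point
```

Submitted with `sbatch job.sh` and watched with `squeue -u $USER`, while tmux keeps the editing session alive across disconnects.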

Graphical data exploration and stats with R, Python, etc. are a beautiful challenge at that scale.

janeway commented on The Plot of the Phantom, a text adventure that took 40 years to finish   scottandrew.com/blog/2025... · Posted by u/SeenNotHeard
janeway · 8 months ago
Wow, already stumbled into some good humour. Well done
janeway commented on Touching the back wall of the Apple store   blog.lauramichet.com/touc... · Posted by u/nivethan
janeway · 8 months ago
I’ve just finished reading Walter Isaacson’s biography of Steve Jobs. His vision was extraordinary, recognising that even the design of the stores was integral to the product itself. Every layer of engineering was deeply intertwined with aesthetic design. I’ve always shared that belief, but I’m now fully committed to pursuing it without compromise in my own products. It’s proving even more challenging than I’d imagined to make highly technical things feel simple and intuitive for users.

I was recently thinking the exact same thing as the author here; as a teen I got my iPod and instantly respected the graceful design, shocked at how shoddy my previous cheap mp3 player was in comparison.

I am also convinced that he was fully responsible for keeping Apple on this path and that it is almost impossible to stop others from diluting the craftsmanship towards mediocrity as the group size grows. Big CEOs get labelled as greedy exploiters in a single brushstroke by people who don’t seem to care to read up.

u/janeway

Karma: 410 · Cake day: February 22, 2022