I wish they elaborated on how they measure commercial promise. I've seen papers that attempt to link grants to value via a four-step chain: grants fund projects, projects produce papers, papers lead to patents, and patents create jumps in the stock prices of US firms. Of course, this is a reductive way to measure progress, but if you want to use AI you'll need a reductive metric.
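For the curious, a minimal sketch of what that reductive chain could look like as a score. Every lookup table and number here is hypothetical; the real linkage work (acknowledgement parsing, patent citations to papers, event-study abnormal returns) is where all the difficulty lives:

```python
# Hypothetical sketch: score a grant's "commercial promise" by walking the
# grant -> paper -> patent -> stock-jump chain. All values are made up.
PAPER_TO_PATENTS = {"paper-1": ["patent-A"], "paper-2": []}
PATENT_ABNORMAL_RETURN = {"patent-A": 0.031}  # stock jump at the owning US firm

def commercial_promise(papers: list[str]) -> float:
    """Sum the abnormal stock returns reachable from a grant's papers."""
    return sum(
        PATENT_ABNORMAL_RETURN.get(patent, 0.0)
        for paper in papers
        for patent in PAPER_TO_PATENTS.get(paper, [])
    )

print(commercial_promise(["paper-1", "paper-2"]))  # 0.031
```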
> And so far, public funders are being cautious. In 2023, the U.S. National Institutes of Health banned the use of AI tools in the grant-review process, partly out of fears that the confidentiality of research proposals would be jeopardized.
It sort of annoys me that this is framed as "fear" about a single issue. The NIH is increasingly criticized for funding low-risk, low-reward, inefficient science. People are suggesting that they instead fund high-variance work: stuff that goes against the grain or lets the researcher chart a new path. Using AI would prevent this, because it tends to be a conventional-wisdom machine. It's trained on our existing body of knowledge; how could it do otherwise?
Because that’s the author’s actual goal? To take a web page that looks fine to human eyes but is, unintuitively, not accessible to AI. That’s genuinely useful and valuable.
Sure, it’s no different from converting it to markdown for human eyes. But it’s important to be clear about not just the WHAT but also the WHY.
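The mechanical part of that conversion is simple enough; here's a minimal sketch using the html2text library (the URL and option choices are placeholders):

```python
# Fetch a page and convert its HTML to markdown with html2text
# (pip install html2text). The URL is a placeholder.
import urllib.request
import html2text

html = urllib.request.urlopen("https://example.com").read().decode("utf-8")

converter = html2text.HTML2Text()
converter.ignore_images = True  # drop <img> tags a text model can't read
converter.body_width = 0        # don't hard-wrap lines mid-sentence

print(converter.handle(html))
```

Which is sort of the point: the naive conversion is easy, so whatever the naive conversion misses is the interesting part.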
C’mon now. This isn’t controversial or even bad.
Based on the churn I go through fixing security vulnerabilities reported by Snyk and Trivy, I have a feeling that issues tend to be labeled HIGH or CRITICAL whenever they're assigned a CVE, for better or worse.
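If you want to check that hunch against your own scans, here's a small sketch that tallies severity labels in a Trivy JSON report; it assumes Trivy's Results[].Vulnerabilities[].Severity layout and a report file named report.json:

```python
# Tally severity labels in a Trivy JSON report, e.g. one produced by:
#   trivy image --format json -o report.json <image>
import json
from collections import Counter

with open("report.json") as f:
    report = json.load(f)

severities = Counter(
    vuln.get("Severity", "UNKNOWN")
    for result in report.get("Results", [])
    for vuln in result.get("Vulnerabilities") or []  # can be null per result
)

total = sum(severities.values())
for severity, count in severities.most_common():
    print(f"{severity:>10}: {count:4d} ({count / total:.0%})")
```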