"Lauer notes that not long before he left NIH, he and his colleagues identified a principal investigator (PI) who had submitted more than 40 distinct applications in a single submission round, most of which appeared to be partially or entirely AI generated. The incident was “stunning” and “disappointing,” says Lauer, who was not involved in creating the new NIH policy but hopes the cap will discourage other researchers from abusing the system."
Always somebody who ruins things for everybody else.
While I personally do not mind people abusing the system being called out, in this case I am not sure that is an issue. Is there any purpose in allowing people to submit more than six?
There really does seem to be a "vibe" in the last few years where (paraphrasing) "Society is going to sh*t so although this action is morally dubious, it's not strictly speaking illegal, so I'll maximise my earnings so at least I am well resourced when everything falls apart."
Indeed, many such cases. Our society is full of institutions that functioned only because "no one in a position to participate would be shameless enough to abuse it." Then that assumption breaks, and it's ruined for everyone.
It is relevant, though awkward to discuss, that a large share of NIH and NSF funding proposals (and indeed funded projects) are led by researchers who didn't grow up in the US. I wonder if it's in fact a majority.
It's hard to know without reading them, and perhaps 40 is too many for an innocent explanation, but...
It's common for the head of the laboratory to submit the applications for each project, so 40 applications may mean 40 subteams with 2-6 minions each (where each minion has a Ph.D. or is a graduate student). Usually when the paper is published, the head of the laboratory is the "last author".
Now it's getting common to use AI for cleanup, like fixing spelling and grammar and perhaps trimming the text down to 5,000 characters. Without reading them, it's hard to know whether this is the case or whether it's nonsensical AI slop.
> It's common for the head of the laboratory to submit the applications for each project, so 40 applications may mean 40 subteams with 2-6 minions each (where each minion has a Ph.D. or is a graduate student). Usually when the paper is published, the head of the laboratory is the "last author".
I believe this is the reason they are limiting it. A lot of grants require the PI to spend at least some percentage of their research time on the project. PIs tend to ignore that requirement. As a result, big names were getting a huge number of grants and early-career researchers were getting none. Requirements like these give other researchers a chance.
Maybe someone could make a fraud case out of it? It really depends on details not given whether or not a decent civil or criminal case exists. Very likely a university-level ethics investigation would be warranted.
For perspective, the CS programs in the NSF already have a two-submission limit per year [1].
Besides reducing the incentive to spam, this rule has had another positive effect: As a researcher without funding, you don't have to spend your whole year writing grants. You can, instead, spend your time on actual research.
With that said, NIH grants tend to be much narrower than CS ones, and I imagine that it takes a lot more grants to keep a lab going...
[1] https://www.nsf.gov/funding/opportunities/computer-informati...
Describing this as a limit on "CS programs" is a common, but erroneous, understanding of the proposal limit.
This specific solicitation — CISE Core Programs — has a 2-proposal-per-year limit. However, that only applies to this solicitation, and it only counts proposals submitted to this solicitation. CISE Core Programs is an important CS funding mechanism, but there are quite a few other funding vehicles within CISE (Robust Intelligence, RETTL, SATC, and many more, including CAREER). Each has its own limits, which generally don't count toward or against the Core Programs limit.
Seems like a band-aid solution for a broken system.
But in general, science will have to deal with that problem. Written text used to be "proof" that the author had put some level of thought into the topic. With AI, that promise is broken.
It's going to be really funny when the NIH eventually sits down the professors, hands them blue exam booklets, and makes them write proposals in freehand.
The question is: is AI breaking the system, or was it always broken and does AI merely show what is broken about it?
I'm not a scientist/researcher myself, but from what I hear from friends who are, the whole "industry" (which is really what it is) is riddled with corruption, politics, broken systems and lack of actual scientific interest.
The same is happening in the Computer Security academic research realm. All of the four top conferences (USENIX Security, ACM CCS, IEEE S&P, and NDSS) have instituted a submission cap--you can't have your name on more than 6 papers being submitted in a given cycle. This has all happened within the last year, likely due to the same GenAI abuse that puts undue burden on PC reviewers.
Having been on the program committee for some of these conferences, I can say that limiting the number of submissions was being discussed long before GenAI. Specifically, there was talk of a few highly prolific security researchers who submitted 15-20 papers to these conferences each cycle, with pretty good quality too.
This seems like a good example of a more general issue; when you have a machine that produces bullshit mixed with gem-like phraseology, at a pace that we cannot possibly match as humans, we may be faced with intellectual denial of service attacks.
There was a natural barrier of investing time into writing the proposal.
This barrier is clearly broken now.
A different barrier could be money that people submitting grant proposals would need to pay. First grant proposal could be $0, second $1, third $10, fourth $100, etc.
Or something more limited in impact: flag people who are behaving outside the norms, i.e. too many or too frequent submissions, for manual review. If any of their proposals are found to contain AI-generated text, assume all of them are AI, and charge only that person a fixed proposal review fee going forward.
The "found to contain AI" part will likely become harder and harder to verify over time, at which point you have to assume all entries could be AI, and flagging everyone or making them pay a review fee would become the standard.
But it's similar to email: behind the scenes, your e-mail account has a 'trust score' based on previous behaviour, domain, etc. It could also come from the other side: if a scientist is attached to a university or other research body, the institution should sign off on a declaration that AI was not used (or was used but is clearly marked as such), with a big fine and reputational damage for the university if its researchers violate it.
That would need a clear definition of what "too many" or "too frequent" mean. Every time this definition is changed, you'd need to retroactively apply the change. Changing the "person" would circumvent this.
My idea doesn't involve any of that -- you want to submit 10 proposals? $111,111,111 please.
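For the curious, here's a quick sketch of that arithmetic in Python (the fee function is just my reading of the $0, $1, $10, $100, ... schedule proposed above, so treat the exact numbers as illustrative):
    # Escalating per-proposal fee: the first is free, then each one costs 10x the previous.
    def submission_fee(n: int) -> int:
        """Fee in dollars for the n-th proposal submitted in a cycle."""
        return 0 if n == 1 else 10 ** (n - 2)

    # Total cost of submitting 10 proposals in one cycle.
    total = sum(submission_fee(n) for n in range(1, 11))
    print(f"${total:,}")  # -> $111,111,111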
I think it's fair that the decision criteria should be qualitative; it's just a bummer that it's happening at a time with a complicated political environment and dwindling research funds, making it harder for researchers.
If I write anything, and put my name as the author, I'm 100% lying if I am just copy pasting text.
This held just as true 10 years ago, if I copied in any text without attribution: a novel, a book, a grant application, a paper, whatever.
Copying large swaths of text from an LLM doesn't make it any better than copying from a person, i.e. plagiarism. And if you took a person's text 10 years ago and modified a few words out of thousands, yes, that would be called plagiarism too.
(No, a spell checker isn't that. It corrects your word to the same word, spelled properly. If you think spell checkers are the same as whole-paragraph insertion, please check your ethics meter; it's broken.)
If the work isn't yours, you need to say so. Otherwise you're being dishonest.
If people get upset at the notion of disclosing, that feels like guilty behaviour. Otherwise, why not disclose?
Now, taking a step back? We're in a period of transition.
I agree that vast imbalances are being created here. This is the true problem.
For example, an application process could state "LLM applications are fine", or not. Instead?
The current stance is "no" without clearly saying so, for obvious reasons (it's passing off work you didn't write as your own ... plagiarism), but any such "no" without a high incidence of detection and punishment is worse than anything.
The cap of 6 applications seems like a cop-out, although it is logical. It should be coupled with an "OK, you win, use LLMs" statement too.
On another note, soon there will be two types of people, and only one of which I will engage in thoughtful email/text communication with.
Those who use LLMs, and people worth talking to.
Of what value is any meaningful conversation if the other person's response is an LLM paste? Might as well just talk to ChatGPT instead.
(note, I'm talking about friendly debate among friends or colleagues. Seeking their opinion or vice versa.)
I disagree. It's like doing collage art with text. Like using samples to make music. We are already in that era of collating text and images and video from AIs. We should learn to embrace it.
"Lauer notes that not long before he left NIH, he and his colleagues identified a principal investigator (PI) who had submitted more than 40 distinct applications in a single submission round, most of which appeared to be partially or entirely AI generated. The incident was “stunning” and “disappointing,” says Lauer, who was not involved in creating the new NIH policy but hopes the cap will discourage other researchers from abusing the system."
Always somebody who ruins things for everybody else.
It is relevant, though awkward to discuss, that a large share of NIH and NSF funding proposals (and indeed funded projects) are led by researchers who didn't grow up in the US. I wonder if it's in fact a majority.
It's common that the head of the laboratory submit the applications for each project, so 40 application may mean 40 subteams with 2-6 minions each (where each minion has a Ph.D. or is a graduate student.) Usually when the paper is published, the head of the laboratory is the "last author".
Now it's getting common to make an AI cleanup, like fixing orthography and grammar and perhaps reduce the text to 5000 characters. Without reading them it's hard to know if this is the case or it's nonsensical AI slop.
I believe this is the reason they are limiting it. A not of grants require the PI to spend at least some percentage of their research time on the project. PIs trend to ignore that requirement. As a result big names were getting a huge number of grants and early career researchers were getting none. Requirements like these give other researchers a chance.
Besides reducing the incentive to spam, this rule has had another positive effect: As a researcher without funding, you don't have to spend your whole year writing grants. You can, instead, spend your time on actual research.
With that said, NIH grants tend to me much more narrow than CS ones, and I imagine that it takes a lot more grants to keep a lab going...
[1] https://www.nsf.gov/funding/opportunities/computer-informati...
This specific solicitation — CISE Core Programs — has a 2-proposal-per-year limit. However, that only applies to this solicitation, and only counts proposals submitted to this solicitation. CISE Core Programs is an important CS funding mechanism, but there are quite a few other funding vehicles within CISE (Robust Intelligence, RETTL, SATC, and many more, including CAREER). Each has its own limits, that generally don't count or count against the Core Programs limit.
But in general science will have to deal with that problem. Written text used to "proof" that the author spend some level of thought into the topic. With AI that promise is broken.
I'm not a scientist/researcher myself, but from what I hear from friends who are, the whole "industry" (which is really what it is) is riddled with corruption, politics, broken systems and lack of actual scientific interest.
Many systems are going to have to come up with better solutions to the problems AI will pose for legacy processes.