We were here. Most of us (by age) at least. We have witnessed the entire lifetime of the useful internet. We saw it rise and become useful, and we are watching it sink into the mire of easily generated shit and become useless. We saw web search become a force for good (information availability), and we saw it turn into useless trash. As the cost of producing shit drops, it will only get worse. The internet had a good run. The future, if any, is in closed communities, by invitation only. But that isn't the internet we knew and loved. <hat off in respect for the departed>
> DO NOT spam projects, open a handful of reports and then WAIT. You could run the script and open tons of reports all-at-once, but likely you have faults in your process that will cause mass-frustration at scale. Learn from early mistakes and feedback.
I saw something similar with wasm3, a project explicitly stated to be in maintenance mode because the maintainer is in Ukraine and busy with other things... The poster was doing fuzz testing by randomly generating WASM binaries, and once a crash was found, they just uploaded the binary as-is, along with the error message. I managed to triage some of the reports and provide patches. Completely valid reports, but poorly executed given the context.
The result was huge WASM binaries that, with some work, could be reduced from thousands of instructions to ten. (Even that reduction could probably have been automated, which annoys me even more; see the sketch below.) There were also duplicates, because they posted 5-10 reports simultaneously, many with the same root cause. That is deduplication work I feel they should have done before posting.
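To make the automation point concrete, here is a minimal sketch of greedy, delta-debugging-style test-case reduction: keep deleting chunks of the input while the target still crashes. Everything here is an assumption for illustration: the `wasm3` command line, treating death-by-signal as "crash", and byte-level deletion (which mostly produces invalid WASM that the crash check simply rejects). A dedicated reducer like Binaryen's wasm-reduce works at the instruction level and does far better.

```python
# Hypothetical sketch of automated crash-input reduction (not wasm3's tooling).
import subprocess
import tempfile

def still_crashes(data: bytes) -> bool:
    """Run the target on a candidate input; treat death by signal as a crash."""
    with tempfile.NamedTemporaryFile(suffix=".wasm") as f:
        f.write(data)
        f.flush()
        result = subprocess.run(["wasm3", f.name], capture_output=True)
        # Assumption: a negative return code (killed by SIGSEGV etc.) means a
        # crash, while an ordinary nonzero exit is just "invalid input".
        return result.returncode < 0

def reduce_input(data: bytes) -> bytes:
    """Greedily drop ever-smaller chunks as long as the crash still reproduces."""
    chunk = len(data) // 2
    while chunk >= 1:
        i = 0
        while i < len(data):
            candidate = data[:i] + data[i + chunk:]
            if candidate and still_crashes(candidate):
                data = candidate  # this chunk was irrelevant to the crash
            else:
                i += chunk        # this chunk is needed; keep it and move on
        chunk //= 2
    return data
```

A common approach to the duplicate problem is just as mechanical: bucket crashes by a hash of the top few stack frames and file one report per bucket.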
> DO NOT submit reports that haven't been reviewed BY A HUMAN. This reviewing time should be paid first by you, not open source volunteers.
This seems like the most important point. LLMs are great at generating things; by all means, keep using them. Sometimes they're useful, sometimes not, and they can be inspiring when used well. They are pattern matchers, after all, and bug hunting is partly about finding patterns. But garbage in, garbage out: their output needs a filter afterward.
(If you like using a hammer to fix dents in my car, don't tell me you're done just because you've taken a few swings; tell me you're done when the dents are gone.)
Do they think they're helping, though? The problem with recent leaps in automation is that they're facilitating more scammers and profiteers than people who sincerely want to make the world better.
> Take away any positive incentive to reporting security issues, for example GitHub showing the number of GitHub Security Advisory "credits" a user appears on.
Fully agree, and this is something GitHub can act on. When the claims behind an advisory credit are unverifiable, it shouldn't be usable for resume padding.
Daniel Stenberg, cURL's owner and maintainer, went through something similar just as LLMs were taking off; it's covered in his blog post here:
https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
Comically covered by ThePrimeagen here: https://www.youtube.com/watch?v=e2HzKY5imTE&t=1206s
Previous conversations: https://news.ycombinator.com/item?id=38845878, 121 comments
https://news.ycombinator.com/item?id=38840907, 8 comments
Could an AI bot be trained to recognize low-quality security reports and respond to them automatically?
At the risk of escalating the inevitable arms race, of course.
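As a rough illustration of what such a triage bot might look like, here is a baseline text classifier. Everything in it is assumed for the sketch: the toy `reports`/`labels` corpus stands in for a hand-labeled archive of past reports, and TF-IDF plus logistic regression is a starting point, not a defense.

```python
# Hypothetical sketch: baseline triage classifier for incoming reports.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in training data; a real bot needs a corpus labeled by maintainers.
reports = [
    "Heap overflow in parse_header(); minimized PoC and stack trace attached.",
    "As an AI assistant I analyzed your code and found a critical vulnerability.",
]
labels = ["valid", "slop"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

def triage(report_text: str) -> str:
    """Predict a label; 'slop' could trigger a canned request for a
    human-verified reproduction before a maintainer ever looks at it."""
    return model.predict([report_text])[0]
```

For the GIGO reasons above, the bot's verdicts would still need human spot checks; misclassifying one real vulnerability as slop is far worse than letting ten junk reports through.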