If you're the authoritative entity behind a system (CVEs) that can break production systems (and to be clear, NIST recommends blocking builds/deploys that contain high-severity CVEs), then it's also on you to make sure you're not issuing bogus CVEs.
There is something wrong with the security industry, and we are all paying the price. At my day job, some tool automatically opens security bugs against the 15 or so repos we maintain, and now we are on the hook either to argue that the report is bogus or to fix the vulnerability. Just the PR-and-Jira dance alone is exhausting.
Many libraries are flagged with CVEs because they can be used as part of the trust boundary of the whole software system, and they have corner cases where certain malicious inputs produce outputs that surprise the library's clients. The library developers push back and ask, "Can you point to one real-world vulnerability where the library is actually used in the way the CVE says constitutes a vulnerability?", effectively pushing the responsibility back onto the clients of the library.
But that's exactly how real malware usually works. Individual components that are correct in isolation get combined in ways where the developer thinks one component offers a guarantee it doesn't actually provide, and so some combination of inputs does something unexpected. An enterprising hacker exploits that to reach the unexpected behavior.
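To make that concrete, here's a minimal Python sketch; resolve, read_upload, and the paths are all invented for illustration. Each function is defensible on its own, but the client assumes a confinement guarantee the library never made:

    import os

    # Hypothetical "library" function: joins a base directory and a
    # user-supplied name. Correct in isolation -- it does exactly what
    # os.path.join documents.
    def resolve(base_dir: str, name: str) -> str:
        return os.path.join(base_dir, name)

    # Hypothetical "client" code: assumes resolve() confines the result
    # to base_dir, a guarantee resolve() never offered.
    def read_upload(name: str) -> bytes:
        path = resolve("/var/uploads", name)
        with open(path, "rb") as f:
            return f.read()

    # Attacker-controlled names escape the assumed boundary:
    #   read_upload("/etc/passwd")       # os.path.join drops the base for absolute paths
    #   read_upload("../../etc/passwd")  # traverses out of /var/uploads

Neither piece is "vulnerable" by itself, which is exactly the library authors' argument; the hole only exists in the composition.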
There isn't really a good solution here, but it seems like understanding this tradeoff would point research toward topics like proof-carrying code, fuzzing, trust boundaries, capabilities, simplifying your system, and other whole-system approaches to security rather than nitpicking individual libraries for potential security vulnerabilities.
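As one sketch of what testing the trust boundary could look like (reusing the hypothetical resolve() above, and assuming the third-party hypothesis package is installed): rather than litigating a CVE against the join function, you fuzz the composed behavior and assert the property the client actually relies on.

    import os

    from hypothesis import given, strategies as st

    BASE = "/var/uploads"

    def resolve(base_dir: str, name: str) -> str:
        return os.path.join(base_dir, name)

    # The property the whole system, not the library, must guarantee:
    # every resolved path stays under BASE, whatever the input.
    @given(st.text(min_size=1))
    def test_resolved_path_stays_inside_base(name):
        resolved = os.path.normpath(resolve(BASE, name))
        assert resolved == BASE or resolved.startswith(BASE + os.sep)

    if __name__ == "__main__":
        # Hypothesis quickly shrinks to counterexamples like "/" or "..",
        # surfacing the actual whole-system flaw.
        test_resolved_path_stays_inside_base()

The failing test is the point: the flaw lives at the boundary, so that's where the check belongs.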
Shouldn't it be on the security researcher to prove how this can be exploited if no HTTP endpoints are created?
So much of security scanning is such bullshit.