Xbow has really smart people working on it, so they're well-aware of the usual 30-second critiques that come up in this thread. For example, they take specific steps to eliminate false positives.
The #1 spot in the ranking is both more of a deal and less of a deal than it might appear. It's less of a deal in that HackerOne is an economic numbers game. There are countless programs you can sign up for, with varied difficulty levels and payouts. Most of them pay not a whole lot and don't attract top talent in the industry. Instead, they offer supplemental income to infosec-minded school-age kids in the developing world. So I wouldn't read this as "Xbow is the best bug hunter in the US". That's a bit of a marketing gimmick.
But this is also not a particularly meaningful objective. The problem is that there's a lot of low-hanging bugs that need squashing and it's hard to allocate sufficient resources to that. Top infosec talent doesn't want to do it (and there's not enough of it). Consulting companies can do it, but they inevitably end up stretching themselves too thin, so the coverage ends up being hit-and-miss. There's a huge market for tools that can find easy bugs cheaply and without too many false positives.
I personally don't doubt that LLMs and related techniques are well-tailored for this task, completely independent of whether they can outperform leading experts. But there are skeptics, so I think this is an important real-world result.
Maybe that is because the article is chaotic (like any "AI" article) and does not really address the false positive issue in a well-presented manner? Or even at all?
Below, people are reading the tea leaves to get any clue.
There are two whole paragraphs under a dedicated heading. I don’t think the problem is with the article here. Paragraphs reproduced below:
AI can be remarkably effective at discovering a broad range of vulnerabilities—but the real challenge isn't always detection. It's precision. Automation has long struggled with false positives, and nowhere is this more evident than in vulnerability scanning. Tools that flag dozens of irrelevant issues often create more work than they save. When AI enters the equation, the stakes grow even higher: models can generalize well, but verifying technical edge cases is a different game entirely.
To ensure accuracy, we developed the concept of validators, automated peer reviewers that confirm each vulnerability XBOW uncovers. Sometimes this process leverages a large language model; in other cases, we build custom programmatic checks. For example, to validate Cross-Site Scripting findings, a headless browser visits the target site to verify that the JavaScript payload was truly executed. (don’t miss Brendan Dolan-Gavitt’s BlackHat presentation on AI agents for Offsec)
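To make the validator idea concrete, here is a minimal sketch of how such a check might look, assuming Playwright as the headless browser and an alert()-style payload carrying a unique marker; the URL, marker, and payload are placeholders, not XBOW's actual implementation.
```python
# Hypothetical XSS validator sketch: prove the injected JavaScript actually ran,
# rather than merely being reflected in the HTML response.
from playwright.sync_api import sync_playwright

MARKER = "xss-validator-proof"  # unique token the injected payload should emit

def xss_payload_executed(url: str) -> bool:
    """Open `url` in a headless browser and return True only if an alert()
    carrying our marker fired, i.e. the payload really executed."""
    messages = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # A payload like <script>alert('xss-validator-proof')</script> opens a
        # dialog; capturing it here distinguishes execution from mere reflection.
        page.on("dialog", lambda d: (messages.append(d.message), d.dismiss()))
        page.goto(url, wait_until="networkidle")
        page.wait_for_timeout(2000)  # give DOM-based / delayed payloads a chance
        browser.close()
    return any(MARKER in m for m in messages)

if __name__ == "__main__":
    target = "https://target.example/search?q=<script>alert('xss-validator-proof')</script>"
    print(xss_payload_executed(target))
```
The design point is that seeing the payload echoed back in the response proves nothing; only an observable side effect firing inside a real browser confirms execution, which is what turns a noisy finding into a validated one.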
What is the top talent spending its time on?
Vulnerability researchers? For public projects, there's a strong preference for prestige stuff: ecosystem-wide vulnerabilities, new attack techniques, attacking cool new tech (e.g., self-driving cars).
To pay bills: often working for tier A tech companies on intellectually-stimulating projects, such as novel mitigations, proprietary automation, etc. Or doing lucrative consulting / freelance work. Generally not triaging Nessus results 9-to-5.
Working from 9 to 5 for a guaranteed salary that is not dependent on how many bugs you find before anybody else, and not having to argue your case or negotiate the bounty.
100% agree with OP: to make a living in BBH you can't spend all day hunting on VDP programs that don't pay anything. That means there will be a lot of low-hanging fruit left on those programs.
I don't think LLMs replace humans; they do free up time for nicer tasks.
...which is exactly what technology advancements in our field have done since its inception, vs. the "this changes everything for everybody forever" narrative that makes AI cheerleaders so exhausting.
Honestly I think this is extremely impressive, but it also raises what I call the “junior programmer” problem. Say XBOW gets good enough to hoover up basically all that money and can do it cost-effectively. What then happens to the pipeline of security researchers?
> To bridge that gap, we started dogfooding XBOW in public and private bug bounty programs hosted on HackerOne. We treated it like any external researcher would: no shortcuts, no internal knowledge—just XBOW, running on its own.
Is it dogfooding if you're not doing it to yourself? I'd consider it dogfooding only if they were flooding themselves with AI-generated bug reports, not other people. They're not the ones reviewing them.
Also, honest question: what does "best" mean here? The one that has sent the most reports?
Walt Disney doesn't pay bug bounties. AT&T's bounties go up to $5k, which is decent but still not much. It's possible that the market for bugs is efficient.
I think they mean dogfooding as in putting on the "customer" hat and using the product.
Seems reasonable to call that dogfooding, considering that flooding themselves wouldn't be any more useful than synthetic testing, and there's only so much ground they could cover using it on their own software.
If this were coming out of Microsoft or IBM or whatever then yeah, not really dogfooding.
Humans still:
- Design the system and prompts
- Build and integrate the attack tools
- Guide the decision logic and analysis
This isn’t just semantics: overstating AI capabilities can confuse the public and mislead buyers, especially in high-stakes security contexts.
I say this as someone actively working in this space. I participated in the development of PentestGPT, which helped kickstart this wave of research and investment, and more recently, I’ve been working on Cybersecurity AI (CAI), the leading open-source project for building autonomous agents for security:
- CAI GitHub: https://github.com/aliasrobotics/cai
- Tech report: https://arxiv.org/pdf/2504.06017
I’m all for pushing boundaries, but let’s keep the messaging grounded in reality. The future of AI in security is exciting, and we’re just getting started.
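To illustrate the division of labor those bullets describe, here is a toy sketch (not XBOW's or CAI's code): the system prompt, the tool integrations, and the gating logic are all human-authored, and the model only proposes the next command. The OpenAI client, model name, and one-command-per-turn protocol are assumptions made for the example.
```python
# Toy agent step showing where the humans sit: they write the prompt, wire up
# the tools, and decide what the model is allowed to actually run.
import shlex
import subprocess
from openai import OpenAI  # assumes an API key is configured in the environment

client = OpenAI()

# Humans design the system prompt.
SYSTEM_PROMPT = (
    "You are a penetration-testing agent. Reply with exactly one shell command "
    "per turn and nothing else. Allowed tools: nmap, curl."
)

# Humans build and integrate the attack tools (here, a plain allowlist).
ALLOWED_TOOLS = {"nmap", "curl"}

def run_tool(command: str) -> str:
    """Human-written decision logic: only allowlisted commands ever execute."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_TOOLS:
        return f"refused: {command!r} is not an allowed tool"
    return subprocess.run(argv, capture_output=True, text=True, timeout=120).stdout

def step(history: list[dict]) -> tuple[str, str]:
    """One turn: the model proposes a command, the human-built guardrails dispose."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    command = resp.choices[0].message.content.strip()
    return command, run_tool(command)
```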
Who would it be, gremlins? Those humans weren't at the top of the leaderboard before they had the AI, so clearly it helps.
Actually, those humans (XBOW's) were already top rankers. Just look it up.
What's being criticized here is the hype, which can be misleading and confusing. On this topic, I wrote a small essay, “Cybersecurity AI: The Dangerous Gap Between Automation and Autonomy,” to sort fact from fiction -> https://shorturl.at/1ytz7
Since I am the cofounder of a mostly manual testing company in that space, we do follow the new AI hackbots closely. There is a lot of money being raised (Horizon3 at 100M, Xbow at 87M, Mindfort will probably soon raise).
The future is definitely a combination of humans and bots, like anything else; it won't replace the humans, just like coding bots won't replace devs. In fact this will allow humans to focus on the fun/creative hacking instead of the basic/boring tests.
What I am worried about is the triage/reproduction side: right now it is still mostly manual, and it is a hard problem to automate.
This does not surprise me. In a couple of 'legacy' open source projects I found DoS attacks within 10 minutes, with a working PoC. It crashed the server entirely. I suspect that with more prompting it could have found RCE, but it was an idle shower thought to try.
While these projects are niche and not widely used, there are at least thousands of publicly available servers for each of them.
There is also a BIG hurdle between crashing something (which will generally be detected) and RCE, which requires a lot more work.
I genuinely think this is one of the biggest near-term issues with AI. Even if we get great AI "defence" tooling, there are just so many servers and (IoT or otherwise) devices out there, most of which are not trivial to patch. While a few niche services getting pwned probably isn't a big deal, a million niche services all getting pwned in quick succession is likely to cause huge disruption. There is so much code out there that hasn't been remotely security checked.
Maybe the end solution is some sort of LLM-based "WAF" that ISPs deploy to inspect all traffic.
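Purely as a thought experiment on that idea: such a "WAF" would look something like ordinary reverse-proxy middleware that asks a model about each request before forwarding it. The sketch below is an illustration only, assuming Python's WSGI interface and the OpenAI client as a stand-in classifier (placeholder model name, API key in the environment); latency, cost, fail-open behavior, and prompt injection via request bodies are exactly why this stays speculative.
```python
# Hypothetical "LLM WAF" as WSGI middleware: classify each request, block on YES.
from io import BytesIO
from openai import OpenAI  # assumes an OpenAI-compatible API key is configured

client = OpenAI()

def looks_malicious(method: str, path: str, body: bytes) -> bool:
    """Ask a model whether the request looks like an attack; fail open on errors."""
    prompt = (
        "You are a web application firewall. Answer only YES or NO: "
        f"does this HTTP request look like an attack?\n{method} {path}\n{body[:2000]!r}"
    )
    try:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")
    except Exception:
        return False  # fail open rather than take the site down with the WAF

class LLMWAFMiddleware:
    """Wraps any WSGI app; usage: app = LLMWAFMiddleware(original_wsgi_app)."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        try:
            length = int(environ.get("CONTENT_LENGTH") or 0)
        except ValueError:
            length = 0
        body = environ["wsgi.input"].read(length) if length else b""
        if looks_malicious(environ["REQUEST_METHOD"], environ.get("PATH_INFO", "/"), body):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Blocked\n"]
        environ["wsgi.input"] = BytesIO(body)  # hand the body back to the app
        return self.app(environ, start_response)
```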
«XBOW submitted nearly 1,060 vulnerabilities. All findings were fully automated, though our security team reviewed them pre-submission to comply with HackerOne’s policy on automated tools»
That seems a bit unethical. I thought companies specifically deny usage of automated tools. A bit too late, eh…?
The policy states that using automated tools is not allowed, not just that you can't submit reports produced by automated tools alone. Human review does not really change that.
Succinct description of HN. It’s a damn shame.
Some of that is likely down to company policies; Snapchat's policy, for example, is that nothing is ever marked invalid.
https://hackerone.com/xbow?type=user
Which shows a different picture. This may not invalidate their claim (best US), but a screenshot can be a bit cherry-picked.
https://en.wikipedia.org/wiki/Burning_Chrome