I long for a production-ready runtime monitoring tool that can ACTUALLY be used in a blocking mode. Otherwise we're always too late, and I've been burned more than once when dealing with an incident. Damned hackers always seem to come around on weekends and holidays.
I actually don't care for Wiz's UX.
If you're a manager and just want to get an idea of what your security posture looks like, it's great. They have a million dashboards for you.
But if you're an AppSec Engineer that just wants to see which EC2 instances have which CVEs, it's kind of a pain in the pass and takes way too many clicks.
> I'd love to hear how this project differs from Bearer, which is also written in Go and based on tree-sitter: https://github.com/Bearer/bearer
The primary difference is that we're optimizing for letting users write their own custom rules easily. We do plan to ship built-in checkers [1] so that we cover at least the OWASP Top 10 across all major programming languages. We're also truly open source, under the MIT license.
> Regardless, considering there is a large existing open-source collection of Semgrep rules, is there a way they can be adapted or transpiled to tree-sitter S-expressions so that they may be reused with Globstar?
I'm pretty sure there's a way to make that work. We believe writing checkers (and maintaining a long list of built-in ones) will become a commodity in a world where AI can generate S-expressions, or tree-sitter node queries in Go, for any language with very high accuracy; that's where we have an advantage over tools that use a custom DSL. To that end, we're focused on improving the runtime itself so it can support complex use cases through our YAML and Go interfaces. If the community can help us port rules from other sources into our built-in checkers, we'd love that!
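To make the porting question concrete: a one-line Semgrep pattern like `eval(...)` corresponds to a small tree-sitter query. Here's a rough sketch in Go using the smacker/go-tree-sitter bindings directly; this is generic tree-sitter usage for illustration, not Globstar's actual checker interface:

    package main

    import (
        "context"
        "fmt"

        sitter "github.com/smacker/go-tree-sitter"
        "github.com/smacker/go-tree-sitter/python"
    )

    func main() {
        src := []byte(`result = eval(user_input)`)

        // Parse the snippet with the Python grammar.
        parser := sitter.NewParser()
        parser.SetLanguage(python.GetLanguage())
        tree, err := parser.ParseCtx(context.Background(), nil, src)
        if err != nil {
            panic(err)
        }

        // The S-expression query: match any call whose callee is `eval`.
        // This string is the part a Semgrep-to-tree-sitter transpiler would emit.
        q, err := sitter.NewQuery([]byte(`
            (call
              function: (identifier) @func
              (#eq? @func "eval")) @finding
        `), python.GetLanguage())
        if err != nil {
            panic(err)
        }

        qc := sitter.NewQueryCursor()
        qc.Exec(q, tree.RootNode())
        for {
            m, ok := qc.NextMatch()
            if !ok {
                break
            }
            // Evaluate #eq?-style predicates against the source text.
            m = qc.FilterPredicates(m, src)
            for _, c := range m.Captures {
                fmt.Printf("match at %v: %s\n", c.Node.StartPoint(), c.Node.Content(src))
            }
        }
    }

The point of the sketch: only the query string changes per rule, while the parse/match loop is boilerplate the runtime owns, which is why generating queries (by hand or with AI) is the cheap part.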
One of the main reasons this works well is that we feed the models our incident playbooks and response knowledge bases.
These playbooks are very carefully written and maintained by people. The current generation of models is practically superhuman at following them, reasoning through an incident, and suggesting mitigations.
We tried just indexing a bunch of incident Slack channels, and the results were not great. With explicit documentation, though, it works well.
Kind of proves what we already know: garbage in, garbage out. Other functions (e.g., PM and Design) have tried automating their own workflows too, but it doesn't work nearly as well for them.
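For anyone wondering what "feed the models our playbooks" means mechanically: the simplest version is plain context-stuffing, i.e. the playbook goes in the system prompt and the live incident goes in the user message. A minimal Go sketch of that pattern; the endpoint, model name, and file path are placeholders, not our actual stack:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        // The carefully maintained, human-written playbook is the grounding.
        playbook, err := os.ReadFile("playbooks/db-failover.md") // placeholder path
        if err != nil {
            panic(err)
        }

        payload := map[string]any{
            "model": "some-model", // placeholder
            "messages": []map[string]string{
                {"role": "system", "content": "Follow this incident playbook:\n" + string(playbook)},
                {"role": "user", "content": "Primary DB is refusing writes; replica lag is 40s. Suggest mitigations."},
            },
        }

        body, err := json.Marshal(payload)
        if err != nil {
            panic(err)
        }

        // Placeholder chat-completions-style endpoint.
        resp, err := http.Post("https://llm.example.internal/v1/chat", "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }

The Slack experiment was essentially a change to what goes into that grounding slot: raw channel history instead of a curated document, and the quality dropped accordingly.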
1) To apply for Watson access you needed to show C-level approval, so our CEO put his name and phone number on the application (trying Watson was somewhat his idea). A few months later, an IBM marketing team called HIS CELL and asked for ME. Imagine how it felt to have the CEO walk up to me, deadpan hand me his personal iPhone, and say, "It's for you."
2) They told me they'd help me with the support-data idea, and at every meeting we set up they tried the same pitch: "What if we put Watson on all of your customers' storefronts? We could add a 'powered by Watson' banner on every page, and you give us a cut of GMV." I pivoted them to our plugin framework and told them to build it themselves.
3) To demo the technology, the first step was to buy a $250k server from IBM. To demo it.
Big LOLs all around. Never trust Big Blue.