I just started a scan on an open source project I was looking at, but I would love to see you add Elixir to the list of supported languages so that I can use this for my team's codebase!
And yes, we don’t support C or C++ yet. Our focus is on detecting business logic vulnerabilities (auth bypasses, privilege escalations, IDORs) that traditional SAST tools often miss. The exploitable issues typically found in C/C++ (mostly memory corruption) are better found through fuzzing and dynamic testing than through static analysis.
Hint: we are working on this, and it can easily expand coverage in OSS-Fuzz, even for targets that have been fuzzed for a long time with an enormous amount of compute.
AIUI, both Google and Microsoft selected RVA23 as the baseline.
> "Google is delighted to see the ratification of the RVA23 Profile," said Lars Bergstrom, Director of Engineering, Google. "This profile has been the result of a broad industry collaboration, and is now the baseline requirement for the Android RISC-V Application Binary Interface (ABI)."
The normal way you'd build something like this is to store the state somewhere and have an LLM in the loop that decides what to do next based on that state, with a fresh call to the LLM each time and no accumulating context.
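A minimal sketch of that pattern in Python, assuming a hypothetical `call_llm` client and made-up action names; the real thing would need retries and error handling:

    import json

    # State lives outside the model; each decision is a fresh,
    # stateless LLM call on the current state.

    def call_llm(prompt: str) -> str:
        """Stand-in for whatever LLM client you actually use."""
        raise NotImplementedError

    def apply_action(state: dict, decision: dict) -> dict:
        """Your domain logic: execute the chosen action, return new state."""
        raise NotImplementedError

    def run(state: dict) -> dict:
        while not state.get("done"):
            # Fresh call each iteration: the only context the model sees
            # is the serialized state, so nothing accumulates.
            prompt = (
                "Given this state, reply with JSON of the form "
                '{"action": "...", "args": {...}}:\n' + json.dumps(state)
            )
            decision = json.loads(call_llm(prompt))
            state = apply_action(state, decision)
        return state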
If I understand correctly, this is an experiment to see what happens with the long-context approach, which is interesting but not very practical, since it's known that LLMs have a harder time with it. Point being, I wouldn't extrapolate from this to how a properly built commercial system doing something similar would perform.
I think the transit time is likely decades, and the build time is long as well. But in maybe 40-100 years we could have plentiful HD images of 'nearby' exoplanets. If I'm still around when it happens, I will be beyond hyped.
Yeah, let's pretend it works. So far structured output from an LLM is an exercise in programmers' ability to code defensively against responses that may or may not be valid JSON, may not conform to the schema, or may just be null. There's a new cottage industry of modules that automate dealing with this crap.
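A minimal sketch of the defensive-parsing dance, stdlib only; the expected schema here ({"name": str, "score": int}) is invented for illustration:

    import json
    from typing import Optional

    def parse_llm_reply(raw: Optional[str]) -> Optional[dict]:
        """Defensively parse a reply that *should* be JSON matching
        {"name": str, "score": int} but frequently isn't."""
        if raw is None:
            return None
        # Models love wrapping JSON in markdown fences; strip them.
        text = raw.strip()
        if text.startswith("```"):
            text = text.strip("`").removeprefix("json").strip()
        try:
            data = json.loads(text)
        except json.JSONDecodeError:
            return None  # not JSON at all: retry or fall back
        # Valid JSON still isn't schema-conformant; check the shape too.
        if not isinstance(data, dict):
            return None
        if not isinstance(data.get("name"), str):
            return None
        if not isinstance(data.get("score"), int):
            return None
        return data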
https://openai.com/index/introducing-structured-outputs-in-t...
Or is this a scenario where computation is expensive but validation is cheap?
EDIT: thanks, people, for educating me! very insightful :)