Google results:
Oracle License Audit Survival Guide for CIOs
Handling Oracle’s “Friendly” License Reviews
How to Prepare for an Oracle License Audit
How to Prepare For and Navigate an Oracle License Audit
Top 7 Oracle Audit Triggers (And How to Avoid Them)
Top 5 Best Oracle License Consultant Firms
7 Questions to Ask When Engaging an Oracle Audit Defense
It's hard to imagine how a company could be more extractive than this.
What domain do you work in?
I hope I'm just misusing the tool, but I don't think so: I have a math+ML+AI background, can make LLMs perform in other domains and sing and dance for certain coding tasks, have seen other people struggle in the same ways I do when using LLMs for most coding tasks, and haven't yet seen evidence of anyone doing better. On almost any problem where I'd be faster letting an LLM attempt it rather than just banging out a solution myself, it only comes close to being correct after intensive, lengthy prompting -- much more effort than just typing the right thing in the first place. When it's wrong, the bugs often take more work to spot than writing the correct code would have, since you have to carefully scrutinize each line anyway while simultaneously reverse engineering the rationale for each decision. A few examples:

- The API is structured and named such that you expect pagination to be handled automatically, but that's actually an additional requirement the caller must handle, leading to incomplete reads which look correct in prod ... till they aren't (see the sketch below).

- When moving code from point A to point B, it removes a critical safety check; the git diff is next to useless for that sort of tedium, so you have to hand-review and actually analyze every line instead of trusting the author when they say a certain passage is a copy-paste job.

- It can't automatically pick up on the local style (even when explicitly prompted as to that style's purpose) and requires a hand-curated set of examples to figure out what a given comptime template should actually be doing, violating all sorts of invariants in the generated code -- like running blocking syscalls inside an event loop implementation, using APIs which make doing so _look_ innocuous.
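To make that first failure mode concrete, here's a minimal sketch of the pagination pitfall; every name in it is invented for illustration, not taken from any real API:

    // Hypothetical paginated API: the name reads like "give me the records",
    // but it only ever returns one page.
    struct Page {
        items: Vec<u32>,
        next: Option<usize>, // cursor for the next page, None when done
    }

    fn list_records(store: &[u32], cursor: Option<usize>) -> Page {
        let start = cursor.unwrap_or(0);
        let end = (start + 2).min(store.len());
        Page {
            items: store[start..end].to_vec(),
            next: if end < store.len() { Some(end) } else { None },
        }
    }

    fn main() {
        let store: Vec<u32> = (0..5).collect();

        // Looks complete, silently drops everything after the first page:
        let partial = list_records(&store, None).items;
        assert_eq!(partial, vec![0, 1]);

        // The caller is actually responsible for driving the pagination loop:
        let mut all = Vec::new();
        let mut cursor = None;
        loop {
            let page = list_records(&store, cursor);
            all.extend(page.items);
            match page.next {
                Some(c) => cursor = Some(c),
                None => break,
            }
        }
        assert_eq!(all, store);
    }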
I've shipped a lot of (curated, modified) LLM code to prod, but I haven't yet seen a single model or wrapper around such models capable of generating nearly-correct code "most" of the time.
I don't doubt that's what you've actually observed though, so I'm passionately curious where the disconnect lies.
Here you've got the advantage that you're repeating the same task over and over, so you can tweak your prompt as you go, and you've got the "spec" in the form of the C code there, so I think there's less to go wrong. It still did break things sometimes, but the fuzzing often caught it.
It does require careful prompting. In my first attempt Claude decided that some fields in the middle of an FFI struct weren't necessary. You can imagine the joy of trying to debug how a random pointer was changing to null after calling into a Rust routine that didn't even touch it. It was around then I knew the naive approach wasn't going to work.
The second attempt thus had a whole bunch of "port the whole struct or else" in the prompt: https://github.com/rjpower/zopfli/blob/master/port/RUST_PORT... .
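For anyone who hasn't hit this: the reason a dropped field is so nasty is that an FFI struct has to mirror the C layout exactly. A minimal sketch with invented names (not the actual zopfli struct):

    use std::os::raw::c_int;

    // #[repr(C)] plus a field-for-field match with the C definition is what keeps
    // the layout identical. If a "useless-looking" field in the middle is dropped,
    // every later field shifts, and the C side suddenly reads garbage -- e.g. a
    // pointer that appears to turn into null after a call that never touched it.
    #[repr(C)]
    pub struct Options {
        pub verbose: c_int,
        pub numiterations: c_int, // never read by the Rust port, but must stay for layout
        pub out: *mut u8,
        pub outsize: usize, // size_t on the C side
    }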
In general I've found the agents to be a mixed bag, but overall positive if I use them in the right way. I find it works best for me if I use the agent as a sounding board to write down what I want to do anyway. I then have it write some tests for what should happen, and then I see how far it can go. If it's not doing something useful, I abort and just write things myself.
It does change your development flow a bit, for sure. For instance, it's so much more important to have concrete test cases to force the agent to get it right; otherwise, as you mention, it's easy for it to do something subtly broken.
For instance, I switched to tree-sitter from the clang API to do symbol parsing, and Claude wrote effectively all of it; in this case it was certainly much faster than writing it myself, even if I needed to poke it once or twice. This is sort of a perfect task for it though: I roughly knew what symbols should come out and in what order, so it was easy to validate the LLM was going in the right direction.
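For the curious, symbol extraction along those lines can be quite small. A rough sketch using the tree-sitter and tree-sitter-c crates (the exact language-loading call differs between crate versions, and this only handles the simplest declarator shape):

    use tree_sitter::Parser;

    // Collect top-level C function names in source order, so the output can be
    // eyeballed against the expected symbols and ordering.
    fn c_function_names(source: &str) -> Vec<String> {
        let mut parser = Parser::new();
        // Older bindings expose tree_sitter_c::language(); newer ones a LANGUAGE constant.
        parser.set_language(tree_sitter_c::language()).expect("load C grammar");
        let tree = parser.parse(source, None).expect("parse");
        let root = tree.root_node();
        let mut names = Vec::new();
        let mut cursor = root.walk();
        for node in root.children(&mut cursor) {
            if node.kind() == "function_definition" {
                // function_definition -> function_declarator -> identifier
                // (real declarators can nest further, e.g. pointer return types)
                if let Some(decl) = node.child_by_field_name("declarator") {
                    if let Some(name) = decl.child_by_field_name("declarator") {
                        names.push(name.utf8_text(source.as_bytes()).unwrap().to_string());
                    }
                }
            }
        }
        names
    }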
I've certainly had them go the other way, reporting back that "I removed all of the failing parts of the test, and thus the tests are passing, boss" more times than I'd like. I suspect the constrained environment again helped here, there's less wiggle room for the LLM to misinterpret the situation.
Early on in the blog post, the author mentions that "c2rust can produce a mechanical translation of C code to Rust, though the result is intentionally 'C in Rust syntax'". The flow of the post seems to suggest that LLMs can do better. But later on, they say that their final LLM approach produces Rust code which "is very 'C-like'" because "we use the same unsafe C interface for each symbol we port". Which sounds like they achieved roughly the same result as c2rust, but with a slower and less reliable process.
It's true that, as the author says, "because our end result has end-to-end fuzz tests and tests for every symbol, its now much easier to 'rustify' the code with confidence". But it would have been possible to use c2rust for the actual port, and separately use an LLM to write fuzz tests.
I'm not criticizing the approach. There's clearly a lot of promise in LLM-based code porting. I took a look at the earlier, non-fuzz-based Claude port mentioned in the post, and it reads like idiomatic Rust code. It would be a perfect proof of concept, if only it weren't (according to the author) subtly buggy. Perhaps there's a way to use fuzzing to remove the bugs while keeping the benefits compared to mechanical translation. Unfortunately, the author's specific approach to fuzzing seems to have removed both the bugs and the benefits. Still, it's a good base for future work to build on.
You could certainly try using c2rust to do the initial translation, and it's a reasonable idea, but I didn't find the LLMs really struggled with this part of the task, and there's certainly more flexibility this way. c2rust seemed to choke on some simple functions as well, so I didn't pursue it further.
And of course for external symbols, you're constrained by the C API, so how much leeway you have depends on the project.
You can also imagine having the LLM produce more idiomatic code from the beginning, but that can be hard to square with the incremental symbol-by-symbol translation.
That said, you could have the LLM write equivalence tests, and you'd still have the top-level fuzz tests for validation.
So I wouldn't say it's impossible, just a bit harder to mechanize directly.
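For readers who haven't seen one, a differential fuzz target of the kind being discussed is pretty compact. A sketch in the cargo-fuzz style, with invented names; it assumes the original C symbol is compiled and linked in (e.g. via a build script), and the Rust body here is just a stand-in for the ported function:

    // fuzz/fuzz_targets/checksum_equiv.rs (hypothetical)
    #![no_main]
    use libfuzzer_sys::fuzz_target;

    extern "C" {
        // The original C implementation, linked in from the unported tree.
        fn ref_checksum(data: *const u8, len: usize) -> u32;
    }

    // Stand-in for the ported Rust version under test.
    fn rust_checksum(data: &[u8]) -> u32 {
        data.iter().fold(1u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
    }

    // The fuzzer feeds both implementations the same bytes; any divergence aborts.
    fuzz_target!(|data: &[u8]| {
        let c = unsafe { ref_checksum(data.as_ptr(), data.len()) };
        assert_eq!(rust_checksum(data), c);
    });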
I didn't measure consistently, but I would guess 60-70% of the symbols ported easily, with either one-shot or trivial edits; for another 20%, Gemini managed to get there but ended up using most of its attempts; and the last 10% it just struggled with.
The 20% would be good candidates for multiple generations & certainly consumed more than 20% of the porting time.
I guess I worry it would be hard to separate out the "noise", e.g. the C code touches some memory on each call so now the Rust version has to as well.
“They want that land to be clean corn and soybeans,” Bishop said. Before the restrictions, his father was growing organic corn and soybeans on part of the field and letting Bishop grow vegetables on the rest.
I've seen this mentioned elsewhere, but the idea that you'd force someone else to create a mono-crop desert, not even out of a sense of efficiency, but _just because it looks right_, is just so frustrating.