Also highly subjective, but the syntax hurts my eyes.
So I’m kind of interested in an answer to the question this article fails to answer: why do you guys find Zig so cool?
I view Zig as a better C, though that might be subjective.
https://github.com/kaitai-io/kaitai_struct_compiler/commits/...
It would be premature to review now because there are some missing features and stuff that has to be cleaned up.
But I am interested in finding someone experienced in Zig to help the maintainer with a sanity check to make sure best practices are being followed. (Would be willing to pay for their time.)
If comptime is used, it would be minimal, because code generation is being done anyway and can serve as an explicit alternative to comptime. But we have considered using it in a few places to simplify the code generation.
As far as high-level language constructs go, there were similarish things like SecureString (in .NET) or GuardedString (in Java), although as best as I can tell they're relatively unused mostly because the ergonomics around them make them pretty annoying to use.
The thinking was to minimize the places where a secret could leak. So with an HTTP client, I would think at the lowest layer possible.
I don't think of it as a way to eliminate secrets leaking. More as a way to reduce the surface area of accidental leaks.
- On Linux systems, any user process can inspect any other process of that same user for its environment variables (see the sketch after this list). We can argue about threat model but, especially for a developer's system, there are A LOT of processes running as the same user as the developer.
- IMO, this has become an even more pronounced problem with the popularity of running non-containerized LLM agents in the same user space as the developer's primary OS user. It's a secret exfiltration exploiter's dream.
- Environment variables are usually passed down to other spawned processes instead of being confined to the primary process, which is often the only one that needs them.
- Systemd exposes unit environment variables to all system clients through DBUS and warns against using environment variables for secrets[1]. I believe this means non-root users have access to environment variables set for root-only units/services. I could be wrong, I haven't tested it yet. But if this is the case, I bet that's a HUGE surprise to many system admins.
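To make the first point concrete, here's a minimal sketch (Python, Linux only; the target PID is made up) of how any process running as your user can read another same-user process's environment straight out of procfs:

```python
# Minimal sketch (Linux only): any process running as the same user can read
# another process's environment variables straight out of procfs.
# The PID below is a placeholder for some other process owned by your user.
import os

def read_environ(pid: int) -> dict:
    # /proc/<pid>/environ is NUL-separated KEY=VALUE pairs, readable by the
    # process owner (and root) -- no special privileges needed.
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read()
    pairs = (entry.partition(b"=") for entry in raw.split(b"\0") if entry)
    return {k.decode(): v.decode(errors="replace") for k, _, v in pairs}

if __name__ == "__main__":
    target_pid = 12345  # hypothetical: a flask/terraform/agent process you own
    env = read_environ(target_pid)
    for key, value in env.items():
        if "KEY" in key or "TOKEN" in key or "SECRET" in key:
            print(f"leaked: {key}={value}")
```

No exploit needed; it's the same mechanism `ps e` uses, which is exactly why a rogue same-user process (or agent) gets those secrets for free.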
I think ephemeral file sharing between a secret-managing process (e.g. 1Password's op cli tool) and the process that needs the secrets (flask, terraform, etc.) is likely the only solution that keeps secrets out of persistent files and also out of environment variables. This is how systemd's credentials system works. But it's far from widely supported.
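For the systemd case, the consuming side looks roughly like this (a sketch, assuming a unit configured with something like `LoadCredential=db_password:...`; the credential name is made up). The secret shows up as a file in a per-service directory that systemd points to via $CREDENTIALS_DIRECTORY, never as an environment variable:

```python
# Sketch of reading a systemd credential. Assumes the service unit declares
#   LoadCredential=db_password:/path/to/secret
# systemd then exposes it as a file under $CREDENTIALS_DIRECTORY, visible
# only to this service.
import os
from pathlib import Path

def load_credential(name: str) -> str:
    cred_dir = os.environ.get("CREDENTIALS_DIRECTORY")
    if cred_dir is None:
        raise RuntimeError("not started under systemd with LoadCredential=")
    return (Path(cred_dir) / name).read_text().strip()

db_password = load_credential("db_password")  # "db_password" is a made-up name
```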
Any good solutions for passing secrets around that don't involve environment variables or regular plain text files?
Edit: I think 1Password's op client has a good start in that each new "session" requires my authorization. So I can enable that tool for a CLI session where I need some secrets, but a rogue process that just tries to use the `op` binary isn't going to get to piggyback on that authorization. I'd get a new popup. But this is only step 1. Step 2 is... how to share that secret with the process that needs it, and we are now back to the discussion above.
1: https://www.freedesktop.org/software/systemd/man/latest/syst...
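For "step 2", one sketch that avoids both env vars and on-disk files is to fetch the secret with `op read` and hand it to the child process over stdin. This assumes the 1Password CLI v2; the op:// reference and the child's `--secret-stdin` flag are made up for illustration:

```python
# Rough sketch of "step 2": fetch a secret with the 1Password CLI and pass it
# to the child process over stdin (a pipe), instead of putting it in the
# child's environment or in a file on disk.
import subprocess

def fetch_secret(reference: str) -> str:
    # `op read` triggers the usual 1Password authorization prompt if the
    # session isn't already unlocked.
    out = subprocess.run(
        ["op", "read", reference],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

secret = fetch_secret("op://dev-vault/my-api/credential")  # hypothetical item

# The child reads the secret from stdin; nothing secret shows up in `ps`,
# /proc/<pid>/environ, or the filesystem. The flag is a made-up convention.
subprocess.run(["./my-app", "--secret-stdin"], input=secret, text=True, check=True)
```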
Something like: `my_secret = create_secret(value)`
Then ideally it's an opaque value from that point on.
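As a sketch of what that could look like in Python (the names are mine, not an existing library):

```python
# Sketch of an opaque secret type: the value is only reachable through an
# explicit .reveal() call, and accidental printing/logging/formatting shows a
# redaction marker instead of the secret.
class Secret:
    __slots__ = ("_value",)

    def __init__(self, value: str):
        self._value = value

    def reveal(self) -> str:      # the single, explicit escape hatch
        return self._value

    def __repr__(self) -> str:    # covers print(), f-strings, logging
        return "Secret(****)"

    __str__ = __repr__

def create_secret(value: str) -> Secret:
    return Secret(value)

my_secret = create_secret("hunter2")
print(my_secret)            # Secret(****)
print(my_secret.reveal())   # hunter2 -- only when you really mean it
```

It doesn't stop someone who can read the process's memory, but it does cut down on the accidental print/log/traceback leaks, which matches the "reduce the surface area" framing above.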
It made it seem needlessly complicated, and effectively erased all the positives.
I never claimed otherwise.
> Having to jump out of the code you're reading comes with its own downsides and tends to compromise maintainability where you are increasing the shallowness of your code (higher api surface).
I don't buy this argument. The code you're reading should do one thing according to what it says on the tin (the function name). When the code does something else, you navigate to that other place (easily done in most IDEs), and change contexts. This context change is important, since humans struggle with keeping track of a lot of it at once. When you have to follow a single long function, the context is polluted with previous functionality, comments, variables, and so on, not unlike the scope of the program at that point. If you're changing the code, it becomes easier to shadow a previous variable, or to change something that subsequent code depends on. Decomposing the large function into smaller ones avoids all of this.
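A toy example of the hazard I'm describing (the order-processing names are made up): in the long version, a later edit quietly changes a value that code further down still depends on; in the decomposed version, each helper has its own scope and the dependency is explicit in the arguments.

```python
# One long function: a later edit reuses `total`, and the shipping rule below,
# written against the pre-discount total, silently sees the discounted one.
def process_order_long(order):
    total = sum(item.price * item.qty for item in order.items)
    # ...forty lines later, someone adds a discount feature and reuses `total`:
    total = total * 0.9 if order.has_coupon else total
    # ...this rule was written assuming the pre-discount total:
    shipping = 0 if total > 100 else 10
    return total + shipping

# Decomposed: each piece has its own scope, and which value shipping depends
# on is stated explicitly at the call site.
def subtotal(order):
    return sum(item.price * item.qty for item in order.items)

def discounted(amount, has_coupon):
    return amount * 0.9 if has_coupon else amount

def shipping_fee(pre_discount_subtotal):
    return 0 if pre_discount_subtotal > 100 else 10

def process_order(order):
    sub = subtotal(order)
    return discounted(sub, order.has_coupon) + shipping_fee(sub)
```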
As well as aiding testability, a point from my previous comment that you conveniently ignored.
The criteria for determining what is "short" and "long" are subjective, of course, and should be determined by whatever the team collectively agrees on. But there should be some accepted definition.
> Stanford professor Jon Ousterhout
Eh, I'm not swayed by arguments from authority. Jon's opinion is as valid as mine or yours.
Ultimately, you're going to revisit this code to make the change after some time passes. Is it easy to follow the code and make the change without making mistakes? Is it easy for someone else on the team to do the same?
Sometimes optimizing for "easy to understand and change" means breaking something apart. Sometimes it means combining things. I've read that John Carmack would frequently inline functions because jumping between many small ones made the code too hard to follow.
So, rather than asking whether something is too big or too small, I would ask whether it would be easy to understand/change when coming back to it after a few months.
Put another way: why not optimize for the actual thing you care about rather than an intermediate metric like LOC?