I think it's good advice not to pass secrets through environment variables. Env vars leak a lot: think phpinfo(), Sentry, Java VM dumps, etc. They also leak into subprocesses if you don't pay extra attention. Instead, read secrets from a vault or from the filesystem from _inside_ your process. See also [1] (or [2], which discusses [1]). Dotnet does this pretty well with user secrets [3].
[1] https://blog.diogomonica.com/2017/03/27/why-you-shouldnt-use...
[2] https://security.stackexchange.com/questions/197784/is-it-un...
[3] https://learn.microsoft.com/en-us/aspnet/core/security/app-s...
> Instead, read secrets from a vault or from the filesystem from _inside_ your process.
I’ve never liked making secrets available on the filesystem. Lots of security vulnerabilities have turned up over the years that let an attacker read an arbitrary file. If retrieving secrets is a completely different API from normal file IO (e.g. inject a Unix domain socket into each container, and the software running in that container sends a request to that socket to get secrets), that is much less likely to happen.
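A minimal sketch of what that could look like from the application side (the socket path and line protocol here are invented for illustration):

```python
# Hypothetical client for a secrets broker exposed as a Unix domain
# socket inside the container. Path and protocol are illustrative only.
import socket

def get_secret(name: str, sock_path: str = "/run/secrets-broker.sock") -> bytes:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(name.encode() + b"\n")  # trivial line-based request
        chunks = []
        while chunk := s.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks)

db_password = get_secret("db_password")
```

The point being that an arbitrary-file-read bug no longer reaches the secrets; an attacker would need code execution to speak the socket protocol at all.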
God, this is such a prime example of how we just don't do security well enough industry-wide, and then you end up with weird, stupid stuff like encryption being an enterprise paid feature.
Secrets have to be somewhere. Environment variables are not a good place for them, but if you can't trust your filesystem to be secure, you're already screwed. There's nowhere else to go. The only remaining place is memory, and it's the same story.
If you can't trust memory isolation, you're screwed.
As a counterintuitive example from a former insider: virtually no one is storing secrets for financial software on an HSM. Almost no one does it, period.
Disagree here. Basically, if you use Docker (which, for most of the stuff you mention, you should), environment variables are pretty much how you configure your containers, and a lot of server software packaged up as Docker containers expects to be configured this way.
Building a lot of assumptions into your containers about where and how they are being deployed kind of defeats the point of using containers. You should inject configuration, including secrets, from the outside. The right time to access secret stores is just before you start the container, as part of the deploy process or VM startup in cloud environments. And then you use environment variables to pass the information on to the container.
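A rough sketch of that hand-off, with the vault client and image name as placeholders:

```python
# Deploy-time sketch: resolve the secret outside the container, then
# hand it over as an env var at startup. fetch_from_vault() and the
# image name are placeholders for whatever your environment provides.
import subprocess

def fetch_from_vault(name: str) -> str:
    return "s3cret-from-vault"  # stand-in for your secret store's client

db_password = fetch_from_vault("prod/db_password")

# The container carries no vault credentials and no assumptions about
# where it is deployed; it just reads DB_PASSWORD like any other config.
subprocess.run(
    ["docker", "run", "--rm", "-e", f"DB_PASSWORD={db_password}", "myapp:latest"],
    check=True,
)
```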
Of course that does make some assumptions about the environment where you run your containers not being compromised. But then if that assumption breaks you are in big trouble anyway.
Of course this tool is designed for developer machines and for that it seems useful. But I hope to never find this in a production environment.
So how do you rotate secrets without bouncing app servers..?!
Remember we are mainly talking about dev envs here. If you put the secret key in a file...where do you put the file? In a common location for all the dotenv instances? One per dotenv instance? What if people start putting it as a dotfile in the same project directory?
Secrets are nasty and there are tradeoffs in every direction.
Environment vars propagate from process to process _by design_ and generally last the entire lifetime of the process(es). They are observable from many OS tools unless you've hardened your config, and they will appear in core files, etc. Secrets imply scope and lifetime, so env variables feel very at odds. Conversely, env variables are nearly perfect for config for the same reasons that make them concerning for secrets.
TL;DR: in low-stakes environments, the fact that secrets are a special type of config means you will see them passed via env vars, which are great for most config but poor for secrets. And frankly, if you can stomach the risks, it is not that bad.
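The propagation half of that is easy to demonstrate in a few lines of Python:

```python
# Env vars set here are inherited by every child process by default.
import os
import subprocess
import sys

os.environ["API_TOKEN"] = "hunter2"  # toy value

# The child (and anything *it* spawns) sees the token: prints "hunter2".
subprocess.run([sys.executable, "-c", "import os; print(os.environ['API_TOKEN'])"])

# Opting out takes deliberate effort, e.g. passing an explicit env dict.
subprocess.run(
    [sys.executable, "-c", "import os; print('API_TOKEN' in os.environ)"],
    env={"PATH": os.environ.get("PATH", "")},  # prints "False"
)
```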
Storing secrets on the filesystem means you immediately need to answer where on the filesystem and how to restrict access (and whether your rules are being followed). Is your media encrypted at rest? Do you have SELinux configured? Are you sure the secrets are cleaned up after you no longer need them? Retrieving secrets or elevated perms via sockets / local IPC has very similar problems (though perhaps at that point you're compartmentalizing all the secrets into a centralized, local point).
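Even the minimal version of "restrict access and clean up" takes deliberate effort; a sketch, with an illustrative local path:

```python
# Sketch: create a secret file readable only by the owner, read it,
# and remove it once it is no longer needed. The path is illustrative;
# in production you would likely use a tmpfs location such as /run.
import os

path = "./db_password"

# O_EXCL refuses to clobber an existing file; 0o600 = owner read/write only.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("s3cret")

with open(path) as f:
    secret = f.read()

os.unlink(path)  # the "cleaned after you no longer need them" step
```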
A step beyond this is secrets locked behind cloud key management APIs or systems like SPIFFE/SPIRE. At that point you still have to tackle workload identity, which is also a very nuanced problem.
With secrets, every solution has challenges, and the only clear answer is to have a threat model and design an architecture and appropriate mitigations that let you feel comfortable, while acknowledging the balancing act between cost and the user, developer, and operator experience.
https://mise.jdx.dev/
It handles task running (wipe local test db, run linting scripts, etc.), environment variables and 'virtual environments', as well as replacing stuff like asdf, nvm, pyenv and rbenv.
Still somewhat early days, and tasks are experimental. But it looks very promising, and the stuff I've tried so far (tasks) works really well.
I second mise, it's been a nice replacement for direnv, asdf and makefiles for my use case. Much faster, still compatible with the old configuration files when needed and all in one tool for the new projects. Awesome.
I’ve switched recently from asdf for managing language & tool versions, and the ergonomics are much nicer (e.g. one command vs. having to manually install plugins, more logical commands). It’s also noticeably faster.
Regarding the env vars features, here are a couple of relevant Mise issues around people trying to integrate env var secrets using SOPS, 1Password, etc.:
- https://github.com/jdx/mise/issues/1617
- https://github.com/jdx/mise/issues/1359
Seconded. I changed from pyenv to mise because pyenv was slowing down my shell startup (probably the shims, which mise doesn't use by default), and I'm slowly using mise for more stuff. Right now, I'm using it to auto-turn on virtual environments and add project scripts to the PATH, and it works very well.
I haven't felt the need to use it as a task runner yet, but that's probably because I'm used to having a bunch of shell and Python scripts in a `scripts` folder.
The only reason I use .env is that it's dead simple, and it's obvious to anyone how it works.
If now someone has to read docs to figure out how to configure the app, I’d rather have them read docs for some other safer and more powerful configuration scheme.
I've dealt with problems from python-dotenv and very much prefer using it as a command rather than as a library.
As an example, once I changed a .env file and unit tests started failing. Digging deeper into it, I found lots of code checking for .env to load its configuration, which would break without it. I'd prefer this not to happen, as our tests were executing based on outside, non-version-controlled configuration.
After removing dotenv as a library and using it only as a command, we were able to separate configuration from logic and keep .env files from affecting our unit tests: we simply ran the application with the dotenv command, and the unit tests without it.
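The resulting split can be as simple as a config module that only ever reads the environment (names illustrative):

```python
# config.py -- plain environment lookups, no dotenv import anywhere.
# Tests that never set these vars get the defaults or fail loudly,
# instead of silently picking up a developer's local .env file.
import os

DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
API_TOKEN = os.environ["API_TOKEN"]  # required: raises KeyError if unset
```

The .env file then gets applied only at the process boundary, e.g. something like `dotenv run -- python app.py`, while the test runner never touches it.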
With leaking secrets being such a big concern, it seems wise to require that secrets be encrypted to use dotenvx. That is, it will only work with encrypted secrets. As others have commented, this doesn't eliminate the risk entirely, but I think having a tool that doesn't support unencrypted secrets at all, although a bit less convenient, is a win.
I'm not sure what the "it" you're referring to is, but if something is not a secret, you could either encrypt it anyway or use an alternate mechanism to provide the value.
No, I mean encrypted. I'm not sure what you mean by "obscured", but if you just mean obfuscated in some easily recoverable way without a key, then no. If you read the original post, it describes the mechanism that dotenvx already has in place for encryption.
The utility has a means of encrypting them with public key cryptography so that the plaintext is never in your development directory. GP thinks this should be made mandatory.
Sops [0] also integrates easily with AWS and other existing key management solutions, so that you can use your existing IAM controls on keys.
I mentioned in another comment, but I've been using it for over five years at two jobs and have found it to be great.
[0]: https://github.com/getsops/sops
I too have been using sops for years, and agree -- dotenvx encryption seems very similar to sops.
I'd prefer an integration between dotenvx and sops where dotenvx handles the UX of public env and injection while leveraging sops for secret management and retrieval. Additionally, being able to have multiple keys for different actors is important.
Having a single `.env.keys` file feels risky and error-prone. dotenvx encourages adding your various env files, such as `.env.production`, to VCS, and you're one simple mistake away from committing your keyfile and having a bad day.
If sops is not to be integrated, dotenvx could take some inspiration where the main key is encrypted in the secrets file itself, and you can define multiple age key recipients, each of which can then decrypt the main key.
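For illustration, a sketch of that envelope pattern with RSA-OAEP standing in for age recipients (this is not dotenvx's or sops' actual format; names and choices here are mine):

```python
# Envelope encryption sketch using the `cryptography` package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# One symmetric "main key" encrypts the actual secrets...
main_key = Fernet.generate_key()
ciphertext = Fernet(main_key).encrypt(b"DB_PASSWORD=s3cret")

# ...and a copy of the main key is wrapped for each recipient, so any
# one of their private keys can unwrap it. All copies live in the file.
recipients = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
              for _ in range(3)]
wrapped = [r.public_key().encrypt(main_key, oaep) for r in recipients]

# Any single recipient can recover the main key and then the secrets.
recovered = recipients[1].decrypt(wrapped[1], oaep)
assert Fernet(recovered).decrypt(ciphertext) == b"DB_PASSWORD=s3cret"
```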
I've been using sops in production since at least 2017, and it has excellent compatibility with containerized infra tools like Helm and Terraform (both technically via plugins, but helm-secrets and the carlpetts Terraform plugin have been around for ages and are widely used).
I don't think encryption is a good idea, and the reason is that it forms bad habits. Right now developers have a very strong and unambiguous habit: never put .env files in version control (except maybe for .example.env). With this tool, however, you'll get accustomed to committing .env in _some_ projects, so you'll easily slip and commit it in another project where the vars are not encrypted.
Encrypting secrets and committing them seems very convenient but I'm paranoid about these sorts of things. Can anyone tell me why this would be a bad idea?
One reason I can think of is that normally with secrets I actually don't keep any copies of them. I just set them in whatever secret manager my cloud environment uses and never touch them again unless I need to rotate them. Meaning there is no way to accidentally expose them other than by the secret vault being hacked or my environment being hacked.
With this approach if someone gets access to the encryption key all secrets are exposed.
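For example, with AWS Secrets Manager the entire read path can stay inside the process (boto3; the secret name here is made up):

```python
# Sketch: fetch a secret at runtime so no copy ever lives in the repo,
# in a .env file, or in the process environment.
import boto3

client = boto3.client("secretsmanager")
resp = client.get_secret_value(SecretId="prod/myapp/db_password")
db_password = resp["SecretString"]  # plaintext exists only in memory
```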
The biggest issue with storing secrets in version control alongside the code is that past secrets are no longer valid once they have been rotated, which makes rollbacks risky. Consider:
1. Create secret v1
2. Code v1
3. Deploy
4. Secret v2 (rotation)
5. Code v2
6. Deploy
7. Oops, need to roll back to v1 (from step 2)
8. Outage, because the secrets in step 2 are not the secrets from step 4
I disagree. Using vaults isn’t that bad. And I’d also like developers to never actually know the secrets.
It’s a learning curve, but I think it’s best to just bite the bullet and use a vault rather than trusting developers to know and manage secrets properly.
Typically, developers can’t change production secrets in vaults and need to follow some other protocols.
Encrypted secrets mean you deploy everything alongside the secrets.
The developer experience is great, but the biggest issues I have faced while using Kubeseal were:
1. Developers HAVE the secret in order to encrypt it. This is not ideal, as they can then use these secrets in production or leak them.
2. A change of the secret encryption key means everything needs to be re-encrypted.
3. People don’t understand the concept.
Another benefit is that debugging secret changes is a lot easier. We've had a couple of cases where someone changed a secret in the vault, it caused problems, and no one could tell what had changed between two deploys.
I've used git-crypt and sealed-secrets, and the problem is always backing up the master key. sealed-secrets rotates it every so often, so you need to go find it and copy it to 1Password or whatever you use as your root of trust. (We used a calendar invite for this.)
git-crypt is easy: the master key doesn't rotate, so don't leak it. (Secret encryption key rotation is kind of useless; it's nice that if you leak an old key, newer secrets aren't leaked, but whether that saves you any work depends on your underlying secret rotation policy. I have tended to do rotations in bulk in the past.)
On my last project we did disaster recovery exercises every 6 months, sometimes with the master key intentionally lost, and it wasn't that big of a deal. Restoring your infra-as-code involves creating the secret encryption service manually, though, which is kind of a pain, but not like days of downtime pain or anything. Of course, if the secrets encrypted your database or something like that, then losing the master key equals losing all your data. Hopefully your database backup has some solution for that problem.