It's nice to present a LastPass method, but really, my suggestion is to stay far away from LastPass, whether as a user or an integrator. They've been breached at least seven times since 2011. The net will be better off with fewer integrations to it.
LastPass provides a CLI that, as far as I've seen, serves all migration needs, so I've never found a reason to touch the service with a ten-foot pole otherwise.
This is interesting. Amazing how something so fundamental is still such a pain, and we all build our own half-baked solutions for it on every new project. We've been thinking about this problem for a while now as well, and just launched another tool (https://varlock.dev) that might be interesting for you to check out. Would be very happy to collaborate or just talk about the problem space.
Our tool has similar goals, though a slightly different approach. Varlock uses decorator-style comments within a .env file (usually a committed .env.schema file) to add metadata used for validation, type generation, docs, etc. It also introduces a new "function call" syntax for values, which can hold declarative instructions about how to fetch them and/or hold encrypted data. We call this new DSL "env-spec" -- similar name :)
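For a rough flavor, a schema-annotated .env might look something like this (an illustrative sketch of the idea, not our exact syntax; the decorator and function names here are made up):

```
# @required @type=url
# @description "Primary Postgres connection string"
DATABASE_URL=fromOnePassword("op://dev-vault/myapp/database_url")

# @sensitive
STRIPE_KEY=encrypted("hQEMA3v2...")
```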
There are certainly some trade-offs, but we felt that meeting people where they already are (.env files) is worthwhile, and will hopefully mean the tool is applicable in more cases. Our system is also explicitly designed to handle all config, not just secrets, as we feel a unified system is best. Our plugin system is still in development, but it will allow you to pull specific items from different backends, or apply a set of values, like what you have done. We also have some deeper integrations with end-user code that provide additional security features, like log redaction and leak prevention.
Realistically, why would your different environments have different ways of consuming secrets from different locations? Yes, maybe you wouldn't use AWS Secrets Manager in your local testing... but giving each developer control and management of their own secrets, in their own locations, is just begging for trouble. How do you handle sharing of common secrets? How do you handle scenarios where some parts are shared (e.g. a shared API key for a dev third-party API) but others aren't (a local instance of the test DB)? How do you make sure that the API key everyone uses in dev is actually rotated from time to time, and that nobody has stored it in a clear-text .env because they once had issues with 1Password's service being down and left it at that? How do you make sure that nobody is using an insecure secrets manager (e.g. LastPass)?
It just adds the risk of creating the impression that there is proper secrets management, when actually you have a mess of everyone doing whatever they feel like with secrets, with no control over who has access to what, or which secret is used where, by whom, and why. Which is a good ~70% of the point of secrets management.
Centralised secrets management or bust, IMO. Ideally with a secrets scanner checking that your code doesn't have a secret left in clear text by mistake/laziness. Vault/OpenBao isn't that complicated to set up, but if it really is too much, your platform probably has something already.
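To give a sense of scale, the basic workflow is just a few commands (a minimal sketch against a throwaway dev-mode server; names and values are placeholders, and dev mode is in-memory, never for production):

```shell
# Start an in-memory dev server (auto-unsealed, KV v2 mounted at secret/,
# root token registered with the local token helper)
vault server -dev

# In another shell: write and read a secret
export VAULT_ADDR='http://127.0.0.1:8200'
vault kv put secret/myapp/db username=app password=hunter2
vault kv get -field=password secret/myapp/db
```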
Disclaimer: I work at HashiCorp, but opinions are my own. I was part of the team implementing Vault at my past job for centralised secrets management, and I 100% believe it's the way things should be done to minimise the risk of mishandling secrets.
I'm not advocating that having secrets in different locations IS something we want, but rather that it IS the sad state of reality.
By having a secrets specification, we can start working towards a future that consolidates these providers and lets teams centralize if needed, by providing a simple means of migrating from a mess into a central system.
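Concretely, the appeal of a spec is that the declaration stays put while the backend changes underneath it. Something along these lines (a hypothetical sketch, not necessarily secretspec's exact schema):

```toml
# secretspec.toml -- declare what the app needs, not where it lives
[project]
name = "my-app"

[profiles.default]
DATABASE_URL = { description = "Postgres connection string", required = true }
STRIPE_KEY   = { description = "Stripe API key", required = false }
```

The same file can then resolve against keyring, 1Password, Vault, or whatever a team eventually centralizes on.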
I'm assuming that the PaaS/IaaS providers already have solutions for secrets. So a new centralized system may help with just dev and DIY bare metal?
But even with the centralized method, as in secretspec, not everyone will accept reading secrets from environment variables, as is also done with the 1Password CLI's run command [1]. Some secrets may need to be injected as files, or as less secure command-line parameters, instead. In the Kubernetes world, one solution is the External Secrets Operator [2]. Secrets may also be pulled from an API on the cloud host; I won't comment on how that works in k8s.
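For example, an ExternalSecret along these lines (the store name and remote path are placeholders) syncs a value from AWS Secrets Manager into a native Kubernetes Secret, which can then be mounted as a file or exposed as an env var:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h                # re-sync periodically, picking up rotations
  secretStoreRef:
    name: aws-secretsmanager         # a SecretStore configured separately
    kind: SecretStore
  target:
    name: app-secrets                # the k8s Secret to create/update
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: prod/myapp/database_url # path in AWS Secrets Manager
```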
To note, the reason for reading from file handles is so that the app can watch for changes and reload on key/token rotation without restarting the server.
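A minimal sketch of that pattern in Rust, polling the file's mtime with just the standard library (a real service might use inotify via the notify crate instead, and note that Kubernetes updates mounted secrets via symlink swaps; the path here is a placeholder):

```rust
use std::{fs, path::Path, thread, time::Duration};

// Watch a mounted secret file and hand the new value to a callback
// whenever it changes, so tokens can rotate without a restart.
fn watch_secret(path: &Path, mut on_change: impl FnMut(String)) {
    let mut last = fs::metadata(path).and_then(|m| m.modified()).ok();
    loop {
        thread::sleep(Duration::from_secs(10));
        let now = fs::metadata(path).and_then(|m| m.modified()).ok();
        if now.is_some() && now != last {
            last = now;
            if let Ok(token) = fs::read_to_string(path) {
                on_change(token.trim().to_string());
            }
        }
    }
}

fn main() {
    watch_secret(Path::new("/var/run/secrets/app/token"), |t| {
        // Swap client credentials here instead of printing.
        println!("token rotated ({} bytes)", t.len());
    });
}
```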
But what could be useful to some developers is a secretspec inject subcommand (a universal version of the op inject command). I use op inject / dotenvy with Rust apps -- it's pretty easy to manage and share credentials that way. Previously I had something similar written in Rust that also handled things like base64 / percent-encoding transforms.
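For anyone unfamiliar, op inject renders a template containing op:// secret references into a concrete file (the vault/item/field names here are placeholders):

```shell
# .env.tpl contains lines like:
#   DATABASE_URL=op://dev-vault/myapp/database_url

# Resolve the references and write a local .env
op inject -i .env.tpl -o .env
```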
If you aren't tied to Rust, you could probably just fork external-secrets and get all the provider code for free.

[1] https://developer.1password.com/docs/cli/reference/commands/...

[2] https://external-secrets.io
It’s not clear to me how the secrets are referenced in storage. Is the expectation that given `--provider onepassword` that one of the entries in 1p would be “BUCKET”?
That also addresses the trust and rotation problems. I suppose for dev secrets those are annoying, but even with secretspec you would have to rotate dev secrets when someone is offboarded.
For at least the "keep secrets out of version control" part, I implemented a Python library (and a Racket library) that has served me well over the years for general configuration [0].
One key issue is that splitting general config from secrets is extremely difficult in practice, because once the variables are accessible to a running code base, most languages and code bases don't actually have a way to differentiate between them internally.
I skipped the hard part of trying to integrate transparently with actual encrypted secret stores. The architecture leaves open the ability to write a new backend, but I have found that for most things, even in production, the security boundaries that matter (for my use cases) mean that putting plaintext secrets in a file on disk adds minuscule risk compared to the additional complexity of adding encryption and screwing something up in the implementation. The reason is that most of those secrets can be rotated quickly, and if they leak from a prod or even a dev system, there will be bigger things to worry about anyway.
The challenge with a standard for something like this is that the devil is always in the details, and I only sort of trust the code I wrote because I wrote it. Even then, I assume I screwed something up, which is part of why I don't share it around (the other reasons are that there are still some missing features and architecture cleanup, and I don't want people depending on something I don't fully trust).
There is a reason I put a bunch of warnings at the top of the readme. Other people shouldn't trust it without extensive review.
Glad to see work in the space trying to solve the problem, because a good solution will need lots of community buy-in to build quality and trust.

[0] https://github.com/tgbugs/orthauth
> Maybe I haven't worked at enough places, but... when has this ever been allowed/encouraged/normalized?

You'd be surprised. In the past I was on a big project at a company with multi-billion-dollar revenue. They got caught with their pants down in an audit once because people would not only commit credentials into internal repositories, the credentials were usually not encrypted at all, among other deeper issues. It sparked a multi-year project to incorporate a secrets management service into the 1000+ repositories and services the company used. Found a loooooot of dead bodies, tons of people got fired during the process. After that experience I imagine this practice is fairly common: people, even smart developers, don't always seem able to comprehend the blast radius of some of these things.
One of my favorite incidents during this clean-up effort: the security team and my team had discovered that a lot of DB credentials were just sitting on developers' local machines, basically nowhere else that made any kind of sense, and they'd hand them around as needed via email or message. So we made tickets everywhere we found instances of this, to migrate them to the secrets management platform. One lead developer with a privileged DB credential wrote a ticket that was basically:
"Migrate secret to secret management platform" and in the info section, wrote the plaintext value of the key, inadvertently giving anyone with Jira read access to a sensitive production database. Even when it was explained to him I could tell he didn't really understand fully why that was silly. Why did he have it in the first place is a natural followup question, but these situations don't happen in a vacuum, there's usually a lot of other dumb stuff happening to even allow such a situation to unfold.
> Found a loooooot of dead bodies, tons of people got fired during the process.
I'm genuinely curious as to what the fireable offenses here would be. If the company had an existing (broken) culture of keeping unencrypted secrets I wouldn't expect people following that culture to be fired for it.
Okay, but that sounds like a very different situation than a small shop where encrypted secrets are committed to one file per-repo, and keys and secrets are rotated regularly.
Wait, why are there so many skeptics in this thread?
I have set up AWS + SOPS in several projects now, and the developers have access neither to the secrets themselves nor to the encryption key (which is stored in AWS). Only once did we ever need to roll back a secret, and that happened at the AWS level, not in the code. It also happened within the key rotation period, so it was easy.
For us it's easier to track changes (not the value, but when it changed), and easier to associate them with incidents.
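For reference, the moving parts are roughly these (a minimal sketch; the KMS key ARN is a placeholder):

```yaml
# .sops.yaml -- route files under secrets/ to an AWS KMS key
creation_rules:
  - path_regex: secrets/.*\.yaml$
    kms: arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000
```

```shell
# Encrypt in place; the committed file is ciphertext, so git history
# shows when a secret changed without revealing its value
sops --encrypt --in-place secrets/prod.yaml
```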
Indeed, the only time I saw this was a decade ago for a temporary POC... not doing this is a good defense-in-depth practice even if the encryption is solid.
edit: it’s not covered in the post, but it is on the launch and doc site: https://secretspec.dev/providers/onepassword/
[1] https://devenv.sh/blog/2025/07/21/announcing-secretspec-decl...
We hope that one day GitHub Actions will integrate secretspec more tightly, moving beyond environment variables as the transport.
That's going to be a long journey, one worth striving for.
It's a standalone tool with YAML configuration, simple to use.
Basically the way it works:
- You create the secret in GCP/AWS/etc Secrets Manager service, and put the secret data there.
- Refer to the secret by its name in Teller.
- Whenever you run `$ teller run ...` it fetches the data from the remote service, and makes it available to your process.
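The config is roughly along these lines (an illustrative sketch; the exact schema varies by Teller version, and the names and path are placeholders):

```yaml
# .teller.yml -- map remote secret paths to env vars (illustrative)
project: myapp
providers:
  aws_secretsmanager:
    env:
      DATABASE_URL:
        path: prod/myapp/database_url
```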