Also, S3 (plus the Azure and GCP equivalents) would be a good 'edge' k/v store.
Someone could start with GitHub in a simple project, migrate to S3 to handle more requests, and then later migrate to a full server solution... all while keeping the same client codebase.
We already support S3, Azure and GCS, as well as OCI (any compatible registry), as sources in the open-source server-side evaluator. So if you add a deploy step that publishes from your Git repo to any of these targets, you can use them via the Flipt server process as a source of truth in production. Our server-side and client-side SDKs can source from Flipt in these scenarios.
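As a rough sketch of what pointing the server at object storage looks like (key names from memory, so double-check against the Flipt configuration docs; bucket/prefix/region values are placeholders):

```yaml
# Flipt server config sketch: read declarative flag state from an S3 bucket
# instead of a local database. Values here are illustrative, not defaults.
storage:
  type: object
  object:
    type: s3
    s3:
      bucket: my-flag-bucket
      prefix: flags
      region: us-east-1
      poll_interval: 1m   # how often to re-fetch state from the bucket
```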
But we are keen both to explore skipping the Flipt server middle-man for client SDKs, and to make the publish step to these locations a simple configuration process in our UI, to avoid having to write things like GitHub Actions to achieve the end-to-end result.
However, this can be changed so that not all commits/pushes are treated equally during CD: either by using rules to ignore changes to certain sub-directories/files, or by having reproducible builds and skipping the process-restarting parts when the resulting artefacts between two commits haven't changed (e.g. the digest of a Docker image not changing from one commit to the next).
This is often an optimisation, though, and takes time/effort to put in place.
Use S3. Honestly.
Flipt Open-Source can be run to consume from these locations. You can go as far as configuring a workflow to publish on push, so that you can combine our managed UI with any of these distribution methods through Git.
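A "publish on push" workflow can be as small as syncing the flag files to the bucket from CI. A hypothetical GitHub Actions sketch (bucket name, region, paths, and the secrets/role are all placeholders):

```yaml
# Hypothetical workflow: on every push to main, sync the repo's flag state
# to S3, which Flipt (or a client SDK) then reads as its source of truth.
name: publish-flags
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          role-to-assume: ${{ secrets.FLAGS_PUBLISH_ROLE }}
      - run: aws s3 sync ./flags s3://my-flag-bucket/flags
```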
With any of these backends (including Git), we periodically fetch and cache data in-memory. Evaluations work on an in-memory snapshot, so temporary downtime of the backend doesn't propagate into your applications being unable to get flag evaluations.
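The fetch-and-cache pattern described above can be sketched in a few lines of Go. This is not Flipt's actual code, just an illustration of the idea: a background loop periodically fetches flag state and atomically swaps in a new snapshot, while evaluations always read the last good snapshot, so a failed fetch leaves reads unaffected.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// Snapshot is an immutable view of flag state at one point in time.
type Snapshot struct{ Flags map[string]bool }

// Store serves evaluations from the most recently fetched snapshot.
type Store struct{ snap atomic.Pointer[Snapshot] }

// Enabled reads from the cached snapshot; it never touches the backend.
func (s *Store) Enabled(flag string) bool {
	snap := s.snap.Load()
	if snap == nil {
		return false
	}
	return snap.Flags[flag]
}

// Poll periodically fetches fresh state. On error it keeps serving the
// previous snapshot, so backend downtime doesn't break evaluations.
func (s *Store) Poll(fetch func() (*Snapshot, error), every time.Duration) {
	for {
		if snap, err := fetch(); err == nil {
			s.snap.Store(snap) // atomic swap to the new snapshot
		}
		time.Sleep(every)
	}
}

func main() {
	var s Store
	s.snap.Store(&Snapshot{Flags: map[string]bool{"new-ui": true}})
	fmt.Println(s.Enabled("new-ui"))
}
```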
One thing to note is that running Flipt open source on your infra means running replicas that all source from the same Git repo. They currently poll for updates, so eventual consistency comes into play when you scale. We have plans to help mitigate that with Cloud, though (pushing updates from Cloud to your self-hosted runners).
"Feature Flags that live inside your code" - That's just variables, no?
It means you can experiment and target different cohorts with variants of your app without restarting processes everywhere.
In our experience, a lot of folks came to us and said… but the UI is so important for us to be able to use a feature flag tool.
For example, this sure does look like Dagger <https://github.com/dagger/dagger#what-is-dagger> in its use of golang-as-ci <https://docs.dagger.io/quickstart/daggerize#construct-a-pipe...> and plausibly "run ci locally"
Dagger doesn't ship with a dashboard (that I know of) but I also struggle to think of why one would want an artisanal dashboard when <https://docs.github.com/en/actions/use-cases-and-examples/de...> and <https://docs.gitlab.com/ee/ci/environments/index.html> exist in close proximity to the existing CI infrastructure. I guess if one is trying to be "CI agnostic" but my life experience is that trying to be CI agnostic leads to a lot of NIH which one cannot hire for and cannot take to the next job
While this is CI-adjacent (and indeed I am a big Dagger fan and certainly inspired by it), it predominantly lives in the CD realm instead.
In particular, it has helped us with something that I feel is missing in the GitOps space, which is the connective tissue between environments: automating updates to app versions directly in the repo, and then bringing that all together into a single dashboard where I can see what my repo says the desired state of the world should be. Ultimately, we want to surface the actual state too.
We’ve hedged our bets a bit so far and left room for non-GitOps CD to potentially slot in too. But I'm not sure if we should just double down, be explicit, and go hard on GitOps.
When you say a “why not…”, you’re referring to something like a “Glu compared with X” section, right? That’s a good idea; I will add that!