However, some balance is needed. Orgs may want to do some exploration, since it isn't always obvious where competitive advantage will come from; or, as you say, perhaps a hybrid makes sense, using it only in non-prod.
However, I am wary of vendors' capacity to take advantage once you come to depend on them, even when their intentions are good and all ideals are aligned. The sweet spot, to me, is being able to deliver a limited Kubernetes experience for yourself in low-stakes contexts, where you can depend on it because you know how it works well enough to administer it in a pinch, while using the managed broker everywhere it actually matters, so that in a pinch you're also not the bottleneck for solving a problem.
"I don't want to pay a broker every time I spin up a new experiment, for the duration of the experiment" =/= "I don't want to perform experiments."
That's where I see the disconnect that "Leadership" may fail to understand. You can provide a service at low marginal cost to take some of the load off your people, but that can also have the effect of killing any experiment that falls beneath a certain threshold as "not worth the cost", all because we settled on getting something cheap that should have been free.
Then again, dodging all those diversions might have been a part of the strategy...
Also, no need to assume. I specifically said "use something managed".
Which might be in line with what you said about
> 80% of orgs don't have the scale, core competencies or justifiable need to be managing container clusters themselves.
But it would also have at least some potential to be solved much more cost-effectively, or at least grown past, if they would just spend some energy on deploying Kubernetes internally, even if we can't or won't afford an entire team dedicated only to that (and even if we commit to using only managed services for production anywhere and everywhere).
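To be concrete about how cheap internal experimentation can be: a throwaway local cluster costs nothing but a few minutes. A minimal sketch using kind (assuming Docker and the kind CLI are installed; the cluster name is illustrative):

```shell
# Create a disposable local Kubernetes cluster for an experiment
kind create cluster --name scratch-experiment

# ...run the experiment against it with kubectl as usual...

# Tear it down when done: no broker, nothing billed for the duration
kind delete cluster --name scratch-experiment
```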
In my experience, the way some places reflexively avoid it like a trap to be stayed out of winds up being a bit of a self-fulfilling prophecy: "we're not doing Kubernetes." I empathize with the person you triggered; even if we're now up to two walls of text from one simple comment, I feel triggered too.
https://github.com/microservices-demo/microservices-demo
I have seen others still using it, but nothing official from anyone at Weaveworks.
That is a curious KPI.
How do you place a value on Microsoft building Flux into Azure Arc? I know it isn't worth $0, but do they actually need a contract with anybody (at Flux or Weaveworks) in order to go on doing that? No, they don't.
https://github.com/rajch/weave/tree/reweave
If you still use Weave Net, definitely follow his work and consider learning to build the image yourself, so you can keep it ahead of CVE scanners. (You are running a CVE scanner against your clusters, right?)
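If you aren't, checking an image is a one-liner. A sketch using Trivy, one common open-source scanner (the image tag here is a placeholder; substitute whatever tag you built from the reweave branch):

```shell
# Scan a Weave Net image for known HIGH/CRITICAL CVEs
# "my-registry/weave-kube:custom" is illustrative, not a real published tag
trivy image --severity HIGH,CRITICAL my-registry/weave-kube:custom
```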
In my network, I'm the only one using ArgoCD. Supposedly they're equals, but it has always made me curious to try Flux.
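Kicking the tires is cheap if you're curious. A minimal sketch of getting Flux onto a cluster (assuming the flux CLI is installed and you have a personal GitHub repo; the owner/repo/path values are placeholders):

```shell
# Verify the current cluster meets Flux's prerequisites
flux check --pre

# Bootstrap Flux, committing its own manifests to a Git repo it will then reconcile from
flux bootstrap github \
  --owner=my-github-user \
  --repository=my-fleet-repo \
  --branch=main \
  --path=clusters/my-cluster \
  --personal
```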
Are there any statements about the future of Flux and the other open-source tools: whether the remaining community has enough resources to keep development going, or whether they will scale back their contributions to fixing only critical bugs?
https://github.com/fluxcd/flux2/discussions/
tl;dr: Flux is a graduated CNCF project and not going anywhere
We looked at Weaveworks and its competitor, both as a product and as an investment (mid-six-figure usage). Our big issue was that we had a lot of smaller teams doing different things, not one or two flagship items raking in the majority of our revenue.
These solutions work if you have a bunch of snowflake workloads by design (or by bad design).
That's a really interesting characterization of WGE, and I can't say I disagree much (my personal opinion as an ex-Wyvern/OSS Engineer DX @ weaveworks)