Wrappers that aim to simplify another technology always make me nervous, especially with a rapidly evolving project.
Yes, this makes it easier to get started -- but when something goes wrong, now you have to hunt down bugs in two layers of software. And, since you've intentionally isolated yourself from the underlying layer, you have less experience with it!
This is why I like Helm. If you write your charts well, you can write your k8s yaml once and do the things you need to do on a daily basis by adjusting your chart values.
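As a concrete sketch of that workflow (chart layout and value names here are illustrative, and the template is trimmed of required fields like selectors): the manifest is templated once, and the day-to-day changes happen in values.

```yaml
# templates/deployment.yaml -- the k8s yaml, written once (trimmed sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
---
# values.yaml -- the day-to-day knobs
replicaCount: 3
image:
  repository: myapp
  tag: "1.4.2"
```

A routine change then becomes something like `helm upgrade myapp . --set replicaCount=5`, without touching the template at all.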
Your concern is justified. Any abstraction must minimize its leaks. This is why we have started addressing deployment-error troubleshooting with some level of diagnosis, so that the tool can report errors in terms of the higher-level abstraction.
Helm still requires an understanding of all the underlying low-level objects defined in Kubernetes. HyScale hopes to provide higher-level entities to work with, as well as ways to do higher-level ops and deployment introspection. We believe it should be possible to satisfy the needs of a reasonably large share (>80%) of apps.
> HyScale hopes to provide higher-level entities to deal with, as well as providing ways to do higher-level ops & deployment introspection.
That's exactly why the operator pattern exists!
Kubernetes stands out from other container orchestration tools for one reason: it's portable. I can apply my templates to any cluster, empty or not, and my services will run with the same topology. I can make them work with an existing installation if I need to. For that, I need to be able to inspect, modify and understand how everything fits together. An abstraction layer will always complicate things.
As said before, Helm is great because it doesn't hide Kubernetes' insides.
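To illustrate the portability point: a plain Deployment/Service pair with nothing provider-specific in it applies unchanged to any conformant cluster (the image and names below are placeholders).

```yaml
# `kubectl apply -f app.yaml` works the same on any conformant cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }
  ports:
    - port: 80
      targetPort: 80
```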
Kubernetes' complexities are widely acknowledged. Our team experienced this first-hand while migrating a large PaaS application onto Kubernetes about two years ago. This prompted us to seek out a way to simplify and speed up app deployments to Kubernetes, and that pursuit eventually led us to believe that Kubernetes' complexities deserve an abstraction, in much the same way as jQuery over JavaScript, or Spring over Servlets. The HyScale project was born out of this: an app-centric abstraction that is closer to the everyday language of developers and devops.
With HyScale, a simple & short service-descriptor allows developers to easily deploy their apps to K8s without having to write or maintain a voluminous amount of K8s manifest YAMLs. This enables self-service deployments as well as development on K8s with minimal IT/DevOps intervention.
We would love to get feedback, ideas and contributions.
Congrats on this launch, it's great to see serious efforts at Kubernetes simplification and I hope you succeed! K8s bills itself as a "platform for platforms", and such projects are a real test of that idea.
Questions:
1. How do you deal with the mutability of K8s resources? Do you assume people won't change the underlying resources that you generate, or do you keep k8s controllers running to ensure there's no deviation?
2. How understandable is your generated output to humans? Do users have a way to go "backwards" through your abstraction? (Since your tool lives in an ecosystem with many other Kubernetes tools, your users will sometimes end up having to deal with the generated output, since other tools such as Prometheus and log aggregators operate at the K8s level.)
3. Do you interoperate with K8s-level resources well? Do _all_ my services need to be in this abstraction for this to work well? e.g. Can my hyscale yaml reference a non-hyscale service for any reason? Or are they essentially two separate worlds?
Answers to your questions:
1. The expectation is that people deploying through hyscale don’t want to modify K8s resources directly on the cluster behind the scenes. If they do, then it either deviates from the hspec files in git, or they have parallel K8s yamls, which won’t make much sense either. As of now, hyscale deploys and doesn’t keep anything (e.g. a controller) running, but it's a good suggestion to keep the K8s resources in sync with the desired state (hspec) and we’ll certainly put some thought into it.
2. HyScale outputs standard K8s manifest yamls which are as readable as any well-written hand-coded ones.
On the other hand, if there was a way to generate back hspec files from existing K8s yamls, would that be interesting/useful?
3. Suppose you have K8s resources/snippets already written for some of your services: you can refer to them in hspec, add application context to them, and make use of profiles for environment variations.
If the question is more about off-the-shelf services (eg postgres, redis, etc), we have helm chart support coming up. https://github.com/hyscale/hyscale/wiki/Roadmap#2-ots-off-th...
This looks like a really nice and polished app, thanks for releasing it!
Is there an option to only generate the yaml instead of generating and deploying altogether? This would allow me to create a lot of boilerplate while still having the option to fine-tune the configuration if needed. I also didn't see any information on rollbacks.
HyScale also helps you troubleshoot any errors during deployment. So rather than generating with HyScale and deploying with kubectl, you may want to fine-tune the generated yamls by merging in your custom config snippets via:
https://github.com/hyscale/hyscale/issues/284 (in the upcoming release)
Currently, no support for rollbacks. It's a good suggestion for the roadmap, thanks. For now, you’d have to redeploy from a previous git commit.
Our initial goal is to achieve sufficient abstraction levels to satisfy at least 80% of the app deployment use-cases out there. In the meantime, for anything the abstraction doesn't cover, we’re coming up with a way to merge K8s manifest snippets into the generated YAMLs. See this issue and associated PR for more:
https://github.com/hyscale/hyscale/issues/284
Kubernetes was released only six years ago, so I'd imagine there is still a lot of legitimate evolution left in the ecosystem. I have to compliment you for choosing a project like this rather than something that had no chance of working because the ecosystem is completely set, like a new programming language. I believe there will be a distribution challenge for you in getting people to use this software. You can't pay for an advertising campaign. Maybe the most you can do is post on HN, but after that, people will forget about it. The fact that once it's used in a GitHub project others will be forced to use it provides some hope. You say you want to be like jQuery over JavaScript; it may be worth figuring out how jQuery solved its distribution challenge. Just as nobody needs to use jQuery, nobody will need to use your software, and there will be a strong temptation for people to bypass it and just use raw Kubernetes.
The complexity of modern software projects like Kubernetes is amazing, and I'd agree they have challenges in creating a simple interface that everyone will like while still getting the software to work consistently. By the principle of radical skepticism, it's amazing that anything so complex works at all.
Reach out to the CTOs and VPs of Engineering that list Kubernetes as one of their core technologies. They're most apt to choose K8s for their own team.
Ask them if they've had any issues with Kubernetes, specifically misconfiguration or slow turnaround times for configuration changes.
Explain your framework in one or two lines. Pick out one or two _specific_, common problems with K8s and ask them "Are you experiencing X? How about Y?" Talk to them like you already know and feel their pain. Because you do (you wouldn't have created this framework otherwise).
You'll learn a lot. And maybe get adoption, and maybe a consulting gig, out of it. :)
Use the advanced search on LinkedIn to find these people. Make sure your LinkedIn title has something to do with being a Kubernetes expert.
If you're in a big city, find those clients that are local first, as you can visit them in person (that goes a long way).
e.g. Senior DevOps Consultant, Specializing in Kubernetes/HyScale.
Here's the people search you need (use Hunter.io to find their emails):
https://www.linkedin.com/search/results/people/?facetGeoUrn=...
Client outreach can be successful if it's specific and serves a genuine need.
I disagree, I think there'll eventually be huge demand for these kinds of frameworks and wrappers compared to "plain old Kubernetes", much like how a high percentage of developers are hungry for something to use on top of plain JavaScript. I could even see the demand eventually surpassing demand for Kubernetes itself. Kubernetes offers a ton of modern advantages - even for pretty small projects - at the cost of a huge amount of complexity and required learning.
If you can get the advantages plus something simpler than Kubernetes or homebrewed solutions with Docker, then I suspect a gigantic market will form. I think it's just a question of if it'll end up being this particular implementation, or an alternative one, or a full-on standalone competitor to Kubernetes designed for simplicity. We're still in the very early days.
This is something I think OpenShift really adds value over "raw Kubernetes." With OpenShift you can treat it a bit like a flexible Heroku: `oc new-app` can use s2i or your provided Dockerfile and will generate the foundation of what you need. You can then iterate on it if you need something beyond the standard setup.
By the way, OKD 4 (the freely available upstream version of OpenShift) is now generally available: https://www.openshift.com/blog/okd4-is-now-generally-availab...
OKD 4 came about a year after OCP 4.
Can it be trusted not to have such delays in the future? What about security fixes and features -- will they always lag behind as an incentive to get the paid version?
With k8s, even if you maintain it yourself, there is a huge community and you can always get your fixes. How does the OKD community compare? OKD was on k8s 1.11 until a few weeks ago -- 8 releases behind! Imagine the security issues OKD had for such a long period...
(PS: even CentOS seems to be lagging behind badly; CentOS 7.7 took many months after RHEL 7.7.)
As for OCP/OKD tools like s2i, Ansible for replacing Helm, Routes, DeploymentConfigs, etc. -> they never took off; the community did not agree. Those that did not take care to stay away from stuff that is not pure k8s suffer from being disconnected from the rest of the industry and have to invest to redo everything...
Not to mention the impossibility of switching to cloud-provider solutions like EKS, AKS, PKS, etc...
Thanks for your comment. I'll take it a bit at a time:
> okd4 came ~1yr after ocp4 was there. Can it be trusted it won't have such delays in the future?
This is a fair criticism and a great question. I was also really frustrated by this delay, although there was a pretty good technical reason for it. OpenShift was based on Red Hat CoreOS, which until recently didn't have an upstream (Fedora CoreOS now fills this void). With the acquisition of CoreOS, RH engineers saw an opportunity to totally rethink the Node portion of OpenShift. CoreOS allows you to treat the whole OS as immutable, like containers, which makes for some fascinating possibilities. This became a hard requirement for master nodes for a few reasons, one of which is that the Node itself is totally managed by an operator[1]. With RHEL being a subscription product, this was a problem for OKD users (no host OS!). I do think RH deserves criticism for not prioritizing the community highly enough, but I can assure you it wasn't malice. Also, because of this I don't worry about releases falling behind in the future, since Fedora CoreOS is generally available now.
[1]: https://github.com/openshift/machine-config-operator
> as for ocp/okd tools like s2i, ansible for replacing helm, routes, deployment configs, etc -> they never took off, community did not agree.
This is untrue. DeploymentConfigs did take off. In fact, modern Deployments in K8s are the result of the community agreeing and integrating them into upstream K8s. There are minor differences made to balance priorities (primarily CAP-theorem considerations, consistency vs. availability[2]), but the two are remarkably similar. The child resources of each (ReplicaSet and ReplicationController) are also very similar.
Regarding Ansible, that works just fine on K8s, and likewise Helm works fine on OCP. OpenShift is not a custom mangled version of K8s - it is K8s, with some custom resources slapped on top. It's true that OCP Routes don't work on plain K8s, but to say that it never took off because the community did not agree is not fair. The modern Ingress of K8s took a lot of inspiration from Routes. Red Hat is the number 2 contributor to K8s (behind Google) and constantly pushes code upstream whenever possible.
[2]: "DeploymentConfigs prefer consistency, whereas Deployments take availability over consistency. For DeploymentConfigs, if a node running a deployer Pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted." See: https://docs.openshift.com/container-platform/4.1/applicatio....
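For readers unfamiliar with the two resources, here is a trimmed side-by-side sketch (names and images are illustrative) showing how close they are; note the DeploymentConfig's plain-map selector versus the Deployment's matchLabels:

```yaml
# OpenShift DeploymentConfig: a deployer pod drives each rollout (consistency)
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: web
spec:
  replicas: 2
  selector:
    app: web          # plain map selector
  strategy:
    type: Rolling
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.25
---
# Upstream Deployment: the controller reconciles continuously (availability)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: web }
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.25
```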
> Those that did not take care to stay away from stuff that is not pure k8s suffer from being disconnected from the rest of the industry and have to invest to redo everything... not to mention the impossibility of switching to cloud provider solutions like EKS, AKS, PKS, etc
I think "redo everything" is pretty unfair and borders on FUD. I've only known one person who moved from OCP to EKS, and the only thing they had to do was change their Routes to an AWS Ingress (which is similarly non-portable, I might add; it only works on AWS). If you use ImageStreams and such then yes, you'll have to move those, but it's not very hard. Migrating from OpenShift to another K8s distro is really not that bad.
I would also point out that OpenShift/OKD can run nearly anywhere as well, so if you move from AWS to Azure, or bare metal or anything else, you don't necessarily have to abandon OpenShift. There's really no such thing as "just k8s" anyway. If you use any of the custom cloud provider stuff (like the AWS Ingress) then you're not portable without modifications either (and in some cases significant modifications if you have a highly customized ALB for example). If you care about portability, I think OpenShift/OKD is still a decent solution.
IMHO, if you need a tool like this you are normally better off building it yourself in-house. You will inevitably end up fighting all of the leaky abstractions wherever something like this does not support your use cases.
This, a hundred times. Do yourself a favour and use Dhall/Cue/Jsonnet to develop abstractions that fit your workload and environment. There is not much value in a tool like this when you can use a slightly lower-level, more generic tool (a configuration-centric, fully fledged programming language) to accomplish the same goal in a more flexible and powerful fashion, one that leaves you room for evolution and unforeseen structural changes.
The idea of tools mandating what 'environments' are is absurd, as it's pretty much always different for everyone (and that's good!).
I've been enjoying using Tanka [1], a command-line tool from the Grafana team to manage k8s configurations, which you define using jsonnet. Complete flexibility, with minimal boilerplate made possible by using the older (unfortunately unmaintained) ksonnet library [2] or the upcoming jsonnet-libs/k8s(-alpha) (which we're using in production) [3], or roll your own, abstracting to whatever level you find best.
[1] https://tanka.dev/
[2] https://github.com/ksonnet/ksonnet-lib
[3] https://jsonnet-libs.github.io/k8s-alpha/
I strongly disagree. One of the primary* values of Kubernetes is that it commoditizes ops. Most companies are not special. Most applications are not special.
You probably need some APIs, a database or two, domain names and TLS certificates, and maybe a caching layer and object storage. There's zero reason why an abstraction layer can't be flexible enough to handle the overwhelming majority of line-of-business apps out there.
If you're going to be home-rolling your own janky custom deploy solution, you might as well save yourselves the headaches and not bother using Kubernetes either.
* - I might argue the only real value for most non-Google-scale organisations.
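As a sketch of that "boring majority" stack: one line-of-business API behind TLS needs nothing exotic. (Hostnames and names below are placeholders, and this assumes an ingress controller and cert-manager are installed in the cluster.)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt  # assumes cert-manager
spec:
  tls:
    - hosts: [api.example.com]
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api     # a plain ClusterIP Service in front of the app
                port:
                  number: 80
```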
The HyScale specification should look familiar to most developers and devops folks who have been doing development or deployment for some years, as we wanted to create a spec that is intuitively understood and application-centric while growing to support 80% of the use-cases out there. The spec is also meant to support K8s-native elements such as sidecars, ingress, etc. We are also looking at the compose-spec (https://www.compose-spec.io/) to see if there is some convergence in the near/late future.
If so, I'll be jumping right on this - for me, Docker Compose (and Swarm) configs are easy to write, read and maintain. Compared to the swathes of config required for k8s, it's beautiful.
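For comparison, the kind of Compose file being praised here: two services, a dozen lines, no boilerplate (the contents are illustrative).

```yaml
# docker-compose.yml
services:
  web:
    image: myapp:1.0        # placeholder image
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```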
It would be a killer app if you are compatible with Compose.
The HyScale spec schema itself is available at the companion repo here: https://github.com/hyscale/hspec