While it's a bit unusual for a new account submitting only one site to have two posts on the front page, it seems to be a well-written article. I certainly appreciate fresh takes over someone submitting the same Paul Graham or Kalzumeus page that has been submitted dozens of times, or over users who submit several times a day, every day (tosh, tomte, etc.).
Is it a community-determined "best practice" to prioritise reviewing a contributor's history over reading their contribution and upvoting/flagging according to its content?
HashiCorp is one of the more extreme examples. Terraform Cloud is an okay product at best, and it went from expensive to very expensive. Moreover, they aggressively changed the Terraform license from open source to the BSL.
So after the whole community contributed a lot of providers, they want to profit from that.
You should use OpenTofu and buy IaC tooling from one of the companies sponsoring full-time engineers on that project: Spacelift, env0, Harness, or Scalr.
> You may download or copy the Content (and other items displayed on the Services for download) for personal non-commercial use only
https://web.archive.org/web/20201106225027/https://registry....
I listened to a changelog.com podcast the other day on the subject of OpenTofu (it was a few months old, so they were still calling it OpenTF).
Apparently a lot of the actual images are hosted on the GitHub container registry (ghcr.io), and in many cases, the Terraform registry is just passing the request through to download an image from a repo that may be owned by someone else.
So in effect, they're putting scary licensing text on a lot of content they have no control over.
Yet HashiCorp does not support CDKTF natively in its own commercial product! They put very little effort into CDKTF because it doesn't bring in any extra revenue; the only reason they built it was to offer an alternative to Pulumi.
I am a Terraform user but haven't had the courage to migrate, as IaC is trickier than it looks. How compatible is OpenTofu with Terraform? Is there a page that tracks this or lists the things that might break on migration?
OpenTofu is a superset of Terraform. They maintain complete compatibility with Terraform while adding features on top of it. They have more dedicated developers on the project than HashiCorp does, and they are building out a fairly robust RFC process for the additional features.
I would not recommend using OpenTofu in production until they've had their first official release though. They currently have some alpha releases out that you can experiment with.
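To show how small the switch is in practice, here's a minimal sketch (the provider choice and version constraints here are hypothetical, just for illustration): the exact same configuration initializes under both binaries.

    terraform {
      # Understood by both `terraform` and `tofu`; OpenTofu's first release
      # (1.6) targets compatibility with Terraform 1.5.x configurations and state.
      required_version = ">= 1.5"

      required_providers {
        aws = {
          source  = "hashicorp/aws" # OpenTofu resolves this via its own registry
          version = "~> 5.0"
        }
      }
    }

In principle, running tofu init and tofu plan against your existing state should produce an empty diff; I'd still test against a copy of the state first rather than the live one.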
All these companies saw Snowflake's success and thought, "we want to use some kind of usage-based pricing scheme to capture the value that we provide". Implicit in this thought process is that the price should scale in an unbounded way: if you're providing X value to a company with $10M in revenue, you should be providing roughly 1000X value to a company with $10B in revenue.
And it may be true that you are providing 1000X the value! But that doesn't mean that you are going to be able to get away with charging 1000X the price when the cost of hiring a team to fully replace your product with something built in house is only 10X or 100X.
Snowflake doesn't have this problem because very few companies could recreate Snowflake, period, much less recreate it for less than their Snowflake spend. But all these "hosted open source product" offerings should have realized by now that they need a ceiling on their pricing structure and/or they need to stop open sourcing their code.
"Value-based pricing" is a code word for "reach into your customers' pockets and steal the spare change". It's the end game of monopolists, actual or wanna-be, and it's a good way to burn literally all of your goodwill as quickly as possible.
I completely agree, and I've been trying to shout from the rooftops about how this kind of greed inherently prevents banking value in your "brand", which means you'll only ever be one mistake, incident, or bad news story away from a bunch of angry customers. This puts exactly the same pressure on a company and product as only shipping "barely good enough" functionality (which makes sense, since you're ratcheting up the cost with this exact balance in mind). My take is that this is a really bad trade for a SaaS company/team trying to build for anything except the short term.
That's the biggest challenge - you can't capture all the value that you're providing your clients, otherwise there's 0 reason to use your product over either
1. Building it in-house
2. Using someone who is capturing <100% of value provided.
Then the only "moat" you have is lock-in, but clients tend to not like that, and you're squandering your reputational capital.
I get that companies want to price their products in a way that they don't leave surplus value on the table (i.e. the more value a customer gets out of it, the more expensive the product gets), but the fundamental problem is that it's... so hard to actually "judge" the value that's being created by using the product in many cases.
In such cases, probably for the best that companies err on the side of caution with some formula they're sure is below the actual value but isn't too far away.
Based on recent experience with large deployments at S&P 500 types, IMHO the value Snowflake provides to most companies can easily be replicated in-house with either old-school Hadoop/Spark/Hive, old-school Impala/HBase, or, more recently, Trino.
At the clients where we installed Snowflake, everyone is super worried about any additional copy operation, because everyone is keenly aware of the uncapped cost. This makes getting shit done harder.
With clients that have a decently sized k8s cluster running Trino, or an old-school Spark/Hadoop cluster on fat VMs, you know what you've got and what you're paying for. You estimate how much RAM and how many cores you need for certain workloads, and once they're purchased, your engineers get really good at squeezing as much work as possible out of the given resources. And there's no constant complaining in meetings about the potential extra cost of some additional computation.
Also, if people working in other parts of your org don't run Snowflake themselves, you either pay for their Snowflake usage on your bill or you pay for copying data back out to S3/ADLS/SFTP so that other departments can get at the results of your computations. And Snowflake really doesn't like it when you do this; they even gave exporting data a new name, "unloading", making you feel like you're undoing something which you probably should not undo....
On that note, Snowflake's data export options are significantly underdeveloped compared to Databricks, Cloudera, and the original open versions: Spark, Trino, and Impala.
</rant> :)
I’ve loved using Hashicorp products for many years but I have (despite often being in a position to do so) failed miserably to engage in any kind of commercial relationship with them.
Their pricing has always been opaque. Despite trying my hardest, even multiple phone calls with their sales teams, I never got even a sniff of how much it would cost for enterprise deployments of Hashicorp products at any of my clients.
I’m a shareholder in HCP and lost money on them after the IPO, so maybe I’m a bit sore, but I’m dumping my holdings soon. I’ve got no confidence this company actually knows how to sell into the bulk of the addressable market that they outlined in their S1.
I think the growth they are currently generating will top out pretty soon, once the shock of paying about as much for the management tools as for the actual cloud services they manage hits home with their customers.
I can give an insight! We tried buying into Vault Enterprise and let me tell you, it was incredibly expensive. We're talking Splunk levels of prohibitive cost here, maybe even worse.
Their pricing models are actually insane. For Vault Enterprise, you buy into a fixed, limited number of clients with a pricing ladder that would make Apple blush. You start at 100 “service tokens”, with the next tiers being 1,000, 2,500, and 50,000 tokens. Any “service” that needs to connect to Vault is a client, and the definition of a service is pretty loose: for Kubernetes or Compose stacks, each running pod or container is a service.
It gets worse: once a token has been claimed, it can no longer be used by a different client for the entire billing period (meaning: a year). This means you can run out of valid client tokens even while only actively using half of them, because the other half went to testing or to architecture you no longer run. Oh, and users are clients, too.
All in all, the ballpark was somewhere in the low six figures for their 100-token agreement, if I remember correctly. We had to decline because Vault alone would've eaten a large part of our infrastructure budget.
Also I wonder if same user having multiple different tokens would count as different tokens... Probably, just to inflate the number...
* Gross margin: 80% (hey this one is pretty good)
* Operating margin: -62%
* Net margin: -57%
* Return on Equity: -22%
* Return on Invested Capital: -21%
Their sales and marketing spend is pretty bloated and is wiping out their entire gross profit on its own.
Their stock seems to be priced on the hopes and dreams that they'll grow revenue out of their current problems before they run out of cash. And the headline and the reactions here show exactly how they're trying to do that.
That is not too unusual for software companies at IPO; lots of them pay $5 to acquire $1 in revenue. I think the common justification is that new customers will stick around for years.
Kind of a shameless plug but it's hard not to with such a title.
Make sure to check out Spacelift[0]. It’s a CI/CD platform specialized for infrastructure as code. Terraform/OpenTofu are first-class citizens, and it brings advanced customizability with cross-statefile dependencies, OPA policies (not just for access control, but e.g. for customizing your GitOps flow), and more.
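To give a flavor of the policy side, here's a rough sketch of wiring up an OPA policy with our own Terraform provider (the stack, policy name, and Rego file are made up for illustration; double-check the resource schemas against the provider docs):

    resource "spacelift_policy" "no_friday_applies" {
      name = "no-friday-applies"
      type = "TRIGGER" # other policy types include PLAN, APPROVAL, LOGIN, etc.
      body = file("${path.module}/no-friday-applies.rego") # the Rego rules live in a separate file
    }

    resource "spacelift_policy_attachment" "core" {
      policy_id = spacelift_policy.no_friday_applies.id
      stack_id  = spacelift_stack.core_infra.id # assumes a spacelift_stack defined elsewhere
    }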
The pricing is reasonable, too, and not per-resource. Generally based around concurrency.
Disclaimer: I work at Spacelift, but I do legitimately think it's a great product and recommend it.
[0]: https://spacelift.io
I second this shameless plug, and I'm about to be a two-time Spacelift purchaser. I did the same dance with HashiCorp a few years ago, which led to a product shoot-out that Spacelift won easily. And that was before it also handled Kubernetes and Ansible.
> Is concurrency in this sense mostly just 'how many terraform plan/apply runs you have going at once?'
Yes, though it's not metered by the minute but by "max concurrency" over a month, with some room for bursts.
> Also, is the Enterprise plan significantly more expensive than the cloud plan?
It's quite a bit more expensive. I do recommend contacting our sales team[0] and presenting your use-case, though, to get more details. You can definitely work something out with them.
[0]: https://spacelift.io/contact
Terraform Cloud's costs are an absolute joke. I was actually the decision maker a couple of years ago when we chose our IaC stack, and we decided not to go with Terraform Cloud despite thinking the product is strong; it was entirely due to the business model and cost.
For us (and, I'm guessing, for many VC-funded companies), SaaS-managed IaC is a nice-to-have, but certainly not a must-have, and certainly not in the same "willing to spend money" range as your monitoring or your main cloud hosting costs.
I see these kinds of tools as a tier below monitoring tools like Datadog/Splunk, and those are in turn a tier below your AWS/GCP costs. If your IaC or CI costs are approaching or overtaking your monitoring costs, something weird is afoot; likewise, if your Datadog costs are approaching your AWS bill, something is obviously wrong.
HashiCorp, in my opinion, thinks their tool is more mission-critical and more of a value-add than it actually is, and I think they also don't understand that in the current high-interest-rate environment, companies are FAR more willing to put in engineering time on migrations and money-saving projects. My own company put in hundreds of hours of engineering time to reduce our Datadog bill by roughly 40-50%.
Honest question: if you need to pump data into Datadog from your cloud, why not just stop doing that and use the cloud's own tools? Datadog isn't creating metrics; the metrics already exist at your cloud provider (GCP, AWS, etc.), and all these providers have ways to view them. Why do people pay $20k per month for Datadog? They don't want to use GCP's Metrics Explorer? Same with TF Cloud: you can set up a Jenkins job that does everything TF Cloud does in a few hours. Why pay for TF Cloud? Are people confused? I must be missing something. I've worked at startups that thought all metrics needed to be in Datadog, but they didn't understand that they could just use Metrics Explorer. You can't even use SQL to create graphs in Datadog; it's awful.
IMHO, if you are serious, dump the metrics into BigQuery/Redshift and start doing SQL like an adult.
If you start pulling all the metrics we currently use in Datadog out of something like BigQuery/Redshift, you're going to spend WAY more in engineering hours and infrastructure than we're currently paying Datadog.
I keep hearing about a company that switched its monitoring from Datadog to Databricks, and I can see that, yes, you probably could build a monitoring solution on top of a datastore like that. But I certainly wouldn't want to.
For viewing, CloudWatch is simply nowhere near as convenient: writing whole custom queries takes a lot of time, and setting up alerting for just the right thing sometimes requires you to copy the complex result out as another metric. In Datadog, I can create a time-shifted difference of two metrics reporting at different intervals, smooth it, and make it alert the right teams through Slack and PagerDuty, all in a minute or so (while getting visual feedback on the query the whole time). Doing the same through the plain AWS tooling would take a significant amount of time.
It's a bit like "you can write everything you want in assembler without the overhead of extra layers of X". Yeah, sure, I can. But I appreciate the extra layers of X. I'll take the right cost/convenience balance, like an adult, thank you.
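To make that concrete, here's roughly what such a monitor looks like when managed through Datadog's Terraform provider; the metric, threshold, and notification handles are hypothetical:

    resource "datadog_monitor" "queue_depth_delta" {
      name    = "Queue depth vs. an hour ago"
      type    = "metric alert"
      message = "Queue is growing much faster than an hour ago. @slack-infra-alerts @pagerduty-Infra"

      # Time-shifted difference: current queue depth minus its value an hour earlier.
      query = "avg(last_5m):avg:queue.depth{env:prod} - hour_before(avg:queue.depth{env:prod}) > 1000"

      monitor_thresholds {
        critical = 1000
      }
    }

Swap hour_before() for day_before() or week_before() if you want a different comparison window.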
I would not consider Metrics Explorer a particularly good product. For queries where the query builder isn't good enough and you write PromQL instead, they don't even let you alias the legend; instead you must see the entire query as the label for that line.
A pretty minor nitpick, but indicative of the level of attention they give the product.
We use Atlantis [0] for CI/CD automation of Terraform pull requests to a centralized repository. It's pretty good too, especially for a self-hosted solution. I can't see how Terraform Cloud's costs would be justifiable for us without a custom contract.
[0] https://www.runatlantis.io/
How does this work? I thought the Terraform state file was the single source of truth. If people are applying Terraform "manually", I assume that means on their local device? Are people passing around the state file without a central location for a lock file? Apologies if this seems obvious...
You can see a list [of state backends] on the left-hand side here: https://developer.hashicorp.com/terraform/language/settings/... Obviously, it's not a perfect system, and it doesn't indefinitely scale, but it worked well enough.
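For context: the usual answer without Terraform Cloud is a remote state backend, which stores the state centrally and handles locking. A minimal sketch, with hypothetical bucket and table names:

    terraform {
      backend "s3" {
        bucket         = "example-terraform-state"  # central state storage (hypothetical name)
        key            = "prod/network.tfstate"
        region         = "us-east-1"
        dynamodb_table = "example-terraform-locks"  # DynamoDB table used for state locking
        encrypt        = true
      }
    }

Everyone (or a tool like Atlantis) runs plan/apply against the shared backend, and the lock prevents two applies from clobbering each other; nobody passes the state file around.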
I think Terraform Cloud (and most of Hashi's enterprise offerings) aims at absolute behemoths of deployments with many, many infra teams, where the complexity comes from scale and the companies are not Google or Facebook, so the problem can't be solved through talent. For many such enterprises, it's easier to throw money at the problem.
Except those same companies then run into hidden limits that exist on Terraform Cloud and hadn't been previously mentioned. Honestly, even bigger companies are better off going with Spacelift as it can scale extremely well, has reasonable support, and is much more feature rich.
As a disclaimer: I'm currently writing a book on Terraform, and I've been interacting with a lot of the people in this space. Up until a few years ago I was a huge fan of HashiCorp, but their price changes and lack of support are what made it so I couldn't recommend Terraform Cloud anymore. The license changes they made were the icing on that cake.
You would think that, but having just gone through trying to implement Terraform Cloud at work, its tools for doing anything that approaches a behemoth deployment are abysmal. Permissions management is a nightmare, and any sort of orchestration between different layers is barely there.
I personally like what render.com is doing with IaC through their blueprints format. It's definitely a kid's toy version of IaC, but it's a really nice step up from Heroku.
ZIRP is over and SaaS is entrenched. Prepare for a LOT more rug pulling and price gouging.
The labor and capital cost savings of moving to the cloud and SaaS were there to get you in and get you dependent. Now that you don’t have in-house IT anymore, it’s time to turn the screws. You will soon be paying 2-3X what self-hosted internal IT cost.
Then the pendulum will start to swing the other way. This is one of those endless cycles in software and IT. Get ready for Harvard Business Review articles about how much someone saved by exiting the cloud.
https://en.m.wikipedia.org/wiki/Zero_interest-rate_policy
It never really swings all the way back. It slows down adoption, but we’ll never see large back-in-house IT projects, because in-house offerings (and open source) aren’t up to the task anymore.
It’s entirely possible that in-house offerings aren’t up to the task largely because the last 15 years have seen tens if not hundreds of billions of dollars invested in the idea that SaaS is cheaper and better.
Now that the VC subsidy is over, and we’re into the “gouge all those billions back from the people you’ve hooked on your drug” phase of the adoption cycle, it’s entirely possible some investors and technologists will see an opportunity in helping companies break out of that situation.
So we may actually see some investment and effort in bringing in-house tech up to par.
GPU and data storage costs are driving folks back on-premise, maybe not at the same scale as before yet, but when you already need a colo for those services... might as well bring compute back too.
Until very recently we were exclusively deploying on-prem and had been for over 20 years. The amount of tooling we had to write ourselves to get even a semblance of modern devops practices is insane. Absolutely no one supports the deployment methods and requirements we have, so it was either hacking existing things to do what we want, or writing our own stuff.
My team maintains our own Terraform providers, Ansible playbooks, bare metal management infra, CLI tooling, network automation, ITSM integration, on-prem Kubernetes running on metal in our own DCs, on-prem CI/CD, on-prem Gitlab, evolving IDP, etc.
SaaS, what SaaS?
I disagree; there are great offerings out there. OpenStack has stagnated a bit as things radically shifted to the cloud, but it's still a good, scalable, and supported option. VMware can also get you pretty far. You can also go hybrid and have the elastic scalability of the cloud with the low cost of on-prem. Products like OpenShift can make this seamless to developers, and easy to manage and maintain.
In-house IT isn't up to the task of building systems that are infinitely scalable, but as the price goes up, lots and lots of shops are going to realize they don't need infinitely scalable: they need the scale they're at, and they have no reason to want to grow it. In-house IT is great at that.
The "task" changes from time to time as well. Creating crazy new things, then figuring out they're too complex and then reinventing them simpler is another endless cycle, much older than software development itself.
Ahh, but the big-boy bare metal companies are catching up. Nutanix and the like can simplify on-prem deployments and provide cloud-like experiences, though I admit they're nowhere near as polished yet.
Just use Atlantis, it’s really great. My company switched from Terraform Cloud to maintaining an Atlantis instance and it made things so much smoother, and it’s OSS.
Maybe I'm missing something, but although Atlantis seems great, you have to expose a webhook to the open internet that points to a service with full admin access to your infra. If an attacker finds a security issue with Atlantis and decides to abuse it, you've basically given them admin access. For that exact reason, Atlantis is a prime target for vulnerability exploitation.
You can put it behind something like Cloudflare and make the URL something that can't be guessed, but yeah, it's not ideal. GitHub does publish the IP ranges its webhooks call from (the "hooks" entry at https://api.github.com/meta), so you can also restrict ingress to those, though they change over time.
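If you're managing the infra in Terraform anyway, one sketch of that lockdown uses the GitHub provider's IP ranges data source to feed a security group rule. Attribute names vary across provider versions, so treat this as an assumption to verify (the security group itself is hypothetical):

    # From the integrations/github provider; surfaces the same ranges
    # GitHub publishes at https://api.github.com/meta.
    data "github_ip_ranges" "this" {}

    resource "aws_security_group_rule" "atlantis_from_github_hooks" {
      type              = "ingress"
      from_port         = 443
      to_port           = 443
      protocol          = "tcp"
      cidr_blocks       = data.github_ip_ranges.this.hooks_ipv4 # IPv4 webhook ranges; verify the attribute name
      security_group_id = aws_security_group.atlantis.id        # hypothetical SG defined elsewhere
    }

The ranges do change, so you'd re-apply periodically, and you should keep the webhook secret configured regardless.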
It's LLM-assisted, bloated fluff, which it pretty much admits (with some more LLM-y fluff) right at the start.
https://shavingtheyak.com/2023/10/29/seo-generative-ai-and-t...