Holy mother of God. Mitchell was still on HN yesterday, replying to something about Backblaze's IPO and its business. Today it is his own IPO:
$259 million revenue. 2100+ Customers, 1500+ employees, $10 Billion Valuation.........
I mean, I feel like it wasn't that long ago that Vagrant was "the" tool for the job.
How it all started, the submission on HN [1], quote:
>This project has been the love child of myself and John Bender (nickelcode.com) for the past 6 weeks. We're both daily HN readers and would like to use this as a starting point to show Vagrant to the public. Specifically, I'd like to open up to any questions and feedback, so that the HN community can get to know Vagrant. Your feedback is extremely valued. Thanks!
>A bit of background on this project: I work at a development company (citrusbyte.com) in LA. I see new projects almost every couple months, and I'm often working on multiple projects simultaneously due to work, freelance, and personal projects. Managing the development environments between many projects on a local machine became a huge burden and a coworker once mentioned developing in a virtual machine. I thought this was a great idea, and Vagrant was eventually born from it.
Really amazing achievement in such a short space of time. Congratulations!
Edit: I wonder how many companies that started or partially started on HN went on to IPO. I know Dropbox is one. Do we have a list somewhere?
If that’s $260M pa for 1500 employees then that works out as $40k revenue per employee per quarter.
Compare with AAPL and FB doing [correction: over $600k] per employee per quarter.
Not a value judgment. But I only recently started noticing these numbers and it really puts the big players’ spending power into perspective. Hiring engineers away from FAANG is incredibly expensive.
Edit: thanks for the corrections in the replies. I read figures for FB and AAPL that are reported quarterly but missed that they are for a trailing 12 month period, not for the quarter itself.
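For what it's worth, the HashiCorp side of that arithmetic can be checked directly from the figures quoted above ($259M annual revenue, ~1500 employees), assuming revenue splits evenly across quarters:

```python
# Back-of-envelope check of the revenue-per-employee figure quoted above,
# using the ~$259M annual revenue and ~1500 headcount from this thread.
annual_revenue = 259_000_000  # USD
employees = 1500

per_employee_per_quarter = annual_revenue / employees / 4
print(f"${per_employee_per_quarter:,.0f} per employee per quarter")
# -> $43,167 per employee per quarter
```

So the ~$40k figure above is in the right ballpark, slightly rounded down.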
>Compare with APPL and FB doing $2.5M per employee per quarter.
Your math is off - FB is ~$500K/employee/quarter, AAPL is ~$600K/employee/quarter. That's still, of course, a boatload of money, allowing them to pay $600K+/year to engineers.
Point stands, but I'm not sure how you get that much for FB and AAPL. In 2020 (4 quarters) they made, per employee, ~$1.2M and $0.7M in gross profit, $1.5M and $1.9M in revenue. I didn't cross check the table but did get the same number for FB.
> Hiring engineers away from FAANG is incredibly expensive.
That seems to be changing, as employees at those companies start to re-evaluate the ethics of staying at a company they once thought was "good".
An additional data point (read: anecdote): I applied for a job at Hashicorp in early July of this year. I have yet to receive any reply, including a "thanks but no thanks"
For reference, I also have a friend who applied there in late 2019; he apparently _did_ get a "thanks but no thanks" email about a month later.
Perhaps all of the company is short-staffed, rather than just engineering.
> I mean I felt it wasn't that long ago Vagrant was "the" tool for the job.
Vagrant is my safety hatch, in case Docker goes under and the aspect of it that's "the best centralized, cross-distro, server-oriented Linux package repository around" is, at least temporarily, thrown into disarray. In that case, it's back to picking a distro and contorting it into what I need.
And it's still better than Docker if you're really in a hurry and need to get some pile of undocumented shit running locally ASAP.
Docker at this point is just a wrapper around OCI spec… why would you go back to Vagrant rather than just using any of the other tools that can build OCI images? Vagrant and Docker seem like fundamentally different tools to me.
>Mitchell was still on HN yesterday, as he was replying something about Backblaze IPO and its business. Today it is his IPO
Maybe that is because he stepped down from leadership to become an IC again? We could speculate that he didn't want to go public, or had no desire to do the S-1 work so he stepped down.
> I wonder how many companies that started or partially started on HN went on to IPO. I know Dropbox is one. Do we have a list somewhere?
There are only a few places where you can easily promote your SaaS company; it makes sense that SaaS startups IPOing now were promoted here when they launched.
> Edit: I wonder how many companies that started or partially started on HN went on to IPO. I know Dropbox is one. Do we have a list somewhere?
Let's not make whether a company went public a metric for assessing quality. Going public is mostly a matter of how much a company expects the public to pay for a share of the company, and of how much the bank facilitating the offering expects to make. Currently the markets are sky high, so even not-so-good companies will make crazy amounts of money for the bank and for themselves by going public. Just look at the dot-com bubble and its IPOs.
2100 customers seems low? I've personally worked for three companies in the last three years that had Vault Enterprise licenses, and in the grand scheme of things, these are pretty small companies in a single, pretty small country (Belgium).
Absolute legend, rocket ship human Mitchell Hashimoto.
I still remember the excitement from Vagrant back in the day (which I think started it all). Here's the 1.0 announcement in 2012. [1]
The tools and vision they created afterward are just amazing coming from a small, scrappy startup crew. Which, IMO, is totally wild given that the offerings clearly tend to target bigger enterprises with bigger teams/apps/ops demands.
Then to walk away from $50MM barely older than drinking age. [2]
Seriously congratulations to them and the Hashicorp team. Will likely invest and hold for a long time.
He shared his dev setup at Dev Tool Time (https://srcgr.ph/mitchell-hashimoto) which shows how passionate he is about engineering. Going from CEO -> CTO -> IC solidifies his genuineness.
TBH what I got out of this is that he’s got a simple solid setup that works for him. He clearly doesn’t burn time changing his setup often and that makes sense.
You can buy the stock on the first day of trading. If you want to try and get an allocation of shares at IPO price, various brokerages have different processes where you apply. I use E*Trade mostly, but Robinhood does have the best IPO center of any brokerage I have seen.
The impact Vagrant has had on my business is nearly immeasurable (and for free, no less). We're a small startup, and I haven't had the time (or motivation) to learn what Docker, Kubernetes, containers, etc are. Seems overly complex.
But, virtual servers I can understand. I've been using Vagrant since 2013 and it ... just works. We've built our own custom box to standardize our development environment as well.
If there is one company and person I'd like to mimic, it's Hashicorp and Mitchell. Work to build an amazing product or products, get it ready for a sale or IPO, and then transition into an IC to continue doing what I love: hacking.
You can basically just treat it like a package manager and config-assistant. It's often easier(!) to configure a Docker image than the corresponding package, or set of packages, in your typical distro. In part this is because documenting where all the config files and data live just kinda falls naturally out of creating a half-decent image, and in part because good images often put commonly-modified config options—which may correspond to multiple changes in the config files—in single environment variables, for common use cases.
The main gotchas: first, map any data directories to something outside the container, so data isn't lost if the container is replaced or destroyed. That's trivial to do with command-line options (if you prefer writing bash scripts) or in docker-compose yaml, and very easy to test: add some data, destroy the container, bring it back up, and check that your stuff is still there. Second, make sure your port mapping isn't doing anything dumb like exposing ports it shouldn't on a public interface.
You don't have to use swarm or even actually learn how images work. You can run your application outside of it and just use pre-built official images from PostgreSQL, or whatever, and enjoy a nice, cross-distro, also-sorta-works-on-Mac-and-Windows, consistent set of project daemon dependencies, with an interface that's the same on Red Hat or Gentoo or Arch or wherever, and far more up-to-date than major stable distros (so you could use Debian Stable for simplicity and reliability, for example, but run the latest MySQL or ElasticSearch or whatever on it without mucking with the distro's packages).
I find this massively simplifies server config scripts (Ansible, or bash, or whatever) since I can confine those to fairly generic housekeeping things and put daemon config in much-tidier Docker scripts or yaml.
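For reference, both gotchas tend to look something like this in docker-compose form (the Postgres service, version, and names here are an illustrative sketch, not something from the original comment):

```yaml
# Hypothetical docker-compose.yml illustrating the two gotchas above.
services:
  db:
    image: postgres:16
    environment:
      # the "commonly-modified config in a single env var" pattern
      POSTGRES_PASSWORD: example
    ports:
      # bind to localhost only; a bare "5432:5432" would expose the
      # port on every interface, including public ones
      - "127.0.0.1:5432:5432"
    volumes:
      # data lives in a named volume outside the container, so it
      # survives the destroy-and-recreate test described above
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

The quick persistence test is then `docker compose down && docker compose up -d` and re-checking your data; the named volume only disappears if you explicitly pass `-v` to `down`.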
It's like git: you can get by for a while, but when anything out of the ordinary happens, your understanding needs to go from 2% to 95% very quickly. With languages that are really good at package management (Node and Go, for instance), I fail to see the need for Docker.
> It's often easier(!) to configure a Docker image than the corresponding package, or set of packages, in your typical distro.
My original reply was going to be something along the lines of "Bwahahahaha" followed by a comparison of how many seconds it takes to `pip3 install torch` vs how many hours you'd rip your hair out trying to get that running in Docker, let alone on a GPU, and let alone in a way that you can actually develop on it.
Perhaps it's easier to say, "We're not smart." Like Racket, Docker is a marvelous tool, and I'm sure a lot of smart people use it in some incredible ways.
I'll be 34 in Feb. Do I want to spend a month trying to force myself to use Docker for no apparent reason?
At my first job (gamedev), one of my coding heroes happened to work there. One thing he said really bugged me: "Shaders are a young person's game." By "hero" I mean that he single handedly wrote most of the Planetside 1 client code, as well as having developed many other titles that I grew up playing on MPlayer. (God help you if you know what MPlayer was.)
I tried explaining to him, no no, you see, it's not so bad! You can do it! I believe in you. Once you put in a little effort, you'll understand all the parts, and you'll see there's really not that much to it.
Yeah, uh, I was 19. He was like 40. I get it now.
I've personally deployed multiple services to production whose reliability can be measured in years: https://status.shawwn.com/
Sure, none of those are too impressive. Except the one I can't talk about, ha. But they're all variations on "get the server running, make sure the process is simple, make sure it's fail-safe, and put failsafes in place to notice if it breaks."
To my surprise, they almost never break. Isn't that marvelous? Here's me, someone inching closer and closer over the hill, delivering robust software that lasts years. Hell, you can even see for yourself: https://tags.tagpls.com/uptime
508d 00h 04m 14s
Not bad.
Sure, I'm being unfair, because you'll rightly say that there's a world of difference between this and the situations Docker's designed to solve.
And yet, as I go from company to company, I keep being surprised to find zero people using Docker. Isn't that strange? My wife just got a job at a YC co. I'll ask her whether anyone there uses Docker either. Maybe they do.
Docker's stolen days of my life for no gain. Painful days, because they were days when I was really into hacking, and I could've been busily building a big beaver dam instead of learning infrastructure that none of my colleagues ended up using.
Docker is a time vampire. It's "Nerd Snipe: The Game." You'll want to play with it, and it'll give you just enough happiness to keep you going. But, like a cat, the love is one-way. If Docker were a person, they would totally ditch you on your birthday.
It was much more satisfying to write this than to spend that time staring at yet another damn variation of "how do I forward the port properly?" torrent of blog posts from the legions of developers that Docker has managed to curse, by making the impressive decision to eschew simplicity in favor of being Smart with a capital-S.
But hey, Docker will be around longer than I will, I'm sure. So it'll get the last laugh. In seriousness though, you can get by without it, which is pretty remarkable -- almost as remarkable as it was to try out Vagrant and discover that it's the polar opposite of Docker's philosophy.
The difference is easy to spot: Vagrant just works.
When HashiCorp first got announced I thought "How is he going to make a company out of Vagrant?" I was definitely wrong and on my own projects I'm using lots of their products from packer to nomad. Super cool to see someone/people create something like HashiCorp out of what I originally thought would be a single product.
To me, the more astonishing thing is, "How did HashiCorp excel where Docker failed". I'd pay to read a case-study on it, if there's one.
Edit: Maybe this comment from Mitchell sheds some first-party perspective on why that is:
> ...Terraform is WORKFLOW agnostic, not TECHNOLOGY agnostic. This is a key part of our product philosophy that we make the 1st element of our Tao: https://www.hashicorp.com/tao-of-hashicorp
> I don't think we've ever claimed cloud portability through "write once run anywhere;" that isn't our marketing or sales pitch and if we ever did make that claim please let me know and I'll poke some teams to correct it. Our pitch is always to just learn one workflow/tool and use it everywhere, but you explicitly WILL rewrite cloud-specific modules/code/etc.
I think a big part of it is that Docker failed to expand much beyond their initial offering. They tried, but weren't able to get much traction. HashiCorp probably wouldn't be IPOing with a multi-billion dollar valuation if they continued to focus mainly on Vagrant.
It wasn't until they launched Vault that I saw where they were going commercially. Their other tools were excellent, but I wouldn't be surprised if Vault is their #1 cash cow; it's a massively useful tool in environments that require a support umbrella and love paying for expensive licensing.
Congratulations! The IPO is a confirmation of what many of us in this field already knew: Hashicorp makes amazing tools. I love Consul so much. I'm glad the larger world will appreciate the great work Hashicorp has done as well.
My thoughts, not facts. I know that there are more products than I mention.
I fail to see in what segment Hashicorp will remain relevant over time.
Terraform is the tool I mostly see companies pay for. Over time, cloud vendors will make Terraform obsolete. In fact, it is already a problem to use Terraform, since it cannot move at the same pace as the major cloud vendors.
Vault is an extremely complicated niche tool that most companies should not use.
Consul, the service discovery tool, is mostly not needed in cloud environments. I don't think any cloud vendor today has Consul as a service on their agenda, even though this was announced years ago, which is a warning sign. Personally I really like Consul and the way you can set up ACLs, for instance.
Vagrant, use whatever.
Nomad lost the battle with Kubernetes a long time ago. I never trusted Nomad and I never will, but I can see that if you really want to orchestrate a lot of containers, Nomad may be the right tool.
When selecting an identity platform you mainly have to go along with the corruption in the industry...
I really wish Hashicorp good luck on this journey though.
- Vault is not niche - it’s THE way to manage pki and credentials if you’re half serious about security.
Which is why you're now starting to see managed Vault offerings.
- Consul - EVERYONE should use service discovery, cloud or not. It's indispensable for numerous reasons. If you doubt its relevance, check out the Kubernetes integration work - there's a reason for that focus.
You need service discovery if you operate at any sort of scale, spanning multiple providers and teams (Azure have a managed consul offering btw).
- "Trust" Nomad? The team and I have used it since 0.4 and 0.6, in full production at two different companies. K8s as well, but it lacks the Unix vibe of "do one thing, and do it well", which is something you get with Nomad, Consul & Vault.
Nomad has been rock solid and I’ve so far had no reason to not “trust” it, 100s of thousands of deploys later.
- Terraform spans many providers. It's a good tool, not without its quirks. But I'd rather have one quirky tool than multiple quirky vendor ones. Also, we use TF for basically everything - even the stuff we host in-house through LXC and Postgres, for example, and through home-grown providers as well.
All major clouds have better alternatives to Vault. Vault is mostly for really large companies that want to run things like this by themselves.
There is no need for service discovery in the cloud in general.
I have also used Nomad a lot. Maybe it is because we always needed the cutting-edge features, but the quality was in general not very good. Core features always worked, though. People should use Kubernetes instead in most cases.
There is simply no way Terraform and HCL2 will survive for cloud environments. For other use cases, I do not know.
Vault is the real cash cow here, for sure. It is an industry standard in a space that is becoming ever more important. Honestly, I'm not even sure HC knew how big it would become, but it is huge.
Could you elaborate on the service discovery part? If you're in kubernetes, you have kube-dns which in effect is service discovery right? What does consul provide over this?
A lot of big finance companies use Nomad for all their compute scheduling. Citadel, for instance. They desire the ability to schedule Windows workloads, containers, regular processes, etc. through a common interface. They might not want or need to go all-in on containers.
Vault has a similar target market. Big high-paying institutions. It's not the average market of your tech company, and 100-200 person startups generally won't need it. If you're in the fintech space, maybe you do.
I would suggest that you re-evaluate Nomad. It solves basically every one of the (many) problems with Kubernetes in an elegant and reasonable way, at a scale Kubernetes is by no means capable of.
Further, the idea that cloud vendors will make Terraform irrelevant is laughable. None of them have any incentive to provide a consistent workflow across multiple clouds. The shortcomings of CloudFormation in particular are unlikely ever to be overcome.
I really don't feel K8s has won this battle. I agree people talk more about K8s, but I have seen a trend of people who are disappointed with K8s moving to Nomad instead. I guess K8s is too messy. It's like taking a 2015 enterprise vSphere datacenter environment and containerizing it. Too many layers...
But of course, there's no fully managed HashiCorp offering for all products on GCP, AWS, or Azure...
I do not really understand why people run so many things in containers in the first place. Sure, for sandboxing and sometimes resource utilization, but the large services I worked on have always been on 10+ dedicated high-end servers with 200GB+ memory each. Absolutely zero need for any additional abstractions. You can also design solutions that use a lot of memory, which is harder with containers.
I have actively used Vagrant, Consul, Terraform, and Vault and I really have never understood all the fanboyism for Hashicorp. Their products are OK but easily replaceable and often redundant in modern cloud providers. Wish them luck on their attempt to cash in but I for one do not intend to buy any stock.
I actually tend to agree, and am surprised that the discussion in this post is so breathless. Having used (and continuing to use) multiple Hashicorp products fairly extensively, they tend to have a lot of warts, just fewer warts than the alternatives.
Terraform is a great example:
* It's slow, and new versions often get slower.
* Apart from the most serious ones, bugs often don't get fixed for years, and GitHub issues and pull requests (both for TF itself and the biggest providers) are a swamp of thousands of issues and hundreds of PRs dating back 4+ years. Issue triage is erratic and often fails to fully read or comprehend the reported issue.
* There are some design deficiencies that seem hard to fix. For example: first-class support for providers that are configured based on other resources in the same Terraform state. This usually doesn't work correctly without hacks like `-target`. The "right way" to do this is to have separate TF states for different "layers" of your infra, which is fine and ends up pretty tidy for large infra, but nobody really talks about this (not even the TF docs), so invariably things will not be architected that way at the start and by the time the TF config has grown, refactoring it to split out the layers will be a deeply unpleasant time-suck. (The awful experience that is refactoring large TF configs being another major negative all by itself.) This fundamental issue is the root cause of hundreds of TF GitHub issues.
* The major Hashicorp-maintained (or co-maintained) providers are often massively underresourced, leading to delays before new cloud features are supported, forcing users of TF to maintain those resources outside of Terraform, which is a mess. If a user of, say, the AWS provider tries to rectify the situation by sending a PR, it will just be lost in the sea of ~3000 open issues and ~500 open PRs unless they put in significant time and effort to get attention to it.
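The "separate states for different layers" approach mentioned above, for what it's worth, usually ends up looking something like this (the directory names, backend settings, and output names are illustrative assumptions, not from the comment):

```hcl
# Hypothetical layout: each layer is its own state, applied in order.
#   layers/10-network/   -> VPC, subnets
#   layers/20-cluster/   -> the cluster the kubernetes provider needs
#   layers/30-apps/      -> resources *inside* that cluster
#
# layers/30-apps/main.tf -- the provider is configured from a previous
# layer's outputs instead of from a resource in the same state, which is
# what avoids the -target hacks described above.
data "terraform_remote_state" "cluster" {
  backend = "s3" # backend and bucket names are examples
  config = {
    bucket = "example-tf-state"
    key    = "layers/20-cluster.tfstate"
    region = "us-east-1"
  }
}

provider "kubernetes" {
  host  = data.terraform_remote_state.cluster.outputs.endpoint
  token = data.terraform_remote_state.cluster.outputs.token
}
```

The cost, as the comment notes, is that nothing forces you into this shape early, and retrofitting it onto a grown monolithic config is painful.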
Despite all of this, we still use Terraform heavily because it's less crap than the alternatives, but I can hardly muster the love for it that is expressed elsewhere in these comments.
Judging by https://github.com/hashicorp/vagrant/issues/7263, I would say that they are the "GNOME" of devops tooling. The quickstart is great, and you feel empowered when your use cases are supported. Beyond that, good luck.
> They are the de-facto standard in DevOps tooling
I've been in cloud automation for 8+ years. No, they are not the de-facto standard. For AWS projects, I much prefer CloudFormation. App devs use venv or similar, not Vagrant.
I’d use their CI/CD if they had one, like GitLab, but looks like they don’t?
It is true that tools can be emulated (Red Hat doing Podman as a Docker replacement, with the same flags), but it is also work.
I wonder whether in the Cloud world, the fan factor is a sign of credibility in a market that is looking for tooling that works across cloud providers.
There have been hundreds of crazy tech success stories in the last few years, but as someone who considers himself an engineer at heart, this one gives me the greatest amount of joy and optimism. Both founders are industry-wide leaders in their field and still treat writing code and solving complex technical problems as their primary job.
[1] https://news.ycombinator.com/item?id=1175901
https://twitter.com/investing_city/status/142301690347634278...
According to LinkedIn the average tenure of employees is a little over a year (likely to hit the vesting cliff and bounce).
Two months ago they didn't have the staff to review pull requests: https://news.ycombinator.com/item?id=28425849
You can love the product, but investors are ultimately betting on the company - which seems shaky.
I think this is usually the case for fast growing companies that typically double employees every year, because:
1/2 people avg. 1/2 year tenure
1/4 people avg. 3/2 year tenure
1/8 people avg. 5/2 year tenure
etc., which approaches an average of about one year of tenure. You'll notice the same ~1.1-year tenure at Stripe, Affirm, etc.
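Worth noting: summed exactly, the pure-doubling series above gives 1.5 years, not 1; it only drops toward the observed ~1 year once annual attrition is folded in. A back-of-envelope sketch (the 33% attrition rate is an illustrative assumption, not a reported number):

```python
# Average tenure when headcount doubles every year: a fraction (1/2)**k of
# current staff joined during year k ago, with average tenure (k - 0.5) years.
# Attrition shrinks each older cohort's weight by (1 - attrition) per year.
def avg_tenure(annual_attrition=0.0, horizon=200):
    r = (1 - annual_attrition) / 2  # relative weight ratio between cohorts
    weights = [r ** k for k in range(1, horizon)]
    tenures = [k - 0.5 for k in range(1, horizon)]
    return sum(w * t for w, t in zip(weights, tenures)) / sum(weights)

print(avg_tenure())      # pure doubling, no attrition: 1.5 years
print(avg_tenure(0.33))  # with ~33%/year attrition: close to 1.0 year
```

In closed form the average is 1/(1-r) - 1/2 with r = (1-attrition)/2, so doubling alone gives 1.5 years and roughly a third of staff leaving per year lands at ~1 year.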
news.ycombinator.com needs a ycombinator.com/topcompanies equivalent.
https://www.vagrantup.com/docs/providers/docker/basics
Bigger fish to fry. I wish I had that level of focus!
https://finance.yahoo.com/news/hashicorp-files-u-ipo-said-18...
[1] https://news.ycombinator.com/item?id=3672149
[2] https://twitter.com/mitchellh/status/1357445215259250689
Congratulations on the success!
You can basically just treat it like a package manager and config-assistant. It's often easier(!) to configure a Docker image than the corresponding package, or set of packages, in your typical distro. In part this is because documenting where all the config files and data live just kinda falls naturally out of creating a half-decent image, and in part because good images often put commonly-modified config options—which may correspond to multiple changes in the config files—in single environment variables, for common use cases.
The main gotchas are making sure you've mapped any data directories to something outside the image (which is trivial to do with command-line options, if you prefer writing bash scripts, or in docker-compose yaml, and very easy to test—add some data, destroy the image, bring it back up, is your stuff there? Yes? Good, you got it) so data isn't lost if the image is replaced or destroyed, and making sure your port mapping isn't doing anything dumb like exposing ports it shouldn't on a public interface.
You don't have to use swarm or even actually learn how images work. You can run your application outside of it and just use pre-built official images from PostgreSQL, or whatever, and enjoy a nice, cross-distro, also-sorta-works-on-Mac-and-Windows, consistent set of project daemon dependencies, with an interface that's the same on Red Hat or Gentoo or Arch or wherever, and far more up-to-date than major stable distros (so you could use Debian Stable for simplicity and reliability, for example, but run the latest MySQL or ElasticSearch or whatever on it without mucking with the distro's packages).
I find this massively simplifies server config scripts (Ansible, or bash, or whatever) since I can confine those to fairly generic housekeeping things and put daemon config in much-tidier Docker scripts or yaml.
My original reply was going to be something along the lines of "Bwahahahaha" followed by a comparison of how many seconds it takes to `pip3 install torch` vs how many hours you'd rip your hair out trying to get that running in Docker, let alone on a GPU, and let alone in a way that you can actually develop on it.
Perhaps it's easier to say, "We're not smart." Like Racket, Docker is a marvelous tool, and I'm sure a lot of smart people use it in some incredible ways.
I'll be 34 in Feb. Do I want to spend a month trying to force myself to use Docker for no apparent reason?
At my first job (gamedev), one of my coding heroes happened to work there. One thing he said really bugged me: "Shaders are a young person's game." By "hero" I mean that he single handedly wrote most of the Planetside 1 client code, as well as having developed many other titles that I grew up playing on MPlayer. (God help you if you know what MPlayer was.)
I tried explaining to him, no no, you see, it's not so bad! You can do it! I believe in you. Once you put in a little effort, you'll understand all the parts, and you'll see there's really not that much to it.
Yeah, uh, I was 19. He was like 40. I get it now.
I've personally deployed multiple services to production whose reliability can be measured in years: https://status.shawwn.com/
Sure, none of those are too impressive. Except the one I can't talk about, ha. But they're all variations on "get the server running, make sure the process is simple, make sure it's fail-safe, and put failsafes in place to notice if it breaks."
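For what it's worth, that pattern — keep the process simple, restart it on failure, and monitor externally to notice when it breaks — can be sketched as a plain systemd unit (all names here are hypothetical, not how these particular services are actually run):

```ini
# /etc/systemd/system/myapp.service -- hypothetical service definition
[Unit]
Description=myapp web service
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --port 8080
Restart=always        # fail-safe: restart on any exit, clean or not
RestartSec=5          # brief back-off between restarts

[Install]
WantedBy=multi-user.target
```

Paired with an external uptime checker (like the status pages linked above) hitting the service periodically, you get the "notice if it breaks" half without any orchestration layer.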
To my surprise, they almost never break. Isn't that marvelous? Here's me, someone inching closer and closer over the hill, delivering robust software that lasts years. Hell, you can even see for yourself: https://tags.tagpls.com/uptime
Not bad. Sure, I'm being unfair, because you'll rightly say that there's a world of difference between this and the situations Docker's designed to solve.
And yet, as I go from company to company, I keep being surprised to find zero people using Docker. Isn't that strange? My wife just got a job at a YC co. I'll ask her whether anyone there uses Docker either. Maybe they do.
Docker's stolen days of my life for no gain. Painful days, because they were days when I was really into hacking, and I could've been busily building a big beaver dam instead of learning infrastructure that none of my colleagues ended up using.
Docker is a time vampire. It's "Nerd Snipe: The Game." You'll want to play with it, and it'll give you just enough happiness to keep you going. But, like a cat, the love is one-way. If Docker were a person, they would totally ditch you on your birthday.
It was much more satisfying to write this than to spend that time staring at yet another damn variation of the "how do I forward the port properly?" torrent of blog posts from the legions of developers Docker has managed to curse, by making the impressive decision to eschew simplicity in favor of being Smart with a capital S.
But hey, Docker will be around longer than I will, I'm sure. So it'll get the last laugh. In seriousness though, you can get by without it, which is pretty remarkable -- almost as remarkable as it was to try out Vagrant and discover that it's the polar opposite of Docker's philosophy.
The difference is easy to spot: Vagrant just works.
Edit: Maybe this comment from Mitchell sheds some first-party perspective on why that may be:
> ...Terraform is WORKFLOW agnostic, not TECHNOLOGY agnostic. This is a key part of our product philosophy that we make the 1st element of our Tao: https://www.hashicorp.com/tao-of-hashicorp
> I've talked about this more with more references in this tweet: https://twitter.com/mitchellh/status/1078682765963350016
> I don't think we've ever claimed cloud portability through "write once run anywhere;" that isn't our marketing or sales pitch and if we ever did make that claim please let me know and I'll poke some teams to correct it. Our pitch is always to just learn one workflow/tool and use it everywhere, but you explicitly WILL rewrite cloud-specific modules/code/etc.
https://news.ycombinator.com/item?id=29051020
I fail to see in what segment Hashicorp will remain relevant over time.
Terraform is the tool I mostly see companies pay for. Over time, cloud vendors will make Terraform obsolete. In fact, it is already a problem to use Terraform, since it cannot move at the same pace as the major cloud vendors.
Vault is an extremely complicated niche tool that most companies should not use.
Consul, the service discovery tool, is mostly not needed in cloud environments. I don't think any cloud vendor today has Consul as a service on their agenda, even though this was announced years ago, which is a warning sign. Personally, I really like Consul and the way you can set up ACLs, for instance.
Vagrant, use whatever.
Nomad lost the battle with Kubernetes a long time ago. I never trusted Nomad and never will, but I can see that if you really want to orchestrate a lot of containers, Nomad may be the right tool.
When selecting an identity platform you mainly have to go along with the corruption in the industry...
I really wish Hashicorp good luck on this journey though.
- Vault is not niche - it’s THE way to manage PKI and credentials if you’re half serious about security. Which is why you’re now starting to see managed Vault.
- Consul - EVERYONE should use service discovery, cloud or not. It’s indispensable for numerous reasons. If you doubt its relevance, check out the Kubernetes integration work - there’s a reason for that focus. You need service discovery if you operate at any sort of scale, spanning multiple providers and teams (Azure has a managed Consul offering, btw).
- “Trust” Nomad? The team and I have used it since 0.4 and 0.6 in full production at two different companies. K8s as well, but it lacks the Unix vibe of “do one thing, and do it well”, which is something you get with Nomad, Consul & Vault. Nomad has been rock solid, and I’ve so far had no reason not to “trust” it, hundreds of thousands of deploys later.
- Terraform spans many providers. It’s a good tool, not without its quirks. But I’d rather have one quirky tool than multiple quirky vendor ones. Also, we use TF for basically everything - even the stuff we host in-house through LXC and Postgres, for example, and through home-grown providers as well.
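To make the Consul point concrete, a minimal service registration with a health check looks roughly like this (the service name, port, and endpoint are illustrative):

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "2s"
    }
  }
}
```

Other services can then resolve it through Consul DNS (e.g. `web.service.consul`) or the HTTP catalog API instead of hardcoding addresses, and unhealthy instances drop out of the results automatically.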
I could write pages on the hashicorp products!
There is no need for service discovery in the cloud in general.
I have also used Nomad a lot. Maybe it is because we always needed the cutting-edge features, but the quality was in general not very good. Core features always worked, though. People should use Kubernetes instead in most cases.
There is simply no way Terraform and HCL2 will survive for cloud environments. For other use cases, I do not know.
Discovery means that you don’t know your service name? Or endpoints? How can one lose one's service? I'm definitely missing something :)
Vault has a similar target market. Big high-paying institutions. It's not the average market of your tech company, and 100-200 person startups generally won't need it. If you're in the fintech space, maybe you do.
Further, the idea that cloud vendors will make Terraform irrelevant is laughable. None of them has any incentive to provide a consistent workflow across multiple clouds. The shortcomings of CloudFormation in particular are unlikely ever to be overcome.
That's not all companies. It's not even the majority of them. But those companies do tend to be the ones who can afford HashiCorp's premium offerings.
Edit: Fix typo.
Terraform is a great example:
* It's slow, and new versions often get slower.
* Apart from the most serious ones, bugs often go unfixed for years, and the GitHub issues and pull requests (both for TF itself and for the biggest providers) are a swamp of thousands of issues and hundreds of PRs dating back 4+ years. Issue triage is erratic, and triagers often fail to fully read or understand the reported issue.
* There are some design deficiencies that seem hard to fix. For example: first-class support for providers that are configured based on other resources in the same Terraform state. This usually doesn't work correctly without hacks like `-target`. The "right way" to do this is to have separate TF states for different "layers" of your infra, which is fine and ends up pretty tidy for large infra, but nobody really talks about this (not even the TF docs), so invariably things will not be architected that way at the start and by the time the TF config has grown, refactoring it to split out the layers will be a deeply unpleasant time-suck. (The awful experience that is refactoring large TF configs being another major negative all by itself.) This fundamental issue is the root cause of hundreds of TF GitHub issues.
* The major Hashicorp-maintained (or co-maintained) providers are often massively underresourced, leading to delays before new cloud features are supported, forcing users of TF to maintain those resources outside of Terraform, which is a mess. If a user of, say, the AWS provider tries to rectify the situation by sending a PR, it will just be lost in the sea of ~3000 open issues and ~500 open PRs unless they put in significant time and effort to get attention to it.
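To illustrate the "layers" workaround from the third point: instead of configuring a provider from resources in the same state, a downstream layer reads the base layer's outputs via the `terraform_remote_state` data source. (The backend, bucket, and output names below are purely illustrative.)

```hcl
# layers/app/main.tf -- consumes outputs from the separately-applied base layer,
# so this state's providers never depend on resources in the same state
data "terraform_remote_state" "base" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"          # illustrative bucket/key
    key    = "base/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "kubernetes" {
  host  = data.terraform_remote_state.base.outputs.cluster_endpoint
  token = data.terraform_remote_state.base.outputs.cluster_token
}
```

Each layer is applied on its own (`terraform apply` in `layers/base`, then in `layers/app`), which avoids the `-target` hacks at the cost of exactly the refactoring pain described above when a config wasn't split this way from the start.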
Despite all of this, we still use Terraform heavily because it's less crap than the alternatives, but I can hardly muster the love for it that is expressed elsewhere in these comments.
My prediction: after IPO'ing, HashiCorp will get acquired.
I'm in cloud automation 8+ years. No, they are not the de facto standard. For AWS projects, I much prefer CloudFormation. App devs use venv or similar, not Vagrant.
I’d use their CI/CD if they had one, like GitLab does, but it looks like they don’t?