f0e4c2f7 · 3 years ago
This article is written in kind of a controversial way, but the throughline of the argument seems to be something like "use heroku until you have 100k users".

This seems very reasonable to me. I thought it was going to be a pitch for on prem, which is also fine for certain scales.

I think generally the scaling steps from startup to megacorp go:

Heroku/Dokku > Public Cloud > Dedicated servers in someone else's DC > Custom hardware in custom-built data centers.

Each makes sense at each scale. I find it to be more of a right tool for the job consideration than one being better than the other.

With modern cloud tooling your infra can also look more or less logically the same once you grow past the heroku level.

sakisv · 3 years ago
> Heroku/Dokku > Public Cloud > Dedicated servers in someone else's DC > Custom hardware in custom-built data centers.

I think stackoverflow and its siblings proved that having a handful of servers can go a very long way, even making cloud ops redundant.

Of course this is a function of what you're optimizing for, and whether you want to go down the "boring monolithic app" route.

Nextgrid · 3 years ago
> Of course this is a function of what you're optimizing for, and whether you want to go down the "boring monolithic app" route.

Microservices do add some overhead but it's not extreme - even a microservices-based app can run just fine on a decent bare-metal box if you run all the services on it.

Of course, the talk of microservices brings the question of what problem you're actually trying to solve - are you aiming to build a technical solution to a business problem or are you aiming to create an engineering playground so there's endless busywork and justification for hiring lots of engineers? If it's the latter, then bare-metal is going to be a bad option anyway as it's not the kind of toy a typical startup engineer wants to play with.

dijit · 3 years ago
I really agree with you. What's weird though is how many mega-corps are going away from Custom Hardware in Custom Built DC towards Cloud.

There's also something to be said for buying a VPS or a colo machine, making sure it's backed up, and dealing with the 9's that you get from that machine on its own. I am routinely surprised by how far a single node machine will get you.

sofixa · 3 years ago
> what's weird though is how many mega-corps are going away from Custom Hardware in Custom Built DC towards Cloud

It costs a lot of money to run your own datacenters, and very, very few companies are capable of doing it as well as AWS or even Scaleway/OVH can. By that I mean: waiting weeks/months to get through tickets, approvals, and multiple different teams just to get a server deployed, then waiting a few more weeks for monitoring/backups.

Allowing developers and related roles to get hardware/software on a whim is a massive advantage.

mwcampbell · 3 years ago
> how far a single node machine will get you.

This. I took the wrong lesson from the DDoS attacks on Linode in late 2015 (particularly the one on Christmas Day), and the intermittent issues I encountered with DigitalOcean and Vultr in 2016 while both providers were still fairly young. A single dedicated server from a mature provider (ideally not during its hyper-growth phase) is pretty reliable.

PubliusMI · 3 years ago
It's not weird.

Many mega-corps are extremely bloated and dysfunctional. Their IT (private) cloud teams are slower and less competent.

With public cloud, a small team can be fully responsible for all their resources with crystal clear cost accounting.

nine_k · 3 years ago
These megacorps likely have IT which is not mega enough to justify owning a massive datacenter.

The right scale is Amazon, Google, Facebook, Microsoft. Likely much fewer than a hundred companies in the entire world.

Fendii · 3 years ago
Every internal IT department was always slower than any cloud offering.

Or lagged on features.

Or had underlying infra issues.

With AWS, at one startup I was able to build and maintain infrastructure where you'd have needed a small team just 10 years before.

P5fRxh5kUvp2th · 3 years ago
At a certain scale you have more negotiating power so it probably makes more sense from that perspective.
jacobr1 · 3 years ago
Data residency. Scaling to dozens of global regions is not cost-effective when running your own DCs.
PragmaticPulp · 3 years ago
> what's weird though is how many mega-corps are going away from Custom Hardware in Custom Built DC towards Cloud.

Why is it surprising? Building and maintaining custom data centers is a big, slow business initiative. It takes months to years of forecasting to get the data center buildout to match the business needs, as opposed to the extreme flexibility of using a cloud provider.

> There's also something to be said for buying a VPS or a colo machine, making sure it's backed up, and dealing with the 9's that you get from that machine on its own. I am routinely surprised by how far a single node machine will get you.

For personal projects this is exactly what I do. It’s great until something goes wrong with that one machine or VPS.

But it’s not really a good option for any business that needs consistent operations and uptime. Years ago I worked at a company that tried to self-host some of their collaboration tools on a VPS to save money over the cloud-hosted versions. When the server went down it stalled productivity for a day while the team restored a backup, with another week of confusion as we tried to find all of the things that were lost between the last backup and when the server went down.

When someone did the rough estimations on how much it cost to pay everyone’s salaries for that day of lost productivity, the number was far higher than the trivial cost savings we got from self-hosting. We also had a constant background burden on someone internally to maintain and monitor the server, plus the burden of them being on call. Often, moving to cloud anything can be a huge load off the company’s back.

buffalobuffalo · 3 years ago
As a software engineer who doesn't really like devops and has been in this position multiple times, I'm a huge fan of buying à la carte services from different providers that specialize in managing a specific type of service (often since they are the developer/maintainer of said service). As long as you make sure they are all in the same datacenter, you still get great performance. And typically minimal configuration woes.

For example:

- Datacenter: AWS us-east-2

- Dockerized web servers/task servers: Render or Engine Yard

- Postgres & Kafka: Aiven or 84codes

- Redis: Redis Labs

- Unified logging: Elastic or Grafana

I still end up using some underlying AWS services like S3 and lambda, but it's a lot less work than managing an entire AWS ecosystem with security groups/VPC/networking etc.

nijave · 3 years ago
That can work well, but you have to be careful/mindful of egress charges and latency (if the vendor supports co-locating in the same cloud, or even managing the service directly in your account, that can alleviate those issues)
buzzdenver · 3 years ago
> "use heroku until you have 100k users"

Thing is, for most/many startups 100k users is not a lot. Rejiggering your basic infra just as your growth is starting to accelerate is a non-trivial task, a risk, and something that doesn't fundamentally move the needle.

hagbarth · 3 years ago
This really depends on what you are building of course. Working in enterprise SaaS with only a few users per account? You'll be doing really well at 100k users.
lelanthran · 3 years ago
> Thing is that for most/many startups 100k users is not a lot.

Depends; if you're a startup offering free or ad-supported services and the exit plan is "be bought out by existing entrenched competitor", then, yes, 100k users is not enough to hit your goals.

If you're a startup offering B2B services, even 10k users is enough to be madly profitable.

icedchai · 3 years ago
Most B2B startups never get anywhere near 100K users.
cultofmetatron · 3 years ago
> "use heroku until you have 100k users"

Nowadays, I'd say use fly.io or Render till you have 200k users.

avrionov · 3 years ago
I don't think this is good advice in general, and I haven't seen many companies do it in practice. The initial decisions a company makes are, most of the time, hard to change. Moving a successful business with clients from one platform to another 3 or 4 times is very difficult, or nearly impossible.

It may work for a simple website, but for any more complicated project with web clients, mobile clients, and third-party integrations, migrating from Heroku to a cloud provider to on-prem means refactoring big parts of the project.

An even bigger problem: a migration like this is hard to do incrementally.

Nextgrid · 3 years ago
I don't see the point of public cloud. In practice, it still requires a sysadmin (now called "DevOps engineers") so it's not any better than rented bare-metal in terms of maintenance overhead, while still being extremely expensive.

Use a managed PaaS to begin with (you pay more but it does genuinely save you time as there is no management overhead), then when you're ready to do things yourself go straight to hosted bare-metal, and only use public cloud services for their managed services that you can't replicate yourself (think Redshift/Athena/Aurora/etc).

tatersolid · 3 years ago
> so it's not any better than rented bare-metal in terms of maintenance overhead

In my experience the maintenance overhead of the cloud is much lower. My dayjob (B2B SaaS) spent about 75% of the infrastructure team’s time on things like patching switch firmware, balancing UPS loads, diagnosing flaky switch ports or transceivers, managing logging growth, etc. None of that made our products better from a customer perspective.

Since our cloud move those same infra staff support many more services and apps with much faster turnaround for product teams. And we traded upcoming multi-million capex investments in servers/switches/appliances into a monthly cloud bill that scales much more closely with revenue.

The public cloud is for businesses constrained by people; we simply could not afford to hire enough people to do the same stuff on-prem or in colo.

blackoil · 3 years ago
I would recommend starting with DigitalOcean. It may have a little more overhead, but it's much more cost-effective, and you can stay on it longer before migrating.
osigurdson · 3 years ago
I think you can be pretty big and still stay in Public Cloud. There is enough competition that pricing should trend toward commodity over time.
CharlieDigital · 3 years ago
> If you're an indie hacker, a bootstrapper, a startup, an agency or a consultancy, or just a small team building a product, chances are you are not going to need the cost and complexity that comes with modern cloud platforms.

Hard disagree.

- On cost: there is almost nothing better for the indie hacker, bootstrapper, or startup than cloud services.

I run apps on all three platforms (Google, AWS, and Azure) and my monthly spend is less than $2.00/month, using a mix of free-tier services and consumption-based services (Google Cloud Run, Google Firestore, AWS CloudFront, AWS S3, Azure Functions, Azure CosmosDB).

- On complexity: if you've used Google Cloud Run or Azure Container Apps, you know how easy it is to run workloads in the cloud. Exceedingly easy. I can go from code on my machine to running API in the cloud that can scale from 0 - 1000 instances in under 5 minutes just by slapping in a Dockerfile _with no special architecture or consideration, no knowledge of platform specific CLIs, no knowledge of Terraform/Pulumi/etc._

The current generation of container-based serverless runtimes (Google Cloud Run, Azure Container Apps) is pretty much AMAZING for indie hackers; use whatever framework you want, use middleware, use whatever language you want. As long as you can copy/paste an app runtime specific Dockerfile (e.g. Node.js, dotnet, Go, Python, etc.) in there, you can run it in the cloud, and run it virtually for free until you actually get traffic.

If any of the projects take off, then pay to scale. If they don't take off, you've spent pennies. Some months I can't even believe they charge my CC for $0.02.

pclmulqdq · 3 years ago
The AWS free tier lets you do a lot, and if you use it well, it lets you avoid up to about $50/month of DigitalOcean bills.

If you're never planning on scaling past a hobby project, the free tier is a great place to stay. If your hobby project "goes viral," though, it might cost you a few thousand dollars, but hopefully that helps you get a lot more money to turn your hobby into a business.

If you have commercial intent, however, $50/month goes from an expensive hobby (3 streaming services) to a very cheap business. At that point, the fact that you don't have to pay for scale on DO VMs and other platforms actually makes a lot more sense. You can sleep at night knowing that you will still have a business even under a load spike, and $50 of DigitalOcean buys you roughly the compute power of $1000+ of AWS managed services.

CharlieDigital · 3 years ago
The beauty of container based serverless is that you have portability. If your hobby project takes off and you want to run it on DO up to a certain ramp, you can still move your container workload into DO.

Google Cloud Run, Azure Container Apps, and AWS AppRunner (less so because it doesn't scale to zero) are really great tools for hobby devs and small shops.

ericd · 3 years ago
Yep, we were easily saving a developer salary per month vs AWS using colo’d hardware even as a very small company. And god help you if you’re trying to run something bandwidth intensive on AWS.
fullstackchris · 3 years ago
> it lets you avoid up to about $50/month of DigitalOcean bills.

Wish I'd known about this AWS free tier, because that sounds a lot like my monthly DigitalOcean bill :')

dimgl · 3 years ago
Yeah, I can't agree with you at all. Unless you set up your own NAT gateway on EC2 on a t2.micro instance (or something cheap like that), the managed NAT Gateway alone runs you about $30/month. This isn't even accounting for database costs, usage costs, development costs, etc.

So right away that “I can’t believe they even charge my CC for $0.02” is real suspect. Do you have a completely empty AWS account?

We haven’t even spoken about dev experience yet.

CharlieDigital · 3 years ago
> Without setting up your own NAT Gateway on EC2 on a t2.micro instance...

The problem is that you're using EC2 instead of AWS App Runner, Google Cloud Run, or Azure Container Apps.

> We haven’t even spoken about dev experience yet.

I'd strongly recommend that you give Google Cloud Run a try. You can go from an empty codebase to a running, on-demand serverless runtime via GitHub with only a Dockerfile. I can build an app from scratch and have it running in Google Cloud in probably under 3 minutes with no special CLI knowledge or build setup.

Here's a sample Dockerfile I'd need to get a dotnet app into Google Cloud Run:

  # The build environment
  FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine as build
  WORKDIR /app
  COPY . .
  RUN dotnet restore
  RUN dotnet publish -o /app/published-app --configuration Release

  # The runtime
  FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine as runtime
  WORKDIR /app
  COPY --from=build /app/published-app /app

  # The value production is used in Program.cs to set the URL for Google Cloud Run
  ENV ASPNETCORE_ENVIRONMENT=production
  ENV IS_GOOGLE_CLOUD=true

  ENTRYPOINT [ "dotnet", "/app/my-app.dll" ]
Every other aspect of the code remains unchanged. Cloud Run will pull the code from GitHub, build the container, and operationalize it.

https://github.com/CharlieDigital/dn6-mongo-react-valtio

helsinkiandrew · 3 years ago
Agree. We bootstrapped a business from cents a month to $4 or $5 now, there's no maintenance, and I know that if we get mentioned on Oprah we'll be able to cope with a blip of thousands of signups a second. I know how I'd run the system on our own hardware but can put off that decision to when (or if) we need it.
yrgulation · 3 years ago
> there is almost nothing better for the indie hacker, bootstrapper, or startup than cloud services

Heroku, a VPS, or a dedicated server are all in the cloud; not sure what you mean by this.

jokethrowaway · 3 years ago
If you're spending less than $2 per month, how much traffic, and therefore how much money, can you make?

Sure, I also have plenty of static websites hosted for free by vercel / netlify / heroku / yourpick and even free functions.

As soon as you start hitting traffic, functions start to cost a lot vs your own VPS.

My ideal setup right now is free static hosting from the marketing budget of a friendly SaaS, free Cloudflare on top, and then APIs hosted on a small VPS (I have plenty of stuff on DigitalOcean, but if I were to start from scratch I'd go fully with Hetzner).

I avoid the big 3 as much as I can and I laugh for hours when I see the bills of clients using them.

CharlieDigital · 3 years ago
Google Cloud Run pricing is:

- 2 million requests/mo free

- First 180,000 vCPU-seconds free

- First 360,000 GiB-seconds free

Then:

- $0.000024 per vCPU-second

- $0.0000025 per GiB-second

- $0.40 per million requests

This will get you pretty far for $2/mo. Within the free tier itself, assuming you can process each request in 250ms on a 1 vCPU container, you get 720,000 requests before you start paying for compute usage. Each additional $1.00 buys ~38,000 vCPU-seconds (at 1 GiB of memory), or ~152,000 requests at 250ms per request.

Roughly speaking, $2/mo. is 1 million requests at 250ms each on a 1 vCPU, 1 GiB container.

(There's some nominal cost for egress and storage of container images).
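As a sanity check on that arithmetic, here's the same math as a quick sketch (rates and free-tier figures as quoted above, which may drift over time; assumes a 1 vCPU / 1 GiB container spending 250ms of compute per request):

  # Cloud Run cost sketch using the figures quoted above (illustrative only;
  # real pricing varies by region and changes over time).
  FREE_REQUESTS     = 2_000_000
  FREE_VCPU_SECONDS = 180_000
  FREE_GIB_SECONDS  = 360_000
  RATE_VCPU_SECOND  = 0.000024   # $ per vCPU-second
  RATE_GIB_SECOND   = 0.0000025  # $ per GiB-second
  RATE_PER_MILLION  = 0.40       # $ per million requests

  def monthly_cost(requests, secs_per_req=0.25, vcpus=1, gib=1):
      vcpu_s = requests * secs_per_req * vcpus
      gib_s = requests * secs_per_req * gib
      return (max(0, vcpu_s - FREE_VCPU_SECONDS) * RATE_VCPU_SECOND
              + max(0, gib_s - FREE_GIB_SECONDS) * RATE_GIB_SECOND
              + max(0, requests - FREE_REQUESTS) / 1e6 * RATE_PER_MILLION)

  print(monthly_cost(720_000))    # 0.0   -- entirely within the free tier
  print(monthly_cost(1_000_000))  # ~1.68 -- roughly the $2/mo figure above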

perryizgr8 · 3 years ago
Your monthly spend is less than two dollars across GCP, AWS and Azure? If that's true, then of course it makes sense at your scale to stick with them.
Octabrain · 3 years ago
> - Terraform to create the API gateway, database, lambdas, queues, Route 53 records: 1 week

> - Terraform to create the IAM policies: 4 weeks

Perhaps it's because I am very familiar with the aforementioned tool and cloud, but 5 weeks for writing those resources gives me the impression of:

1. Lack of experience on AWS.

2. Lack of experience with Terraform.

3. Both.

I don't want to sound arrogant by any means but a Terraform project for something like that, documented, with its CI and applying changes via CD, would take me 4 days being generous.

marcus_holmes · 3 years ago
I got handed a Terraform project for a GCP-based service. Simple dev, staging, prod environments. Secrets managed by Secret Manager, Cloud SQL without a public IP address for prod (but accessible via SSH for admins).

I more or less gave up after a month of beating my head against a brick wall. We hired an expert. It took him another month to get it all more or less sorted. There were still aspects that we wanted that we could not get Terraform/GCP to do.

In the end, we dropped Terraform and went back to modifying GCP manually.

waz0wski · 3 years ago
That's a generic and well documented stack that utilizes GCP defaults and works out of the box. An "expert" should not take a month to fail to set it up.

I've deployed something similar, additionally including GKE, via Terraform in a day. Checking the TF code for an example 3-env GCP/GKE/CloudSQL stack, it's less than 300 LoC.

That said, it's not all good: my ongoing complaint with terraforming GCP is that the provider lags behind the features & config available in the GCP console, worse than the AWS provider, especially w/r/t GKE and CloudSQL.

makestuff · 3 years ago
We have been using CDK on AWS and it is really nice because you can do complex things through Typescript.
shadowgovt · 3 years ago
Five weeks sounds about right based on my experience coming up to speed with Terraform. It's flexible enough to solve everybody's problems so it solves nobody's problems. Not until you inundate yourself enough with it to build the intermediary layer between what it does and what you want to do.
throwaway2016a · 3 years ago
Same. I do it routinely; maybe the first time I ever did it, it took me a week, but after that it was fast. Though I may be being generous.

The only thing that could make that tough is if you put the Lambdas in a VPC. That can get tricky because you have to plan out subnets and whatnot but still not a week.

The AWS documentation is also extremely good with regards to what properties are on each resource. I can't speak for Terraform since I usually use CloudFormation / SAM directly. Maybe it's a Terraform problem?

acdha · 3 years ago
> The only thing that could make that tough is if you put the Lambdas in a VPC. That can get tricky because you have to plan out subnets and whatnot but still not a week.

Yeah, it’s about 20 minutes if you use the VPC and Lambda modules from https://github.com/terraform-aws-modules. I could see a week if you had to learn all of this first with little prior experience but that’s true of everything. A newbie running a Linux colo server isn’t going to get all of the security & reliability issues right in less time, either.

scarface74 · 3 years ago
I know those tools too. It's kind of my job to know them, seeing as I work at AWS in ProServe.

But if someone gave me the same use case as the author, I wouldn't suggest any of those tools. What's the business case for introducing the complexity of AWS for someone who is just trying to get an MVP out the door and doesn't know cloud?

I've been in the industry for 25+ years and only first logged into the AWS console in mid-2018. I had a job at AWS two years later. That gives me a completely different perspective.

lorenzotenti · 3 years ago
It's a joke. Or at least I've interpreted it as such. It's still true that you always spend more time terraforming the little things than you expected.
ianbutler · 3 years ago
You have to concede that that's most of the industry, right? The state of implementing IaC is new and foreign to the majority of teams.
icedchai · 3 years ago
Based on my own personal experience, 4 weeks for IAM does seem high, especially since it took 1 week for all the other stuff.
nijave · 3 years ago
Four days sounds fair if you're experienced. If you're new to TF/AWS I could easily see it taking significantly longer. If you assume IAM is the devil and refuse to learn it, it will absolutely take a while to get correct
holografix · 3 years ago
The more I use Terraform and GCP, the less I want to bother with Terraform.

TF is not infrastructure as code; it's infrastructure as configuration files, and it's a mess.

I haven't used Pulumi, but that's kind of what I really want: give me Python and better abstractions over the gcloud CLI.

dimgl · 3 years ago
Agreed here. There is no reason setting up IAM policies through Terraform takes four weeks. Anecdotally, on my own personal projects it took me maybe three hours, or more, to set up IAM policies for AWS Lambda, ECS and RDS.
acedTrex · 3 years ago
Terraform is a nice tool, but it has a VERY slow development cycle, just due to the nature of the cloud.
727564797069706 · 3 years ago
ITT: people who spent many hours learning proprietary (often unnecessarily complex) cloud platforms trying to convince others (and themselves) that it was the best use of their limited time alive.

Stockholm syndrome à la Big Cloud.

It's okay to be interested in elaborate cloud architecture things and learn them because of that, but don't sell it as one-size-fits-all thing that every little company needs.

Most companies don't need that complexity, but of course, Big Cloud with their billions needs to convince you otherwise.

sebazzz · 3 years ago
Exactly. Of course GCP/Azure/AWS have great development kits, of course they make it easy to get a Docker application running for the first time within 1 minute. That is the sales model.

However, to be cost-effective, you need to adapt your application to be more cloud-native using their proprietary SDKs: Azure Functions/Lambda, CosmosDB, Blob Storage/S3, etc. The application gets cheaper, but you've now also bought yourself into the ecosystem and you're never migrating anywhere else.

And now the pricing increases. Or the cloud provider decides you shouldn't be a client anymore. Too bad. No easy way back.

There is still not much wrong with a webapp on a VM. You still need sysops, except classic sysops instead of cloud certified sysops.

holografix · 3 years ago
Have you tried Cloud Run? It's just Knative underneath, so if you want to take it somewhere else you can.

Deleted Comment

nine_k · 3 years ago
At small scale, you can lower your complexity using cloud. You don't need k8s for a small operation; just spin up a couple of VMs and set them up via a few lines of Ansible.

OTOH you can pick a managed database: you just get a connection string to a Postgres with failover and backups already taken care of. Same with queue services, email services, etc. They have really simple APIs.
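For instance, the whole integration surface for a managed Postgres can be one DSN; a minimal sketch (assuming psycopg2, with DATABASE_URL standing in for whatever string the provider's console gives you):

  import os
  import psycopg2  # pip install psycopg2-binary

  # The provider hands you a single connection string; failover and backups
  # are its problem. DATABASE_URL is a placeholder for that string.
  conn = psycopg2.connect(os.environ["DATABASE_URL"])
  with conn.cursor() as cur:
      cur.execute("SELECT version()")
      print(cur.fetchone()[0])
  conn.close()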

You only need platform-specific knowledge when you start operating at a larger scale. By that time, you likely can afford to hire a dedicated SRE.

Nextgrid · 3 years ago
> You don't need k8s for a small operation, just spin a couple of VMs and set them up via a few lines of Ansible.

You can replace "couple VMs" with a dedicated Hetzner/OVH/Kimsufi server, it'll be the same except you won't get ripped off on egress bandwidth and performance.

chadash · 3 years ago
I don't know. Maybe I'm in a bubble, but it seems to me that knowing the basics of AWS (or some cloud provider) has become part of the standard developer's toolkit. With AWS specifically, there's so much documentation out there about getting started that I think you can have something up in a day or two on something like ECS or lambda (using something like the Serverless framework). And then when you need the more complex functionality, you are already in the AWS ecosystem.

If you are a startup trying to get a product to market, AWS is typically going to be a very small cost unless you are doing something very compute intensive (in which case something like Heroku, which the author recommends, certainly won't be cheaper anyway). The high bills only come later, if ever, after you've decided to create 20 databases and 50 apps for your 70 person startup.

Aeolun · 3 years ago
The problem with even lambda or ECS though is that it’s all much more complex than a simple rsync to your desired server.
sofixa · 3 years ago
With Fargate or Google Cloud Run, it really isn't. Assuming zero knowledge, it's probably easier to learn how to build a Docker container and call a binary to send it to the service than it is to set up SSH and rsync and a server to host your website.
scarface74 · 3 years ago
I went from not knowing Docker to having a production capable ECS/Fargate microservice spun up within less than a week.

I based it on this CFT

https://github.com/1Strategy/fargate-cloudformation-example/...

And this walk through for C#. I had a similar walkthrough for building a container for a Node service.

https://aws.amazon.com/blogs/compute/hosting-asp-net-core-ap...

coredog64 · 3 years ago
https://aws.github.io/copilot-cli/

That takes a Dockerfile, manages networking, secrets and CI/CD deployment. I have a few quibbles with what it does, but it generally works and is being maintained/updated.

tyingq · 3 years ago
Though there are lots of different ways to use AWS, so the experience your team brings may be a sort of complicated Venn diagram. Even within a simple product, like deploying serverless, there's SAM vs the Serverless framework vs scripted AWS CLI. Using stacks or not. Using another layer on top or not, like Terraform or CDK, and so on. Then the actual pattern of using it: Lambda layers, heavy or light patterns for securing things, using versions/aliases or not, and so on.

It wouldn't be unusual for a tech lead to pick some approach that ends up being new for the rest of the team. So some ecosystem with fewer choices would probably be faster.

hotpotamus · 3 years ago
> but it seems to me that knowing the basics of AWS (or some cloud provider) has become part of the standard developer's toolkit.

This seems to be flirting with the idea that Amazon has become a required component of hosting an application on the internet.

chadash · 3 years ago
Required? No, I'm not saying that. But yes, it's become the industry standard. If you don't know some AWS basics and you are a generalist web developer, you'd probably do well to learn them in order to make yourself a more marketable engineer.

There are plenty of good alternatives, but AWS is the 800-pound gorilla. You have to know at least a little bit about it in order to know why not to use it.

It's like saying you don't want to use React/Angular/Vue for your web app. There are good reasons not to, but at this point you should at least have some experience with web frameworks before making a technical decision not to use them. If your answer is "I don't know them and I don't want to learn them", that's fine for a personal project, but probably not a reason not to use them at your full-time startup. If your reason is "I know React, but for my specific use case, vanilla HTML/CSS/JS is better" then you are making a more informed decision.

vsareto · 3 years ago
The basics give you enough to be dangerous, but cloud stuff has become complex enough to require dedicated people to do it well.

I'd prefer to hire someone dedicated to that, and just let them work part-time while the environment is simple, over a developer with just the basics who's going to try to architect and run everything.

sebazzz · 3 years ago
> I'd prefer to hire someone dedicated to that and just let them work part time when the environment is simple over a developer with just the basics who's going to try to architect and run everything.

And here I was thinking we went to the cloud to get rid of the BOFH.

cj · 3 years ago
This is just another way of saying “you shouldn’t use AWS if you don’t know how to use it”

Yes, there's a steep learning curve. But once you're past that (or if you gained that knowledge in a prior role), AWS can, hands down, be the easiest, cheapest, and fastest infrastructure platform to use.

…if you know what you’re doing.

If you don’t know the ins and outs of AWS, then yes, you probably shouldn’t use it for your next MVP or startup idea.

TheNewsIsHere · 3 years ago
Different strokes for different folks. (Or at least, use cases.)

We’ve found at work that if you already have the talent, the hyper scale cloud platforms are amongst the most expensive ways to manage infrastructure if you go all in.

For example, $0.40/secret/mo is _expensive_ compared to the cost of an HA vault (not necessarily HashiCorp) setup. If you have 1,000 secrets but you only need to access any given secret once a day, that's $400/month against just setting up your own. And then you can take it with you.

Beyond that, we’ve had a LOT more reliable performance from our current VPS provider than we ever got from EC2.

That’s not to say AWS is exactly without competition. We use S3 extensively because nothing compares for our usage.

benmmurphy · 3 years ago
If you don't need Secrets Manager features like region replication and rotation, you can use Systems Manager parameters with the secure string type. It's effectively free. We use Secrets Manager but weren't aware of the price difference.
scarface74 · 3 years ago
You probably don't need Secrets Manager. Just use the SSM Parameter Store SecureString type. It's free.
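For illustration, a minimal boto3 sketch of writing and reading one (the parameter name here is made up; standard-tier parameters are the free ones):

  import boto3

  ssm = boto3.client("ssm")

  # Store a secret as a SecureString (encrypted with the account's KMS key).
  ssm.put_parameter(
      Name="/myapp/prod/db-password",  # hypothetical name
      Value="s3cr3t",
      Type="SecureString",
      Overwrite=True,
  )

  # Read it back, decrypted.
  value = ssm.get_parameter(
      Name="/myapp/prod/db-password",
      WithDecryption=True,
  )["Parameter"]["Value"]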
prepend · 3 years ago
> API that works on localhost: 4 days

Lol. This may be true, but it's kind of pointless, as an API on localhost isn't very useful unless you're automating your home. Of course it's easier to hack something out on localhost than to design for actual users.

I think it makes more sense to build incrementally with the end in mind. So writing those terraform scripts will take less time if you initially write them to deploy to localhost for testing.

throwaway2016a · 3 years ago
API gateway is simply how you expose Lambdas to an HTTP interface in AWS. It was the easiest way until they recently unveiled a way to expose the Lambda directly. You can also use ALBs (Application Load Balancers) and CloudFront to expose Lambdas to HTTP.

Either way, Lambdas are hard to debug locally; often I just deploy them to test (since deploying is easy). Or I write my code such that it bootstraps differently when launched locally vs in Lambda, as in the sketch below. Either way, unless it is a very complex app with lots of external dependencies, 4 days is a bit much.
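A sketch of that local-bootstrap pattern (AWS sets AWS_LAMBDA_FUNCTION_NAME inside the managed runtime, so its absence is a reasonable "running locally" signal):

  import os

  def handler(event, context):
      # Normal Lambda entry point.
      return {"statusCode": 200, "body": "ok"}

  if __name__ == "__main__":
      # AWS sets AWS_LAMBDA_FUNCTION_NAME inside the Lambda runtime; when it's
      # absent, we're running locally, so serve the handler over plain HTTP.
      if os.environ.get("AWS_LAMBDA_FUNCTION_NAME") is None:
          from http.server import BaseHTTPRequestHandler, HTTPServer

          class LocalHandler(BaseHTTPRequestHandler):
              def do_GET(self):
                  resp = handler({"path": self.path}, None)
                  self.send_response(resp["statusCode"])
                  self.end_headers()
                  self.wfile.write(resp["body"].encode())

          HTTPServer(("localhost", 8080), LocalHandler).serve_forever()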

Aeolun · 3 years ago
But don’t try to create a REST API Gateway with more than 200 resources, or CloudFormation will randomly start failing.

Or try to add more than 100 rules to your ALB, because it’ll be impossible.

My biggest issue with AWS is that the limits are so arbitrary, and seem to solely exist due to terrible design decisions.

If my local Express server or nginx can deal with 100 endpoints, how is it possible for this multi-billion-dollar, infinitely scalable service not to do the same…

dogleash · 3 years ago
> This may be true, but it's kind of pointless, as an API on localhost isn't very useful unless you're automating your home. Of course it's easier to hack something out on localhost than to design for actual users.

When did developing software on your own machine stop meaning "design for actual users"?

You should have a strong and reliable deployment for production, yes. But not being able to run a baby instance locally just as easily means sacrificing your development loop.

prepend · 3 years ago
Designing for me as a single user is different from designing for other users. Other users can’t hit my localhost.

The article talked about how much time it takes to get working so it seems like the author took shortcuts to get it working locally.

I agree that it’s a good practice to dev so deploying locally works as well as deploying remotely (or to lots of environments). But this is different than developing only for localhost.

mwcampbell · 3 years ago
An API on localhost may not be useful, but an API running on a single VPS or dedicated server could take you very far.
bluedino · 3 years ago
We had a 'microservice' running on a server that had a bunch of other random things on it; it ran for 4-5 years until the server got decommissioned.

Nobody knew everything that ran on that server; worse yet, nobody knew that particular service ran on it. The person who wrote it was long gone.

It took a day to troubleshoot, a day to figure out what actually happened, and 5 days to get the server back up and running.

A couple months later, someone shut the server down again. It only took three days to fix it the second time.

In order to ensure this would never happen again, there were about 15 meetings, 20 people were involved, and then the service was re-written and hosted on Azure (with the rest of some of our stuff). It's probably failed about 100 times since then, in about a hundred different ways.

This thing was maybe 500 lines of php.

taylodl · 3 years ago
I've been able to get a lot done with API Gateway, Lambda, S3, RDS, SQS, Lex, and Elasticsearch. I work for a Fortune 200 company that's risk-averse and views "the cloud" with suspicion. My team's ability to get so much done is starting to change that perception.

Sure, if you're in a startup and you're doing most of the infrastructure and operational work yourself then working on-premise is often advantageous. If, like me, you're working for a Fortune 200 company and it takes multiple ServiceNow tickets to get on-prem hardware, a lead time of several months to get it through procurement and subsequently racked and stacked, and working with infrastructure solution engineers throughout the process - trust me, AWS is a much better choice and will enable your team to get stuff done.

If you are working for a startup, then beware: as you grow, avoid the temptation to build a data center; go to the public cloud. I would argue that since that's where you're going to be hosted anyway, assuming successful growth, you should really consider just starting out there in the first place.

ericbarrett · 3 years ago
> If, like me, you're working for a Fortune 200 company and it takes multiple ServiceNow tickets to get on-prem hardware

What's stopping them, after they "embrace the Cloud," from making it take multiple ServiceNow tickets and several months to change an IAM policy? This has been my experience in very large corps that do use AWS. Typically it's also made a violation of policy to use a team-specific cloud account.

P.S. After having helped a mid-sized company migrate some core functions from DC to cloud, I agree with your startup advice.

taylodl · 3 years ago
You are correct, nothing stops us from taking our terrible on-prem practices and applying them to the cloud except for one thing: it will be more obvious that we screwed the pooch, because they let some renegades in before they were able to nail everything down. Now they can't hide behind their gobbledygook BS when they try to apply their existing practices to the cloud. My team is respected enough that we're able to push back on their nonsense in public meetings with the suits. Simply put, I'm enjoying first-mover advantage. Also, it doesn't hurt that before joining this team I was on the Enterprise Architecture team and I literally wrote our Cloud Policy! I think that was well-played, even if I say so myself! :)
scarface74 · 3 years ago
> If, like me, you're working for a Fortune 200 company and it takes multiple ServiceNow tickets to get on-prem hardware

“I didn’t get into the cloud to avoid administering servers . I wanted to avoid server administrators”