Readit News
BackBlast commented on Bun 1.2 Is Released   bun.sh/blog/bun-v1.2... · Posted by u/ksec
nindalf · 7 months ago
I think their move away from a binary lock file to a text-based lock file in this release makes this pretty clear - they shoot first and ask questions later. The problems they've identified with the binary lock file are kinda obvious if you think about it for a bit. A strong indicator that you should think about it is that the popular package managers in other ecosystems (npm, Ruby's Bundler, Rust's Cargo) all use text-based lock files. The fact that the Bun team didn't think this through and assumed binary was better because it was faster, as if no one had considered the idea before, feels like hubris to me.

It's cool that they're doing the mainstream thing now, but it's something for them to think about.

BackBlast · 7 months ago
Or they think extensively about performance.

Regardless, the switch shows they pay attention and are willing to change.
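
For what it's worth, a rough sketch of the practical win (Python, with a made-up lockfile snippet, not Bun's actual format): a version bump in a text lockfile shows up as a readable diff, while a binary lockfile is an opaque blob to git and reviewers.

    # Made-up lockfile entries, for illustration only (not Bun's actual format).
    import difflib

    old = ["left-pad@1.3.0:",
           "  resolved: https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz"]
    new = ["left-pad@1.3.1:",
           "  resolved: https://registry.npmjs.org/left-pad/-/left-pad-1.3.1.tgz"]

    # A text lockfile diffs line by line; a binary one just shows "files differ".
    for line in difflib.unified_diff(old, new, fromfile="lockfile (old)",
                                     tofile="lockfile (new)", lineterm=""):
        print(line)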

BackBlast commented on The Future of Htmx   htmx.org/essays/future/... · Posted by u/polyrand
vlz · 8 months ago
> I don't want to bring politics into this, but I find the hype behind HTMX eerily similar to the populist vibes I get reading what their voters write.

Funnily enough, I get quite irrationally conservative vibes the other way around. People clinging to their tools. All this complexity I learned must have been for a reason! We are building webapps this way for a reason (and that reason is valid for absolutely everybody and their use case)! "You youngsters don't remember the bad old days when everything was made of spaghetti, never again!"

Fortunately this is not a struggle for democracy but only a quibble in web development.

Your abstractions live either in the frontend or in the backend. For most cases, either will be quite fine.

BackBlast · 8 months ago
> All this complexity I learned must have been for a reason!

It doesn't have to be so emotional.

Htmx can be helpful for keeping all your state in one place, which is simpler to reason about and change. Lower cognitive load for the system is better for smaller teams, and particularly for lone developers.

You can accomplish the same thing by going full front end and minimizing backend code, with a similarly small library that takes care of most or all of it for you.

Living in the front end, with all app state in the front end, has distinct advantages: lower latency for interaction, lower server costs, offline capability. It has some cons, like slower initial render. And if you don't like JavaScript... JavaScript.
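
A minimal sketch of what the htmx model looks like in practice, assuming a Flask backend (illustrative, not from the htmx docs): the server owns the state and returns an HTML fragment that htmx swaps into the page, so there is no client-side state to keep in sync.

    # Minimal sketch assuming Flask 2.x; endpoint and markup are illustrative.
    from flask import Flask

    app = Flask(__name__)
    count = 0  # all application state lives here, in one place

    @app.post("/increment")
    def increment():
        global count
        count += 1
        # Client side would be just: <button hx-post="/increment" hx-target="#count">
        return f'<span id="count">{count}</span>'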

BackBlast commented on Why we use our own hardware   fastmail.com/blog/why-we-... · Posted by u/nmjenkins
0xbadcafebee · 9 months ago
I've been doing this job for almost as long as they have. I work with companies that do on-prem, and I work with companies in the cloud, and both. Here's the low down:

1. The cost of the server is not the cost of on-prem. There are so many different kinds of costs that aren't just monetary. ("we have to do more ourselves, including planning, choosing, buying, installing, etc.") Those are tasks that require expertise (which 99% of "engineers" do not possess at more than a junior level), and time, and staff, and correct execution. They are much more expensive than you will ever imagine. Doing any of them wrong will cause issues that will eventually cost you business (customers fleeing or avoiding you). That's much worse than a line-item cost.

2. You have to develop relationships for good on-prem. In order to get good service in your rack (assuming you don't hire your own cage monkey), in order to get good repair people for your hardware service accounts, in order to ensure when you order a server that it'll actually arrive, in order to ensure the DC won't fuck up the power or cooling or network, etc. This is not something you can just read reviews on. You have to actually physically and over time develop these relationships, or you will suffer.

3. What kind of load you have and how you maintain your gear is what makes a difference between being able to use one server for 10 years, and needing to buy 1 server every year. For some use cases it makes sense, for some it really doesn't.

4. Look at all the complex details mentioned in this article. These people go deep, building loads of technical expertise at the OS level, hardware level, and DC level. It takes a long time to build that expertise, and you usually cannot just hire for it, because it's generally hard to find. This company is very unique (hell, their stack is based on Perl). Your company won't be that unique, and you won't have their expertise.

5. If you hire someone who actually knows the cloud really well, and they build out your cloud env based on published well-architected standards, you gain not only the benefits of rock-solid hardware management, but benefits in security, reliability, software updates, automation, and tons of unique features like added replication, consistency, availability. You get a lot more for your money than just "managed hardware", things that you literally could never do yourself without 100 million dollars and five years, but you only pay a few bucks for it. The value in the cloud is insane.

6. Everyone does cloud costs wrong the first time. If you hire somebody who does have cloud expertise (who hopefully did the well-architected buildout above), they can save you 75% off your bill, by default, with nothing more complex than checking a box and paying some money up front (the same way you would for your on-prem server fleet). Or they can use spot instances, or serverless. If you choose software developers who care about efficiency, they too can help you save money by not needing to over-allocate resources, and right-sizing existing ones. (Remember: you'd be doing this cost and resource optimization already with on-prem to make sure you don't waste those servers you bought, and that you know how many to buy and when)

7. The major takeaway at the end of the article is "when you have the experience and the knowledge". If you don't, then attempting on-prem can end calamitously. I have seen it several times. In fact, just one week ago, a business I work for had three days of downtime, due to hardware failing, and not being able to recover it, their backup hardware failing, and there being no way to get new gear in quickly. Another business I worked for literally hired and fired four separate teams to build an on-prem OpenStack cluster, and it was the most unstable, terrible computing platform I've used, that constantly caused service outages for a large-scale distributed system.

If you're not 100% positive you have the expertise, just don't do it.

BackBlast · 8 months ago
> 7. ... Another business I worked for literally hired and fired four separate teams to build an on-prem OpenStack cluster, and it was the most unstable, terrible computing platform I've used, that constantly caused service outages for a large-scale distributed system.

I've seen similarly unstable cloud systems. It's generally not the tool's fault; it's the skill of the wielder.

BackBlast commented on Why we use our own hardware   fastmail.com/blog/why-we-... · Posted by u/nmjenkins
0xbadcafebee · 9 months ago
The bigger cost is what will happen to your business when you're hard-down for a week because all your SQL servers are down, and you don't have spares, and it will take a week to ship new servers and get them racked. Even if you think you could do that very fast, there is no guarantee. I've seen Murphy's Law laugh in the face of assumptions and expectations too many times.

But let's not just make vague claims. Everybody keeps saying AWS is more expensive, right? So let's look at one random example: the cost of a server in AWS vs buying your own server in a colo.

  AWS:
    1x c6g.8xlarge (32-vCPU, 64GB RAM, us-east-2, Reserved Instance plan @ 3yrs)
       Cost up front: $5,719
       Cost over 3 years: $11,437 ($158.85/month + $5,719 upfront)

  On-prem:
    1x Supermicro 1U WIO A+ Server (AS -1115SV-WTNRT), 1x AMD EPYC™ 8324P Processor 32-Core 2.65GHz 128MB Cache (180W), 2x 32GB DDR5 5600MHz ECC RDIMM Server Memory, 2x 240GB 2.5" PM893 SATA 6Gb/s Solid State Drive (1 x DWPD), 3 Years Parts and Labor + 2 Years of Cross Shipment, MCP-290-00063-0N - Supermicro 1U Rail Kit (Included), 2 10GbE RJ45 Ports : $4,953.40
    1x Colo shared rack 1U 2-PS @ 120VAC: $120/month (100Mbps only)
      Cost up front: $4,953.40 (before shipping & tax)
      Cost over 3 years: $9,273 (minimum)
So, yes, the AWS server is double the cost (not an order of magnitude) of a Supermicro (and this varies depending on configuration). But with colocation fees, remote hands fees, faster internet speeds, taxes, shipping, and all the rest of the nickel-and-diming, the cost of a single server in a colo is almost the same as AWS. Switch to a full rack, buy the networking gear, remote hands gear, APCs, etc. that you'll probably want, and it's way, way more expensive to colo. In this one example.
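
Putting the same arithmetic in a few lines (the figures are the ones quoted above; shipping, tax, and remote hands are left out):

    months = 36

    aws_upfront, aws_monthly = 5_719, 158.85          # c6g.8xlarge, 3-yr reserved
    aws_total = aws_upfront + aws_monthly * months    # ~= the $11,437 quoted above

    colo_server, colo_monthly = 4_953.40, 120         # Supermicro box + shared-rack 1U
    colo_total = colo_server + colo_monthly * months  # ~= the $9,273 quoted above

    print(f"AWS ${aws_total:,.0f} vs colo ${colo_total:,.0f} "
          f"({aws_total / colo_total:.2f}x before the nickel-and-diming)")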

Obviously, it all depends on a huge number of factors. Which is why it's better not to just take the copious number of "we do on-prem and everything is easy and cheap" stories at face value. Instead one should do a TCO analysis based on business risk, computing requirements, and the non-monetary costs of running your own micro-datacenter.

BackBlast · 9 months ago
> The bigger cost is what will happen to your business when you're hard-down for a week because all your SQL servers are down, and you don't have spares, and it will take a week to ship new servers and get them racked. Even if you think you could do that very fast, there is no guarantee. I've seen Murphy's Law laugh in the face of assumptions and expectations too many times.

Let's ignore the loaded, cherry-picked situation of no redundancy, no spares, and no warranty service. As if all of this magically became hard once cloud providers appeared, even though many of us did this, and have kept doing it, for years...

There is nothing stopping an on-prem user from renting a replacement from a cloud provider while waiting for hardware to show up. That's a good logical use case for the cloud we can all agree upon.

Next, your cost comparison isn't very accurate. One is isolated, dedicated hardware; the other is shared. Junk fees such as egress, IPs, charges for access to metal instances, IOPS provisioning for a database, etc. will infest the AWS side. And the performance of SAN vs. local SSD is night and day for a database.
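
Purely as an illustration of how those add up (the per-unit rates below are placeholders, not quoted AWS prices; substitute current list prices):

    # Placeholder rates for illustration only; check current AWS pricing.
    egress_gb, egress_rate = 2_000, 0.09     # GB out to the internet, $/GB
    extra_iops, iops_rate = 10_000, 0.005    # provisioned IOPS, $/IOPS-month
    public_ips, ip_rate = 2, 3.65            # public IPv4 addresses, $/IP-month

    extras = egress_gb * egress_rate + extra_iops * iops_rate + public_ips * ip_rate
    print(f"~${extras:,.0f}/month on top of the instance itself")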

Finally, I could acquire hardware at that performance level much more cheaply if I wanted to; an order-of-magnitude difference is plausible, and depends more on where it's located, colo costs, etc.

BackBlast commented on After 3 Years, I Failed. Here's All My Startup's Code   dylanhuang.com/blog/closi... · Posted by u/gniting
gortok · 9 months ago
> it is freaking hard to commit to 3-5 years of virtually no salary to maybe get to the same outcome you would have had with a funded business.

I agree and I disagree.

It is hard to bootstrap a product. It's even harder to bootstrap a product that folks want to buy. It's even harder still when the prevailing wisdom on this (and other) tech sites is to go the VC-funded route.

The VC-funded route -- for the vast, vast, vast majority of software businesses -- ends up being the exact same as the bootstrapped route, except that you lose one avenue when you find out your business isn't "hyper-growth" or isn't going to be as huge as you claimed in your pitch deck to the VCs. You lose the ability to pare back and be what is described as a "lifestyle business". On failure, the business gets sold or stripped for parts, unless the founder can somehow get the VCs to agree to let them 'buy it back' or write off their investment and give it back.

Bootstrapping means taking that risk on yourself; but it also means control over your options, and that is one fundamental strength to bootstrapping you don't get with VC funded startups.

Absolutely, it's no salary or the aptly named ramen profitability for a long time if your marketing is not aligned with the folks that will buy your software, or if your software really is just selling a solution to a problem no one actually cares about.

The 'hard part' isn't the engineering. It isn't the technology. The hard part is the marketing -- the connecting the hopefully expensive problem you solve to the right folks who want to buy that solution.

To your second part, I wholly agree that selling $10/month licenses is not a viable way forward if you want to be anything more than a solopreneur.

But to do that, you need to hone your positioning so that you get in front of the folks with money who need to solve the problem your solution solves.

In your case, it looks like you run a web-auditing tool (according to your bio) called caido.io; and it looks like you're targeting basically everyone who needs to audit a website.

In the thought that "there are more fish in the ocean so why not fish there", that is a seemingly sound idea.

But you don't really want to spend your time trying to fish in the ocean if you have a barrel you could fish in and get the same result... dinner. (I did not come up with this metaphor, that was Jonathan Stark -- who writes a lot about positioning in this context).

The question you have to ask yourself is, are you positioning your product so that the CISOs or the large cyber-security firms would want to buy it? And if you did, do you think they'd trust your product at a mere $25 per month?

Anyway, the point of all this is that the problem is learning how to position the thing you build and get it in front of the right folks. That's a marketing problem, not a technology problem, and it's something that we as engineers have collectively ignored for far too long.

I wish you the best of luck with your product -- we need more small software businesses in this world!

BackBlast · 9 months ago
> The 'hard part' isn't the engineering

Depends on the problem. But I don't find a lot of companies that are all marketing with a bare cupboard of an engineering department. They exist, but they are not universal. Also, most companies that are in this state today have shifted to it from one where product development, engineering included, was actually at least competent.

If you find marketing the hardest part (and most people here probably will), you are likely an engineer first and foremost.

You need a good enough product, and you need it in front of the right buyers. Both can be significant obstacles to creating a business.

BackBlast commented on Why Companies Are Ditching the Cloud: The Rise of Cloud Repatriation   thenewstack.io/why-compan... · Posted by u/panrobo
cyberax · 10 months ago
> Cloud providers can also over-provision

But they don't. AWS overprovisions only on burstable T-type instances (T2/T3/T4g). The rest of the instance types don't share cores or memory between tenants.

I know, I worked with the actual AWS hardware at Amazon :) AWS engineers have always been pretty paranoid about security, so they limit hardware sharing between tenants as much as possible. For example, AWS had been strictly limiting hyperthreading and cache sharing even before Spectre/Meltdown.

AWS doesn't actually charge any premium for the bare metal instance types (the ones with ".metal" in the name). They just cost a lot because they are usually subdivided into many individual VMs.

For example, c6g.metal is $2.1760 per hour, and c6g.16xlarge is the same $2.1760. c6g.4xlarge is $0.5440
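
A quick sanity check of that, using the quoted prices and each size's vCPU count:

    # Prices quoted above; vCPU counts for the c6g family.
    prices = {"c6g.metal": (2.1760, 64),
              "c6g.16xlarge": (2.1760, 64),
              "c6g.4xlarge": (0.5440, 16)}

    for name, (hourly, vcpus) in prices.items():
        print(f"{name:13s} ${hourly / vcpus:.4f} per vCPU-hour")
    # All three come out to the same $0.0340/vCPU-hour: no bare-metal premium.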

> And lots of other more esoteric stuff.

Not really. They had some plans for more esoteric stuff, but anything more complicated than EC2 Spot doesn't really have market demand.

Customers prefer stability. EC2 and other foundational services like EBS and VPC are carefully designed to stay stable if the AWS control plane malfunctions ("static stability").

BackBlast · 10 months ago
Seems par for the course that even AWS employees don't understand their own pricing. I noticed the pricing similarity and tried to deploy to .metal instances. And that's when I got hit with additional charges.

If you turn on a .metal instance, your account will be billed (at least) $2/hr for the privilege, in every region in which you do so. A fact I didn't know until I had racked up more charges than expected. So many junk fees hiding behind every checkbox on the platform.

BackBlast commented on SSDs have become fast, except in the cloud   databasearchitects.blogsp... · Posted by u/greghn
kstrauser · 2 years ago
I'm not certain that's true if you look at TCO. Yes, you can probably buy a server for less than the yearly rent on the equivalent EC2 instance. But then you've got to put that server somewhere, with reliable power and probably redundant Internet connections. You have to pay someone's salary to set it up and load it to the point that a user can SSH in and configure it. You have to maintain an inventory of spares, and pay someone to swap it out if it breaks. You have to pay to put its backups somewhere.

Yeah, you can skip a lot of that if your goal is to get a server online as cheaply as possible, reliability be damned. As soon as you start caring about keeping it in a business-ready state, costs start to skyrocket.

I've worn the sysadmin hat. If AWS burned down, I'd be ready and willing to recreate the important parts locally so that my company could stay in business. But wow, would they ever be in for some sticker shock.

BackBlast · 2 years ago
> I'm not certain that's true if you look at TCO.

Sigh. This old trope from ancient history in internet time.

> Yes, you can probably buy a server for less than the yearly rent on the equivalent EC2 instance.

Or a monthly bill... I can oftentimes buy a higher-performing server for the cost of a single month's rental.

> But then you've got to put that server somewhere, with reliable power and probably redundant Internet connections

Power:

The power problem is a lot smaller with modern systems because they use a lot less of it per unit of compute/memory/disk performance. Idle power has improved a lot too. You don't need 700 watts of server power anymore for a 2-socket, 8-core monster that is outclassed by a modern $400 mini-PC that maxes out at 45 watts.

You can buy server-rack batteries now, in a modern chemistry that'll go 20 years with zero maintenance. A 4U, 5 kWh unit costs $1,000-1,500. EVs have pushed battery costs down a LOT. How much do you really need? Do you even need a generator if your battery just carries the day? Even if your power reliability totally sucks?
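
Back-of-the-envelope, using the numbers above (one 5 kWh rack battery, a ~45 W box):

    battery_wh = 5_000   # one 4U rack battery, 5 kWh
    load_w = 45          # modern low-power server / mini-PC under load
    print(f"{battery_wh / load_w:.0f} hours of runtime")  # ~111 hours, roughly 4-5 days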

Network:

It's never been easier to buy network transfer. Fiber is available in many places, even cable speeds are well beyond what they used to be, and there's Starlink if you want to be fully resistant to local outages. Sure, get two vendors for redundancy. Then you can hit cloud-style uptimes out of your closet.

Overlay networks like Tailscale put the networking issues within the reach of almost anyone.

> Yeah, you can skip a lot of that if your goal is to get a server online as cheaply as possible, reliability be damned

Google cut its teeth with cheap consumer-class white-box computers when the "best practice" of the day was to buy expensive server-class hardware. It's a tried and true method of bootstrapping.

> You have to maintain an inventory of spares, and pay someone to swap it out if it breaks. You have to pay to put its backups somewhere.

Have you seen the size of M.2 sticks? Memory sticks? They aren't very big... I happen to like opening up systems and actually touching the hardware I use.

But yeah, if you just can't make it work, or can't be bothered in the modern era of computing, then stick with the cloud and the 10-100x premium they charge for their services.

> I've worn the sysadmin hat. If AWS burned down, I'd be ready and willing to recreate the important parts locally so that my company could stay in business. But wow, would they ever be in for some sticker shock.

Nice. But I don't think it costs as much as you think. If you run apps on the stuff you rent and then compare it to your own hardware, it's night and day.

BackBlast commented on SSDs have become fast, except in the cloud   databasearchitects.blogsp... · Posted by u/greghn
malfist · 2 years ago
I keep hearing that, but that's simply not true. SSDs are fast, but they're several orders of magnitude slower than RAM, which is orders of magnitude slower than CPU Cache.

Samsung 990 Pro 2TB has a latency of 40 μs

DDR4-2133 with CAS 15 has a latency of 14 nanoseconds.

DDR4 latency is 0.035% of one of the fastest SSDs, or to put it another way, DDR4 is 2,857x faster than an SSD.

L1 cache is typically accessible in 4 clock cycles; in a 4.8 GHz CPU like the i7-10700, L1 cache latency is sub-1 ns.

BackBlast · 2 years ago
You're missing the purpose of the cache. At least for this argument it's mostly for network responses.

HDDs were ~10 ms, which was noticeable for a cached network request that needs to go back out on the wire. They were also bottlenecked by IOPS; after 100-150 IOPS you were done. You could do a bit better with RAID, but not by the 2-3 orders of magnitude you really needed to be an effective cache. So it just couldn't work as a serious cache; the next step up was RAM. This is the operational environment in which Redis and similar memory caches evolved.

40 µs latency is fine for caching. Even the 500-600 µs latency under high load is fine for the network-request cache use case. You can buy individual drives with >1 million read IOPS, plenty for a good cache. HDDs couldn't fit the bill for the above reasons. RAM is faster, no question, but RAM's lower latency relative to the SSD isn't really helping performance here, because network latency dominates.

A RailsConf 2023 talk mentions this. They moved from a memory-based cache system to an SSD-based one: for the system in question, the Redis (RAM-based) cache latency was 0.8 ms and the SSD-based cache was 1.2 ms. Which is fine. It saves you a couple of orders of magnitude on cost, and you can do much larger and more aggressive caching with the extra space.

Oftentimes these RAM cache servers are a network hop away anyway, or at least a loopback TCP request, which makes the comparison of SSD latency to RAM latency largely irrelevant.
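
Rough latency budget using the figures from this thread, plus an assumed WAN round trip (the 30 ms is an assumption, not from the talk):

    client_rtt_ms = 30.0   # assumed round trip to the end user
    caches = {"RAM cache (Redis hop)": 0.8,   # figure quoted above
              "SSD cache (local)":     1.2}   # figure quoted above

    for name, cache_ms in caches.items():
        total = client_rtt_ms + cache_ms
        print(f"{name}: {total:.1f} ms total, {cache_ms / total:.1%} spent in the cache")
    # Both end up around 31 ms; the wire, not the cache medium, dominates.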

BackBlast commented on Slashing data transfer costs in AWS   bitsand.cloud/posts/slash... · Posted by u/danielklnstein
hobs · 2 years ago
Managing on-prem is definitely harder because you lose the economies of scale on all the management problems, which you now have to pay for yourself; and if you don't have scale, then you will be significantly overpaying to get the same quality, reliability, or responsiveness.

Most people are not paid to manage infra; they are paid to talk to customers, ship features, fix bugs, and do other "core business" work, just like most businesses don't build roads; they pay taxes and use the public ones, because the cost of building their own for their preferred traffic patterns would be far more than they could justify (for now).

BackBlast · 2 years ago
If you don't have scale, you don't need most of the features. Fire up a PC, load the application. Set up an egress port open to the internet. Set up an application backup on a cron job. Done, until scale problems arise.
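
For what it's worth, a minimal sketch of that backup step (paths, destination, and schedule are hypothetical):

    # Nightly backup sketch; run from cron, e.g.  0 3 * * * /usr/bin/python3 /srv/backup.py
    import datetime, pathlib, subprocess

    APP_DIR = pathlib.Path("/srv/app")        # hypothetical application data
    DEST = "backup@offsite:/backups/"         # hypothetical rsync target

    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = f"/tmp/app-{stamp}.tar.gz"
    subprocess.run(["tar", "czf", archive, str(APP_DIR)], check=True)
    subprocess.run(["rsync", "-a", archive, DEST], check=True)
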
BackBlast commented on Slashing data transfer costs in AWS   bitsand.cloud/posts/slash... · Posted by u/danielklnstein
overstay8930 · 2 years ago
Cloud engineers can do the job of 4-5 on-prem people. Our AWS devs don't need to be BGP or ZFS experts; they just need to be AWS experts.

BackBlast · 2 years ago
2015 called and wants its hype back.
