anovikov · 2 years ago
Clouds are a scam and are designed for:

- fools who can't do simple math or read fine print ("I have no idea how much twenty $2-per-hour instances pumping data at 10 Mbit/s add up to in a month, and OMG, that data isn't free when I already pay for the instance?! Besides, they give me a whopping $100,000 credit - that will last an eternity!")

- corporate tricksters ("if we don't invest in our own hardware and buy AWS instead, our next quarter's bottom line will look GREAT and I'll get a nice bonus, and by the time the truth comes out, I'll have jumped ship to the next shop, where I'll pull the same trick")

- people with gaps in basic logic and a total lack of foresight ("I can't afford to buy all the hardware for my small pet startup, so I'll make do with just $200 a month on AWS, not realizing this only works as long as my startup is unsuccessful and has no users - and once that's no longer the case, I'll be vendor-locked into AWS-based tech solutions, with petabytes of data locked up there at $0.05 per GB to download, bleeding money for years").
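The numbers in these hypotheticals can be sanity-checked with back-of-envelope arithmetic (the egress price below is an illustrative assumption, not a quoted rate; the $2/hour and $0.05/GB figures come from the comment itself):

```python
# Back-of-envelope cloud cost math (illustrative prices, not quotes).
HOURS_PER_MONTH = 730

# 20 instances at $2/hour, running continuously:
instance_cost = 20 * 2 * HOURS_PER_MONTH  # $29,200/month

# Each instance pushing 10 Mbit/s of egress, all month long:
gb_per_instance = 10 / 8 / 1000 * 3600 * HOURS_PER_MONTH  # 3,285 GB
egress_cost = 20 * gb_per_instance * 0.09  # at an assumed $0.09/GB

# Pulling a petabyte back out at $0.05/GB:
exit_cost = 1_000_000 * 0.05  # $50,000 per PB

print(f"instances: ${instance_cost:,.0f}/mo")
print(f"egress:    ${egress_cost:,.0f}/mo")
print(f"data exit: ${exit_cost:,.0f} per PB")
```

So the egress charge alone can rival the price of a decent dedicated server, before a single instance-hour is billed.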

They should be avoided at all costs except for development purposes, and if you don't know how to or can't afford to do something without clouds, you just don't know how to do it or can't afford it.

In Europe, none of my clients use clouds. They have dedicated setups with reputable providers that work a lot better than cloud-based ones and cost pennies. I'll also note that my custom software development biz doesn't really work with EU clients: I barely make a profit with them, and they can be a real pain. That probably suggests the educational level in Europe is a lot higher.

jesterson · 2 years ago
Doubt anyone would disagree on that, except those who are very new to the industry.

Just like banks, insurance companies and the like are pure scams, but that's a totally different topic.

anovikov · 2 years ago
Don't have much experience with insurance companies. I only use them for mandatory things like health insurance, corporate liability insurance, and car insurance, all of which come so cheap here in Europe that frankly idgaf whether they are indeed scams. Plus, my insurer paid a nice amount to the other side when I got into a drunk-driving accident 13 years ago, and I've never had a problem with them.

As for banks, what's the problem with them? They can be a bit of a pain because of KYC, but otherwise, what's wrong with them?

jmarchello · 2 years ago
You should start with a single monolithic application on a single server that you scale vertically as much as possible before even thinking of scaling horizontally. Most apps won’t ever need architecture more complex than this.
necovek · 2 years ago
One thing to remember is that SOA solves two problems: one of organizational scalability, and another of product scalability (with the usual caveat of "if done well").

Monoliths and traditional databases can take a beating before you need something else. It's trickier for rapid growth organizations where you are trying to take on many new members, but there are other solutions there too.

I'd also note that traditional web monoliths really have multiple services too (usually a reverse proxy + CDN, web application, and a data store). There is plenty of business logic on each of these layers too (this also explains the traditional split between Ops, Dev and DBAs), and this actually allows the setup to scale big.

jmarchello · 2 years ago
> organizational scalability

I’m also of the opinion that most engineering/product orgs are extremely bloated and could move much faster with higher quality if they were cut 50-90%.

mattbillenstein · 2 years ago
I like this - have started to think more this way, but I'd almost always deploy three boxes instead of one. I like the flexibility of having something that can auto-failover should an AZ, instance, or disk die.

That being said, I've seen VMs with multiple years of uptime on various clouds, so ymmv.

jmarchello · 2 years ago
Yeah redundancy and HA are good practices. My point in spirit is more that we usually don’t need 90% of the complexity.
necovek · 2 years ago
Making proper use of functional (stateless) paradigm in non-functional languages embodies a bunch of other good practices (testability, isolation, dependency inversion...).
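As a small illustration of that point (the names here are invented for the example), a stateless function with its dependencies passed in explicitly is trivially testable, where an equivalent stateful object with a hidden clock dependency is not:

```python
from datetime import datetime, timezone

# Stateful version: hidden dependency on the wall clock,
# so a test's result depends on when it runs.
class GreeterStateful:
    def greet(self, name):
        hour = datetime.now(timezone.utc).hour
        return f"Good {'morning' if hour < 12 else 'evening'}, {name}"

# Stateless version: the time is an explicit input (dependency
# inversion), so tests can pass a fixed time and assert exactly.
def greet(name, now):
    return f"Good {'morning' if now.hour < 12 else 'evening'}, {name}"

assert greet("Ada", datetime(2024, 1, 1, 9)) == "Good morning, Ada"
assert greet("Ada", datetime(2024, 1, 1, 21)) == "Good evening, Ada"
```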

Refactoring can always be done on a running (no-downtime) system at no extra cost (time or money) compared to a rewrite or a downtime-requiring approach.

You can always deliver user value, and "paying down technical debt" can and should be done as part of regular work (corollary of the above: at no extra cost).

We'll never do away with physical keyboards for inputting text (yet I only have one mechanical keyboard I don't even use regularly :).

amatecha · 2 years ago
So many.

"AI" is the dotcom bubble (notice how every big company HAS to get in on it, no matter how ridiculous their application is?). Further, it will simply allow those who wield power over others to do so in an even more egregious and deeply-reaching way.

Advertising should be illegal.

Proprietary software is basically always a trap (if it's not harmful or coercive at first, it eventually will be, well after you're locked in).

The web has been ruined by turning it into an operating system (also see "advertising should be illegal"). 99% of the time I just want very lightly-styled text, and some images. I don't need (or want) animated, drop-shadowed buttons.

Graphical OS user experience was basically "solved" 30 years ago and there hasn't been much of anything novel since -- in fact, in terms of usability, most newer OSes are far worse to use than, say, Macintosh System 7 (assuming you like a GUI for your OS). The always-online forced updates of modern OSes exacerbate their crappiness -- constant change and thus cognitive load, disrespectfully changing how things work despite how much effort you spent familiarizing yourself with them.

amatecha · 2 years ago
haha, how timely for me to say "proprietary software is basically always a trap"! https://www.reuters.com/technology/cybersecurity/governments... (yeah, I know the same sort of thing can/does happen with FOSS stuff ;) )
_kb · 2 years ago
If these are things that few people agree with, well count me among the few.
gary_0 · 2 years ago
HTML, and retained-mode GUIs and DOMs generally, are all you need. Anything more complex is over-engineering. JavaScript was, broadly speaking, a mistake. 90% of what we need computers to do is some I/O and putting text, colored rectangles, and JPEGs/WEBMs on a screen, and that shouldn't be that complicated.

A lot of good things about the way we wrote websites and native applications back in the early 2000s were babies thrown out with the bathwater. That's why we can't seem to do what we could do back then anymore -- at least not without requiring 4x as many people, 3x as much time, and 20x more computing power.

(Maybe more than a few people on HN will agree with this, now that I think of it...)

jmarchello · 2 years ago
This site (and its popularity) are great examples of this. I very much agree.
amatecha · 2 years ago
Oh heck yeah! Fist bump from me, I wrote something fairly similar, haha :)
caprock · 2 years ago
Almost all software best practices and programming idioms are just shared, personal preferences and not objectively valuable.
RetroTechie · 2 years ago
User time is more valuable than programmer time. Read: programmers should operate as if CPU cycles, RAM, disk space, etc. are precious. Less = more.

Why? If a programmer builds something only for themselves, or a few of their peers, it really doesn't matter. Do as you like. But be aware that one-off / prototype != final product.

The commonly held view is that programmers are a small % of the population, so their skills are rare (valuable); thus, if programmer time can be saved by wasting some user CPU cycles, RAM, etc. (scripting languages, I'm looking at you!), so be it. Optimize only if necessary.

BUT! Ideally, the programming is only done once. If software is successful, it will be used by many users, again & again over a long time.

The time / RAM / storage wasted over the many runs of such software (not to mention bugs), across many users, outweighs any saving in programmer time.
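The trade-off is easy to quantify with made-up but plausible numbers (every figure below is a hypothetical, purely to illustrate the shape of the argument):

```python
# Hypothetical numbers, purely illustrative.
users = 100_000
runs_per_user = 250        # roughly once per workday for a year
extra_seconds_per_run = 2  # sluggishness vs. an optimized build

user_hours_lost = users * runs_per_user * extra_seconds_per_run / 3600
optimization_hours = 160   # one programmer-month to fix it

print(f"user time lost:  {user_hours_lost:,.0f} hours")
print(f"programmer time: {optimization_hours} hours")
```

Even at these modest numbers, the aggregate user time lost is roughly two orders of magnitude larger than the programmer time needed to fix it.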

In short: fine, kick out a prototype or something duct-taped from inefficient components.

But if it catches on: optimize / re-design / simplify / debug / verify the heck out of it, to the point where no CPU cycle or byte can be taken out without losing the core functionality.

The existing software landscape is too much duct tape: compute-expensive but never-used features, and inefficient, RAM-gobbling, bug-ridden crap that should never have been released.

And just because the developer has a beefy machine doesn't mean users do.

didgetmaster · 2 years ago
I have always been of the opinion that no software should ever be released until the entire development team has spent at least a week personally running it on ten-year-old hardware. Nothing motivates a programmer to optimize their code more than experiencing the same pain that users without the beefiest hardware have to endure.
muzani · 2 years ago
"running it on ten year old hardware"

This doesn't work for mobile though :p

iOS versions are obsolete within 2 years and Android within 5.

Generally, Android devs tend to have at least one Huawei, Xiaomi, or low end Samsung, because these break a lot and hold a good share of the non-American market.

However, the high-end phones have their own pain: notches, folds, edge screens. I've built apps that didn't function well because the edge screens meant buttons needed extra side padding; otherwise they'd fall off the screen. These are also the devices used by investors and in demos, so high-end phones are often a higher priority than the low-end ones.

There are some problems that have nothing to do with device age. Samsung Gallery is one of the top image/file picker apps in the world, and it shows weird behavior once you exceed 2,000 images or so. I ended up hacking together a file/image picker that was more optimized than Samsung's, and that's why you see many apps defaulting to their own internal file/image pickers.

mikewarot · 2 years ago
We will eventually adopt Capability Based Security out of necessity. Until then you really can't trust computers. I think it's still at least a decade away.

WASM is as close as we've been since Multics. Genode is my backup plan, should someone manage to force POSIX file access in order to "improve" or "streamline" WASM.