Readit News
chipdart commented on Ask HN: How can I grow as an engineer without good seniors to learn from?    · Posted by u/prathameshgh
poisonborz · a year ago
Craving a team of more senior people to learn from, "yodas" who guide you, is a fallacy. I hear younger people mention it in interviews constantly when describing what their dream team would look like.

You can learn from everyone around you, regardless of their status. There is no "universal developer experience curve"; everyone has more or less knowledge in a field or with a specific tool/framework.

You can learn almost everything alone - I mean learning from the web. There are great forums, groups, and Discord chats, and you can ask LLMs carefully and check their answers. It may sound reassuring that someone watches your back and won't allow mistakes or would help clean up a mess, but you should not keep relying on this anyway. Learning by doing and taking responsibility will make you much more self-assured, which is actually most of what makes someone senior.

chipdart · a year ago
> You can learn from everyone around you, regardless of their status. There is no "universal developer experience curve", everyone has more or less knowledge on a field or with a specific tool/framework.

There's a big difference between learning from someone and having someone teach you something. The latter expedites your progress and clarifies your learning path, whereas the former can even waste your time, with political fights pulling you into dead ends.

chipdart commented on Ask HN: How can I grow as an engineer without good seniors to learn from?    · Posted by u/prathameshgh
crawshaw · a year ago
It is entirely possible to learn by yourself online. I did it! But a warning: you will spend years in the weeds, focusing on things that don't matter. Good in-person advisors can help you avoid some of those years of wasted time.

The fastest way to learn is to move to the Bay Area and work with people who have been doing this for decades. That is not a sufficient criterion (anyone can be a bad teacher), but the experience is extremely useful.

chipdart · a year ago
> But a warning: you will spend years in the weeds, focusing on things that don't matter.

That sums up anyone's college experience.

The hard part is telling apart what doesn't matter from what does. More often than not, what dictates which is which is the project you find yourself working on.

chipdart commented on Cursed Linear Types in Rust   geo-ant.github.io/blog/20... · Posted by u/todsacerdoti
logicchains · a year ago
>One usage pattern many of us have found useful in our software for years is the "set once and only once" (singleton-ish)

C# has this: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

chipdart · a year ago
> C# has this:

This is only syntactic sugar that allows object initializers to initialize specific member variables of a class instance instead of simply using a constructor and/or setting member variables in follow-up statements. It's hardly the feature OP was describing.

chipdart commented on Cursed Linear Types in Rust   geo-ant.github.io/blog/20... · Posted by u/todsacerdoti
MrMcCall · a year ago
The problem is that programming languages have always focused on the definition side of types, which is absolutely necessary and good; but limiting use only by, e.g., "protected, private, friend, internal, ..." on class members, as well as the complicated ways we can limit inheritance, is barely useful.

We need a way to define how to "use" the types we define. That definitional structure is going to bleed from creation of instances into how they live out their lifetimes. It appears that Rust's design addresses some aspects of this dimension, and it also appears to be a fairly major point of contention among y'all, or at least require a steepish learning curve. I don't know, as I prefer to work in ubiquitous environments that are already featureful on 5-10yo distros.

One usage pattern many of us have found useful in our software for years is the "set once and only once" (singleton-ish), whether it's for a static class member or a static function var, or even a db table's row(s). I don't know of any programming environment that facilitates properly specifying the calculation of something even that basic in the init phase of running the system, but I don't explore new languages so much anymore, none of them being mature enough to rely upon. Zig's comptime stuff looks promising, but I'm not ready to jump onto that boat just yet. I am, however, open to suggestions.

The real solution will ultimately require a more "wholistic" (malapropism intended) approach to constraining all dimensions of our software systems while we are building them out.

chipdart · a year ago
> The problem is that programming languages have always focused on the definition side of types, which is absolutely necessary and good, but the problem is that only limiting use by, e.g., "protected, private, friend, internal, ..." on class members, as well as the complicated ways we can limit inheritance, are barely useful.

Virtually all software ever developed managed just fine with that alone.

> I don't know of any programming environment that facilitates properly specifying calculating something even that basic in the init phase of running the system, (...)

I don't know what I'm missing, but it sounds like you're describing the constructor of a static object whose class only provides const/getter methods.

> or even a db table's row(s).

I don't think you're describing programming language constructs. This sounds like a framework feature that can be implemented with basic inversion of control.

chipdart commented on Why microservices might be finished as monoliths return with a vengeance   venturebeat.com/data-infr... · Posted by u/unclebucknasty
lelanthran · a year ago
> If you work on single-person "teams" maintaining something that is barely used and does not even have SLAs and can be shut down for hours then there's nothing preventing you from keeping all your eggs in a single basket.

There's a whole spectrum between that and "needs to go down for less than a minute per year". For every project/job/app that needs the AWS levels of resilience and availability, there are maybe a few 100k that don't, and none of those are the "barely-used, down for hours" type of thing either.

Having been a developer since the mid-90s, I am always fascinated by the thought that computer, server and/or network resilience is something that humanity only discovered in the last 15 years.

The global network handling payments and transactions worked with unnoticeable downtime for 30-odd years; millions of transactions per second, globally, and it was resilient enough to support that without noticeable or expensive downtime.

chipdart · a year ago
> For every project/job/app that needs the AWS levels of resilience (...)

I don't think you're framing the issue from an educated standpoint. You're confusing high availability with not designing a brittle service, i.e. paying attention to very basic things that are trivial to do. For example, supporting very basic blue-green deployments, which come practically for free with virtually any conceivable way of deploying services. You only need a reverse proxy and just enough competence to design and develop services that can run in parallel. This is hardly an issue, and in this day and age not being able to pull it off is a hallmark of incompetence.
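The reverse-proxy flip the comment describes can be sketched as an nginx config fragment. Everything here is hypothetical (ports, names): two copies of the service run side by side, and cutting traffic over is a one-line change plus a reload.

```nginx
upstream app {
    server 127.0.0.1:8001;   # "blue"  - currently live
    # server 127.0.0.1:8002; # "green" - new version; swap the comment
                             # and reload nginx to cut over
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```

No orchestrator is involved; the only requirements are that both versions can run in parallel and that the proxy can be reloaded without dropping connections.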

chipdart commented on Kubernetes on Hetzner: cutting my infra bill by 75%   bilbof.com/posts/kubernet... · Posted by u/BillFranklin
threeseed · a year ago
> Has anyone ever toyed around with the idea?

Sidero Omni have done this: https://omni.siderolabs.com

They run a Wireguard network between the nodes so you can have a mix of on-premise and cloud within one cluster. Works really well but unfortunately is a commercial product with a pricing model that is a little inflexible.

But at least it shows it's technically possible so maybe open source options exist.

chipdart · a year ago
> They run a Wireguard network between the nodes so you can have a mix of on-premise and cloud within one cluster.

Interesting.

A quick search shows that some people already toyed with the idea of rolling out something similar.

https://github.com/ivanmorenoj/k8s-wireguard

chipdart commented on Kubernetes on Hetzner: cutting my infra bill by 75%   bilbof.com/posts/kubernet... · Posted by u/BillFranklin
mhuffman · a year ago
For dedicated they say this:

>All root servers have a dedicated 1 GBit uplink by default and with it unlimited traffic.

>Inclusive monthly traffic for servers with 10G uplink is 20TB. There is no bandwidth limitation. We will charge € 1/TB for overusage.

So it sounds like it depends. I have used them for (I'm guessing) 20 years and have never had a network problem with them or a surprise charge. Of course I mostly worked in the low double digit terabytes. But have had servers with them that handled millions of requests per day with zero problems.

chipdart · a year ago
> We will charge € 1/TB for overusage.

It sounds like a good tradeoff. The monthly cost of a small vCPU is equivalent to a few TB of bandwidth.

chipdart commented on Kubernetes on Hetzner: cutting my infra bill by 75%   bilbof.com/posts/kubernet... · Posted by u/BillFranklin
juiyhtybr · a year ago
yes, like i said, throw an overlay on that motherfucker and ignore the fact that when a customer request enters the network it does so at the cloud provider, then is proxied off to the final destination, possibly with multiple hops along the way.

you can't just slap an overlay on and expect everything to work in a reliable and performant manner. yes, it will work for your initial tests, but then shit gets real when you find that the route from datacenter a to datacenter b is asymmetric and/or shifts between providers, altering site to site performance on a regular basis.

the concept of bursting into on-prem is the most offensive bit about the original comment. when your site traffic is at its highest, you're going to add an extra network hop and proxy into the mix with a subset of your traffic getting shipped off to another datacenter over internet quality links.

chipdart · a year ago
> yes, like i said, (...)

I'm sorry, you said absolutely nothing. You just sounded like you were confused and for a moment thought you were posting on 4chan.

chipdart commented on Kubernetes on Hetzner: cutting my infra bill by 75%   bilbof.com/posts/kubernet... · Posted by u/BillFranklin
chipdart · a year ago
I loved the article. Insightful, and packed with real world applications. What a gem.

I have a side-question pertaining to cost-cutting with Kubernetes. I've been musing over the idea of setting up Kubernetes clusters similar to these ones but mixing on-premises nodes with nodes from the cloud provider. The setup would be something like:

- vCPUs for bursty workloads,

- bare metal nodes for the performance-oriented workloads required as base-loads,

- on-premises nodes for spiky performance-oriented workloads, and dirt-cheap on-demand scaling.

What I believe will be the primary unknown is egress costs.

Has anyone ever toyed around with the idea?
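The three node classes listed above are usually expressed in Kubernetes with node labels plus a `nodeSelector` (or affinity rules). A minimal sketch; the label key, values, and image are hypothetical, and assume nodes were labelled beforehand, e.g. `kubectl label node metal-1 node-class=bare-metal`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: base-load
spec:
  replicas: 3
  selector:
    matchLabels:
      app: base-load
  template:
    metadata:
      labels:
        app: base-load
    spec:
      # Pin the performance-oriented base load to bare-metal nodes;
      # bursty workloads would select node-class=vcpu, and spiky
      # overflow node-class=on-prem, instead.
      nodeSelector:
        node-class: bare-metal
      containers:
        - name: app
          image: example/app:latest
```

The scheduler then keeps each workload on its intended hardware tier regardless of where the nodes physically live.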

chipdart commented on Why microservices might be finished as monoliths return with a vengeance   venturebeat.com/data-infr... · Posted by u/unclebucknasty
jfim · a year ago
Demoralized or denormalized?
chipdart · a year ago
> Demoralized or denormalized?

The database is denormalized. The developers are demoralized.

u/chipdart

Karma: 1739 · Cake day: March 20, 2024