terpans · 3 years ago
2005: your infrastructure is automated using a handful of Bash, Perl and Python scripts written by two system administrators. They are custom, sometimes brittle and get rewritten every 5 years.

2022: your infrastructure is automated using 10 extremely complex devops tools. You automated the two system administrators away - but then had to hire 5 DevOps engineers paid 2x more. The total complexity is 10x.

They wrote YAML, TOML, plus Ansible, Pulumi, Terraform scripts. They are custom, sometimes brittle and get rewritten every 3 years.

EDIT: to the people claiming that today's infra does more things... No, I'm comparing stuff with the same levels of availability, same deployment times, same security updates.

_vertigo · 3 years ago
Not pictured:

2005 your average box served some php and static assets, connecting to some generic relational database. Reading logs means grepping files over ssh.

2022 your architecture runs in the cloud, has multiple flavors of databases, queues, caches, and so on. You have at least two orders of magnitude more complexity because you aren’t just serving a web page anymore - you handle payments, integrate with other services, queue tasks for later, and so on. Your automation may be an order of magnitude more complex than 2005, but it enables two orders of magnitude more functionality.

stathibus · 3 years ago
My classic C programmer curmudgeon take is that the root of the problem, as with everything else in this industry, is bad software built on top of bad software built on top of bad software... and on it goes.

The systems in your 2022 world are hard to test and maintain because they are bad, and the tools we built to test and maintain them are largely built on the same foundational ideas and technologies, so they are even worse.

We're going to have to rip everything back down to the foundation in order to make progress beyond finger-pointing (X is DevOps but Y is software engineering and Z is IT admin).

Sebb767 · 3 years ago
The big question is: Do you need that kind of functionality? I agree that very large and complex infrastructures have their place - the problem is just that they import a ton of complexity and usually cost a lot.

People are always surprised when they see a minimal webserver instance on a ten year old Debian [0] handling tens of thousands of requests without issue. It might go down once a decade because the disk failed, but it cost $1,200 to run over that decade. I don't think that's the perfect way, but modern infrastructures love to include a lot of complexity when it's not needed. The problem is that including something is very easy and the cost is only paid once it breaks. Also, hardware is really cheap (you might not think so, but if you compare it to Western IT salaries, it is).

[0] I know outdated instances with no update strategy are not really a benchmark, but I encourage you to go out and ask people what their container base image upgrade strategy is - the situation really did not change that much.

fipar · 3 years ago
I partially agree with what you're saying, but let's not pretend we weren't handling payments in 2005; some of us were, anyway. I think what changed is the scale of things: we had a lot fewer people online back then.

I think the increased complexity in our architectures is correlated with the DevOps role coming into the picture, but I'm not sure there's a causal link there. My recollection from having lived through the initial years of DevOps (I began working professionally in 2000) is that an important goal was to reduce the terrible friction between dev and ops (the latter including sysadmins, DBAs, etc). Whatever extra complexity we have now, I would not want to go back to the days when I carried a pager and responded only to find out the problem was caused by an application I had no insight into, with no corresponding on-call person on the dev team. Another important goal was to manage infrastructure a bit closer to how we develop software, including the big step of having the configuration (or the code that generates it) in SCM. Another thing I don't miss is logging into a server to troubleshoot something and finding /etc/my.cnf, /etc/my.cnf.<some date>, /etc/my.cnf.test, etc. It's much easier to just run tig on either the file or the ansible/chef/whatever that generates it, IMHO.

tech_tuna · 3 years ago
This times one million. Back in 2005 there would have been so much shit you wouldn't even dream of doing, and a lot of things you did do, you sure as shit wouldn't do now... we used to write our own cache systems. Migrations were all custom scripts. Fucking nightly builds were a thing because we didn't kick off builds on commit.

Unit tests weren't even common practice back then. Yeah, most places had tests but there was no common language to describe them.

And as much as git can be a big complex pain... merging was a BIG thing back then too. I now seldom deal with long-lived branches and the nightmarish merges they often needed.

Also, to all the young folks who "want to work with physical servers" again. Have fun with the next set of device driver patches you need to roll out.

I heart my containerized IaC buzzword laden cloud existence. #FuckThatNoiseOps

onion2k · 3 years ago
2005 Your app takes some user data, stores it in a database, and presents a filtered view of it to the user on request.

2022 Your app takes some user data, stores it in a database, and presents a filtered view of it to the user on request.

rco8786 · 3 years ago
The fun part is that the 2005 architecture is still plenty sufficient for 99% of deployments but everybody architects for 100x the scale they actually need.
littlestymaar · 3 years ago
> you handle payments,

Yeah, because obviously nobody handled any payment in 2005 … The same can be said for everything in your list.

And more importantly, it's unlikely that your business is more complex than it was in 2005. And if you're spending more resources to deliver the same business value, you're wasting them. (That's why PHP is still ubiquitous, btw: it sucks by most standards, but it's good enough to do business, and that's what matters.)

HelloNurse · 3 years ago
Extremely complicated architectures are a liability, not a feature to be proud of; increasing complication (i.e. costs) doesn't mean increasing functionality (i.e. value).

For example, why do you "queue tasks for later"? Do you have exceptionally punishing latency and throughput requirements that can only be met at a reasonable cost by not answering synchronously, or is it because your database doesn't do transactions?

Similarly, what do you do with "queues, caches, and so on"? Meet further extreme performance and availability requirements, or attempt to mask with additional complications the poor performance of inefficient complicated components?

In 2005, but also in 2000, web applications had already progressed past "just serving a web page", mostly without complicated architectures and therefore without the accompanying tools.

I think tool improvements made cargo-culting genuinely advanced, state-of-the-art software architectures and processes easy and affordable, creating a long-term feedback loop between unnecessary demand for complexity (amateurs dreaming of "scaling up") and unnecessary development and marketing of advanced (but not necessarily good) complexity-management tools, often by the same amateurs.

terpans · 3 years ago
I was working on large scale deployments with multiple datacenters, multiple databases, message passing networks and running multiple customer-facing products.

Load balancers and HA setups were already in use. LVS existed. VMs were popular. "CI/CD" existed, without that name.

> 2005 your average box ...

Speak for yourself.

hsn915 · 3 years ago
All you did was describe the increase in complexity of the infrastructure, which does not rebut the point, because that is the point. The infrastructure people are using now is much more complicated than what was common 15 years ago.

If you want to rebut this you need to demonstrate the kind of capabilities we have now thanks to this complexity that we could not have before.

> you handle payments

No you don't. Most people handle payments by integrating with a 3rd party service, most likely stripe or paypal.

bayindirh · 3 years ago
I find this take apologetic. "But we're running on the cloud, we need this complexity" is a wrong take. Yes, the underlying services may need this, but the infrastructure managing these services doesn't need to be this complex.

Every complex tool can be configured to work in a simpler manner, and all the configuration repositories directing these tools can be organized much better. It's akin to a codebase, after all. Many code-quality processes and best practices apply to these, but sysadmins don't like to think like developers, and things get hairy (I'm a sysadmin who does development; I see both sides of the fence).

The sad part is, these allegedly more robust platforms provide transparency neither to developers, nor to sysadmins, nor to users. In the name of faster development, logging/debugging goes awry, management becomes more complex, and users can't find the features they used to have.

Why? We decoupled systems and decided to add this most-used feature at a later date, because it's not something making money. Now my account page doesn't even show my e-mail address, and I can't change my subscription renewal date or see my past purchases. Why? They're in distant tables, or these features are managed by other picoservices which are not worth extending or adding more communication channels to.

Result? Allegedly mature web applications with weird latencies and awkwardly barren pages, with no user preferences or details.

Adding layers and layers of complexity to patch/hide shortcomings of your architecture doesn't warrant a more complex toolset to manage it.

thezilch · 3 years ago
Classic over-engineered DevOps, adding complexity for complexity's sake. When you have a hammer...

Everything you described was 2005.

arinlen · 3 years ago
Also not pictured:

* 2005: ship it if it builds.

* 2022: run unit testing, deployment to beta, run integration testing, run UI testing, deployment to preprod, run internationalization testing, run regional/marketplace-specific testing, run consistency checks, run performance tests, deployment to prod.
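That 2022 list reads like the stage list of a modern CI config. As a hedged sketch, here it is in a generic GitLab-style YAML; the stage names are mine, mirroring the list above, not taken from any real pipeline:

```yaml
# Hypothetical CI/CD pipeline sketch; stage names and order mirror the 2022 list.
stages:
  - unit-tests
  - deploy-beta
  - integration-tests
  - ui-tests
  - deploy-preprod
  - i18n-tests
  - marketplace-tests
  - consistency-checks
  - performance-tests
  - deploy-prod
```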

democracy · 3 years ago
2005 was the full-on enterprise WebSphere/WebLogic era, so no, if anything it was much more complex (from the architectural side) than today's Python/Node.js or even Spring Boot solutions. Automation (bash/ant) plus CI like CruiseControl was already there.
phendrenad2 · 3 years ago
> you handle payments, integrate with other services, queue tasks for later

I distinctly remember buying things on Amazon in 2005. Actually come to think of it, I did everything you list in 2005, at multiple companies.

wruza · 3 years ago
you handle payments, integrate with other services, queue tasks for later, and so on

This just means “use APIs”, adds {base_url, secret_key} pairs to a config file. Where are orders of magnitude devops-wise?
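As a rough illustration of that claim, a Stripe-style charge call really is mostly config plus one HTTP request. This sketch is hypothetical (the provider URL, key format and fields are invented), and it only builds the request rather than sending it:

```python
# Sketch of "integrating with a payment API is mostly config"; the provider
# URL, key format and payload fields below are invented for illustration.
import base64

CONFIG = {
    "payments": {
        "base_url": "https://api.example-payments.com/v1",  # hypothetical
        "secret_key": "sk_test_123",                        # hypothetical
    },
}

def build_charge_request(cfg, amount_cents, currency="usd"):
    """Return (url, headers, payload) for a charge; sending it is one HTTP POST."""
    # Many payment APIs use HTTP Basic auth with the secret key as username.
    auth = base64.b64encode(f"{cfg['secret_key']}:".encode()).decode()
    return (
        f"{cfg['base_url']}/charges",
        {"Authorization": f"Basic {auth}"},
        {"amount": amount_cents, "currency": currency},
    )

url, headers, payload = build_charge_request(CONFIG["payments"], 1999)
```

The devops surface area here really is just the {base_url, secret_key} pair the comment describes.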

tsarchitect · 3 years ago
Also not pictured--

2005: two system administrators wrote all automation using a handful of Bash, Perl and Python scripts AND LEFT THE COMPANY a couple of years later

20xx: newly hired system administrators continue to rewrite the scripts. No shared knowledge, because a scorched-earth policy is in effect

2022: HN: DevOps is a failure -- you just need two system administrators...

xkbarkar · 3 years ago
Wanted to add: today's pipelines include a whole new level of security that was pretty much ignored in 2005.

Security adds complexity

GrumpyNl · 3 years ago
The question is: do you need all that, or was/is PHP and MySQL enough for the job?
mbesto · 3 years ago
> Your automation may be an order of magnitude more complex than 2005, but it enables two orders of magnitude more functionality.

And can create more than 2 orders of magnitude in terms of economic value (i.e. profit) as a result.


Jistern · 3 years ago
>> Your automation may be an order of magnitude more complex than 2005, but it enables two orders of magnitude more functionality.

This!

The primary problems with DevOps are...

1. It is still in its infancy therefore it's changing quickly.

2. Bad (or no) documentation is much more painful than before. A single family house without blueprints can, usually, be adequately serviced; whereas, a 27 story office building cannot.

dhzhzjsbevs · 3 years ago
Missing the point:

That code in 2005 was better in every way.

Context: just learning loopback API framework. What a steaming pile of overengineered garbage.

EdwardDiego · 3 years ago
So someone overengineered something in 2022, and therefore, nothing's better?

How about my anecdata:

2010: your infrastructure is automated using a handful of Bash, Perl and Python scripts written by two system administrators. They are custom, brittle in the face of scaling needs, and get rewritten continuously as your market share and resulting traffic grow. Outages happen far too often, you think; but you would, because you're someone who gets paged for this shit, because you know a core system well... ...that got broken by a Perl script you didn't write.

2019: your infrastructure runs on EKS, applications are continuously deployed as soon as they're ready using Jenkins and Flux. You wrote some YAML, but it's far better than that Perl stuff you used to have to do. The IDE support is like night vs day. You have two sysops, or devops, or whatever, who watch over the infra. You've had to attend to a system outage once in the past two years, because an AWS datacentre overheated, and the sysops just wanted to be sure.

You write some YAML, the sysops write some CDK. Your system is far more dynamically scalable, auditable, and reliable.
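The "some YAML" in a setup like this is typically a Kubernetes Deployment manifest. A minimal sketch, assuming placeholder names and an image tag that Flux would be watching (none of these identifiers are from the comment):

```yaml
# Minimal Deployment sketch; app name, registry and tag are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.2.3  # Flux updates this tag
          ports:
            - containerPort: 8080
```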

My anecdote can totally beat up your anecdote. (In other words, this is a silly line of argument.)

icod1 · 3 years ago
My 2020 looks like this:

no docker no k8s

1 server

  git repo
  /var/www/domain_name
  git clone git_url /var/www/domain_name/backend/
  cd /var/www/domain_name/backend/
  go build
  
Updates

  git pull
  go build
  systemctl restart domain_name.backend.service
I pay 46€/month and I'm looking forward to halving those costs. Server load is mostly <0.5. I call this the incubation server. If a project takes off, I rent a more expensive, but dedicated, server. It's very unlikely that I'll ever need more than 1 single server per project.
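The `domain_name.backend.service` unit restarted above could be as small as this sketch; the binary name, paths and user here are assumptions, not from the comment:

```ini
# /etc/systemd/system/domain_name.backend.service -- minimal sketch;
# binary name (what `go build` produced), paths and user are assumed.
[Unit]
Description=domain_name backend
After=network.target

[Service]
WorkingDirectory=/var/www/domain_name/backend
ExecStart=/var/www/domain_name/backend/backend
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target
```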

I will never write microservices; I can scale fine with a monolith. Lately I've even moved away from JS frontends to rendering everything with Go on the server. Yeah, it requires more resources, but I'll gladly offer those resources for lower response times and a consistent experience.

Sadly companies that are hiring don't see it that way. That's ok. I'll just stay unemployed and try building my own stuff until something succeeds again.

I had a 7-year-long project that brought in 5-7k€/m. The server cost 60€/m. I can do that again. I know it's not your kind of scale or income level, but it allowed me to have a good life living it my way.

barking_biscuit · 3 years ago
I think it's somewhat disingenuous to compare DevOps requirements of 5-7k/m projects with systems run and operated by companies in the mid market.

That said, something I often wonder about is if you could minus out 100% of the cruft systems run by realistic sized companies, exactly how cheaply could you run them and with what DX? Half of the problem is things built by 100 people with competing and shifting priorities will never result in a clean, tidy, sensible system and it's mighty difficult to minus out the effects that the organization scale has on the end result.

I'm currently building a hobby project that, as far as I know, will only ever have one user, but I'm enjoying the total freedom to take my sweet time building it exactly as nice as I wish the systems I wrangle in my day job were. And I'm 100% looking to run it for free, or as close to free as possible, but with as much performance as I can get, because why the hell not? It's a totally different ballgame.

mmcnl · 3 years ago
No one ever said a VPS with a shell script is terrible. You think of scale the wrong way. Scaling is not only about increasing from 10 requests / second to 1000 requests / second. Scaling is about organizational scale too, i.e. how do you ensure going from 2 to 20 developers increases productivity by at least 20x and not 1.5x?

Tools like Docker, Kubernetes and whatever absolutely help in that regard.

paddlepop · 3 years ago
This is a build deployment perspective.

I for one do not miss hosts never being patched because of all those slight modifications to system files that were tweaked several builds ago and that now everyone is too scared to touch.

I won't miss the 12 month projects to upgrade some dated software to a slightly less dated version of that same software.

From my perspective in Security, DevOps has made life much better.

fragmede · 3 years ago
The ability to spin up a box, have it run insecure code, and then spin it down; and the ability to do that all day long, is worth it for the security benefits that all this complexity entails.
tech_tuna · 3 years ago
At my first company, our builds happened whenever the release engineer (he was friends with the milkman and the chimney sweep) felt like "doing a build".

As another example, CI/CD adds a lot more work and maintenance but it results in better overall hygiene.

mediascreen · 3 years ago
I run 50+ smallish applications on AWS using Bitbucket Pipelines, Fargate, Aurora MySQL, S3, Cloudfront and a few other services. Most of the setup is scripted using very simple Cloudformation scripts. I estimate that I spend maybe 10% of my time on this and the rest of my time on ordinary dev/architecture tasks.
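For a sense of scale, a "very simple CloudFormation script" in this vein can be little more than the following sketch (the logical and bucket names are placeholders, not from the comment):

```yaml
# Minimal CloudFormation sketch; logical name and bucket name are placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-app-assets
```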

Before Docker and AWS this would have taken me so much more time.

The only drawback is that we have a hard time finding other developers in the company that want and have the time to learn the setup. It's not very complicated, but require some familiarity with the AWS ecosystem. It can seem daunting to someone who has to learn it from scratch.

russellendicott · 3 years ago
> The only drawback is that we have a hard time finding other developers in the company that want and have the time to learn the setup

This is my experience as well in enterprise cloud. I don't get it. Have these people seen what cloud jobs pay for less work?

shrubble · 3 years ago
1997: a team of 5 sys admins, 3 of them functional alcoholics, writing only in Bash scripts, manage over 2000 internet facing SPARC machines.

A lot of DevOps is actually CVOps - stuff that people get familiar with so they can put it in their resume.

iso1631 · 3 years ago
Those bash scripts are probably still in place and working, while the modern ways of managing the servers have been recreated a dozen times over the last 25 years, as every time a new person comes in there's always a better way of doing things.

The difference is now you're a failure if you stay at the same job for more than 2 or 3 years.

barking_biscuit · 3 years ago
CVOps - hahahaha. I've always heard and used the term "Resume Driven Development" but I guess CVOps is a nice term too.
iso1631 · 3 years ago
What was wrong with the other 2?
winternett · 3 years ago
It's almost always a solution with an overly complex chain of tools that each only do a small part of the deployment/security tasks, because it's a food chain, where each vendor can eat a part of the company's budget consistently, based on a problem that never really gets solved...

Apps are still no more secure, because there are several points where they can be compromised, rather than just a few involved in a less automated, but more easily replicable, process. Also, I don't need "flavor of the month" skills to get things done. There is always a revolving door of fly-by-night hype tools and brands that regularly rise and fall in the IT world... I avoid them (newly hyped products) like the plague. I'm fine with being the stubborn middle-aged IT guy now. :P

It's all a food chain based on making money. What matters to me most is whether money is being made from the product that is deployed, and if it's simple, reliable, and secure enough to be worth development. I don't do my job to make a bunch of companies money by using their DevOps tools.

Screw impressing other engineers with solution complexity every time. Functional reliability always wins at the end of the day. Leveraging a massive list of Ops tools only creates a huge backlog of update work, designing efficiency and simplicity in most of my solutions is what ultimately pleases most of my clients.

Traubenfuchs · 3 years ago
I think you mean:

> your infrastructure is automated using 10 extremely complex devops tools

... held together by ...

> a handful of Bash, Perl and Python scripts written by two system administrators. They are custom, sometimes brittle and get rewritten every 5 years.

We learned nothing in the last decades. If at all, complexity for the same things multiplied.

lox · 3 years ago
In 2005 your infrastructure provisioning wasn’t automated. The complexity has increased, but so has what we get. Being able to provision new hardware stacks like software is amazing, in 2005 I had to get quotes from hosting providers.
iso1631 · 3 years ago
Whereas now nobody cares about the cost because it's all hidden away in a massive bill at the end of the month?
mountainriver · 3 years ago
True but to be fair modern infra does a hell of a lot more
lazyant · 3 years ago
2005: no cloud; you had to order 1Us, wait, rack them up. You needed a DBA for the database and a network sysadmin for the networking, all to serve a simple website without the same level of HA. We are doing way more now, which needs some more complexity, and yes, in many cases we are overengineering it.
iso1631 · 3 years ago
2005 Linode sold me VMs, no need for a DBA or network admin for a simple main/reserve website

17 years later and getting more than 364 days uptime out of AWS is apparently "not worth the cost"

bluedino · 3 years ago
2005: you had 4 people that understand what everything did

2022: you have a team of monkeys clicking buttons

Joking aside, it seems like developers these days don't have the understanding they did a while back. Not being involved with the nitty-gritty causes them to just write code willy-nilly.

rndmind · 3 years ago
Today's shit has 10x or 100x more throughput; it makes sense that upgrading data response and availability requires more people.

But, todays devops has become a proprietary mix of aws protocols, constantly changing standards and languages.

I still use bash scripting wherever I can; it is much simpler and has been essentially unchanged for decades, which is nice for compatibility.

kmitz · 3 years ago
The author's claim is more about how to make people work together on deployment than a rant about devops tools.

You can do simple things with modern devops tools. You can go off the rails with simple scripts. It's not the tooling; it's about engineering maturity and the requirements of what you're building.

CommanderData · 3 years ago
IaC wasn't even prevalent, or a production-ready thing, back in 2005. I'm unsure what magical bash scripting would do any of that; maybe it built the data centres too!
aghahsdsdfh · 3 years ago
IaC wasn't a thing because you honestly didn't need code to solve the vast majority of deployment problems. It was a configuration issue.

Not 2005, but a year later in 2006 I was using cfengine to deploy code and configuration to servers from an svn repository. The same svn repository had dhcpd configs that described pretty much every device on the network. The dhcp configs also pointed to a tftp service from which new nodes pxe booted to an installer which pulled down the node specific kickstart, and provisioned the machine.

We didn't call it infrastructure as code, but it sure fucking smells the same.
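The dhcpd-to-PXE chain described above can be sketched as a single dhcpd.conf host entry; the MAC address, IPs and boot filename here are invented for illustration:

```conf
# Sketch of a 2006-era dhcpd.conf host entry; MAC, IPs and filename are invented.
host web01 {
  hardware ethernet 00:16:3e:aa:bb:cc;
  fixed-address 10.0.0.21;
  next-server 10.0.0.5;     # TFTP server holding the PXE boot images
  filename "pxelinux.0";    # boots an installer that pulls the node's kickstart
}
```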

phendrenad2 · 3 years ago
Perl/Bash/Python? Maybe for non-mission-critical things. By 2005, everyone who needed stability/scalability was using J2EE.
mmcnl · 3 years ago
2005: your infrastructure powers a webshop with $1m annual revenue

2022: your infrastructure powers a $5b startup

inopinatus · 3 years ago
All you’ve demonstrated is that some folks are bad at SRE.
bryanrasmussen · 3 years ago
>to the people claiming that today's infra does more things... No, I'm comparing stuff with the same levels of availability, same deployment times, same security updates.

ok but in my experience it seems like more things are being done in places I see with devops nowadays versus back then. I mean, I know you say that it's the same, but it's hard to believe your statement in a comment versus my lying eyes. It seems more likely to me that your two examples are both actually fictitious, and thus it is easy for you to say they are exactly the same in what gets output. Or have you been at the same place for 17 years, seen the changes, yet had no input to stop the madness? Because if the latter, that would also seem... weird.

thr0wawayf00 · 3 years ago
Were microservices a thing back in 2005? Honest question, I always assumed that SOA was more of a newer philosophy in web software. The scale of what we build has changed a lot over the years, as well as the need to handle the variance of scale through techniques like auto-scaling. All of that adds an incredible amount of complexity in systems that surely didn't exist 18 years ago.
saurik · 3 years ago
> If you look at a DevOps engineer job description, it looks remarkably similar to a System Administrator role from 2013, but...

> If DevOps was supposed to be about changing the overall culture, it can’t be seen as a successful movement. People on the operations side of the fence will...

As someone who was keenly watching this stuff back 15 years ago, parts of this article connect with my understanding, but the core problem I have is that this article itself is somehow bought into the mistake that led to the failure and so almost can't see the failure for what it is: the entire point of DevOps was that "operations" isn't a job or role anymore and has instead become a task that should be done by the developers.

Ergo, if you even still have operations people to comment on it--or certainly if you are somehow hiring dedicated "DevOps" people--you aren't doing DevOps and have already failed. The way to do DevOps is to fire all of the Ops and then tell all of the Devs that they are now doing DevOps; you simply can't have it both ways, as that's just renaming the same two camps instead of merging them into a single unified group.

uberduper · 3 years ago
I've worked in ops roles since about 2000 after a few years in backend corp IT stuff.

I agree that what you've described was the original intent and goal of "devops", but in light of that failure the "cross-functional team" definition took over, and then in light of that failure the SRE was born, and we're basically back where we started, but now the ops people use git instead of rcs.

In my experience and opinion, developers are really bad at ops and sysadmins/ops are really bad at development. Anyone who is truly good at both is a unicorn who is probably carrying their team.

jve · 3 years ago
Hey, this comment inspired me to create a poll. I'd like to know the distribution of unicorns among the HN crowd :)

Poll: https://news.ycombinator.com/item?id=31891675

di4na · 3 years ago
Why not sit down and figure out how we can train more unicorns instead?

And maybe make the tooling help with this.

nixlim · 3 years ago
> the entire point of DevOps was that "operations" shouldn't exist, and that operations is a task that should be done by the developers. Ergo, if you even have operations people to comment on it, or if you are hiring dedicated "DevOps" people, you aren't doing DevOps and have already failed.

This. My first thought when I was reading the article. Spot on

citrin_ru · 3 years ago
> the entire point of DevOps was that "operations" isn't a job or role anymore and has instead become a task that should be done by the developers.

This is akin to saying "frontend developer isn't a role anymore; both frontend and backend should be handled by a full-stack developer". This works for small companies/projects, but bigger ones can benefit from specialization and division of labor. The body of knowledge required to be a decent software developer and a decent ops engineer is too big to fit into one head. I've seen ops work being done by developers without ops experience, and more often than not it was ugly: they didn't have enough experience/knowledge (or the time/incentives to gain them) to do the ops work well.

To me the best part of DevOps isn't about roles but about team structure: splitting all Dev into one department and all Ops into another is usually a bad idea, and the failure of this split was a motivation to start the DevOps movement. Having Ops embedded into Dev teams in my experience works much better.

dragonwriter · 3 years ago
> but bigger ones can benefit from specialization and division of labor

I think it is central to the DevOps concept that dev vs. ops segregation (at least at the small-team level, perhaps not at the individual level) is a counterproductive division of labor that inherently fosters micro-optimizations on both sides of the divide, to the detriment of effective value delivery. On a continuously available software service, the lowest-level product team should own its components soup to nuts rather than having a dev team throw hopefully-deployable code over the wall to an ops team.

pas · 3 years ago
> Body of knowledge required to be a decent software developer and a decent ops engineer is too big to fit into one head.

the usual answer is that in a good agile team everyone should be a little bit T-shaped

https://www.cybermedian.com/scrum-team-i-shaped-vs-t-shaped-...

of course domain experts are real (or what's the term nowadays?), so specialization makes sense; comparative advantage and all. But the idea is to lower the overhead (coordination, communication, conflict due to inevitable misalignment between separate teams) by "onshoring" the basics (e.g. writing tests, basic CI stuff, deploying).

the devops manifesto (which allegedly does not exist, but you get the point) basically calls for giving people the tools, permissions and authority to do these basic things, giving teams ownership of their stuff. And of course this doesn't mean firing every sysadmin on sight :D (even if that would definitely help with the process of re-owning some ops tasks to dev people)

Sebb767 · 3 years ago
> The way to do DevOps is to fire all of the Ops and then tell all of the Devs that they are now doing DevOps

That's like saying "agile is firing your scrum masters and tell your developers you're agile now".

The idea behind DevOps is that applications are provisioned by the people who know the application best: the developers. If everything works out as it should, this adds some load in creating the deployments, but also removes the operations overhead when deploying, updating and debugging; a net zero in workload, but a gain in that the application is hosted better and fixing bugs is easier. You still need operations, both for providing the underlying platform (getting a server ready is not a developer's core business, and it shouldn't be) and for guiding the developers. It should be leaner, but you still need it.

Of course, you can also fire all of infra and tell the developers "that's your job now", but that's like calling biweekly deadlines scrum (and leads to equally bad outcomes).

civilized · 3 years ago
> That's like saying "agile is firing your scrum masters and tell your developers you're agile now".

This would be an excellent idea in many teams.

ebbp · 3 years ago
This can be true, but I would argue not always. Some DevOps teams work in the old mode of “throwing code over to Ops to run” - this isn’t what DevOps intended, but happens.

When they work well, they’re doing things like authoring reusable (by product eng. teams) infrastructure modules, or helping to build “you build it, you run it” tooling like monitoring stacks etc. They’re also helpfully/hopefully subject matter experts on CI/CD, your cloud/hosting of choice, security stuff - things that general developers have mixed levels of interest or competence in.

oweiler · 3 years ago
That is utter BS. DevOps means Ops and Devs working hand in hand in a crossfunctional team. Nothing more, nothing less. The main idea was to tear down silos.
saurik · 3 years ago
Well, you can't simultaneously tear down silos and continue to have two silos... I would hope that would be obvious? The new cross-functional team is made up of the Dev people who are now doing Ops and the Ops people who are now doing Dev, with everyone else no longer fitting into the new DevOps reality. That reality was itself born from the premise that cloud computing was simply obsoleting the floor of dedicated systems administrators you previously had building machines and coordinating workloads, replacing it with a new deployment paradigm where a developer could develop their operations as easily as they can develop anywhere else in the product's stack. If you have a special DevOps team you hire people into, that is simply a renaming of the people you previously had doing Ops; you either haven't internalized this future or are actively rejecting it (which, I will emphatically state, is a perfectly fair position to maintain), and of course you are going to "fail" at DevOps.
Viliam1234 · 3 years ago
> DevOps means Ops and Devs working hand in hand in a crossfunctional team.

Not sure how typical is my experience, but in my experience it always meant "ops is yet another task for the Devs".

Seems like this is a good interview question: "What does 'DevOps' specifically mean in your company?"

dijit · 3 years ago
> That is utter BS.

This is not conducive to the desired environment.

On a more even note, I would prefer you spend some time looking at the origins of devops and what it means; it's a contentious term because it means different things to different people.

The original meaning, from Patrick Debois (who coined the term), was Systems Administration done in an agile fashion.

I suspect that you’re repeating what someone else told you and you’ve just adopted their definition, which is fine, but part of the issue I have with the term myself is that everyone has another meaning than everyone else.

haspok · 3 years ago
At all the places I worked previously in the last ~20 years there was always a sharp separation of development/testing and production environments. I as a developer never had access to any system in production, apart from one place which had a very sophisticated security system in place which could grant you temporary access during deployments. Just think about customer data, and you'll understand why.

So when I hear that someone thinks devops is developers running their own systems in production I always wonder where this is actually possible, let alone whether it is a good idea at all.

Aeolun · 3 years ago
I have the opposite experience.

Given that I've been perfectly capable of doing ops work at the previous 5 companies I've worked at, suddenly being unable to do so at my current company because I'm classified as a 'dev' is supremely frustrating.

Especially when you have more experience by yourself than the entire ops team combined.

VectorLock · 3 years ago
You're just replacing one set of people with access to customer data with another.

In either case you should be implementing least privilege: only the access to data that a person needs to get their job done. More often, that's developers rather than operations people.

cratermoon · 3 years ago
> The way to do DevOps is to fire all of the Ops and then tell all of the Devs that they are now doing DevOps

That's a way to say you're doing DevOps, but it's not going to work very well.

> merging them into a single unified group

That's the right way to describe it. Just like the old "programmers build it to spec, then throw it over the wall to QA" is out of style, and good teams now have testing specialists in the same room (conceptually) as developers, the goal with DevOps was to stop doing the old "here's a build, deploy it" that caused so much wailing and gnashing of teeth, and instead bring ops skills into the team as a first-class expertise. The CI/CD pipeline can mean that development, testing, and release are all together, rapidly iterating and responding to change.

By the way, pretty soon the AppSec/CyberSec people will be folded in, too: instead of the old "it's done/deployed, run your pentests/analysis tools", secure by design will require those skills to be integrated as well.

Little by little, chipping away at the waterfall

mr_toad · 3 years ago
> the goal with DevOps was to stop doing the old "here's a build, deploy it" that caused so much wailing and gnashing of teeth

From an ops point of view that might be the main selling point. From a dev point of view a key selling point was not having to wait a month for every minor configuration change to go though change management processes.

harryf · 3 years ago
I think you’re onto something. From the article…

> our role is to enable and facilitate developers in getting features into the hands of customers

The problem here is that this creates the wrong kind of incentives for developers... somehow elevating them to a level where they don't have to care about how their code works in production.

As someone who remembers being a developer back in the days of sysadmins, we were AFRAID of upsetting the operations people. If your code brought a server down, you were at least going to face some very awkward conversations. The "Bastard Operator from Hell" stories immortalized that era.

Meanwhile at one company I worked at years ago - an airline - the development team was responsible for keeping the system running 24/7. Nothing makes you think more carefully about your code in production than meeting a colleague on Monday morning who got woken up at 2am by your code failing.

While I'm not arguing for hostility in the workplace, the incentive for developers to care about their code in production seems to me to be one of the things devops got wrong.

dasil003 · 3 years ago
Why is getting yelled at the next day more of an incentive than actually getting paged at 2am?
littlestymaar · 3 years ago
There's a reason people end up doing plain ops and calling it devops: it's often too costly to handle this as “a task that should be done by the developers”.

Specialization makes people much more productive, because they face the same kind of issues over and over and know how to fix them quickly. When you distribute the load in your organization, everybody is going to face problems, struggle, learn and never reuse that knowledge again.

drpyser22 · 3 years ago
Developers should have some grasp of ops work, be able to deal with some ops-related issues, and take part in the design of the ops side of software delivery, so as to make sure the infra and deployment workflow they deal with works for them.

But yes it makes sense to me to still have people specialized in ops and infra in teams, collaborating with developers.

Basically, instead of having developers doing everything, or just developing and throwing code out to an ops team, we should have developers educated in operations, working in teams with at least one operations specialist (or "DevOps engineer"). That way, you end up with an infra and deployment workflow that really works for the team and is optimized for its needs.

xtracto · 3 years ago
I was just reading the "Building Scalable Websites" book [1] released in 2006. At that time, "DevOps" was called SysAdmins. And there were also DBAs, Network engineers, among others.

> The way to do DevOps is to fire all of the Ops and then tell all of the Devs that they are now doing DevOps; you simply can't have it both ways,

I think this points at what happened: startup scrappy culture started permeating new technology companies, which meant no budget for DBAs, QAs, sysadmins and other similar roles. So decision-makers cut all those roles and asked programmers to fill the voids. At the same time, "cloud computing" started to mature, so there was a shift from hardware/operating-system tinkering to software-related tinkering.

One just has to see the decline of "SlashDot" which was a very SysAdmin/Operating-System focused website, in favor of news.ycombinator and similar more software-oriented forums.

[1] https://www.oreilly.com/library/view/building-scalable-web/0...

mmcnl · 3 years ago
You're right, but in reality DevOps teams in 2022 are managing Kubernetes clusters and acting as gatekeepers to all kinds of cloud services to facilitate development.
kmitz · 3 years ago
Yes. I was confused reading this article because the author seems to miss an important point. Devops culture is about "you build it, you run it", not having a dedicated devops team that tries to make developers do things. I read a book lately on that topic, Team Topologies; it explains this concept pretty well.
lamontcg · 3 years ago
> The way to do DevOps is to fire all of the Ops and then tell all of the Devs that they are now doing DevOps

That's slightly hyperbolic, and I'd also argue that there's a fundamental error there, since you throw away all your platform operations engineering. Now you have dozens of operations engineers working in separate silos with no coordination.

What you really need to do is give all your developer teams pagers and point all their monitoring alerts at those pagers. Whether they set up an on-call rotation of the developers for their own software, or panic and hire a small team of operations engineers and hand them the pager, doesn't really matter. Then you have the problem of coordinating platform ops engineering with the ops team members in the software teams. Whoever handles ops for a software team acts as a kind of PM interfacing with the centralized platform engineering role, which is responsible for coordinating across development teams to make things look consistent across the enterprise.

The problem with the bad old ways was that software teams would write code and then toss it over the wall for operations to run, and all the shitty disk-full pages and whatever other crashing-software badness fell onto operations. Dev teams would each individually choose to ship software that was shitty to run, and all the monitoring alerts would fall into a silo under a completely separate VP, where the dev teams weren't responsible for their ops metrics and would choose to work on features to deliver for their management. Give the dev teams pagers and make them accurately feel the pain of running their software, and then they can make choices about how much they want to abuse their own embedded operations people, whom they have to chat with face to face every morning at standup. If you then fire centralized ops, though, you wind up with dozens of different operational fiefdoms inside one company, with everyone repeating the same mistakes and nobody doing the exact same thing anywhere.
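A toy sketch of that alert routing: every name below is invented, and a real setup would live in a paging tool's config rather than application code, but it shows the ownership idea (page the owning team first, with the platform team only as a backstop):

```python
# Hypothetical alert routing: the disk-full page for a service lands on the
# developers who own it, not on a separate ops org. All names are invented.

SERVICE_OWNERS = {
    "payments-api": "team-payments",
    "search": "team-search",
}

ONCALL_ROTATIONS = {
    "team-payments": ["alice", "bob"],
    "team-search": ["carol"],
}

def who_gets_paged(service: str, week: int) -> str:
    """Resolve an alert on `service` to a person on the owning team's rotation."""
    team = SERVICE_OWNERS.get(service, "platform-ops")       # unowned -> platform backstop
    rotation = ONCALL_ROTATIONS.get(team, ["platform-oncall"])
    return rotation[week % len(rotation)]                    # simple weekly rotation

print(who_gets_paged("payments-api", week=0))   # alice
print(who_gets_paged("payments-api", week=1))   # bob
print(who_gets_paged("orphaned-cron", week=0))  # platform-oncall
```

The interesting part is the ownership mapping, not the scheduling: alerts only fall back to the central platform rotation when nothing in the service catalog claims the service.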

I learned that back in 2006 before "DevOps" was a "word" and I don't know what you call it or if that is DevOps.

And it seems like Kubernetes is trying to be Conway's law applied to that: you have DevOps embedded in dev teams shipping containers, which run on Kube clusters that provide compute as a service to the enterprise (often another company entirely), and the platform ops teams maintain those clusters. Except now you're entirely missing the communication I outlined, which needed to happen between the platform ops and the embedded ops in every dev team as a virtual operations team. SREs will claim they don't need that any more and that old-school SA operations is a dinosaur, except that every now and then I see some SRE begging to know how to ship tcpdump to one of their containers to do some debugging, and I know that they're dirty little fucking liars...

at_a_remove · 3 years ago
It was not hyperbolic, that was what happened to my team. We had two system administrators, one who specialized in Unix and one in Windows. I was the lone programmer, having escaped a previous sysadmin position. I can do it, I don't like it, but I can do it.

Then one day I was told that our two sysadmins had been traded off and we were getting two new programmers, but they wouldn't be programmers and neither would I, we were going to be doing DevOps. I had just escaped that!

wizofaus · 3 years ago
I just quit a job partly because we lost our key DevOps guy and no serious effort was made to replace them. As a result I ended up wasting huge amounts of my time dealing with operations-level stuff that made it impossible to focus on the key parts of my role (feature development etc.). I subsequently turned down a job offer from elsewhere after they explained their policy was not to have dedicated DevOps resources for their SaaS platform (devs themselves being responsible for all deployment and system maintenance), and would do so again. Good DevOps people are worth their weight in gold, and in many verticals (e.g. those involving payments) it's virtually mandated that there is a separation of responsibilities between those writing the code and those responsible for delivering the product to customers. I can't see the need for dedicated DevOps resources going away any time soon.
nrmitchi · 3 years ago
> I can't see the need for dedicated DevOps resources going away any time soon.

But is what you're describing just... "ops" without any of the "dev"? I'm not saying there is no need for dedicated infrastructure and operations teams at a certain size (and in some industries), but that's not an excuse for devs to feel like they can chuck a new feature over the wall and say "Well, I've done my job. Please run it and make sure it doesn't break".

wizofaus · 3 years ago
No silly, that's what the QA team are for! Anyway, our DevOps guy spent probably most of his time "developing" - just not application-level features.
tamrix · 3 years ago
Agreed. It also goes back to the sysadmin problem that devops addressed: the dev team owns their code in production.

I've seen success with a dedicated devops member on a team, but having a dedicated devops team just introduces delays and latency when fixing pipelines or releasing - the very same problem we had with sysadmins.

judge2020 · 3 years ago
At a certain point, hiring dedicated Devops frees up x number of developers to continue to develop features depending on the amount of time each developer is spending on performing those devops duties. It’s just another area management can split up job roles to capture more value and allow deeper specialization among professionals.
conradfr · 3 years ago
I thought the "dev" part was because the "ops" were using code to build the infrastructure.

In my experience that leads to meh code if the "devops" comes from a former sysadmin, or meh infrastructure if s/he comes from the dev side.

baal80spam · 3 years ago
I keep hearing that "we are all devops" from my PM. The real kicker is that there is a dedicated DevOps team in my organization, they "just have too much to do already".
dilyevsky · 3 years ago
“Dedicated devops” is not devops, never has been. Devops is a culture where, super simplified, you run what you built. “Dedicated devops” is just ops
wizofaus · 3 years ago
I get some people use the term that way, but in our case, our guy really did spend ~50-60% of his time developing IaC and other deployment scripts/tools (in various languages), and the rest handling the operational side of things. He occasionally touched the application code as needed (renaming configuration variables etc.), but he didn't do feature-level development, nor was he interested in doing so.
convolvatron · 3 years ago
'devops' means I'm going to hire you ostensibly to develop software, but in reality thats just going to be your '10%' time if there aren't any operational fires burning too brightly.
russellendicott · 3 years ago
I think the issue is that nobody wants to force more responsibilities and complexity on developers, so they hire DevOps people. Then the DevOps systems are so complex that it takes an ultra-high-quality engineer to run them.

The whole seed of the DevOps movement was that developers needed to do more or the company would fail. Over time, management lost their conviction when developers pushed back; not wanting to risk losing devs, they "outsourced" the devops skills.

CommanderData · 3 years ago
Not hiring a dedicated DevOps resource and making your developers do it. Guess what? You've just made your developers do operations.

The work doesn't go away just because you've shifted it. I've seen those places too and worked in some; let's just say the developers aren't very productive.

pas · 3 years ago
If the dedicated guy helps devs write deployment scripts and monitoring scripts, set up backup-restore-verify cycles, etc., then it's devops. If the devs proclaim that the devops guy should do it all, then it's just the old siloed workflow again.

note that the old flow was not a total shitshow with absolute zero productivity ... it worked for quite a while in many places, but it was bad enough in enough places that a whole "movement" grew out of the recommended solution. it's about keeping the communication/coordination/responsibility-tennis overhead down. sometimes that's best done by saying you deploy what you wrote in any way you see fit, but here's the SLA, and so on. sometimes it makes sense to create infrastructure teams and let dev teams use internal tools to deploy; sometimes this requires experts at the team level, sometimes not. and ... of course this can be implemented in the most employee-hostile way possible, and sometimes in better ways too :)
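To make the backup-restore-verify cycle mentioned above concrete, here is a minimal, hypothetical sketch (all paths and names are invented, and a real cycle would target a database, not a directory). The point is that a backup only counts once it has actually been restored somewhere and checked against the original:

```python
# Sketch of a backup-restore-verify cycle over a directory tree.
import hashlib
import tarfile
import tempfile
from pathlib import Path

def tree_digest(root: Path) -> str:
    """Checksum every file under root (relative path + contents)."""
    h = hashlib.sha256()
    for f in sorted(p for p in root.rglob("*") if p.is_file()):
        h.update(str(f.relative_to(root)).encode())
        h.update(f.read_bytes())
    return h.hexdigest()

def backup_restore_verify(src: Path, backup: Path, scratch: Path) -> bool:
    # 1. Back up: archive the source tree.
    with tarfile.open(backup, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    # 2. Restore into a scratch directory -- never over the live data.
    scratch.mkdir(parents=True, exist_ok=True)
    with tarfile.open(backup) as tar:
        tar.extractall(scratch)
    # 3. Verify: an unrestored backup is a hope, not a backup.
    return tree_digest(src) == tree_digest(scratch / src.name)

if __name__ == "__main__":
    # Demo against a throwaway directory standing in for real app data.
    with tempfile.TemporaryDirectory() as tmp:
        data = Path(tmp) / "data"
        data.mkdir()
        (data / "a.txt").write_text("hello")
        ok = backup_restore_verify(data, Path(tmp) / "data.tar.gz",
                                   Path(tmp) / "restore-check")
        print("backup verified" if ok else "verify FAILED")
```

In practice the "verify" step is the one that gets skipped in siloed setups, which is exactly why it belongs in the shared devops basics.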

Deleted Comment

StopHammoTime · 3 years ago
This is a hot take.

The reason you need to do things the ops way is that ops knows how to run applications in production. There's a reason the meme "worked in dev, ops problem now" exists. You need to meet all the requirements of an app running in production from a technical, availability, security, and policy point of view. It's not easy, and that's why this will never work.

Software is hard, it's just that a lot of developers used to cut their code, run it on their laptop, and let someone else worry about it. It's different these days (although not as much as I'd like).

We don't make you use these tools because we want to; we use these tools because we're required to. No one cared about ISO 27001, SOC 2, or PCI DSS compliance for the crappy PHP app you ran on your cPanel. They didn't care back then that you were using md5 hashes to "secure" passwords. The world is fundamentally different from what it was 10-15 years ago, and the requirements from the business are astronomically different.

Edit: and to people saying "oh you could just run it on a single server", no you can't because certifications like ISO27001 require certain levels of availability and DR. You're not going to be able to guarantee that with a single server running in a rack somewhere.

dhzhzjsbevs · 3 years ago
> This is a hot take.

I'm assuming you mean your comment, not the post itself.

> The reason that you need to do things the ops way is because ops knows how to run applications in production

Stability in production is one metric. Ops overindexing on this metric is exactly what causes the friction with developers.

Developers are trying to ship value to customers.

Uptime is only one part of that equation and for most businesses, it's not even a very important one.

The author points this out near the end. DevOps can't convince devs to use ops techniques if all the reasons for using those techniques are based on the flawed assumption that development velocity isn't important.

dragonwriter · 3 years ago
> DevOps can’t convince devs to use ops techniques

If "DevOps" is the name of a role, and part of the function of that role is "convince devs to use ops techniques", then I feel like the concept of DevOps is lost. Devs need to own ops, including its costs - that is what convinces them to use ops-appropriate techniques, not some outsider jawing at them.

gonehome · 3 years ago
> “Developers are trying to ship value to customers.”

I’ve also seen this be fairly rare. Devs shipping nothing - not even aware if what they're merging will turn on.

They write something they haven’t really tested, merge it, and call it done - a user may never see it and they don’t have any knowledge about how the thing actually gets built and shipped.

Obviously this is worst-case, but in my experience this is a common default. The complaints about friction are because they’re actually forced to reason about how the machine works in order to ship something beyond merge.

jen20 · 3 years ago
> You're not going to be able to guarantee that with a single server running in a rack somewhere.

The way most “enterprises” deploy distributed systems, I’d be surprised if a single server didn’t typically result in better uptime to be honest.

skjoldr · 3 years ago
Certifications are a very good point because afaik ISO 27001 is now far more achievable for far more companies of smaller sizes with not that many IT staff. Sometimes even 3 good engineers can set up everything needed to pass ISO in a small company in like half a year or something.

Deleted Comment

hitpointdrew · 3 years ago
Meh… DevOps is just system administration, and system administration is just sysops. They keep changing the title/role but the work remains largely the same. I think it is a bit disingenuous to throw "dev" in the title; as a "DevOps Engineer" myself I don't consider anything I ever do "dev". Ansible is not "dev", Terraform is not "dev", CI/CD pipelines are not "dev", Helm charts aren't "dev". But for some reason companies seem to love the term.
nrmitchi · 3 years ago
> But for some reason companies seem to love the term.

It's possible that I'm just getting very pessimistic, but at this point I'm fairly confident that companies love it because it makes it way easier to attract candidates and describe one set of responsibilities/position in an interview process, and then bait-and-switch it into what is effectively a systems administrator role.

jimmux · 3 years ago
I've certainly had interviews like that. In fact my first full time job out of uni was one of those and I made the error (in hindsight) of sticking it out until I could transfer into another role. Now I'm much more careful to screen for sys admin keywords in job descriptions.
zero_one · 3 years ago
Just happened to me. Now I'm trying to find a way to make an internal transfer happen or decide how long to stick around before applying elsewhere.
oneplane · 3 years ago
Depends on what you think Developer Operations should be. Our developers instantiate their buckets, databases, cache instances etc. themselves, deploy microservices themselves, and update configuration, traffic management and scaling parameters themselves. No 'system' people required. The system people are mostly just keeping the automation running and adding features as needed.

The work also really isn't the same. Unless you're stuck in the 90's we aren't building servers, installing operating systems, installing applications and installing patches anymore.

trog · 3 years ago
> Depends on what you think Developer Operations should be. Our developers instantiate their buckets, databases, cache instances etc. themselves, deploy microservices themselves, and update configuration, traffic management and scaling parameters themselves. No 'system' people required. The system people are mostly just keeping the automation running and adding features as needed.

When I read this though, I just think about how much time your developers are not actually developing because they're doing operational-side work.

I have the situation where my developers do this stuff, then things break or need debugging and they don't really know how to dig into any of this stack in any meaningful way, so the problems tend to compound. Meanwhile, they're not writing code. The cadence of development seems massively slower to me (coming from a traditional background where they're writing to a clear Ops-set target environment).

The logical outcome is to hire someone who is an expert in all this infrastructure stuff to help manage it - ostensibly, a "DevOps" person, but really, a classic Operations person, just for cloud.

hitpointdrew · 3 years ago
> The work also really isn't the same. Unless you're stuck in the 90's we aren't building servers, installing operating systems, installing applications and installing patches anymore.

I guess I am stuck in the 90s then; I absolutely still do all of those.

Not everything is "serverless", you know.

fipar · 3 years ago
I agree with most of what you say, and in particular with companies' love for the term, but I disagree that "the work remains largely the same". When I got started on this line of work, we used cvs to track program code and we used backups to 'track' infrastructure, including code used to manage the infra (mostly shell scripts, though not only that).

There's a long path from that to ansible and terraform on SCM.

Another big difference I have experienced: we used to literally celebrate server uptime (I mean as a celebration, I have a distinct memory of gathering around an IBM "fridge" to celebrate the uptime birthday of a particular RS/6000) while now a piece of infra with too much uptime is a red flag about potential vulnerabilities.

What does largely remain the same, I think, are the skills needed to be good at this. Then, and now, we need people who don't mind reading manuals, searching online (this was already a thing when I started, I guess you'd have to go back to the mid 90s for this to not be the case?) , who can keep track of where they've been during a debugging/troubleshooting session, that sort of thing.

Another thing that changed is that in the past some people considered it a badge of honor to be assholes to others not in sysadmin, even more to others not in IT (remember those "select * from users where clue > 0" t-shirts, or the BOFH stories?), while now that's typically frowned upon and quite a few companies are explicit about a no assholes policy in their hiring material (or perhaps I've just been lucky with my teammates and smarter when picking where to work at than when I was younger).

hitpointdrew · 3 years ago
> but I disagree that "the work remains largely the same"

I meant at a very high level. The basic responsibilities haven't changed:

    * Deploy/configure infrastructure
    * Deploy applications into infrastructure
    * Monitor/Secure/Maintain infrastructure
    * Scale infrastructure as needed
Sure, in the 90s there was no Terraform, and deploying infrastructure meant getting physical hardware and racking it up. Now you can use Terraform to deploy infrastructure to the cloud, on hardware you rent. So yeah, of course the tools have changed over the years. And sure, as you pointed out, even mentalities have changed (being proud to have a server with 300 days of uptime vs. being ashamed of it).

You can call it "Sys Admin", "DevOps", "Site Reliability Engineer", or whatever; these are all largely the same: "Make sure the infrastructure works, is secure and scalable, and help deploy to it." Even with "cloud managed" things, you still need to set up, configure, and secure them. You can have "cloud managed" k8s; it isn't going to stop developers from using bad practices, like running containers as root, or from lacking a standard deployment process (because each dev is just doing their own thing).
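The "containers running as root" point is exactly the kind of guardrail the platform side ends up enforcing. As a hedged sketch (the `securityContext` fields are real Kubernetes API fields; the pod and image names are invented for illustration):

```yaml
# Hypothetical pod spec enforcing "don't run as root" at the platform level.
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # invented name
spec:
  securityContext:
    runAsNonRoot: true         # kubelet refuses to start a container running as UID 0
    runAsUser: 10001           # arbitrary unprivileged UID
  containers:
    - name: app
      image: example/app:1.2.3 # invented image
      securityContext:
        allowPrivilegeEscalation: false
```

In larger setups this is typically enforced cluster-wide via admission policy rather than relying on each team's manifests, which is again a platform job, whatever title it carries.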

itsmemattchung · 3 years ago
I think the main problem is that the "DevOps" role significantly differs company to company. A company that desperately needs a solid administrator might not be able to attract the right talent and as a result, end up classifying the open positions as "DevOps engineer". At the same time, there are companies out there legitimately trying to bridge the divide between the two — software and system administrators — job families.
damagednoob · 3 years ago
From my understanding, DevOps was never about technical solutions/processes. It was about giving the same business goals to Dev and Ops, the idea being to eliminate the tension between these departments: the goal of Dev was to ship features, while the goal of Ops was to ensure the system stayed up.
agumonkey · 3 years ago
I think devops means something here, sysops would be about running infrastructure not dev infrastructure, devops would focus on producing dev envs, test envs, CI/CD. Not just setting up the runtime hardware / os configuration.
dpz · 3 years ago
Depends on the job role. I find myself writing more Go than anything else...
dilyevsky · 3 years ago
So you’re going to use the wrong definition then complain that name doesn’t work anymore?
jghn · 3 years ago
90+% of companies with "DevOps" mean what the GP is describing.

The biggest tip off - any company with a "DevOps" group or dept isn't doing DevOps. Yet there's a ton of DevOps departments out there

downrightmike · 3 years ago
CEOs like to boast how many Engineers they have.
user00012-ab · 3 years ago
Meh... Dev is just clicking on whatever your IDE autocompletes for you. With copilot you do even less. I think some programmers out there have some big heads that need popping.

Building out and automating cloud infrastructure so your simple code can work is way more complex than most things you do every day. But yeah, keep telling yourself how smart you are as you write "connect to database, return a value" for the 1000th time.

throwaway787544 · 3 years ago
I agree.

> the anecdotal evidence I’ve gathered has been that the conference are heavily attended by operations, and less attended by developers.

Most Ops people - fuck, even most actual DevOps Engineers - have no clue what the fuck DevOps is. They (rightfully) assume it's just a trendy new word for the same old Ops bull, but in the cloud and with Terraform.

DevOps failed because it never educated anybody except a very small handful of people who actively were looking to solve big organizational problems. It was too many things to too few people. It could succeed only if you brought everybody in the entire org through three different training courses. And that's because DevOps tries to make Operations uplift literally the entire technology organization.

DevOps is dead. The ideas are great, but we need to bring the ideas to people outside Ops. Until then it will just be a slightly-more-technologically-advanced Ops.

(disclaimer: I am a DevOps Engineer that hates the fact that this is my title)

lox · 3 years ago
My take is different. I think DevOps was wildly successful; most of our infrastructure is now software that can be managed by software engineers. The goalposts have shifted: we now have major software challenges, whereas before we had hardware and operational challenges.

Well written tools and cross-functional teams that do both operations, feature work and security are still the path forward IMO, we just need to refocus on developer experience.

hedora · 3 years ago
We tried to hire senior devs to do DevOps work, but the ones that can pass the interview and have already been at a DevOps shop are too smart to be fooled a second time.

We still use all the DevOps buzzword stacks, but we stopped doing dev ops. Instead, we are building out a really good ops team. This makes it possible to hire developers again.

Personally, I'm one of the better ops people on the developer side of the fence, but I'd need at least 2x typical principal engineer comp to take another job at a DevOps shop, and I still wouldn't get even a quarter of my normal productivity.

At that point, you may as well just hire a junior straight out of undergrad and burn the rest of your cash.

tkiolp4 · 3 years ago
As a software engineer I don’t want to touch your infrastructure code. I have been lucky so far and have been doing pure product development instead of doing devops. I do believe, though, in the idea of cross-functional teams: designer, developer, infra engineers, managers.
abhishektwr · 3 years ago
And now software engineers spend 50% of their time writing infrastructure. I don’t know what the solution is, but the cloud-native landscape has increased cognitive overload.
clouded · 3 years ago
The point is obviously to pay fewer people to do more work. What I don't get is when developers themselves are in favor of it, like I constantly see with this devops stuff. There's no way they have any kind of life outside of their job. There's no way they have a wife or children, otherwise I simply don't believe for a second they would be in favor of "developers own all the things yay!".
dilyevsky · 3 years ago
+1 tiny teams run infrastructure that previously was the responsibility of whole departments at bigcos, and things mostly work well. Then, as soon as they face some minor, totally solvable issue, everyone loses their minds.
ketzo · 3 years ago
Yeah, how much of this is just a matter of perspective?

I’ve never touched a computer running my company’s production software, and I never will.

20 years ago, that would have been literally inconceivable for someone who was called a “Software Engineer.”

CraigJPerry · 3 years ago
I disagree; I move in different circles from the author. In my little world DevOps is often more dev-heavy (and needs to mature its ops credentials), while SRE has become the new-age Ops.

DevOps is primarily tackling a people problem, there’s certainly plenty of useful tools to help but at its core, it’s about people.

Encouraging people to work in frequent small increments (think XP) rather than quarterly releases. Getting rid of the change-management bureaucracy. Encouraging developers to consider security as they’re designing and writing software. DevOps is a broad church, and an effective one.

People like to over-focus on the tools of DevOps, but I’d take today’s world of version-controlled config (vs. “Ohh, Brad has the config GUI code on his desktop, speak to him about making a change”), automated CI pipelines (vs. builds at the end of the quarter that require stop-the-world help from all developers, also RIP merge guy), and automated deployment into many environments (vs. an Excel list of tasks for the QA team and a different Excel list for prod ops). I’ll take SOA over the the-DB-is-the-network enterprise software architecture of the 00s, and with automated DB deployment too (albeit still without a real solution for automating stateful DB rollback, even today). These are all DevOps factors, from architecture to testing, from security to working and growing together as a team.
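The contrast can be sketched in a few lines. This is a hypothetical, minimal pipeline script (the stage names and commands are placeholders, not real project files): the point is that one version-controlled script runs the same stages on every commit, instead of a quarterly stop-the-world ritual.

```shell
#!/bin/sh
# Hypothetical minimal CI sketch: identical, scripted stages on every commit.
set -eu  # any failing stage aborts the pipeline immediately

run_stage() {
  name="$1"; shift
  echo "running stage: $name"
  "$@" || { echo "stage '$name' failed" >&2; exit 1; }
}

# 'true' is a stand-in for real ./test.sh, ./build.sh, ./deploy.sh scripts.
run_stage test   true
run_stage build  true
run_stage deploy true
echo "pipeline ok"
```

Because the whole flow lives in the repo, a deploy becomes a repeatable command anyone can run, rather than tribal knowledge on one person's desktop.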

The tool-only view of DevOps is just yet another excuse to ignore the hard problem: helping large numbers of people work productively together.

bovermyer · 3 years ago
It appears that you and the author both agree on the core definition of "DevOps" as a term.

Mr. Briggs' article has two main points:

* convincing developers to do operations is too hard, and

* what they actually need is another type of engineer to build and manage an operational platform for them

With regards to the second point, that already exists: platform engineering.

The first point is messier. I disagree with it, and it also misses the mark. Like you say, the real goal is to help large numbers of people work productively together. That's a nebulous goal, though, and difficult to guide people towards. I think that's why people are confused and frustrated by "DevOps" even this long after its conception. It's also why people will continue to be confused and frustrated by it long after the term itself has gone away.