"Rule 11: Which database technology to choose:
Choose SQL when you need to do ad hoc queries and/or you need support for ACID and transactions. Otherwise choose no-SQL"
I think it should be the contrary: SQL by default, no-SQL if you have a specific need and know what you are doing.
I feel like the unstated caveat is "there will almost always come a time when you need to do ad hoc queries, and there will almost always come a time when you need transactions or the equivalent." Which translates to "use SQL unless you are sure you don't and won't need to do ad hoc queries or transactions." Which... seems correct.
I feel like this is a case of "probably shouldn't have a default". SQL should likely be a default consideration but if you're going to say "time to build an app, let's spin up a (insert thing here) to store data" rather than "Let me take some time to consider what my data looks like and select a data persistence strategy accordingly" then you're probably going to wind up also writing a "how my team migrated from <x> to <y> because man did <x> not fit our use case at all" article.
It makes sense as a default because it will be the correct choice 99% of the time. Even when no-SQL is the better option it tends to be a better option for only part of the application.
Picking it as the default would make us wrong less often.
Not exactly. The fact that a NoSQL database doesn't enforce a schema doesn't mean you don't need a clear schema that your app uses, even if that schema is as simple as "whatever format the frontend uses."
Because if you don't, your database essentially becomes a write-only vault, since you don't have any idea of how your data is stored or was stored in the past.
To be completely frank, I'm seeing less and less reason to use traditional SQL databases. MongoDB offers the ability to make SQL queries and even has ACID transactions. Everything SQL can do, it does without slowing down when dealing with big data. The only thing it doesn't offer an efficient solution for is something SQL can't do either, and that's advanced search engine capabilities like Elasticsearch provides.
Some people will argue that PostgreSQL is better in certain ways, but the argument really always comes down to two factors: are you going to hit the cost-efficiency and performance limits of traditional SQL servers, and do you require advanced searching capabilities like graph queries or synonym matching? Even if both answers are no, I'd still argue for Mongo because it makes it easier to distribute ACID-compliant copies of the data by region, providing backup redundancy as well as fast responses in multiple regions.
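For context on the transactions claim: multi-document ACID transactions arrived in MongoDB 4.0 and require a replica set. A rough sketch of what that looks like in pymongo, with the connection string, database, and collection names invented for illustration:

    from pymongo import MongoClient

    # Hypothetical connection string; multi-document transactions
    # require a replica set rather than a standalone server.
    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    accounts = client.bank.accounts

    with client.start_session() as session:
        # Commits when the block exits normally, aborts if it raises,
        # so both updates apply or neither does.
        with session.start_transaction():
            accounts.update_one({"_id": "alice"}, {"$inc": {"balance": -100}},
                                session=session)
            accounts.update_one({"_id": "bob"}, {"$inc": {"balance": 100}},
                                session=session)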
> MongoDB offers the ability to make SQL queries and even has ACID transactions. Everything SQL can do, it does without slowing down when dealing with big data. The only thing it doesn't offer an efficient solution for is something SQL can't do either, and that's advanced search engine capabilities like Elasticsearch provides.
You seem to be looking at this solely from the perspective of what kinds of queries you can run, but there's a lot more to it than that. For example, how do you model and maintain relational data, which I'd argue is most data? Does MongoDB have support for foreign keys or something like them these days? A quick Google brings up DBRefs, but these seem very soft.
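To make "very soft" concrete: a relational database refuses to store a dangling reference, while a DBRef is just data the application has to police itself. A minimal sketch using Python's stdlib sqlite3, with table names invented for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite makes FK enforcement opt-in
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    conn.execute("""CREATE TABLE orders (
                        id      INTEGER PRIMARY KEY,
                        user_id INTEGER NOT NULL REFERENCES users(id))""")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    conn.execute("INSERT INTO orders VALUES (1, 1)")  # fine: user 1 exists

    try:
        conn.execute("INSERT INTO orders VALUES (2, 999)")  # no such user
    except sqlite3.IntegrityError as e:
        print("rejected by the database:", e)  # FOREIGN KEY constraint failed

    # A DBRef, by contrast, is roughly {"$ref": "users", "$id": 999} stored
    # inside a document: nothing blocks the write, and nothing flags the
    # reference when it goes stale.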
I manage a team that's responsible for the MongoDB that powers essentially the whole business. This is a 10-year-old company that started right about when Mongo was trendy. After 10 years, it's a nightmare to understand what's going on in that database.
And it's now extremely difficult to get off of it precisely because it doesn't have the schema and referential integrity and constraints that we need to be able to understand our data well enough to actually do the migration. We really want to switch to an RDBMS, but it's going to be risky and difficult.
You could say this is all bad engineering, and I guess that's true in a reductive sense. But it's like arguing that you don't need to climb with a safety rope because good climbers don't fall. Over 10 years and many engineers "bad" engineering happens.
I also believe that reasoning about data is hard, and you should therefore try to avoid doing it. You should do that hard thinking one time, and then rely on your database to enforce the rules until they need changing. Aka: Don't Make Me Think (About This Constantly).
If I believed in conspiracy theories I'd say that Mongo was one of the best vendor lock-in plays in tech. Mongo Corp is going to be profitable for a while because once you're down the Mongo rabbit hole it's a real pain to climb back out. But they'll host your database at least, so you don't also have to deal with that. I will give them credit for having a nice management UI.
But from my experience of the past few years I would never choose Mongo. For documents, Elasticsearch, or Postgres if you don't have too many. For relational data, a relational DB.
This is probably my own fault, but I feel pressured to be constantly doing something towards programming. I feel like I should either be reading a book or starting a project.
I have 10+ programming books of which I've just finished reading one, and I have way more unfinished projects than I can count.
It's reassuring to see the push-back against the 10x developer idea, I'm starting to feel less guilty now when I spend my free time NOT on programming.
Something that has also helped me out is picking a project to do and ignoring everyone else's projects. I would always get pulled away from what I was working on because I'd see someone using a new technology or doing something unique. When I saw that, I would change up my project because I felt it wasn't good enough or that it wouldn't make me a 10x developer. Now I'm just trying to focus on what makes me happy and what I find enjoyable to work on.
I want to ask though - the author works at Amazon and considers himself a 1x developer. What does that mean for everyone else who doesn't work at Amazon or a FAANG company? Is a 1x developer at a FAANG company a 10x developer elsewhere?
I don't think it's so much about practicing outside of work hours; it's more about being mindful of how you're going about what you're doing and evolving it to make more sense. If there's a trait I notice in mediocre developers, it's that they have no interest in examining their own work habits. It's the guy that always reaches for the menu bar instead of learning keyboard shortcuts, or the guy that refuses to learn anything about functional programming because they want to stick to the basic Java they learned in college.
> It's the guy that always reaches for the menu bar instead of learning keyboard shortcuts
I think judging someone based on their computer habits like being a command-line guy versus a GUI guy might be a risky thing. I'm a command-line guy through and through, but my newest boss comes from a different background and is very GUI-focused, and I think it would be easy to assume he's mediocre based on his choice of tools and such, but the results are more what matters.
That said, he seems like a pretty stubborn guy and I've definitely seen some warning signs on being resistant to change.
A 1x developer at one of the tier 1 tech companies is...just a 1x developer - doesn't really matter where you're at.
Being a multiplier is not about just being technically proficient. It's about enabling your whole team to be more productive, and helping it get to the right decisions instead of decisions that might cost your team or coworkers huge amounts of extra work long term.
> Being a multiplier is not about just being technically proficient. It's about enabling your whole team to be more productive, and helping it get to the right decisions instead of decisions that might cost your team or coworkers huge amounts of extra work long term.
Thank you for this comment! I'll keep this in mind as I progress through my career. It's nice to know that it isn't just about being technical.
> "It's reassuring to see the push-back against the 10x developer idea, I'm starting to feel less guilty now when I spend my free time NOT on programming."
10x developers don't spend their free time programming, at least in my observation. They don't need to.
> "I want to ask though - the author works at Amazon and considers himself a 1x developer. "
A 1x developer would be an average (ok, fine, median) developer, no? About half are better and about half are worse. Compared to the bottom half of the statistical distribution, a 1x developer would by definition perform better.
> 10x developers don't spend their free time programming, at least in my observation. They don't need to.
Ah, I guess I need to read more about what people view as a 10x developer. I didn't mean to imply that a 10x developer spends all of their free time coding. I just meant that with companies, job descriptions, other devs, etc. talking about wanting a 10x developer, I feel pressured to spend my free time programming, or at least doing something that would make me feel like I'm a worthy candidate.
> A 1x developer would be an average (ok, fine, median) developer, no? About half are better and about half are worse. Compared to the bottom half of the statistical distribution, a 1x developer would by definition perform better.
That makes sense. I guess what I'm trying to understand is, could you call yourself a 1x developer (or average) when working at a FAANG company? Doesn't working at a FAANG company imply that you are better than average, considering the status of these companies, their rigorous interview processes, and so on? I get the impression that FAANG only hires 10x developers, or at least developers that come across as 10x.
I apologize if I'm coming across argumentatively at all. I'm just trying to understand if the author is actually a 1x developer, or if they're a 1x developer at Amazon. Would a 1x developer at Amazon be a 10x developer elsewhere?
No. Average devs are average devs everywhere. Amazon's scale is different, but it's not like the scope of an individual dev's problems is inherently different at FAANG than elsewhere. Not everyone will work on the most intellectually intense pieces of code at FAANG companies; in fact, most don't.
"To replace out MediaWiki with a Java-based alternative (XWiki) ended up taking a total fo 24 dev-years over 4+ calendar years for the team, not counting the interruptions to pretty much every other team at Amazon as their pages were getting constantly migrated and un-migrated"
I would love to hear more about this. I'm guessing that's a cost of at least $4M? How was this approved? How did they allow it to continue for four years?
There's a roughly 12,000-word postmortem document that took an additional 2+ months to produce after the fact, which the OP has lightly quoted or summarized from in his post (including what you quoted). It really is an impressively magnificent failure at scale, but there's nothing fundamentally earth-shattering about how this project went so far off the rails. Mistakes were made, but IMO nobody was outrageously negligent.
The OP's assertion that it was primarily due to slavish devotion to InfoSec's insistence on getting off PHP-based MediaWiki is incomplete and misleading. The team itself opted into the migration, and by their own admission in their own postmortem document, they wanted the opportunity to get off MediaWiki (killing two birds with one stone, both as a technology upgrade and as an InfoSec compliance thing), but they fundamentally, vastly underappreciated the scale and complexity of the project (some top-line #'s: ~4M total documents, 1.7M unique pages visited daily, 600K unique MAU, etc.). What probably doesn't come through in talking about this is that the wiki is far from just a simple collection of text-based wiki pages in the standard sense; it serves more like something akin to an unholy abomination of a WordPress-like platform with endless amounts of plugins and macros and templates, and gives you just enough programmability to be dangerous.
The primary reason it took 4+ years was essentially down to going through multiple rounds of failed migration attempts, including failed attempts at producing automatic translations of wikis from one platform to another (which proved to be incredibly complex, due to the nature of how customizable MediaWiki was and how many teams had gone and done so many interesting/advanced things with it, which was great... except it made automatic translation/migration near-impossible).
OP here: take another look at the first sentence of that post-mortem doc. I don't think it's misleading to say that this was primarily driven by InfoSec. Though I don't disagree that the purported benefits of XWiki technology were considered as maybe a secondary factor.
Keep in mind stories like the wiki one are anecdotal (meaning they're not representative of all projects), and in fact this is covered in both Patrick's tweet thread and Dan Luu's linked essay ( https://danluu.com/sounds-easy/ ).
So while they do their best to not have _every_ initiative go this way, large companies - even the mythical FAANGs - will inevitably have moments like this wiki debacle, and they need to be able to absorb them without material impact on the bottom line or having to fire a bunch of people for trying and failing.
Amazon's total all-in cost per dev is almost certainly >$300k per year, so 24 dev-years × $300k works out to more like >$7 million, if that's an accurate number.
As far as how it gets approved, that's not enough money to really register at Amazon's scale. I've seen waste similar to that at companies much, much smaller than Amazon that goes entirely unnoticed.
> As far as how it gets approved, that's not enough money to really register at Amazon's scale. I've seen waste similar to that at companies much, much smaller than Amazon that goes entirely unnoticed.
Maybe I'm being dense, but this still doesn't register for me. If 4+ calendar years of work for a single team continues with no noticeable progress, then there's an organizational issue, no matter the size of the company.
> that's not enough money to really register at Amazon's scale.
This viewpoint doesn't really match up with how enterprises operate.
It's not possible to function at Amazon's scale without meticulous accounting and financial operations. The reason for this is that it is extremely easy for wasteful spending to propagate throughout the enterprise if left unchecked.
Waste only appears after a project fails...it's easy to point out waste after the fact, but it's downright impossible to call it out as it's happening.
I don't understand why corporations are obsessed with decommissioning wikis.
When I was at Red Hat, they tried to decommission the local country wiki in favor of their company-wide wiki (Mojo, absolute shit by the way).
They tried for about 2 years, then finally somebody said "what the fuck are we doing" and the migration was stopped. I believe the banner "Decommissioning soon" still hangs there.
What happens is that different teams build out their own wikis (or wikis come in via acquisition). The company then ends up with a dozen of them, re-orgs occur, and a single team now has two wikis, and people wonder why the information isn't centralized. Everyone agrees it would be great to have a single company-wide wiki, and the project is born.
Usually, someone saw a cool feature they wanted and everything moved to that. So often the main purpose of storing information gets lost to flashy new graphics.
Finding the right information is already difficult and a time-suck. Adding an extra place to look (and search interface, etc) for no extra value just makes it suck more. Plus there's all the TCO reasons - licensing costs, extra servers, extra maintenance, etc etc. You could also look at it as "Don't Repeat Yourself" writ large.
That's the famous w.amazon.com internal wiki that is basically the intranet portal for all of Amazon. I worked at AWS from 2016-2019 as a principal engineer. They were trying to deprecate w.amazon.com for the entire 3+ years I worked there, and were still trying when I left in early 2019.
I can tell you that Amazon is pretty great in most ways, but they have a lot of really old-school tools. For example, they still use majordomo to manage hundreds of thousands of email lists. They've customized and extended it in so many ways that it probably looks nothing like open source majordomo, but at its core, it's still majordomo. They just wrapped the 1990s majordomo CLI tool with a 1990s static HTML page that authenticates you with OAuth and lets you create and manage email distribution lists.
So many tools at Amazon are like that. But they actually function amazingly well once you adapt to their 1990s->early 2000s quirks.
One comment I'll make is this: every environment will have preferred "paved" roads - use X language/toolchain and Y relational database, Z cloud/metal provider. Ultimately these details, other than X and a subset of Y, shouldn't matter if you build the right abstractions. And by "the right abstractions" I mean you should just be able to declare something like this:
    relational_database:
      mydb:
        size: 100GB
        readers:
          - auditservice
        readwriters:
          - application1
          - application2
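The other half of that bargain is that platform tooling, not each application team, turns the declaration into real resources. A hypothetical sketch of what a consumer of that YAML might look like, assuming PyYAML is available; the emitted Postgres-flavored GRANTs are illustrative only:

    import textwrap
    import yaml  # PyYAML, assumed available

    SPEC = textwrap.dedent("""\
        relational_database:
          mydb:
            size: 100GB
            readers:
              - auditservice
            readwriters:
              - application1
              - application2
    """)

    spec = yaml.safe_load(SPEC)
    for name, cfg in spec["relational_database"].items():
        print(f"-- provision database '{name}' with {cfg['size']} of storage")
        for svc in cfg.get("readers", []):
            print(f"GRANT SELECT ON ALL TABLES IN SCHEMA public TO {svc};")
        for svc in cfg.get("readwriters", []):
            print(f"GRANT SELECT, INSERT, UPDATE, DELETE "
                  f"ON ALL TABLES IN SCHEMA public TO {svc};")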
Very few organisations really count the cost of things like this internally. And it would have been quoted as "relatively simple" in the beginning; just migrate a wiki from one piece of external software to another, how hard can it be?
("how hard can it be" should be on your Words That Prefigure Expensive Disasters list. It's the business equivalent of "hold my beer and watch this")
Re: big company vs small -- Small companies fail all the time; projects like this are more prevalent at big companies because it doesn't (always, at least) kill them.
This is the most concise, honest summary of "agile" I've ever come across. Well put.
The amount of person-hours spent bikeshedding about various sundry "agile" procedural details is absolutely breathtaking. A perpetual, real-life Dilbert punchline taken seriously by armies of *Managers.
Well that's pretty naive. I've worked with some pretty good teams that wouldn't work any other way. I suspect you've only worked at places that abused agile as described here... https://ronjeffries.com/articles/language-of-hatred/
No, I've worked in places where it works just fine.
The point is that, at the end of the day, it boils down to elaborate processes for breaking things up into two-week chunks of work. That's it. That's the bit I thought was a concise, excellent summary.
Sometimes, for certain flavors of apps and tech and teams, that model works splendidly. Sometimes, it is silly administrative window dressing that is completely disconnected from the reality of what the team is doing. In my experience, it's usually the latter -- and the teams work just fine _in spite_ of the fact that everyone is pretending they are doing "scrum" or "agile".
I'd say what is naive is the belief that such a closely scripted, narrowly defined workflow is a one-size-fits-all solution for optimizing teams under all circumstances and contexts.
> Rule 20: When somebody says Agile, push for Kanban, not Scrum... ...Scrum can easily mean that you’ll get pressured to work extra hours to complete something within that arbitrary two-week horizon.
This is very true. I've worked for over 12 years in the Bay Area on different software engineering teams at startups and found that Scrum just leads to burnout and developer unhappiness, and encourages team members to do the minimum "slap it together till it works" solution without thinking long-term about codebase stability, architecture, and maintainability. By far the best development process, the one that leads to high developer happiness, engagement, and productivity and empowers developers to go above and beyond and produce work that acts as a force-multiplier across the team, is what the author describes here as 'Kanban', also known as eXtreme programming. Most people from Pivotal Labs or Carbon Five, or from a startup that adopted their process, would recognize what the author describes as 'Kanban'.
To be clear: XP and Kanban are different things. XP has prescribed engineering practices, most forcefully that development should be done via pair programming for continuous code review. Kanban prescribes no engineering practices and could be used in lots of non-engineering contexts (e.g., your marketing team could take all of their tasks, put them in a prioritized backlog, then track the progression of those tasks across Todo/Doing/Done/Blocked and that would still be Kanban). Kanban is great for teams in maintenance mode with a steady in/out of smaller tasks in their backlogs, but it's not great for helping your customers get an idea of when they might see a new product or feature.
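To make the mechanics concrete: the essential Kanban bookkeeping is just columns plus work-in-progress (WIP) limits. A toy sketch; the column names and limits are invented for illustration:

    class KanbanBoard:
        def __init__(self, wip_limits):
            # e.g. {"Doing": 2}; None means unlimited
            self.wip_limits = wip_limits
            self.columns = {name: [] for name in wip_limits}

        def add(self, task, column="Todo"):
            self._check_limit(column)
            self.columns[column].append(task)

        def move(self, task, src, dst):
            # The WIP limit is the whole trick: it forces finishing
            # work before starting more.
            self._check_limit(dst)
            self.columns[src].remove(task)
            self.columns[dst].append(task)

        def _check_limit(self, column):
            limit = self.wip_limits[column]
            if limit is not None and len(self.columns[column]) >= limit:
                raise RuntimeError(f"WIP limit hit on {column!r}")

    board = KanbanBoard({"Todo": None, "Doing": 2, "Blocked": None, "Done": None})
    board.add("write release notes")
    board.add("fix login bug")
    board.move("fix login bug", "Todo", "Doing")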
I do find it odd that people describe Scrum as "leading to burnout" or a "death march", and I'm guessing most people who do either have not worked in waterfall IT projects that preceded widespread Agile or they're part of "Agile" teams that do waterfall development in two-week chunks with daily standups. (Maybe both.)
Scrum practiced well brings the essence of small-town democracy into the workplace. You have to work a late night the day before sprint release? You were in the room when the team agreed to the body of work that would be committed for delivery in this sprint. You just slapped it together until it works? You'll get to accommodate 0-point bugs in a future sprint, and perhaps you can have a conversation in your team's retrospective about why velocity went down that week. (Speaking of: how many other professions get to have a candid conversation with management about what's going well and not going well on a regular basis? Not many, it's a real privilege!) There are certainly deviations from this, but in my experience Scrum teams' problems are generally of their own making, and many of those problems are refined away over time as teams grow and better define their norms.
> You have to work a late night the day before sprint release? You were in the room when the team agreed to the body of work that would be committed for delivery in this sprint.
Fuck that. The fact that my estimate does not work out as expected should not be a reason to work late nights just to get it done.
That’s exactly the death march that you were saying Scrum is not.
> I'm guessing most people who do either have not worked in waterfall IT projects that preceded widespread Agile or they're part of "Agile" teams that do waterfall development in two-week chunks with daily standups.
I haven’t come into an organization in the past 15 or so years which wasn’t using _some_ form of Agile (generally Scrum). It’s fairly likely anyone who has started their career in software within that timeframe may never have experienced how things were done in the past.
That said, there is certainly always room to improve, which is a critical piece I see teams often miss in Scrum. The two-week cycle isn’t just about planning and doing whatever it takes to hit the commitment. It’s about a) the business having a cycle they can plan to if priorities change and b) the team having a regular feedback loop they can use to help understand where they are doing well and where they aren’t.
Missing a sprint commitment is fine. Missing the commitment for multiple sprints in a row means something is going wrong. This is an opportunity to learn and improve, and the sprint retrospective at the end is as important if not more so than the planning meetings.
And then make time in the sprint to implement improvements in the process, tech, whatever is needed. We use 20% of the sprint time as a rule on this, and move that up and down periodically as needed.
> You have to work a late night the day before sprint release? You were in the room when the team agreed to the body of work that would be committed for delivery in this sprint.
This can get complicated depending on the team dynamics and upper management. Sometimes there are pressures that cause the team to overcommit (throwing out lower estimates, giving the illusion that things can be done within the two weeks).
Overall, I think it kind of depends on the product you're building. If it's some SaaS product in the modern world of CI/CD, it doesn't really make sense to hold everything up for two weeks and then release. If new things are being continuously deployed/delivered, then value is being delivered faster and the business is achieving goals faster, learning faster, getting returns faster, rather than having to wait two weeks at a time. This in turn makes your business more 'agile', and all the productivity boosts, developer empowerment, and happiness are great side benefits of the iterative process.
> And what I have found is that most – say, 90%, of what I learn at one job is completely useless for the next one.
Then the problem isn't with what's being learned. The problem is that the author lacks a framework for generalizing and internalizing the lessons learned.
Letting 90% of lessons learned on the job go to waste later in life is going to lead to a very difficult career path.
Later...
> Compounding is a pretty important concept that shows up in compound interest, in Moore’s Law, all over the place. It’s about virtuous cycles. And so in the limited flexible time that I have, I think the rule of thumb is to focus on things that could trigger a virtuous cycle.
In other words, focus on activities that can capture more value from the 90% of wasted lessons.
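To put toy numbers on the compounding idea (the percentages are invented for illustration):

    # A 1% improvement per week, compounding, vs. the same 1% gained linearly.
    weeks = 104  # two years
    compounding = 1.01 ** weeks   # ~2.81x
    linear = 1 + 0.01 * weeks     # 2.04x
    print(f"compounding: {compounding:.2f}x vs linear: {linear:.2f}x")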
Writing, both publicly and privately, is an excellent way to do that. It seems the author may have an inkling but isn't saying it. Given there are only two posts on the blog, this could be the author's attempt to test the idea.
> So I’m going to try to bootstrap my software engineering longbeard wisdom in the following manner:
> 1) Write out my inane thoughts on some software engineering topics.
> 2) Share out my thoughts to people smarter than me and invite vicious critique.
> 3) Update based on #2.
> 4) Attempt to come up with a methodology for finding and prioritizing useful information to read about software engineering, or reflections based on new projects, and integrating into #1.
OP here. You're spot on. Both in calling out that this is a solvable problem (not a fact of nature) and that writing is probably a pretty good way to get started. That was definitely the idea with putting up the blog.
I came up with the 90% number not because I have any real data on this, but as a way to provoke myself to challenge some assumptions. Earlier on in my career, it was easier to tell myself the story that everything I learned, all the hours I spent doing stuff, were all part of a bigger narrative. That it would all build upon itself in a one-directional way. Even if I forgot specific facts, I'd still always be learning and growing in one way or another. I'd be developing critical thinking, learning how to learn, moving to higher levels of abstraction, pruning out useless knowledge and strengthening core insights. That kind of thing.
That was actually a pretty useful mentality, and still is in a lot of ways. It gave me the confidence to do a bunch of career changes (like I think I mentioned in that section of the blog) because I did believe that I was drawing lessons from each successive career area to the next. And there was a lot of truth in that.
However.
At the end of the day, as a developer, I'm certainly not doing Leetcode or Kaggle all day. The majority of what I've spent my time learning has been very specific knowledge: learning the components and business logic in specific systems, getting comfortable with internal tools and processes, getting to become very familiar with some subset of all the company code, working with clients and dependent teams and internal customers. I do believe that being at a tech giant can make the situation worse in that regard, since the specific problems tend to be so narrow. But my hunch is that it's still the norm for developers.
I'll put things in a different perspective. According to a lot of studies, doctors get worse at their jobs on average over time (link: https://hbr.org/2017/05/do-doctors-get-worse-as-they-get-old...). Most of what they learn on a day-to-day basis is very specific, time-bound knowledge (specific patient info, office info) rather than general-purpose lessons about medicine. By going for the 90% number I was trying to push myself to consider that that sort of effect is the norm (in programming and elsewhere) in contrast to my earlier notions about "constant generalization."
I think this number can be changed but like you mention, it requires an amount of deliberate effort like writing that just hasn't been baked into my usual routines.
Programming is a marathon, not a sprint. That will catch up with you some day. Plus, personally, it just makes life hell. The day passes faster when you get into it, and those things prevent me from getting into it (not true of everyone, from what I've observed).
I've thought about this as well. If you have flow for ~6 hrs a day with ~2 hrs a day of buffer time for meetings, lunch, coffee, etc., you'd be in the top third of software developers. Anecdotal for sure, but I'd love to see a study on this.
With rule #2 (knowledge not transferring): that's always a sign you're not working somewhere good, or you're in a role that will stunt your growth. You really should be able to transfer a lot of what you learn. If you're stuck with a bunch of proprietary gunk that does the same thing as open source, but worse, good engineering management knows to replace that stuff, because it will inevitably drive talent away. Or at the very least rotate people through those roles. I know I've left jobs where I was stuck maintaining legacy systems while the rest of the company moved on to open standards, because if you do that for years it's like slow career suicide.
> I think it should be the contrary: SQL by default, no-SQL if you have a specific need and know what you are doing.
Although I guess if you need blog fodder...
> Picking it as the default would make us wrong less often.
The issue with SQL is that the DB needs to be designed first. But when it is done correctly, the advantages are numerous.
> But from my experience of the past few years I would never choose Mongo. For documents, Elasticsearch, or Postgres if you don't have too many. For relational data, a relational DB.
And Mongo's slow, too.
https://www.wikimatrix.org/compare/xwiki+moinmoin+gwiki
https://twitter.com/patio11/status/1255371954443505665
It's always satisfying to read reports from the trenches demonstrating the exact opposite.
Getting documentation off Google Docs, on the other hand, seems impossible from a behavior standpoint :(
I mean, it gave a lot of people something to do (and get paid to do) for 4 years. There's your answer, most of the time.
> I think this number can be changed but like you mention, it requires an amount of deliberate effort like writing that just hasn't been baked into my usual routines.
Is there a right way to go about this? This is something I need to start doing, personally.
During work hours stay away from:
* HN
* Facebook
* Twitter
* Reddit
* Any other thing constantly distracting you and taking away your attention from your job
Congratulations, you are now a 3-10x developer.
Congratulations. You are still getting paid the same.
Boosts in productivity need to come with boosts in pay, otherwise the logical thing to do is to scale back your effort to a point where the amount you get done matches the amount you get paid for.
All of my raises and promotions came from results that exceeded expectations.
After the first week, yes.
Whether you are still being paid the same after a year depends on you and your negotiation skills.