One of my best commits was removing about 60K lines of code, a whole "server" (it was the early 2000s) that had to hold all of its state in memory, and replacing it with about 5K lines of logic that was lightweight enough to piggyback on another service and had no in-memory state at all. That was a pure algorithmic win: figuring out that a specific guided subgraph isomorphism, where the target was a tree (a directed acyclic graph with a single root), was possible in a single walk through the origin (general) directed bi-graph, emitting vertices and edges to the output graph (tree) while maintaining only a small, in-process, peek-able stack of the steps taken from the root that could affect the current generation step (not necessarily just the parent path).
I still remember the behemoth of a commit that was "-60,000 (or similar) lines of code". Best commit I ever pushed.
Those were fun times. Hadn't done anything algorithmically impressive since.
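Roughly the shape of that walk, as a toy TypeScript sketch (invented names and types, nothing like the actual code; it only shows the idea of emitting a tree while keeping the ancestor stack as the only state):

    type NodeId = string;
    type Graph = Map<NodeId, NodeId[]>;            // adjacency list of a general directed graph

    interface TreeNode { id: NodeId; children: TreeNode[] }

    // One walk from a chosen root: emit vertices/edges into an output tree,
    // keeping only the current ancestor stack (peeked to cut back-edges) as state.
    function emitTree(g: Graph, root: NodeId): TreeNode {
      const walk = (id: NodeId, ancestors: NodeId[]): TreeNode => {
        const node: TreeNode = { id, children: [] };
        for (const next of g.get(id) ?? []) {
          if (ancestors.includes(next)) continue;  // peek at the stack: don't revisit the current path
          node.children.push(walk(next, [...ancestors, id]));
        }
        return node;
      };
      return walk(root, []);
    }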
I’m a hobby programmer and lucky enough to script a lot of things at work. I consider myself fairly adept at some parts of programming, but comments like these make it so clear to me that I have an absolutely massive universe of unknowns that I’m not sure I have enough of a lifetime left to learn about.
I want to believe a lot of these algorithms will "come to you" if you're ever in a similar situation; only later will you learn that they have a name, or there's books written about it, etc.
But a lot is opportunity. Like, I had the opportunity to work on an old PHP backend, 500ms - 1 second response times (thanks in part to it writing everything to a giant XML string which was then parsed and converted to a JSON blob before being sent back over the line). Simply rewriting it in naive / best practices Go changed response times to 10 ms. In hindsight the project was far too big to rewrite on my own and I should have spent six months to a year trying to optimize and refactor it, but, hindsight.
Read some good books on data structures and algorithms, and you'll be catching up with this sort of comment in no time. And then realise there will always be a universe of unknowns to you. :-) Good luck, and keep going.
(More than?) half of the difficulty comes from the vocabulary. It’s very much a shibboleth—learn to talk the talk and people will assume you are a genius who walks the walk.
A lot of it is just technical jargon. Which doesn't mean it's bad; one has to have a way to talk about things. But the underlying logic, I've found, is usually graspable for most people.
It's the difference between hearing a lecture from a "bad" professor in Uni and watching a lecture video by Feynman, where he tries to get rid of scientific terms, when explaining things in simple terms to the public.
As long as you get a definition for your terms, things are manageable.
You could've figured this one out with basic familiarity with how graphs are represented, constructed, and navigated, just by working through it.
One way to often arrive at it is to just draw some graphs, on paper/whiteboard, and manually step through examples, pointing with your finger/pen, drawing changes, and sometimes drawing a table. You'll get a better idea of what has to happen, and what the opportunities are.
This sounds like "Then draw the rest of the owl," but it can work, once you get immersed.
Then code it up. And when you spot a clever opportunity, and find the right language to document your solution, it can sound like a brilliant insight you could just pull out of the air because you are so knowledgeable and smart in general. When, actually, you had to work through that specific problem, to the point you understood it, like Feynman would want you to.
I think Feynman would tell us to work through problems. And that Feynman would really f-ing hate Leetcode performance art interviews (like he was dismayed when he found students who'd rote-memorize the things to say). Don't let Leetcode asshattery make you think you're "not good at" algorithms.
I deleted an entire micro service of task runners and replaced it with a library that uses setTimeout as the primitive driving tasks from our main server.
It’s because every task was doing a database call, but they had a whole repo and AWS Lambdas for running it. Stupidest thing I’ve ever seen.
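The pattern is roughly a re-arming setTimeout loop, something like this (a sketch, not the actual library; task name and the DB call are hypothetical placeholders):

    type Task = { name: string; intervalMs: number; run: () => Promise<void> };

    // Sketch of a setTimeout-driven runner: re-arm only after the previous run
    // finishes, so a slow task never overlaps itself.
    function schedule(task: Task): void {
      const tick = async () => {
        try {
          await task.run();                        // e.g. one database call
        } catch (err) {
          console.error(`task ${task.name} failed`, err);
        } finally {
          setTimeout(tick, task.intervalMs);
        }
      };
      setTimeout(tick, task.intervalMs);
    }

    // Hypothetical usage:
    schedule({ name: "expire-sessions", intervalMs: 60_000, run: async () => { /* db call */ } });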
> I deleted an entire micro service of task runners and replaced it with a library that uses setTimeout as the primitive driving tasks from our main server.
Your example raises some serious red flags. Did it ever dawn on you that the reason these background tasks were offloaded to a dedicated service might have been to shed this load from your main server and protect it from handling sudden peaks in demand?
If you flatten both of your trees/graphs and regard the output as strings of nodes, you reduce your task to a substring search.
Now if you want to verify that the structures and not just the leaf nodes are identical, you might be able to encode structure information into your strings.
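A minimal sketch of that reduction, assuming labels are delimited by the parentheses (real labels would need escaping so one can't be a prefix of another, and this only finds complete rooted subtrees, not arbitrary subgraphs):

    interface T { label: string; children: T[] }

    // Serialize so the parentheses encode structure; rooted-subtree matching
    // then becomes plain substring search.
    function flatten(t: T): string {
      return `(${t.label}${t.children.map(flatten).join("")})`;
    }

    const big: T = { label: "a", children: [
      { label: "b", children: [{ label: "d", children: [] }] },
      { label: "c", children: [] },
    ]};
    const candidate: T = { label: "b", children: [{ label: "d", children: [] }] };

    console.log(flatten(big).includes(flatten(candidate)));   // true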
Hi I'm a mathematician with a background in graph theory and algorithms. I'm trying to find a job outside academia. Can you elaborate on the kind of work you were doing? Sounds like I could fruitfully apply my skills to something like that. Cheers!
Look into quantitative analyst roles at finance firms if you’re that smart.
There’s also a role called algorithms engineer in standard tech companies (typically for lower-level work like networking, embedded systems, or graphics), but the lack of an engineering background may hamstring you there. Engineers working in crypto also use a fair bit of algorithms knowledge.
I do low level work at a top company, and you only use algorithms knowledge on the job a couple of times a year at best.
You can try to get a job at an investment bank, if you're okay with heavy slogging in terms of hours, which I have heard is the case, although that could be wrong.
I heard from someone who was in that field, that the main qualification for such a job is analytical ability and mathematics knowledge, apart from programming skills, of course.
That was about 20 years ago. Not much translates to today's world. I was in the algorithms team working on a CMDB product. Great tech, terrible marketing.
These days it's very different, mostly large-ish distributed systems.
I would love a little more context on this, cause it sounds super interesting and I also have zero clue what you’re talking about. But translating a stateful program into a stateless one sounds like absolute magic that I would love to know about
He has two graphs. He wants to determine if one graph is a subset of another graph.
The graph that is to be determined as a subset is a tree. From there he says it can be done with an algorithm that traverses every node at most once.
I’m assuming he’s also given a starting node in the original graph and the algorithm just traverses both graphs at the same time starting from the given start node in the original graph and the root in the tree to see if they match? Standard DFS or BFS works here.
I may be mistaken, because I don’t see any other way to do it in one walk-through unless you are given a starting node in the original graph.
To your other point, the algorithm inherently has to be stateful too. All traversal algorithms for graphs have to have long-term state, simply because if you’re at a node in a graph and it has like 40 paths to other places, you can literally only go down one path at a time, and you have to statefully remember that node has another 39 paths that you have to come back to later.
The target being a tree is irrelevant right? It’s the “guided” part that makes a single walk through possible?
You are starting at a specific node in the graph and saying that if there’s an isomorphism the target tree root node must be equivalent to that specific starting node in the original graph.
You just walk through the original graph following the pattern of the target tree, and if something doesn’t match it’s false, otherwise true? Am I mistaken here? Again, the target being a tree is a bit irrelevant. This will work for any subgraph as long as you are also given starting-point nodes for both the target and the original graph?
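In code, that guided walk would look roughly like this (a sketch only: matching by node label, both start nodes given, and not worrying about whether two pattern children may map to the same neighbour):

    type Id = string;
    type Graph = Map<Id, Id[]>;                 // adjacency list; ids double as labels
    interface Pattern { label: Id; children: Pattern[] }

    // Guided match: does the pattern tree embed into the graph when both walks
    // start at the given nodes? Single pass over the pattern; the only state is
    // the implicit recursion (DFS) stack.
    function matches(g: Graph, start: Id, p: Pattern): boolean {
      if (start !== p.label) return false;
      const out = g.get(start) ?? [];
      return p.children.every(child => out.some(next => matches(g, next, child)));
    }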
In college I worked for a company whose goal was to prove that their management techniques could get a bunch of freshman to write quality code.
They couldn't. I would go find the code that caused a bug, fix it, and discover that the bug was still there. Because previous students, rather than add a parameter to a function, would make a copy and slightly modify it.
I deleted about 3/4 of their code base (thousands of lines of Turbo Pascal) that fall.
Bonus: the customer was the Department of Energy, and the program managed nuclear material inventory. Sleep tight.
In addition to not breaking existing code, it also has the added benefit of boosting personal contribution metrics in the eyes of management. Oh, and it's really easy to revert things - all I have to do is find the latest copy and delete it. It'll work great, promise.
I work with someone who has a habit of code duplication like this. Typically it’s an effort to turn around something quickly for someone who is demanding and loud. Refactoring the shared function to support the edge case at hand would take more time and testing, so he doesn’t do it. This is a symptom of the core problem.
I have a habit of doing this for data processing code (python, polars).
For other code it's an absolute stink and I agree. But for data transforms... I've seen the alternative: a neatly abstracted in-house library of abstracted combinations of dataframe operations with different parameters, and... it's the most aesthetically pleasing unfathomable hell I've ever experienced.
So now, when munging dataframes, I will be much quicker to reach for 'copy that function and modify it slightly' - a maintenance headache, but at least the result is readable.
But it's a false premise; the claim is that just copy/pasting something is faster, but is it really?
The demanding / loud person can and should be ignored; as a developer, you are responsible for code quality and maintainability, not your / their manager.
> I work with someone who has a habit of code duplication like this.
Are you sure it's code duplication?
I mean, read your own description: the new function does not need to support edge cases. Having to handle edge cases is a huge code smell, and a clear sign of premature generalization.
And you even admit the guy was more productive and added fewer bugs?
There is a reason why the mistakes caused by naive approaches to Don't Repeat Yourself (DRY) are corrected with Write Everything Twice (WET).
This reminds me of my experience. I've worked for one company based in SEA that had almost identical portals in several countries in the region. Portals were developed by an Australian company and I was hired to maintain existing/develop new portals.
Source code for each portal was stored in a separate Git repository. I asked the original authors how I was supposed to fix bugs that affected all the portals, or develop new functionality for all of them. The answer was to backport all fixes manually to all copies of the source code.
Then I asked: isn't it possible to use a single source repository and use feature flags to customize the appearance and features of each portal? The original authors said it was impossible.
In 2-3 months I merged the code of 4-5 portals into one repository, added feature flags, and upgraded the framework version. The release went flawlessly, and it became possible to fix a bug simultaneously for all the portals, or develop new functionality available across all the countries where the company operated. It was a huge relief for me, as copying bugfixes manually was a tedious and error-prone process.
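For anyone picturing it, the flags themselves don't need to be anything fancier than a per-portal config, something like this (hypothetical portal and flag names, not the real ones):

    // Hypothetical per-portal feature flags after merging the repositories.
    type Portal = "sg" | "my" | "th" | "vn";

    const flags: Record<Portal, { newCheckout: boolean; loyaltyProgram: boolean }> = {
      sg: { newCheckout: true,  loyaltyProgram: true  },
      my: { newCheckout: true,  loyaltyProgram: false },
      th: { newCheckout: false, loyaltyProgram: true  },
      vn: { newCheckout: false, loyaltyProgram: false },
    };

    const currentPortal: Portal = "sg";           // set per deployment/build
    if (flags[currentPortal].newCheckout) { /* render the new flow */ }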
I once had to deal with some contractors that habitually did this, when confronted on how this could lead to confusion they said "that's what Ctrl+F is for."
Oh boy! This reminded me of one of my worst tech leads. He pushed secret tokens to GitHub. When I asked in the team meeting why we would do this instead of using a secrets manager, the response was: "These are private repos. Also we signed an NDA before joining the company."
> Bonus: the customer was the Department of Energy, and the program managed nuclear material inventory. Sleep tight.
These are my favorite (in a sense) programmer stories--that there's these incomprehensible piles of rubbish that somehow, like, run The World and things, and yet somehow things manage to work (in an outwardly observable sense).
Although, I recall two somewhat recent stories where this wasn't the case: the unemployment benefits fiascos during the early Covid era, and some more recent air traffic control-related things (one of which affected me personally).
Note for anyone wondering: reposts are ok after a year or so (https://news.ycombinator.com/newsfaq.html). In addition to it being fun to revisit perennials sometimes (though not too often), this is also a way for newer cohorts to encounter the classics for the first time—an important function of this site!
I am a simple man
I see -2k lines of code, I upvote
I've told this story to every client who tried schemes to benchmark productivity by some single-axis metric. The fact that it was Atkinson demonstrates that real productivity is only benchmarkable by utility, and if you can get a truly accurate quantification for that then you're on the shortlist for a Nobel in economics.
Important enough to re-state whenever it arises - once you have 2 or more axes/dimensions, you no longer have a linear ordering. You need to map back to a number line to "compare". This is the motivation or driving force toward your "single axis". { That doesn't mean it's a goal any easier to realize, though. I am attempting to merely clarify/amplify rather than dispute here.. }
I figure articles like this folklore piece are like an amusing video (say, someone chopping the skin off a watermelon) that's repeatedly being passed around Reddit.
An old Dilbert cartoon had the pointy haired boss declare monetary rewards for every fixed bug in their product. Wally went back to his desk murmuring "today I'm going to code me a minivan!"
it's just a stand-in for "expensive but relatable purchase". He's saying "I'm about to write so many bugs that the sum reward will be in the tens of thousands"
I've become something of the guy that's the main code remover at my current job. Part of it is because I've been here the longest on the team, so I've got both the knowledge and the confidence to say a feature is dead and we can get rid of it. But also part of it is just being the one to go in and clean up things like release flags after they've gone live in prod.
I'm trying to socialize my team to get more in the habit of this, but it's been hard. It's not so much that I get pushback, it's just that tasks like "clean up the feature flag" get thrown into the tech debt pile. From my perspective, that's feature work, it just happens to take place after the feature goes live instead of before. But it's work that we committed to when we decided to build the feature, so no, you don't get to put it on the tech debt board like it was some unexpected issue that came up during development.
Curious to hear other perspectives here, I do worry that I'm a bit too dogmatic about this sometimes. Part of it maybe comes from working in shared art / maker spaces a lot in the past, where "clean up your shit" was rule #1, and I kind of see developers leaving unused code throughout the codebase for features they owned through the same lens.
I probably spend 30% of my time on refactoring: deduplicating common things different people have done, adding separating layers between old shitty code and the fancy new abstractions, adding friction to some areas to discourage crossing module boundaries, that sort of thing.
For some reason new devs keep telling me how easy it is to implement features.
Really wonder why that is. The managers keep telling me that refactoring is a nice-to-have thing and not necessary and maybe we have time next sprint.
You just have to do it without telling anyone, it improves velocity for everyone. It's architecture work on the small scale.
On days I write code, I try to do one "cleanup" PR a day just to get myself warmed up. Sometimes it is removing a feature flag, sometimes it is rewriting a file to use some new standards like a better logger library or test pattern. None of this is ticketed work, and if something takes longer than ten minutes or so I drop it and work on whatever I was going to work on originally. Make (trivial) cleanups a fun treat and a break from real work and it is easier to get other people excited about them.
Of course, lately anything trivial I ask codex to do - but there is still fun in figuring out what trivial thing I should have it take on next.
Cleanup doesn't get me a raise or promoted. In a world with constant threats of layoffs, cleanup may even be penalized depending on what's rewarded. "Clean up your shit" doesn't work when my job is on the line.
It needs to be rewarded properly to be prioritized.
Cleaning up of feature flags was something that I excelled at failing to do. If you are the one cleaning them up, then you sir deserve a raise. Don't question it. It's a service.
Well, we prioritize amongst the tech debt on that board and then move it onto the main board for sprint, it's not like it's a completely separate process. Things do go there to die sometimes though.
This is a good example[1] of a 64K LOC removal. We removed built-in support for C# + WinRT interop on Windows and instead required users to use a source-generation tool (which is still the case today). This was a breaking change. We realized we had one chance to do this and took it.
Microsoft, the number being 30%; whether that's accurate is another matter. Twenty years ago people already used IDEs to generate boilerplate code (remember Java's getters/setters/hashCode/toString?) because some guy in a book said you had to.
About 1.5 years ago I inherited a project with ~ 250,000 lines of code - in just the web UI (not counting back end).
The developer who wrote it was a smart guy, but he had never worked on any other JS project. All state was stored in the DOM in custom attributes, .addEventListeners EVERYWHERE... I joke that it was as if you took a monk, gave him a book about javascript, and then locked him in a cell for 10 years.
I started refactoring pieces into web components, and after about 6 months had removed 50k lines of code. Now knowing enough about the app, I started a complete rewrite. The rewrite is about 80% feature parity, and is around 17k lines of code (not counting libraries like Vue/pinia/etc).
So, soon, I shall have removed over 200,000 loc from the project. I feel like then I should retire as I will never top that.
> The rewrite is about 80% feature parity, and is around 17k lines of code (not counting libraries like Vue/pinia/etc).
This is exactly where these comparisons break down. Obviously you don't need as much code to get passable implementations of a fraction of all the features.
It's definitely a good argument for not reinventing the wheel though.
I'd rather have 250,000 lines of code where 230,000 of that is in battle-tested libraries, and only 20,000 lines are what we ever need to read/write.
I mean, you can get basic implementations of Vue and state management libs in a few hundred (maybe thousand?) LOCs (lots of examples on the interweb) that are probably less "toyish" than whatever this person had handrolled
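As a rough illustration of how small "state management" can be, a toy store is on the order of this (a sketch, obviously nowhere near feature parity with Vue/pinia):

    // Toy observable store: the hand-rollable core of a state-management lib.
    type Listener<S> = (state: S) => void;

    function createStore<S>(initial: S) {
      let state = initial;
      const listeners = new Set<Listener<S>>();
      return {
        get: () => state,
        set(next: S) {
          state = next;
          listeners.forEach(l => l(state));     // notify subscribers
        },
        subscribe(l: Listener<S>) {
          listeners.add(l);
          return () => listeners.delete(l);     // returns an unsubscribe fn
        },
      };
    }

    // Hypothetical usage:
    const counter = createStore({ count: 0 });
    counter.subscribe(s => console.log("count is now", s.count));
    counter.set({ count: counter.get().count + 1 });   // logs: count is now 1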
> I joke that it was as if you took a monk, gave him a book about javascript, and then locked him in a cell for 10 years.
I've had a similar experience (see other comment). The original author's code was junior-developer level at best, but he was, unfortunately, a middle-aged, experienced developer, one of the founders of the company, and very productive. But obviously not someone who had ever worked in a team or had someone else work on their codebase.
Think functions thousands of lines long, nested switch/case/if/else/ternary things ten levels deep, concatenated SQL queries (it was PHP because of course), concatenated JS/HTML/HTML-with-JS (it was Dojo front-end), no automated tests of any sort, etc.
Jokes aside, could I get a layman's explanation of the graph theory stuff here? Sounds pretty cool but the terminology escapes me
Given two graphs, one of which is a tree, you cannot determine whether the tree is a subgraph of the other graph in one walk-through?
It’s only possible if you’re given additional information, like a starting node to search from? I’m genuinely confused.
http://www.nsl.com/papers/samefringe.htm
the select-a-bunch-of-code-and-then-zap-it-with-the-Del-key is the best hardware algorithm.
It was so long ago it feels half mythical to me.
Negative 2000 Lines of Code (1982) - https://news.ycombinator.com/item?id=33483165 - Nov 2022 (167 comments)
-2000 Lines of Code - https://news.ycombinator.com/item?id=26387179 - March 2021 (256 comments)
-2000 Lines of Code - https://news.ycombinator.com/item?id=10734815 - Dec 2015 (131 comments)
-2000 lines of code - https://news.ycombinator.com/item?id=7516671 - April 2014 (139 comments)
-2000 Lines Of Code - https://news.ycombinator.com/item?id=4040082 - May 2012 (34 comments)
-2000 lines of code - https://news.ycombinator.com/item?id=1545452 - July 2010 (50 comments)
-2000 Lines Of Code - https://news.ycombinator.com/item?id=1114223 - Feb 2010 (39 comments)
-2000 Lines Of Code (metrics == bad) (1982) - https://news.ycombinator.com/item?id=1069066 - Jan 2010 (2 comments)
Bill Atkinson has died - https://news.ycombinator.com/item?id=44210606 - June 7, 2025 (277 comments)
I didn't see that post, but I'm glad we're able to remember Bill through humorous anecdotes and eternally relevant lessons like this.
My manager has it pinned on the breakroom wall.
I haven't seen a lot of other good suggestions for how to accomplish this, so maybe you're being just the right amount of dogmatic.
Taking you to literally mean you have a separate board for tech debt, that's your problem right there.
[1] https://github.com/dotnet/runtime/pull/36715/files
https://forum.cursor.com/t/cursor-yolo-deleted-everything-in...
You make a fair point that a basic framework can be expressed with much less code.
And that the remaining 20% probably contains more edge cases with proportionally more code.
But do you think the last 20% will eventually make up anywhere near 233k lines of code?
The real save here comes from rewriting: seeing all the common denominators and knowing what's ahead.