aetherspawn · a year ago
When you have no tests your problems go away because you don’t see any test failures.

Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

Thus, when you delete your tests, the only person you are fooling is probably yourself :(

From reading your page I get the impression you are more burnt out from variation/configuration management which is completely relatable… I am too. This is a hard problem. But user volume is required to make $$. If the problem was easy then the market would be saturated with one size fits all solutions for everything.

IgorPartola · a year ago
I think this is highly domain dependent. I'm currently working on a codebase that has tests for one part of it, and they are an incredibly useful tool for helping me refactor that particular part. Other parts are so much UI behavior that it is significantly faster to catch bugs by manual testing: the UI/workflow either changes so fast that you don't write tests for it (knowing they'll be useless when the workflow is redesigned in the next iteration), or so slowly that that particular UI/workflow just doesn't get touched again, so refactors never happen to it to introduce more bugs.

I have never found tests to be universally necessary or helpful (just like types). They are a tool for a job, not a holy grail. I have also never met a codebase that had good test coverage and yet was free of bugs; they still get found with either manual testing or usage.

Somewhat hyperbolically and sarcastically: if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)

scott_w · a year ago
> if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful?

This sentence makes no sense. Tests are infinitely more straightforward than code. I always go back to my dad's work as a winder before he retired:

After repairing a generator, they'd test that it could handle the current it was expected to take by putting it on a platform and... running electricity through it. They'd occasionally melt all the wiring on the generator and have to rewind it.

By your logic, since they weren't "good enough" to fix it perfectly, how could they know their test even worked? Should they have just shipped the generator back to the customer without testing it?

Cthulhu_ · a year ago
IMO if your implementation is that unstable (you mentioned the UI/workflow changes fast) it isn't worth writing a test for it, but also, I don't think it should be released to end-users because (and this is making a big assumption, granted) it sounds like the product is trying to figure out what it wants to be.

I am a proponent of having the UI/UX design of a feature be done before development gets started on it. In an ideal XP/agile environment the designers and developers work closely together and constantly iterate, but in practice there are so many variables involved in UX design and so many parties that have an opinion, that it'll be a moving target in that case, which makes development work (and automated tests) an exercise in rework.

256_ · a year ago
> Somewhat hyperbolically and sarcastically: if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)

Well obviously, you just write tests for the tests. :3

It's called induction.

watwut · a year ago
> if you are good enough to write perfect tests for your code, just write perfect code.

I have yet to see anyone claim they write perfect tests.

> If you aren’t perfect at writing tests, how do you know the tests are complete, bug free,

I never claimed to produce or seen complete tests. I never claimed or seen bug free tests.

> and actually useful? :)

I know that whenever I fix something or refactor, a test fails and I find a bug in the code. I know that we do not keep hitting the same bug again, and then again, and again.

I know when testers' time is saved and they don't have to test repetitive basic stuff anymore and can focus on more complicated stuff.

bubblebeard · a year ago
Types are there to guard against human error and reduce the amount of code we need to write.

Tests exist to guarantee functionality and increase productivity (by ensuring intended functionality remains as we refactor/change our code/UI).

There may be cases where some tests are too expensive to write, but I have never come across this myself. For example, in functional tests you would try to find a robust way to identify elements regardless of future changes to that UI. If your UI changes so much between iterations that this is impossible, it sounds like you need to consider the layout a little more before building anything. I'm saying that based on experience, having been involved in several projects where this was a problem.
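
To illustrate what I mean by a robust way to identify elements, here is a rough sketch (using Playwright's Python API purely as an example; the selectors and test id are made up):

    # Sketch: prefer a stable, test-dedicated hook over a layout-dependent selector.
    from playwright.sync_api import Page

    def submit_signup_fragile(page: Page):
        # Breaks as soon as the markup around the button is rearranged.
        page.locator("div.container > div:nth-child(3) > button").click()

    def submit_signup_robust(page: Page):
        # Survives layout changes as long as the element keeps its test id,
        # e.g. <button data-testid="signup-submit">.
        page.get_by_test_id("signup-submit").click()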

Having said that, I'm myself horrible at writing tests for UI, an area I'm trying to improve in; it really bothers me :)

klyrs · a year ago
> the UI/workflow either changes so fast that you don’t write tests for it

This is my number one pet peeve in software. Every aspect of every interface is subject to change always; not to mention the bonanza of dickbars and other dark patterns. Interfaces are a minefield of "operator error" but really it's an operational error.

bregma · a year ago
Tests are just a way of providing evidence that your software does what it's supposed to. If you're not providing evidence, you're just saying "trust me, I'm a programmer."

Think back to grade school math class and your teacher has given you a question about trains with the requirement "show your work." Now, I know a lot of kids will complain about that requirement and just give the answer because "I did it in my head" or something. They fail. Here's the fact: the teacher already knows the trains will meet in Peoria at 12:15. What they're looking for is evidence that you have learned the lesson of how to solve a certain class of problems using the method taught.

If you're a professional software developer, it is often necessary to provide evidence of correctness of your code. In a world where dollars or even human lives are on the line, arrogance is rarely a successful defense in a court of law.

saithound · a year ago
> If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)

I did like the rest of the post, but this is not hyperbole. It's just a disingenuous argument, and one that looks orthogonal to your point that "tests are a tool for a job".

If you aren't perfect at magnetizing iron, and you need a working compass, you better magnetize two needles and use one to test the other. The worse you are at magnetizing iron, the more important it is that you do this if you want to end up with a working compass.

ffsm8 · a year ago
I feel like our industry kinda went the wrong way wrt UI frontend tests.

It should be much less focused on unit testing and more about flow and state representation, both of which can only be tested visually. And if a flow or state representation changed, that should equate to a simple warning which automatically approves the new representation as the default.

So a good testing framework would make it trivial to mock the API responses to create such a flow, and then automatically do a visual regression of the process.
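
Roughly the shape I have in mind, sketched with Playwright's Python API just for illustration (the endpoint, fixture and URL are made up, and a real setup would use a proper image diff instead of comparing raw bytes):

    # Sketch: mock the backend, drive the flow, snapshot it for visual regression.
    import json
    from pathlib import Path
    from playwright.sync_api import sync_playwright

    FIXTURE = {"orders": [{"id": 1, "status": "shipped"}]}  # hypothetical API payload

    def orders_flow_snapshot(out_path: str) -> bytes:
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            # Intercept the API call so the flow is deterministic.
            page.route(
                "**/api/orders",
                lambda route: route.fulfill(
                    status=200,
                    content_type="application/json",
                    body=json.dumps(FIXTURE),
                ),
            )
            page.goto("http://localhost:3000/orders")  # hypothetical app URL
            shot = page.screenshot(path=out_path, full_page=True)
            browser.close()
            return shot

    # pytest-style test; tmp_path is pytest's temporary directory fixture.
    def test_orders_flow_visual_regression(tmp_path):
        baseline = Path("snapshots/orders.png")
        current = orders_flow_snapshot(str(tmp_path / "orders.png"))
        if not baseline.exists():
            baseline.parent.mkdir(parents=True, exist_ok=True)
            baseline.write_bytes(current)  # first run: approve the current representation
        assert current == baseline.read_bytes(), "UI changed; review and re-approve the snapshot"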

Cypress component tests do some of this, but it's still a lackluster developer experience, honestly

This is specifically about UI frontend tests. Code that doesn't end up in the DOM is great for unit tests.

akkartik · a year ago
> When you have no tests your problems go away because you don’t see any test failures.

The flip side of this is the quote that "tests can show the presence of bugs, but never their absence". It better fits my experience here; every few months I'd find a new bug and diligently write a test for it. But then there was a new bug in a few months, discovered by someone in the first 10 minutes of using it.

I'm sure I have bugs to discover in the new version. But the data structures I chose make many of the old tests obsolete by construction. So I'm hopeful that I'm a few bugs away from something fairly stable at least for idle use.

Tests are definitely invaluable for a large team constantly making changes to a codebase. But here I'm trying to build something with a frozen feature set.

monkpit · a year ago
If your tests break or go away when your implementation changes, aren’t those bad tests by definition?
creesch · a year ago
Automated tests ideally don't entirely replace manually executed tests. What they do replace is repetitive regression tests that don't need to be executed manually.

In an ideal world this opens up room for exploratory testing where someone goes "off-script" and focuses specifically on those areas that are not covered by your automated tests.

The thing is that automated tests aren't really tests, even though we call them that. They are automated checks at specified points, so they only check the outcome at those points in time. So yeah, they are also completely blind to the sort of thing a human* might easily spot while using the application.

*Just to be ahead of the AI bros, we are not there yet, hold your horses.

dgb23 · a year ago
I watched a video by Russ Cox that was recommended in a recent thread, Go Testing By Example:

https://www.youtube.com/watch?v=X4rxi9jStLo

There's _a lot_ of useful advice in there. But what I wanted to mention specifically is this:

One of the things he's saying is that you can sometimes test against a simpler (let's say brute force) implementation that is easier to verify than what you want to test.

There's a deeper wisdom implied in there:

The usefulness of tests is dependent on the simplicity of their implementation relative to the simplicity of the implementation of what they are testing.

Or said more strongly, tests are only useful if they are simpler than what they test. No matter how many tests are written, in the end we need to reason about code. Something being a "test" doesn't necessarily imply anything useful by itself.
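
A minimal sketch of that idea (the function and inputs are made up; the point is that the brute-force version can be verified at a glance):

    # Sketch: check a clever implementation against an obviously-correct brute-force one.
    import random

    def max_subarray_fast(xs):
        # Kadane's algorithm, O(n); correctness is not obvious at a glance.
        best = cur = xs[0]
        for x in xs[1:]:
            cur = max(x, cur + x)
            best = max(best, cur)
        return best

    def max_subarray_brute(xs):
        # O(n^2), but trivially easy to verify by reading it.
        return max(sum(xs[i:j]) for i in range(len(xs)) for j in range(i + 1, len(xs) + 1))

    def test_fast_matches_brute_force():
        rng = random.Random(0)
        for _ in range(1000):
            xs = [rng.randint(-10, 10) for _ in range(rng.randint(1, 20))]
            assert max_subarray_fast(xs) == max_subarray_brute(xs), xs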

This is why I think a lot of programmers are wary of:

- Splitting up functions into pieces, which don't represent a useful interface, just so the tests are easier to write.

- Testing simple/trivial functions (helpers, small queries etc.) just for coverage. The tests are not any simpler than these functions.

- Dependency inversion and mocking, especially if they introduce abstractions just in order to write those tests.

I don't think of those things in absolute terms though, one can have reasons for each. The point is to not lose the plot.

ChrisMarshallNY · a year ago
> Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

I have found that, in my own case, every time I’ve written a unit test, it has exposed bugs.

I don’t usually do the TDD thing, where I write failing tests first (but I do it, occasionally), so these tests are usually against code that I already think works.

That said, I generally prefer test harnesses to unit tests[0]. They still find bugs, but the workflow is less straightforward. They also cause me to do more testing, as I develop, so the bugs are fixed in situ, so to speak.

[0] https://littlegreenviper.com/testing-harness-vs-unit/

drewcoo · a year ago
> That said, I generally prefer test harnesses to unit tests[0].

That's a strange redefinition of harness.

The larger-scoped tests are more often called integration or even system tests.

And while I'm here, those are slow tests that are harder to debug and require more maintenance (often maintenance of an entire environment to run them in!). Unit tests are closer to what they test, fast, and aren't tied to an environment - they can be run on every push.

YZF · a year ago
The focus on automated unit/integration tests is a relatively modern thing (late 90's?). There was some pretty large and extremely reliable software shipped before that focus. A random example is that the Linux kernel didn't have many tests (I think these days there is more testing). Unix likely didn't have a lot of "tests". Compilers tended to have them. Operating systems less so. Games (e.g. I'm sure Doom) didn't tend to have tests.

You need to find a balance point.

I think we know that (some) automated tests (unit, integration, end to end) can help build quality software. We also know good tests aren't always easy to write, bad tests make for harder refactoring, and flaky tests can suck a lot of time on large projects. At the same time it's always interesting to try different things and find out what works, especially for you if you're a solo developer.

amluto · a year ago
> Random example is that the Linux kernel didn't have much tests (I think these days there is more testing).

As the author of many of Linux’s x86 tests: many of those tests would fail on old kernels, and a decent number of those failures are related to very severe bugs. Linux has worked well for many years, but working well didn’t mean it wasn’t buggy.

RandomThoughts3 · a year ago
Most video games have a full team of QA testers doing functional testing on the games as they go along.

Same thing for the kernel, plus some versions are fully certified for various contexts, so you can be sure fully formalised test suites exist. And that's on top of all the testing tools which are provided (Kunit, tests from user space, an array of dynamic and static testing tools).

But I would like to thank all the people here who think testing is useless for their attitude. You make my job easier while hiring.

dgb23 · a year ago
My old man who will always gladly mention that „we did this already in the 80‘s and it was called frobniz“ whenever I bring up a technique, architecture etc. would beg to differ.

When I asked him about TDD he said they did practically the same thing. Forgot what it was called, though.

One recent gem was when he shared a video where they explained the recent crowdstrike debacle: „Look they’re making the same mistakes as 40 years ago. I remember when we dynamically patched a kernel and it exploded haha…“.

In any case, writing tests before writing the implementation was a thing during the 80‘s as well for certain projects.

galaxyLogic · a year ago
"Unit-Testing" became popular about the time of Extreme Programming. The reason I think it became so popular was that its proponents programmed in dynamically typed languages like Smalltalk, and later JavaScript. It seems to me that synamic languages needs testing more than statically typed ones.
Ma8ee · a year ago
Of course there were tests, just not automated tests!

In better run organisations they had test protocols, that is, long lists of tests that had to be run by manual testers before any new version could be released. Your manager had to make sure these testers were scheduled well in advance before the bi-annual release date of your latest version of the software.

So listing old software and claiming that it didn't have many tests is misleading, to say the least.

6510 · a year ago
> When you have no tests your problems go away because you don’t see any test failures.

> Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

I wasn't a very fast typist, I could do about 180 strokes per minute. My teacher, a tiny 80-year-old lady, talked the whole time to intentionally distract her 5-6 students. It was a hilarious experience. One time, when I had an extra slow day, the monologue was about her learning to type: the teaching diploma required 300 strokes per minute, from print, handwriting and dictation. Not on such a fancy electronic typewriter! We had mechanical typewriters! And no correction lint! She was not the fastest in her class by far and many had band-aids around smashed fingers. Trying to read the text, not listen, and not burst out in laughter, I think she forced me down to 80 strokes per minute. Sometimes she had me sit next to a girl doing 450 strokes per minute. Sounded like a machine gun. They would have casual conversation with eye contact. I should not have noticed it, I was supposed to be typing.

When writing code and thinking about those "inevitable" bugs I always think of the old lady, who had 1000 ways of saying: you only think you are trying hard enough... and: we had no correction lint....

Take a piano, there is no backspace. You are supposed to get it right without mistakes.

If you have all of those fancy tools to find bugs, test code, the ability to quickly go back and forth, of course there will be plenty of mistakes.

Whether they need to be there, no one knows.

Yossarrian22 · a year ago
World-class, best-in-the-world gymnasts still fall off the balance beam from time to time.

Mistakes are inevitable; it's why whiteout, and then word processors, were made.

h1fra · a year ago
I'm puzzled by people debating tests. why such hate? They catch bugs, prevent breaking changes, and ensure API stability. I have never seen tests preventing me from refactoring anything. I guess it depends on the company and the processes :thinking:
codr7 · a year ago
There are different kinds of tests.

Integration tests at the outer edges often give you the most bang for the buck.

Granular, mocked unit tests often add little value and will become a maintenance burden sooner or later.

And some of it is unconscious; maybe having that big, comfy test suite is preventing the software from evolving in optimal directions, because it would just be too much work and risk.

swat535 · a year ago
Because writing good tests is very hard and many engineers are simply mediocre, so they write brittle tests that require a lot of time to fix and don't actually test the right things (e.g. too many mocks), or they are simply overconfident (like some people in the thread) that their code will always work.

Also the TDD cultists are partially to blame for this attitude as well. Instead of focusing on teaching people how to write valuable tests, they decided to preach dogma and that frustrated many engineers.

I'm firmly in the circle of writing tests of course, I don't think a system that is not tested should ever be in production (and no, you opening your browser on a local machine to see if it works is not sufficient testing for production..).

HelloNurse · a year ago
I think there is a mostly psychological "problem": tests are not perceived as progress (unless you are mature enough to treat quality assurance as an objective) and finding them fun to write or satisfying to run is an unusual acquired taste.
eithed · a year ago
Tests are tools - you won't use a screwdriver for everything, even though it's a tool that's useful for many things.

Having said that - tests, codebase and data consistency, static types are things I'd not want to be without

whatever1 · a year ago
A test will only catch an edge case you already thought of. If you thought of it anyway, why not just fix the bug instead?

Tests have burned out software engineers who waste the majority of their time deriving tests that will pass anyway. And then a significant code change will render them useless, at which point they have to be rewritten from scratch.

No your program will not be more correct with more tests. Deal with it.

wesselbindt · a year ago
> A test will only catch an edge case you already thought of. If you thought of it anyway, why not just fix the bug instead?

The reason I do this is to prevent the bug from re-occurring with future changes. The alternative is to just remember for every part of the system I work on all edge cases and past bugs, but sadly I simply do not have the mental capacity to do this, and honestly doubt if anyone does.
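
A tiny sketch of what that looks like in practice (the helper and the bug are hypothetical):

    # Sketch: a regression test that remembers a past bug so future changes can't quietly reintroduce it.

    def parse_port(value: str) -> int:
        # Hypothetical helper: it once crashed on "8080\n" read from a config file.
        return int(value.strip())

    def test_parse_port_tolerates_surrounding_whitespace():
        # Pins down the old bug; I no longer need to remember the incident myself.
        assert parse_port("8080\n") == 8080
        assert parse_port(" 8080 ") == 8080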

quectophoton · a year ago
Will all your team members also think about those edge cases when changing that part of the code? Will they ensure the behavior is the same when a library dependency is updated?

So, tests catch edge cases that someone else thought of but not everyone might have. This "not everyone" includes yourself, either yourself from the future (e.g. because some parts of the product are not so fresh in your mind), or yourself from now (e.g. because you didn't even know there was a requirement that must be met and your change here broke a requirement over there).

To put an easy to understand example, vulnerability checkers are still tests (and so are linters and similar tools, but let's focus on vulnerabilities). Your post implies you don't need them because you can perfectly prevent a vulnerability from ever happening again once you know about it, both because you write code that doesn't have that vulnerability and because you check that your dependencies don't have that vulnerability.

So, think of tests more like assertions or checksums.

becquerel · a year ago
You write the test to prevent the bug from being accidentally reintroduced in the future. I have seen showstopper bugs reintroduced into production multiple times after they were fixed.
bubblebeard · a year ago
For me at least, designing a test will usually let me discover problems with my code which may otherwise have gone unnoticed.

Leaving the tests there once written to help us in future refactoring costs nothing.

Granted, in some languages tests are more complicated to write compared to others. In PHP it’s a nightmare, in Rust it’s so easy it’s hard to avoid doing.

I hear what you are saying though, sometimes writing tests consumes more time than is necessary.

7bit · a year ago
Do you think the test is written and the bug left in? What a weird take.

And then, you write the test so that future changes (small or big) that cause regressions get noticed before the regression is put into production again. Especially in complex systems, you can define the end result and test that all your cases are covered. You do this manually anyway, so why not just write a test instead?

drewcoo · a year ago
> A test will only catch an edge case you already thought of.

Property-based tests and model-based tests can catch edge cases I never thought of.
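
For example, a minimal sketch with the Hypothesis library (the function under test is made up):

    # Sketch: property-based tests let the framework generate inputs I'd never write by hand.
    from hypothesis import given, strategies as st

    def normalize_whitespace(s: str) -> str:
        # Hypothetical function under test: collapse runs of whitespace into single spaces.
        return " ".join(s.split())

    @given(st.text())
    def test_normalize_is_idempotent(s):
        once = normalize_whitespace(s)
        assert normalize_whitespace(once) == once  # property: applying it twice changes nothing

    @given(st.text())
    def test_normalize_never_leaves_double_spaces(s):
        assert "  " not in normalize_whitespace(s)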

> Tests have burned out software engineers who waste the majority of their time deriving tests that will pass anyway.

Burn, baby, burn! We don't need programmers who can't handle testing.

ahartmetz · a year ago
There are things that are easier to verify than to do correctly. Almost anything that vaguely looks like a proper algorithm has that property. Sorting, balanced trees, hashtables, some kinds of data splicing, even some slightly more complicated string processing.

Sometimes it's also possible to do exhaustive testing. I once did that with a state machine-like piece of code, testing transitions from all states to all other states.
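
Something like this toy sketch (the state machine itself is invented for illustration):

    # Sketch: exhaustively exercise every (state, event) pair of a small state machine.
    import itertools

    STATES = {"idle", "running", "paused", "stopped"}
    EVENTS = {"start", "pause", "resume", "stop"}

    TRANSITIONS = {  # hypothetical transition table
        ("idle", "start"): "running",
        ("running", "pause"): "paused",
        ("paused", "resume"): "running",
        ("running", "stop"): "stopped",
        ("paused", "stop"): "stopped",
    }

    def step(state, event):
        # Events that don't apply leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    def test_every_transition_stays_inside_the_state_space():
        for state, event in itertools.product(STATES, EVENTS):
            assert step(state, event) in STATES

    def test_stopped_is_terminal():
        for event in EVENTS:
            assert step("stopped", event) == "stopped"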

creesch · a year ago
I assume you are talking about unit tests here.

Thinking of edge cases is exactly what unit tests are for. They are, when used properly, a way to think about various edge cases *before* you write your code. And then, once you have written your code, they validate that it indeed does what you expected it to do beforehand.

The issue I am seeing, more often than not, is that people try to write unit tests after the fact. Which means that a lot of the value of them will be lost.

In addition to that, if you rewrite your code so often that it renders many of your tests invalid I'd argue that there is a fundamental issue elsewhere.

In more stable environments, unit tests help document the behavior of your code, which in turn helps when rewriting your code.

Basically, if you are just writing tests because people told you to write tests, it is no surprise you burn out over them. To be fair, this happens all too often. Certainly with the idiotic requirement added to it that you need 80% coverage without any other context.

If you write tests while understanding where they fit in the process, they can actually be valuable for you.

codr7 · a year ago
Writing a test is often the best way to reproduce and make sure you fixed a bug.

Keeping them for a while lets you make sure it doesn't pop up again.

10 years later, they probably don't add much value.

Tests are tools. That's like saying "No, your food won't taste better with more salt." It depends.

jjice · a year ago
Completely agree on tests. It's much more enjoyable for me to write some automated tests (unit or integration) and be able to re-run them over and over again than it is for me to manually run some HTTP requests against the server or something. While more work up front, they stay consistent and I can feel more comfortable with my code when I release.

It's also just more fun to write code (even a test) than it is to manually run some tests over and over again, at which point I eventually get lazy and skip it for that last "simple, inconsequential" commit.

Coming from a place where we never wrote tests, I introduce way fewer bugs and feel way more confident every day, especially when I change code in an existing place. One trick is to not go overboard and to strike an 80/20 balance for tests.

devjab · a year ago
It depends a lot on what you work on and how you program. Virtually none of our software has actual coding errors, and when developers write new parts or change them, it's always very obvious if something breaks. Partly because of how few abstractions we use, partly because of how short we keep our chains, letting every function live in isolation and almost never be used by multiple parts of the software.

Both the lack of abstractions and the lack of reuse go against a lot of principles, and it's not exactly like we refuse to do either religiously, but the only real principle we have is YAGNI, and if you build an abstraction before you need it you're never going to pass a code review. As far as code reuse goes, well, in a perfect world it's sort of stupid to have a lot of duplicate code. But in a world where a lot of code is written on a Thursday afternoon by people who are tired, whose babies kept them awake, whose meetings were horrible, where management doesn't do the right things and so on, it's almost always better to duplicate code so that it doesn't eventually become a complicated abstract mess. It shouldn't, and I'm sure it doesn't in some places; I've just never worked in such a place. I have worked with a lot of people who followed things like clean code religiously, and the results were always unwieldy code where even small changes would take weeks to implement, which is completely counterproductive to what the actual business needs.

The benefit of YAGNI is that it mostly applies to tests as well, exactly because it's basically impossible to make changes without knowing exactly what impact you're having on the entire system.

What isn’t easy is business logic, and here I think tests are useful. Or at least they can be. Because far too often, the business doesn’t have a clue what they want up front. Even more often the business logic will change so rapidly that tests automated tests become virtually useless since you’re going to rely on acceptance tests anyway.

Like I said, I’m not religious about it. I sometimes write tests, but in my anecdotal experience things like full test-coverage is an insane waste of time over a long period.

datavirtue · a year ago
He was basically starting over. Definitely need to delete the tests. One of the issues with enterprise development is choking the project with tests and other compliance shit as soon as people start coding. Any project should be in a workable/deployable state before you commit to tests.
osigurdson · a year ago
Tests written for pure functions are great. Tests written for everything else may be helpful but might not be.
Ma8ee · a year ago
You need tests for all parts of the functionality you care about. I write tests to make sure that what is persisted is what we get back. Just the other day I found a bug because our database didn't care about the timezone offset for our timestamps.
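
The kind of test I mean is a persistence roundtrip, roughly like this sketch (SQLite and a made-up table, just to show the shape):

    # Sketch: persist a timezone-aware timestamp and check that what comes back is what went in.
    import sqlite3
    from datetime import datetime, timedelta, timezone

    def test_timestamp_roundtrip_keeps_the_offset():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE events (ts TEXT)")  # hypothetical table

        original = datetime(2024, 7, 1, 12, 0, tzinfo=timezone(timedelta(hours=5, minutes=30)))
        conn.execute("INSERT INTO events VALUES (?)", (original.isoformat(),))

        (stored,) = conn.execute("SELECT ts FROM events").fetchone()
        restored = datetime.fromisoformat(stored)

        assert restored == original                          # same instant in time
        assert restored.utcoffset() == original.utcoffset()  # the offset survived the roundtrip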
lelanthran · a year ago
> When you have no tests your problems go away because you don’t see any test failures.

>

> Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

It's a trade-off. Most of the business world ran on, and to some extent still runs on, Excel programs.

There are no tests there, but for the non-tech types who created these monsters, spending time on writing a test suite has a very real cost - there's less time to do the actual job they were hired for!

So, yeah, each test you write means one less piece of functionality you add. You gotta make the trade-off between "acceptably (in frequency and period) buggy" and "absolutely bullet-proof no matter what input is thrown at it".

With Excel programs, for example, if the user sees an error in the output, they fix the input data, they don't typically fix the program. It has to be a dealbreaker bug before they will dive into their code again to fix the program.

And that is acceptable to them.

Ma8ee · a year ago
> There are no tests there, but for the non-tech types who created these monsters, spending time on writing a test suite has a very real cost - there's less time to do the actual job they were hired for!

Not spending time on writing tests has a very real cost - a lot of time is spent on figuring out why your forecast was way off, or your year end figures don't add up.

Not to mention how big parts of the world were thrown into austerity, causing hundreds of thousands of deaths, due to errors in your published research [0].

[0] https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Metho...

munchler · a year ago
> Giving up tests and versions, I ended up with a much better program.

I can’t understand how anyone would willingly program without using source code control in 2024. Even on a single-person project, the ability to work on multiple machines, view history, rollback, branch, etc. is extremely valuable, and costs almost nothing.

Maybe I’m misunderstanding what the author means by “versions”?

akkartik · a year ago
I'm trying to build something small with a quickly frozen feature set. I've chosen to build on a foundation that changes infrequently. There is more background at https://akkartik.name/freewheeling.

You're absolutely right that this approach doesn't apply to most programs people build today, with large teams and constantly mutating requirements.

I do still have source control. As I say in OP, I just stopped worrying about causing merge conflicts with other forks. (And I have over 2 dozen of them now; again, see the link above for details.) So I have version control for basic use cases like backups or "what did I just change?" or getting my software on new machines. I've just stopped thinking of version control, narrowly for this program, as a way to help _understand_ and track what changed. (More details on that: https://akkartik.name/post/wart-layers) One symptom of that, just as an example of what I mean: I care less about commit message hygiene. So version control still exists, but it's lower priority in my mind as a part of "good programming practice" for the narrow context of programs like this with frozen feature sets, intended to turn into durable artifacts that last decades.

galaxyLogic · a year ago
O the joys of solo-programming! I do it too, and the thing I find interesting about it is that I think a lot about how to program better, like you do. If I was working on a team I would probably not think much about it; I would just be doing what my boss tells me to do.
pseudoramble · a year ago
This context helps me understand what you're getting at quite a bit more. I dunno if I could manage the same approach, but I at least appreciate how you're thinking about it. Thanks!
nine_k · a year ago
The author does not seem to have to support any professional / paying users, and wants freedom to experiment more than a guarantee of a known working version. The author also does not seem to work on large systems, or do significant teamwork (that is, not being the only principal author).

In such a situation, all these tools may not provide a lot of value. A flute player in a large orchestra playing a complex symphony needs notes and/or a conductor; a flute player playing solo against a drum machine, or, playing free jazz, does not much need notes, and would likely be even hindered by them.

imiric · a year ago
Tests and version control still have immense value when working solo.

Tests help with ensuring that you don't introduce regressions, and that you can safely refactor. It's likely that you test changes manually anyway, so having automated tests simply formalizes this, and saves you time and effort in the long run.

Version control helps you see why a change was done, and the ability to revert changes, over longer periods of time. We tend to forget this even after a few weeks, so having a clean version control history is also helpful for the future version of you.

Not having the discipline to maintain both, and choosing to ignore them completely, is just insane to me. But, hey, whatever works for OP. I just wouldn't expect anyone else to want to work with them.

The only scenario where I could conceive not using either is in very small projects with a short lifespan: throwaway scripts, and the like. The author is writing their own language and virtual machine, which don't really align with this. Knowing their philosophy, I would hesitate to use anything they made, let alone contribute to it.

raincole · a year ago
The author is probably experiencing mental fatigue or even burnout about programming.

If version control bothers you that much I'd say it's a good sign that you need to take a break.

akkartik · a year ago
This seems very far from my subjective experience. The little platform-independent programs I write for myself and publish are a source of spiritual rejuvenation that support my day job in a more conventional tech org with a large codebase, large team and constantly changing requirements.

I'm not "bothered" by version control. I've not even stopping using it. As I say in the post, I just don't think about it much, worrying about merge conflicts and so on, when I'm programming. I've stopped leaning on version history as a tool for codebase comprehension. (More details: https://akkartik.name/post/wart-layers)

This comment may also help clarify what I mean: https://news.ycombinator.com/item?id=41158040

xelxebar · a year ago
As programmers we are inundated with choice and options. Our tooling and whatever the zeitgeist considers "best tooling" tends to err on the side of making $THING easier to do.

But having 1000 easy options always available introduces a severe cognitive burden in picking the correct one. That's part of the reason why we as an industry have enshrined all sorts of Best Practices and socially shame the non-adherents.

Don't get me wrong, bad architecture and horrible spaghetti code is terrible to work with. However, questioning the things that feel Obviously Correct and exploring different and austere development environments that narrow our set of available choices and tools can sincerely operate to sharpen our focus on the end goal problem at hand.

As for version control, branching encourages cutting a program into "independent features"; history encourages blind usage of potentially out-of-date functional units; collaborative work reifies typically-irrelevant organizational boundaries into the code architecture (cf Mel Conway); etc.

Version control's benefits are also common knowledge, but there are real tradeoffs at the level of "solving business problem X". It's telling that such tradeoffs are virtually invisible to us as an industry.

sethherr · a year ago
> branching encourages cutting a program into "independent features"

But, you can choose not to branch then?

I’m really confused about the trade offs of version control. I can understand trade offs of branching strategies, but at its most fundamental (snapshots of your code at arbitrary times), I can’t think of any drawbacks?

shermanyo · a year ago
I think in this case, the author means coding version logic into the app itself. eg. versioned API endpoints for backwards compatibility
shepherdjerred · a year ago
I don't think so:

> Back in 2015 I was suspicious of abstractions and big on tests and version control. Code seemed awash in bad abstractions, while tests and versions seemed like the key advances of the 2000s.

> In effect I stopped thinking about version control. Giving up tests and versions, I ended up with a much better program.

> Version control kept me attached to the past. Both were counter-productive. It took a major reorientation to let go of them.

resonious · a year ago
He specifically mentions version control and avoiding merge conflicts, so I'm pretty sure it's stuff like git that he's finding himself cautious about.
layer8 · a year ago
This is about a desktop text editor built with Lua on a C++-based native framework for writing 2D games: https://git.sr.ht/~akkartik/lines2.love Very unlikely to have versioned API endpoints involved.
voiper1 · a year ago
Yep, commit your code when it "works". Then I can safely go off on a hare-brained experiment, knowing I can easily throw away the changes to get back to what worked.
shepherdjerred · a year ago
Yeah, this is not good advice for the average person, even for solo projects.
codr7 · a year ago
I agree, and the author probably does as well.

I didn't get the feeling it was meant as general advice.

bugbuddy · a year ago
Could this person be intentionally giving bad advice?
bubblebeard · a year ago
I think it’s just an alternative way of thinking. It’t not one I agree with, but I can see where the author is coming from. Think he’s just tired of spending time on useless tasks around his projects. For all we know they may be, but I do have hard time viewing testing and version control as overhead xD
shepherdjerred · a year ago
At first glance I thought the author was plain wrong, but I think there is some good insight here.

This workflow works very well for the author. Most of us can probably think of a time when Git or automated tests frustrated us or made us less productive. There are similar solutions that are simpler and get out of the way, e.g. backing up code with Dropbox, FTP, whatever.

The above works well because the author is optimizing for their productivity on a passion project where they collaborate with few others.

Automated tests are useful, but it sounds like the author likes creating programs so small that the value might not surface. I think that automated tests still have value even in this context, but I think we can all agree that automated tests slow you down (though many would argue that you see eventual returns).

Version control and automated tests solve real problems. It would be insane to start a project without VC today, and automated tests are a best practice for a reason. But, for the author's particular use case, this sounds reasonable.

---

Aside from the controversial bits around VC/tests, I think items 7/8/9 perfectly capture my mindset when writing/refactoring a large program. Write, throw it away, write again.

fendy3002 · a year ago
Disagree on VC, even for a solo project with no multi-version branching. Humans make mistakes; knowing what you changed in the last 3 weeks in a >100k LOC project is a godsend. It helps to find and fix issues. The better feature is branching out, because you can do what you want while still having a way to go back to the previous stable version.

As for automated tests? That's fine.

yellowapple · a year ago
I think it's still worth asking "which VC?" through that lens, though. Git was designed for developing the Linux kernel - with countless LOC and contributors and commits pouring in constantly. It happened to also be readily suitable for GitHub's model of "social" FOSS development, with its PRs and such (a model that most other Git hosting systems have adopted).

...but that ain't applicable to all projects, or possibly even most projects. The vast majority of my FOSS contributions have been on projects with one or maybe two primary authors, and without all that many PRs. What is Git, or any particular Git repository host (GitHub included), really offering me?

I need to track changes (so I can revert them if necessary), I need to backup the code I'm writing, and I need to distribute said code (and possibly builds thereof). Just about any VCS can do those things. I ended up trying Fossil for various new projects, and I'm liking it enough that I plan on migrating my existing projects into Fossil repos (with Git mirroring) at some point, too. It's unsurprisingly more optimized toward the needs of the SQLite development team - a small cathedral rather than a Linux-style giant bazaar - and considering that all my projects' development "teams" are tiny cathedrals it ain't terribly surprising that Fossil would be the right fit.

fragmede · a year ago
imo taking the time to learn enough git to set up an ignore file, be able to run git init; git add -A; git commit -a -m "before I changed the foo function to use bar", and then go back to older revisions is well worth it. You don't have to master it, but just having a commit message and a version to get back to has saved my bacon more times than I can remember, never mind more advanced operations.
layer8 · a year ago
This is quite a confused article.

I really wonder what about it made it be upvoted to first place.

rectang · a year ago
I keep trying to figure out the joke.
namaria · a year ago
Author successfully drove engagement with psychological baits like bashing commonly accepted tools and practices and being intentionally obscure so a lot of people would comment about it.
bubblebeard · a year ago
On the one hand this may be an article from a developer experimenting with different tools and techniques to advance themselves in life.

On the other hand it may just be the author wanted to gaslight ppl into a debate xD

082349872349872 · a year ago
Given that the author has been exploring these themes* throughout the years since I first encountered them, I've got a strong weighting for the former.

* with varied approaches; I even recall a "test all the things" experiment

AdieuToLogic · a year ago
> In 2022 I started working on Freewheeling Apps. I started out with no tests, got frustrated at some point and wrote thorough tests for a core piece, the text editor.

This is a primary motivation for having a reasonable test suite - limiting frustration. Test suites give developers confidence to evolve a system. When done properly, contributors often form an opinion similar to:

> But I struggled to find ways to test the rest, and also found I was getting by fine anyway.

This is also a common situation. As functional complexity increases, the difficulty to test components or the system as a whole can become prohibitive.

> Now it's 2024, and a month ago I deleted all my tests. ... In effect I stopped thinking about version control. Giving up tests and versions, I ended up with a much better program.

This philosophy does not scale beyond one person and said person having recent, intimate, memory of all decisions encoded in source code (current or historical). Furthermore, given intimate implementation knowledge, verifying any change by definition must be performed manually.

082349872349872 · a year ago
> This philosophy does not scale beyond one person ... having recent, intimate, memory of all decisions encoded in source code

Some time ago on HN, I ran across a tale of someone who never merged code unless they'd written it all that day. If they got to the end of the day without something mergeable, well, that just meant they didn't understand the problem well enough to express it in under a day, and they tried afresh the following morning.

Anyone else remember this, or am I confusing sites/anecdotes again?

gavinhoward · a year ago
> This philosophy does not scale beyond one person and said person having recent, intimate, memory of all decisions encoded in source code (current or historical). Furthermore, given intimate implementation knowledge, verifying any change by definition must be performed manually.

As a one-man programming team, you are correct. And quite frankly, I shudder to think of not programming with a test suite or version control, even though I work alone!

Docs, tests, and version control reduce what I have to remember about the code context. Yes, I have to remember the details of the code in front of me, but if I document it, test it, and check it in with a good commit message describing the why and how and whatever, then I can discard that code from my memory and move on to the next thing.

AdieuToLogic · a year ago
All of the tools and artifacts you reference as important contribute to the same goal, whether it is for me or a future-you:

Understanding.

pmontra · a year ago
My favorite example for point number 3, "Small changes in context (people/places/features you want to support) often radically change how well a program fits its context," is K9 Mail, which is becoming the Android version of Thunderbird now.

It started with an unconventional UI with a home page listing email accounts and for each account the number of unread and total messages. There was a unified inbox but it was not forced on users.

I remember that I explicitly selected this app because it fit my needs: one personal account, one work account, several work accounts that my customers gave me. I wanted those accounts to stay separated.

Probably a lot of K9 users picked that app precisely for the same reason, because there were many complaints when the developer migrated to a conventional Android UI with a list of accounts sliding in from the left and an extra tap to move from one account to another. If we had liked that kind of UI, chances are we wouldn't have picked K9 to start with.

So one small change (but probably a lot of coding) destroyed the fitness of the app to its users. I keep using the old 5.600 version, the latest with the old UI, and I sideload it to any new device I buy.

Furthermore, to make things even more unusual, I only use POP3 to access my accounts (I preview on phone, delete stuff, possibly reply BCCing myself, eventually download on my laptop) and K9 fit that workflow perfectly. I don't need anything fancy. An app from the 90's would be good enough for me.

akkartik · a year ago
I really appreciate[1] the concrete example. Worth more than my opinion in OP and everybody's opinions in this thread put together.

[1] https://news.ycombinator.com/favorites?id=akkartik&comments=...

codr7 · a year ago
I too keep wondering where this path leads.

One thing is clear to me though, creating (software) by yourself is a completely different activity from doing it in a team.

About testing. Tests are means, not ends. What we're looking for is confidence I think. So when I feel confident about an implementation, I'll test less. And if I desperately need to make sure something keeps working, I'll add a few integration tests at the outer edges that are not so affected by refactorings and thus won't slow me down as much. E.g poking a web backend from the outside, as opposed to testing the internals. Unit tests are good for fleshing out the design of new API's, but those tests are pretty much useless once you know where you're going.

sebstefan · a year ago
Plus there's so many good reasons to have tests in a single person project

* Hotwiring if statements with "true ||" to go straight to the feature you're building takes time, and you're gonna have to tear it down later. Just build a test and run it, that way you get to keep it for regression testing

* If you're shipping something big, or slow, (which can just mean 'I use qt' sometimes) and launching the app/building the app takes ages, just make a test. A single test loads quicker and runs quicker

* If you're debugging and reproducing the bug takes 45 seconds, just write a test. It automates away the most boring part of the job, keeps your flow going, allows you to check the status of the bug as often as you want without having to think about if it's worth it or not, and, same as #1, you get to keep the test for regression testing

akkartik · a year ago
hiAndrewQuinn · a year ago
Just dropping by to say I adore this author and Mu is one of my favorite projects. A modern Lisp machine, kinda! In QEMU! So much fun!
akkartik · a year ago
Thank you so much, you made my day.