I don't buy this argument. Most game developers I know have said that unit tests are a waste of time so they never use them, but they're struggling with making changes to utility code and making sure that it doesn't do the wrong thing. Y'know, what unit tests are for.
I think the key here is that the perceived cost / benefit ratio is too high. It's the perception that drives their behavior though. I'm in a company now that has zero unit tests, because they just don't see the value in it (and in their case they may be right for a whole slew of reasons).
Also, remember that games are not very long-lived pieces of software. You build it, release it, maybe patch it, and move on. If the game moves to version 2 then you're probably going to re-write most of the game from scratch. When you support software for a decade then the code is what's valuable, and unit tests keep institutional knowledge about the code. But with disposable software like games, the mechanics of the game and IP are what's valuable.
Why would you write a unit test for something you know you're going to throw away in 6 months?
I’ve seen people slog through untested code where they fear to make a change, but I’ve also seen people slog through code with too much test coverage, where the tests go through constant churn.
I don’t understand why people don’t just add one test, even if the codebase otherwise has zero tests, when they’re so scared of one area. And I don’t get why people keep adding excessive coverage if it’s wasting their time.
It’s like people pick a stance and then stick with it forever. I couldn’t care less how I’ve been doing something for 10 years if today you showed me a better way.
>too much test coverage where the tests go through constant churn
This doesn't sound so much like too much coverage as like having your automated tests coupled to implementation details. That has a multitude of possible causes, for example the tests being too granular (prefer testing at the boundary of your system). I've worked in codebases where test-implementation coupling was taken seriously, and in those I've rarely had to write a commit message like "fix tests", and all that without losing coverage.
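To make that concrete, here's a minimal JUnit 5 sketch of the difference (the Inventory class is hypothetical, purely for illustration):

    import static org.junit.jupiter.api.Assertions.*;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    // Hypothetical system under test.
    class Inventory {
        private final List<String> items = new ArrayList<>();
        public void add(String item) { items.add(item); }
        public boolean owns(String item) { return items.contains(item); }
        List<String> internalItemList() { return items; } // implementation detail
    }

    class InventoryTest {
        // Brittle: coupled to how items happen to be stored. Breaks if the
        // List becomes a Set, even though observable behavior is unchanged.
        @Test
        void storesItemsInAnArrayListInInsertionOrder() {
            Inventory inv = new Inventory();
            inv.add("sword");
            assertEquals("sword", inv.internalItemList().get(0));
        }

        // Boundary test: asserts only on the public contract, so it
        // survives refactoring of the internals.
        @Test
        void addedItemsAreReportedAsOwned() {
            Inventory inv = new Inventory();
            inv.add("sword");
            assertTrue(inv.owns("sword"));
            assertFalse(inv.owns("shield"));
        }
    }

Delete tests like the first, keep tests like the second, and "fix tests" commits mostly disappear.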
This is the way. My work codebase has probably 5% unit test coverage -- it's frontend and a lot of it isn't sensible to unit test -- but I'm quite happy to have the tests we do. If it's nontrivial logic, just test it. If it isn't (it's trivial, it's aesthetic, whatever your reason)... just don't.
All the places I've worked for had some balance here, but it would definitely be on the very few tests end.
We would write tests to catch a bug in a low-level system, and keep the test afterwards. We had lots of Design by Contract, including invariants that were enabled in debug mode.
But the reality was that we couldn't test gameplay code very well. It changed so dramatically over the course of a project that if we did write tests, we would just end up commenting them out by the end of the project.
And as an optimisation guy, I would often have to change the "feel" of gameplay code to get performance out of it, and "feel" is checked by a Quality Assurance team because it's subjective. That kind of stuff would make gameplay tests very brittle.
The pace of game dev was incredibly fast. We were struggling to get all our stuff in, never mind adding any scaffolding that would slow us down.
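For anyone who hasn't seen the Design by Contract style mentioned above, a rough Java-flavoured sketch of debug-only invariants (not actual engine code; Java's assert statements only execute when the JVM runs with -ea, which gives you the "enabled in debug mode" behaviour for free):

    // Hypothetical low-level system with Design-by-Contract style checks.
    // Asserts compile in but only run under -ea, so release pays no cost.
    class RingBuffer {
        private final int[] data;
        private int head, tail, size;

        RingBuffer(int capacity) {
            data = new int[capacity];
        }

        void push(int value) {
            assert size < data.length : "precondition: buffer not full";
            data[tail] = value;
            tail = (tail + 1) % data.length;
            size++;
            assert invariant() : "invariant violated after push";
        }

        int pop() {
            assert size > 0 : "precondition: buffer not empty";
            int v = data[head];
            head = (head + 1) % data.length;
            size--;
            assert invariant() : "invariant violated after pop";
            return v;
        }

        // Class invariant, checked after every mutating operation.
        private boolean invariant() {
            return size >= 0 && size <= data.length
                && (head + size) % data.length == tail;
        }
    }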
Valve became serious about software quality in Dota 2 around 2017 - about 7 years after launch. Before that, game updates were accompanied by lots of bugs that would take weeks to fix. These days there are still tons of bugs, but it's much better than before. They just released one of the biggest updates in the game's history this week, and there are hardly any bugs being reported.
I am pretty sure there is some sort of automated testing happening that is catching these bugs before release.
Reminds me of an article about the testing infrastructure of League of Legends [1] back in 2016: 5,500 tests per build, in 1 to 2 hours.
[1] https://technology.riotgames.com/news/automated-testing-leag...
Games are extremely hard to test. For me they fall into the same category as GUI testing frameworks, which imho are extremely annoying and brittle. Except that games are comparable to a user interface consisting of many buttons which you can short-press, long-press, and drag around, while at the same time other bots are pressing the same buttons, all sharing the same state, influenced by a physics engine.
How do you test such a ball of mud, which also constantly changes as devs try to follow the fun? Yes, you can unit-test individual, reusable parts. But integration tests, which require large, time-sensitive modules, all strapped together and running at the same time? It's mind-bogglingly hard.
Moreover, if you're in the conceptual phase of development, prototyping an idea, tests make no sense. The requirements change all the time and complex tests hold you back. But the funny thing is that game development stays in that phase most of the time. And when the game is done, you start a new one with a completely different set of requirements.
There are exceptions, like League of Legends. The game left the conceptual phase many years ago and its rules are set in stone. And a game which runs successfully for that long is super rare.
Testing is a continuum. I don't write a test for every change. Sometimes I spend a week writing tests for a simple change.
I will say that I've never said "I wish I didn't write a test for that". I have also never said, "your PR is fine, but please delete that test, it's useless".
I throw away a lot of code. I still test stuff I expect to throw away. That's because it probably needs to run once before I throw it away, and I can't start throwing it away until it works :/
What it comes down to is what else you have to spend your time on. Sometimes you need to experiment with a feature; get it out to customers, and if it's buggy and rough around the edges, it's OK, because you were just trying out the idea. But sometimes that's not what you want; whatever time you spend on support back and forth finding a bug would have been better spent not doing that. The customer needed something rock solid, not an experiment. Test that so they don't have to.
There are no rules. "Write a test for every change" is just as invalid and unworkable as "Never write any tests". It's a spectrum, and each change is going to land somewhere different. If you're unsure, ask a coworker. I have been testing stuff for 20+ years, and I usually guess OK (that is when I take a shortcut and don't test as much as I should, it's rarely the thing that caused the production outage), but a guess is just that, a guess. Solicit opinions.
> Also, remember that games are not very long-lived pieces of software. You build it, release it, maybe patch it, and move on.
This was true a couple decades ago. Nowadays many games are cash cows for decades. Path of Exile was released in 2013, Minecraft in 2011, and World of Warcraft in 2004, and all of those continue to receive regular updates (and have over the course of their lives) and still make plenty of money today. Dwarf Fortress has been in continual development since 2002! (Although probably not your ideal cash-flow model.)
Or you have the EA Sports model where you use the same "engine" and just re-skin some things and re-release the same game over and over. There has been a new "Football Manager" game every year since 2005 -- do you really think they throw out all their code and start over every year?
I maintain that the majority of games are still disposable, despite the occasional subscription model or long-lived hit that pops up. Remember that most games aren't made by AAA studios.
Wasn't Minecraft completely rewritten from scratch in Java after a few years?
And the EA one, like you said, it's just model updates. Very few gameplay mechanics get more than a simple tweak. Just recompile with the new models. You don't need unit tests if the code never changes.
We produce a library that gets included in software made by our clients, and we have several thousand clients. The uptake on new releases is low (most of the clients believe in "if it ain't broke, don't fix it"). So every release has the potential to live in the wild and need support for a long time.
We're also in an industry with a ton of competitors.
On top of that, the company was founded by some very junior engineers. For most of them this was their first or second job out of college. Literally every anti-pattern is in our codebase, and a lot of them are considered best practices by the team. Unit tests were perceived as a cost with little benefit, so none were written. New engineers were almost always new grads, to save on money.
These facts combined make for an interesting environment.
For starters, leadership is afraid to ship new code, or even refactor existing code. Partially because nobody knows how it works, partially because they don't have unit tests to verify that things are going well. All new code has to be gated by feature flags (there's an experiment right now to switch from try-finally to try-with-resources). If there isn't a business reason to add code, it gets rejected (I had a rejected PR that removed a "synchronized" block from around "return boolValue;"). And it's hard to say they're wrong. If we push out a bad release, there's a very real chance that our customers will pack up and migrate to one of our competitors. Why risk it?
And the team's experience level plays a role too. With so many junior engineers and so much coding skill in-breeding, "best practices" have become pretty painful. Code is written without an eye towards future maintainability, and the classes are a gas factory mixed with a god object. It's not uncommon to trace a series of calls through a dozen classes, looping back to classes that you've already looked at. And trying to isolate chunks of the code is difficult. I recently tried to isolate 6 classes and I ended up with an interface that used 67 methods from the god object, ranging from logging, to thread management, to http calls, to state manipulation.
And because nobody else on the team has significant experience elsewhere, nobody else really sees the value of unit tests. They've all been brought up in this environment where unit tests are not mentioned, and so it has ingrained the idea that they're useless.
So the question is how do you fix this and move forward?
Ideally we'd start by refactoring a couple of these classes so that they could be isolated and tested. While management doesn't see significant value in unit tests, they're not strictly against them, but they are against refactoring code. So we can't really add unit tests on the risky code. The only places that you can really add them without pushback would be in the simplest utility classes, which would benefit from them the least, and in doing so prove to management that unit tests aren't really valuable. And I mean the SIMPLEST utility classes. Most of our utility classes require the god object so that we can log and get feature flags.
I say we take off and nuke the entire site from orbit (start over from scratch with stronger principles). It's the only way to be sure. But there's no way I'm convincing management to let the entire dev team have the year they'd need to do that with feature parity, and leadership would only see it as a massive number of bugs to fix.
In the meantime developer velocity is slowing, but management seems to see that as a good thing. Slower development translates into more stable code in their minds. And the company makes enough that it pays well and can't figure out what to do with the excess money. So nobody really sees a problem. Our recruiters actually make this a selling point, making fun of other companies that say their code is "well organized".
I’ve found that the one thing you can always count on engineers to do is to dismiss sensible tools from adjacent domains using flimsy, post hoc justifications.
All product development involves poorly defined boundaries where the product meets the user, where requirements shift frequently, and where the burdens of test maintenance have to be weighed against the benefits.
You don’t throw out all of unit testing because it doesn’t work well for a subset of your code. You throw out all of unit testing because writing tests is annoying, none of your coworkers have set it up, and the rest of your industry doesn’t do it, so you feel justified in not doing it either.
Right. And because the rest of the industry isn’t doing it, there’s no institutional knowledge of how to do it well. So someone tries it, they do a crap job of it out of understandable ignorance, and rather than taking forward any lessons learned the effort is discarded as a waste of time.
I didn’t say engineers are dismissive of other engineers’ practices. The general pattern is “that makes sense for your field, but we can’t use it because…” followed by silly reasons.
I was guilty of this myself back when I was an indie dev. It took me an embarrassingly long time, for example, to admit that git wasn’t just something teams needed to coordinate, and that I should be using it as the sole developer of a project.
Arguments against testing tend to fall prey to the von Neumann Objection: they insist there is something tests can’t catch, and then they tell you precisely what it is that tests can’t catch… so you can always imagine writing tests for that specific thing.
E.g. this article uses an example of removing the number 5, causing the developer to have to implement a base-9 numbering system. Unit tests that confirm this custom base number system is working as expected would be extremely reassuring to have. Alternatively, you could keep the base-10 system everyone is familiar with, and just have logic to eliminate or transform any 5s. This would normally be far too risky, but high coverage testing could provide strong enough assurance to trust that your “patched base-10” isn’t letting any 5s through.
The same is true for the other examples - unit testing feels like the first thing I’d reach for when told about flaming numbers.
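As a sketch of what those tests might look like (hypothetical names, and assuming you transform 5s rather than eliminate them):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    // Hypothetical "patched base-10": display numbers normally, but bump
    // any digit 5 up to 6 so the digit 5 never appears on screen.
    class NoFives {
        static String display(int value) {
            return String.valueOf(value).replace('5', '6');
        }
    }

    class NoFivesTest {
        // Exhaustively sweep a large range - the kind of high-coverage
        // assurance that makes the hack trustworthy.
        @Test
        void noFiveEverLeaksThrough() {
            for (int i = 0; i <= 1_000_000; i++) {
                assertFalse(NoFives.display(i).contains("5"),
                    "digit 5 leaked for input " + i);
            }
        }
    }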
Nah, my objection to unit testing is that too often it devolves into what I call "Testing that the code does what the code does." If you find yourself often writing code that also requires updating or rewriting unit tests, your tests are mostly worthless. Unit tests are best for when you have a predefined spec, or you have encountered a specific bug previously and make a test to ensure it doesn't reoccur, or you want to make sure certain weird edge cases are handled correctly. But the obsession with things like 100% unit test coverage is a counterproductive waste of time.
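The bug-pinning kind might look like this sketch (made-up damage formula and made-up divide-by-zero bug, purely illustrative):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class DamageCalcRegressionTest {
        // Hypothetical formula: an armor value of -100 once caused a
        // divide-by-zero in the wild. The fix clamps armor at 0; these
        // tests pin the fix so the bug can't quietly come back.
        static int effectiveDamage(int raw, int armor) {
            return raw * 100 / (100 + Math.max(armor, 0));
        }

        @Test
        void negativeArmorNoLongerDividesByZero() {
            assertEquals(50, effectiveDamage(50, -100)); // used to throw
        }

        @Test
        void zeroArmorDealsFullDamage() {
            assertEquals(50, effectiveDamage(50, 0));
        }
    }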
I partially agree - I would say more specifically “those situations are the easiest to write good tests for”, i.e. having a predefined spec will strongly guide you towards writing good and useful tests.
“Testing that the code does what it does” is of course a terrible waste of both the time spent writing those tests, and of future time spent writing code under those tests. With skill and practice at writing tests, you make that mistake less often. Perhaps there’s a bit of a self-fulfilling prophecy for game developers: due to industry convention, they’re unfamiliar with writing tests, they try writing tests, they end up with a superfluous-yet-restrictive test suite, thus proving the wisdom of the industry convention against testing.
Unfortunately this can also be the case with integration tests: I spent the last week trying to understand whether a regression in an integration test was a bug, or whether the previous behaviour was what was buggy...
The lesson is more about the degree of churn, and how game rules are not hard rules. A valid base-9 number system is NOT a design goal, and doing that work can be a waste.
It's like testing that the website landing page is blue. Sure you can but breaking that rule is certainly valid and you'll end up ripping out a lot of tests that way.
Now, instead of calcifying the designer's whims, testing should be focused on things that actually need to make sense, i.e. abstract systems, data structures, etc.
Tests that “calcify the designer’s whims” - great way to put it - can be quite useful if your job description happens to be “carrying out the whims of the designer” (and for many of us, it is!)
With high coverage and DRY-ish tests, changing the tests first and seeing which files start failing can function as a substitute for find+replace - by altering the tests to reflect the whims, it’ll tell you all the places you need to change your code to express said whims.
Tests can't catch race conditions in multithreaded code. Now that I told you what the tests can't catch, can you imagine writing tests for that specific thing?
Citation needed.
> can you imagine
Yes I can, because several languages have tooling built specifically for finding those race conditions.
If you built it, you can test it. If you can’t test it, you don’t understand what you built.
I've written tests around multithreaded code, but they typically catch bugs in a statistical manner - either by running a bit of code many times over to try and catch an edge condition, or by overloading the system to persuade rarer orderings to occur.
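Something like this sketch (hypothetical counter class, JUnit 5 assumed): hammer the code from several threads, and a real race fails the run with high probability:

    import static org.junit.jupiter.api.Assertions.*;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    class StatisticalRaceTest {
        static class Counter {
            private int value;
            // Drop "synchronized" and this test fails nearly every run:
            // the unsynchronized ++ is a read-modify-write race.
            synchronized void increment() { value++; }
            synchronized int value() { return value; }
        }

        @Test
        void concurrentIncrementsAreNotLost() throws InterruptedException {
            final int threads = 8, perThread = 100_000;
            Counter counter = new Counter();
            List<Thread> workers = new ArrayList<>();
            for (int t = 0; t < threads; t++) {
                Thread w = new Thread(() -> {
                    for (int i = 0; i < perThread; i++) counter.increment();
                });
                workers.add(w);
                w.start();
            }
            for (Thread w : workers) w.join();
            // Catches a race with high probability; it cannot prove
            // absence of races, only make them likely to surface.
            assertEquals(threads * perThread, counter.value());
        }
    }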
tsan will catch a bunch of potential race conditions for you, under the condition that you run it somehow. How do you make sure it's run? Well, add a test for the relevant code, add it to your tsan run in your CI, and you'll certainly catch a bunch of race conditions over time.
This has saved me a bunch of times when I've been doing work in code prone to those kinds of issues. Sometimes it will just lead to a flaky test, but investigating the flake will usually find the root cause in the end.
There's also https://clang.llvm.org/docs/ThreadSafetyAnalysis.html which can statically catch some threading issues, though I've not used it much myself.
I’ve written tests to do exactly that, by adding carefully placed locks that allow the test to control the pace at which each thread advances. It’s not fun but you can do it.
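In Java, that technique might look like this sketch (hypothetical Account class; CountDownLatch plays the role of the carefully placed locks):

    import static org.junit.jupiter.api.Assertions.*;
    import java.util.concurrent.CountDownLatch;
    import org.junit.jupiter.api.Test;

    class ControlledInterleavingTest {
        // Hypothetical account with a test hook in the check-then-act window.
        static class Account {
            private int balance = 100;
            Runnable betweenCheckAndWrite = () -> {}; // no-op outside tests

            void withdraw(int amount) {
                if (balance >= amount) {
                    betweenCheckAndWrite.run(); // a test can pause a thread here
                    balance -= amount;
                }
            }
            int balance() { return balance; }
        }

        @Test
        void overdraftRaceReproducesEveryRun() throws Exception {
            Account account = new Account();
            CountDownLatch aInsideWindow = new CountDownLatch(1);
            CountDownLatch aMayContinue = new CountDownLatch(1);

            // Thread A stops right after passing the balance check.
            account.betweenCheckAndWrite = () -> {
                aInsideWindow.countDown();
                try { aMayContinue.await(); } catch (InterruptedException ignored) {}
            };
            Thread a = new Thread(() -> account.withdraw(100));
            a.start();
            aInsideWindow.await(); // A has checked the balance but not written

            // Complete a full withdrawal while A sits in the window.
            account.betweenCheckAndWrite = () -> {}; // main thread runs unpaused
            account.withdraw(100);                    // balance is now 0
            aMayContinue.countDown();                 // release A's stale write
            a.join();

            // A's check used stale state, so the account is overdrawn: the
            // bug reproduces on every run instead of one run in a million.
            assertEquals(-100, account.balance());
        }
    }

The test hook is ugly, but it turns a once-in-a-blue-moon interleaving into a deterministic failure you can fix and pin.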
Having written a 3d game engine from scratch, I had automated tests, but they were more comparable to "golden" tests, which are popular in the UI test world. Basically, my renderer needed to produce a pixel-perfect frame. If a pixel didn't match, an image diff was produced. This saved my butt numerous times when I broke subtle parts of the renderer.
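A harness for that can be surprisingly small. A sketch of the idea in Java with ImageIO (assumes you can grab the rendered frame as a BufferedImage; the class and file names are placeholders, not the GP's actual engine code):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    // Compare a rendered frame against a stored "golden" reference image;
    // on mismatch, write a diff image with offending pixels marked in red.
    class GoldenFrame {
        static boolean matchesGolden(BufferedImage frame, File goldenFile, File diffFile)
                throws Exception {
            BufferedImage golden = ImageIO.read(goldenFile);
            if (frame.getWidth() != golden.getWidth()
                    || frame.getHeight() != golden.getHeight()) {
                return false; // a size change is always a failure
            }
            BufferedImage diff = new BufferedImage(
                    frame.getWidth(), frame.getHeight(), BufferedImage.TYPE_INT_RGB);
            boolean identical = true;
            for (int y = 0; y < frame.getHeight(); y++) {
                for (int x = 0; x < frame.getWidth(); x++) {
                    boolean same = frame.getRGB(x, y) == golden.getRGB(x, y);
                    if (!same) identical = false;
                    diff.setRGB(x, y, same ? frame.getRGB(x, y) : 0xFFFF0000);
                }
            }
            if (!identical) ImageIO.write(diff, "png", diffFile);
            return identical;
        }
    }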
Ok I know which AAA game studio this might be because I interviewed with them and had to sign an NDA.
In their case their flagship game is full of bugs, and they had to ship their product ASAP pre-acquisition, when they were a startup.
Because of the mentality of the managers, and weak-minded devs, they don't write unit tests, and instead spend the vast majority of their days fighting bugs; so much so that they have to hire dedicated staff for their (single game) backlog, as they were struggling to keep up "with its success".
This is BS of course; I saw their backlog and it was a shit show, with devs expected to work overtime free of charge to get actual features out (funny how this works, isn't it: it never affects the time/life of the business execs who make the demands of no tests).
I was asked what I would bring to the company to help them support their now AAA game, and I stated up front "more unit tests" and indirectly criticised their lack of them. I got a call later that day that (the manager thought) "I would not be a good fit".
I got a lead job elsewhere, on the company's highest-performing team, literally because its testing practices are well balanced between time and effectiveness (i.e. don't bother with low-value tests; add a test when you find a bug; if an integration test takes too long, leave it out and use unit tests).
I think back to that interview every time I interview at games studios now, and wonder if I shouldn't push unit tests if they're missing. I'd still do it. The managers at that job were assholes to their developers, and I now recognise the trait in a company.
Most video game bugs are subtle and not things that are easy to catch with unit testing because they are dynamic systems with many interacting parts. The interaction is where the bugs come from.
Perhaps in development, but the stuff that tends to make it into the release of games seems to be gameplay related: NPC behaviors not lining up, the developer literally not implementing certain stats in the game (looking at you, Diablo 4), graphical bugs caused by something not loading or loading too slowly, performance issues from something loading 1000 copies of itself, etc.
QA processes do a good job catching the rest.
It's an interesting idea, but here you have the game designer taking the place of the product manager stereotype - coming up with bizarre unfeasible ideas and the programmer is to make it happen.
In any games company I've worked for, the designer is responsible for mapping out and balancing the rules and mechanics of the game; they would provide a specification of what "red vs blue numbers" would look like and a balanced idea of how to remove the number 5 from the game (balancing and changing the rules like this is entirely within the domain of game design). Incidentally, every game company I've worked at has had an extensive set of test suites.
1. Most game engines have the horrible compatibility layers abstracted away, and already fully tested under previous mass deployments
2. Anything primarily visual, audio, or control-input based is extremely hard to test automatically and reliably. Thus, if the clipping glitches are improbable and hardly noticeable... no one cares.
Some people did get around the finer gameplay issues by simply allowing their AI characters to cheat. Mortal Kombat II was famous for the impossible moves and combos the AI would inflict on players... yet the release was still super popular, as people just assumed they needed more practice with the game.
Have fun out there, =)