Posted by u/lovehatesoft 3 years ago
Ask HN: How do you keep track of software requirements and test them?
I'm a junior dev who recently joined a small team that doesn't seem to have much in place for tracking requirements or how they're tested, and I was wondering if anybody has recommendations.

Specifically, I would like to track what the requirements/specifications are and how we'll test to make sure they're met. I don't know whether that would be a mix of unit and integration/regression tests. Honestly, though, if this is the wrong track to take entirely, I'd appreciate feedback on what we could be doing instead.

I used IBM Rational DOORS at a previous job and thought it really helped with this, but with a small team I don't think it's likely they'll spring for it. Are there open source options out there, or something else that's easy? I thought we could maybe keep track in a spreadsheet (to mimic DOORS?) or some other file, but I'm sure there would be issues with that as we added to it. Thanks for any feedback!

flyingfences · 3 years ago
In a safety-critical industry, requirements tracking is very important. At my current employer, all of our software has to be developed and verified in accordance with DO-178 [0]. We have a dedicated systems engineering team who develop the system requirements from which we, the software development team, develop the software requirements; we have a dedicated software verification team (separate from the development team) who develop and execute the test suite for each project. We use Siemens's Polarion to track the links between requirements, code, and tests, and it's all done under the supervision of an in-house FAA Designated Engineering Representative. Boy is it all tedious, but there's a clear point to it and it catches all the bugs.

[0] https://en.wikipedia.org/wiki/DO-178C

alexfromapex · 3 years ago
Just wanted to ask, this pretty much ensures you're doing waterfall development, as opposed to agile, right?
maerF0x0 · 3 years ago
Not sure how the parent concretely operates, but there's no reason you cannot do Agile this way.

Agile iteration is just as much about how you carve up work as about how you decide what to do next. For example, you could break up a task into the cases it handles:

> WidgetX handles foobar in main case

> WidgetX handles foobar when exception case arises (More Foo, than Bar)

> WidgetX works like <expected> when zero WidgetY present

Those could be 3 separate iterations on the same software, each fully tested and integrated individually, accumulating over time. And the feedback loop could come internally, as in "How does it function amongst all the other requirements?" or "Is it contributing to problems in achieving that goal?"
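For illustration, that breakdown could land as three JUnit tests accumulated over three iterations. This is only a sketch: WidgetX, Foobar, and WidgetY are the hypothetical names from the cases above, standing in for whatever the real feature is.

    import static org.junit.jupiter.api.Assertions.*;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    // WidgetX, Foobar, and WidgetY are hypothetical stand-ins.
    class WidgetXTest {

        @Test  // iteration 1: main case
        void handlesFoobarInMainCase() {
            WidgetX widget = new WidgetX(List.of(new WidgetY()));
            assertTrue(widget.handleFoobar(Foobar.typical()));
        }

        @Test  // iteration 2: exception case (more Foo than Bar)
        void handlesFoobarExceptionCase() {
            WidgetX widget = new WidgetX(List.of(new WidgetY()));
            assertTrue(widget.handleFoobar(Foobar.moreFooThanBar()));
        }

        @Test  // iteration 3: zero WidgetY present
        void worksAsExpectedWithZeroWidgetY() {
            WidgetX widget = new WidgetX(List.of());
            assertTrue(widget.handleFoobar(Foobar.typical()));
        }
    }

Each test could ship in its own iteration, fully integrated, with the suite growing alongside the feature.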

nonameiguess · 3 years ago
Waterfall is a great methodology where warranted. It ensures you're doing things in a principled, predictable, repeatable manner. We see all this lamenting about reproducibility in science and in build systems, and efforts to implement it, yet we seem to embrace chaos in certain kinds of engineering practice.

We largely used waterfall in GEOINT, and I think it was a great match; our processes started to break down and fail when the government began to insist we embrace Agile methodologies to emulate commercial best practices. Software capabilities of ground processing systems are at least somewhat intrinsically coupled to the hardware capabilities of the sensor platforms, and those are known and planned years in advance and effectively immutable once a vehicle is in orbit. The algorithmic capabilities are largely dictated by physics, not by user feedback. When user feedback is critical, i.e. for UI components, by all means, be Agile. But if you're developing something like the control software for a thruster system, and the physical capabilities and limitations of the thruster system are known in advance and not subject to user feedback, use waterfall. You have hard requirements, so don't pretend you don't.

orangepurple · 3 years ago
If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization. ~ Gerald Weinberg, Weinberg's Second Law

https://www.mindprod.com/jgloss/unmain.html

gotstad · 3 years ago
You don't have to, but it is very common to fall into the trap.

If you're working in a safety-critical industry and want to do Agile, typically you'll break down high-level requirements into software requirements while you are developing, closing/formalizing the requirements just moments before freezing the code and the technical file / design documentation.

It's difficult to practice Agile in such an industry, because it requires a lot of control over what the team is changing and working on at all times, but it can be done, with great benefits over waterfall.

flyingfences · 3 years ago
Big waterfalls, yes.
spaetzleesser · 3 years ago
You can and will make changes along the way, but every change is extremely expensive, so it's better to keep changes to a minimum.
airbreather · 3 years ago
Actually, most functional safety projects use the V-model (or similar; the topology can vary a little according to need), which is waterfall laid out in a slightly different way to show more clearly how verification and validation close out all the way back to requirements, with a high degree of traceability.

I've always wanted to swap that approach for something a little more nimble, probably by use of tools, but I can't see Agile working in functional safety without some very specific tools to assist, which I have yet to see formulated and developed for anything at scale. Also, there are key milestones where you really need to have everything resolved before you start the next phase, so maybe sprints could fit, I dunno.

The thing about doing waterfall/V-model correctly is that there is little chance you get to the final Pre-Start Safety Review/FSA 3, or whatever you do before exposing humans to the hazard's consequences, only to have a flaw discovered that kicks you back 6 or 12 months in the design/verification/validation process, while everyone else stands around and waits because their bits are good to go and now you are holding them all up. Not a happy day if that occurs.

FS relies on a high degree of traceability and on testing the software as it will be used (as best as possible), in its entirety.

So I'm not sure how Agile could work in this context, at least past the initial hazard-and-risk/requirements-definition life cycle phases.

FS is one of those things where the progress you can claim is really only as far as your last lagging item in the engineering sequence of events. The standard expects you to close out certain phases before moving on to subsequent ones. In practice it's a lot messier than that unless extreme discipline is maintained.

(To give an idea of how messy it can get in reality, and how you have to find ways to meet the traceability expectations, sometimes in retrospect: on the last FS project where I was responsible for the design, we were 2.5 years in and still waiting for the owner to issue us their safety requirements. We had to run on a guess and progress speculatively. Luckily we were 95%+ correct when our guesses were reconciled against the requirements that finally arrived.)

But normally, racing ahead on some items is a little pointless and likely counterproductive, unless you're just prototyping a proof-of-concept system/architecture or some similar activity. You just end up repeating work, you have extra historical info floating around, and there's the possibility that something that was almost right but is no longer current gets sucked into play. Doc control and revision control are always critical.

Background: I am a TÜV-certified FS engineer; I have designed/delivered multiple safety systems, mainly to IEC 61511 (process) or IEC 62061 (machinery).

postingposts · 3 years ago
Waterfall and Agile are tools. If you need to hang a photo, a hammer and a nail. Cut down a tree? Maybe not the hammer and the nail.
jmyeet · 3 years ago
Well… the 737MAX seems to suggest it doesn’t catch all the bugs.
markdown · 3 years ago
AFAIK the bugs were caught, known about, and deliberately ignored. In fact even when the bug caused a fatal error that brought an instance crashing (to the ground, literally!), it was ignored both by Boeing and the US government.
jzer0cool · 3 years ago
If you haven't seen it, there is a Netflix documentary about the 737 Max that's worth watching all the way through.


lotyrin · 3 years ago
When it's technically feasible, I like every repo to have alongside it tests for the requirements from an external business user's point of view. If it's an API, then the requirements/tests should be specified in terms of the API, for instance. If it's a UI, then the requirements should be specified in terms of the UI. You can either have documentation blocks next to tests that describe things in human terms, or use one of the DSLs that make the terms and the code the same thing, if you find that ergonomic for your team.
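A minimal sketch of the API flavor of this idea, using JUnit and the JDK's built-in HTTP client (the base URL, the /orders endpoint, and its expected status codes are hypothetical): the @DisplayName carries the requirement in business terms, so the requirement and its check live together in the repo.

    import static org.junit.jupiter.api.Assertions.*;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    class OrderApiRequirementsTest {

        // Base URL of the service under test; hypothetical.
        private static final String BASE = "http://localhost:8080";

        @Test
        @DisplayName("Submitting a valid cart creates an order (201)")
        void validCartCreatesOrder() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + "/orders"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "{\"items\":[{\"sku\":\"A1\",\"qty\":1}]}"))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            assertEquals(201, response.statusCode());
            assertTrue(response.body().contains("orderId"));
        }

        @Test
        @DisplayName("Submitting an empty cart is rejected (422)")
        void emptyCartIsRejected() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + "/orders"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"items\":[]}"))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            assertEquals(422, response.statusCode());
        }
    }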

I like issue tracking that is central to code browsing/change request flows (e.g. GitHub Issues). These issues can then become change requests to the requirements-testing code, then to the implementation code, and then get accepted and become part of the project. As products mature, product-ownership folks must periodically review and prune existing requirements they no longer care about, and devs can then refactor as desired.

I don't like overwrought methodologies built around external issue trackers. I don't like tests that are overly concerned with implementation detail or that have no clear connection to a requirement product ownership actually cares about. "Can we remove this?" "Who knows, here's a test from 2012 that needs that, but no idea who uses it." "How's the sprint board looking?" "Everything is slipping, like usual."


sz4kerto · 3 years ago
What we do:

- we track work (it doesn't matter where); each story has a list of "acceptance criteria", for example: 'if a user logs in, there's a big red button in the middle of the screen, and if the user clicks on it, it turns green'

- there's one pull request per story

- each pull request contains end-to-end (or other, but mostly e2e) tests that prove that all ACs are addressed; for example, the test logs in as a user, finds the button on the screen, clicks it, then checks whether it turned green (a sketch of such a test follows this list)

- even side effects like outgoing emails are verified

- if the reviewers can't find tests that prove that the ACs are met, then the PR is not merged

- practically no manual testing, as anything a manual tester would do is likely covered by automated tests

- no QA team
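As promised above, here is a sketch of that big-red-button AC as an end-to-end test, assuming Selenium WebDriver driven from JUnit (the URL, element ids, and exact rgba strings are hypothetical and depend on the app's CSS and browser):

    import static org.junit.jupiter.api.Assertions.*;

    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    class BigRedButtonTest {

        @Test  // AC: after login, a big red button appears and turns green when clicked
        void buttonTurnsGreenWhenClicked() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://test.example.com/login");  // hypothetical test URL
                driver.findElement(By.id("username")).sendKeys("testuser");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("login")).click();

                WebElement button = driver.findElement(By.id("big-button"));  // hypothetical id
                assertEquals("rgba(255, 0, 0, 1)", button.getCssValue("background-color"));

                button.click();
                assertEquals("rgba(0, 128, 0, 1)", button.getCssValue("background-color"));
            } finally {
                driver.quit();
            }
        }
    }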

And we have a system that provides us a full report of all the tests and links between tests and tickets.

We run all the tests for all the pull requests; that's currently something like 5000 end-to-end tests (which exercise the whole system) plus many more tests of other types. One test run for one PR requires around 50 hours of CPU time to finish, so we use pretty big servers.

All this might sound a bit tedious, but it enables practical CI/CD for a medical system. The test suite is the most complete and valid specification of the system.

(we're hiring :) )

splittingTimes · 3 years ago
That sounds like a dream. We do medical systems as well, but depend heavily on manual testing. We use digital scans of a patient's mouth to design a restoration in our CAD application, so we have a 3D scene where the user interacts with and manipulates the objects in it.

I don't know what kind of application you produce, but how do you automate user interactions?


Foobar8568 · 3 years ago
A dream board for any project. It's already hard enough to make people understand one PR per PBI/US, or that we/they shouldn't start working on a PBI/US without acceptance criteria.

Beyond that, I am unsure about the whole "testing part", especially running all the tests for each PR on typical projects.

corpMaverick · 3 years ago
Let the product owner (PO) handle them.

The PO has to make the hard decisions about what to work on and when. He/she must understand the product deeply, and should also be able to test the system in order to accept the changes.

Furthermore, you don't really need endless lists of requirements. The most important thing is to know the next thing you have to work on.

uticus · 3 years ago
This actually has a nugget of wisdom. I wish I was more open to soaking up wisdom - and less likely to argue a point - when I was a junior dev. Or still now, really.

Moreover, if your PO can't define the goals, and what needs to be tested to get there, well you have a problem. Assuming the team is committed to some form of Agile and you have such a thing as a PO.

However, I also disagree with the main thrust of this comment. A PO should have responsibility, sure. But if that gets translated into an environment where junior devs on the team are expected not to know the requirements, or not to be able to track them, then you no longer have a team. You have a group with overseers and minions.

There's a gray area between responsibility and democracy. Good luck navigating.

michaelt · 3 years ago
> Moreover, if your PO can't define the goals, and what needs to be tested to get there, well you have a problem.

In some work environments, there may be unspoken requirements, or requirements that the people who want the work done don't know they have.

For example, in an online shopping business the head of marketing wants to be able to allocate a free gift to every customer's first order. That's a nice simple business requirement, clearly expressed and straight from the user's mouth.

But there are a bunch of other requirements (a couple of which are sketched as tests after this list):

* If the gift item is out of stock, it should not appear as a missing item on the shipping manifest

* If every other item is out of stock, we should not send a shipment with only the gift.

* If we miss the gift from their first order, we should include it in their second order.

* The weight of an order should not include the gift when calculating the shipping charge for the customer, but should include it when printing the shipping label.

* If the first order the customer places is for a backordered item, and the second order they place will arrive before their 'first' order, the gift should be removed from the 'first' order and added to the 'second' order, unless the development cost of that feature is greater than $3000 in which case never mind.

* The customer should not be charged for the gift.

* If the gift item is also available for paid purchase, orders with a mix of gift and paid items should behave sensibly with regard to all the features above.

* Everything above should hold true even if the gift scheme is ended between the customer checking out and their order being dispatched.

* The system should be secure, not allowing hackers to get multiple free gifts, or to get arbitrary items for free.

* The software involved in this should not add more than, say, half a second to the checkout process. Ideally a lot less than that.
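Requirements at that granularity translate almost directly into tests. A sketch of two of them in JUnit, with the caveat that Order, Orders, and all their accessors are hypothetical names, since the scenario above is only a thought experiment:

    import static org.junit.jupiter.api.Assertions.*;

    import org.junit.jupiter.api.Test;

    class FirstOrderGiftTest {

        // "The customer should not be charged for the gift."
        @Test
        void customerIsNotChargedForGift() {
            Order order = Orders.firstOrderWithGift();  // hypothetical fixture
            assertEquals(order.totalExcludingGift(), order.chargedAmount());
        }

        // "Order weight should exclude the gift when calculating the shipping
        // charge, but include it when printing the shipping label."
        @Test
        void giftWeightExcludedFromChargeButOnLabel() {
            Order order = Orders.firstOrderWithGift();
            assertEquals(order.paidItemsWeight(), order.weightForShippingCharge());
            assertEquals(order.paidItemsWeight() + order.giftWeight(),
                         order.weightOnShippingLabel());
        }
    }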

Who is responsible for turning the head of marketing's broad requirement into that list of many more, much narrower requirements?

Depending on the organisation it could be a business analyst, a product owner, a project manager, an engineer as part of planning the work, an engineer as part of the implementation, or just YOLO into production and wait for the unspoken requirements to appear as bug reports.

lovehatesoft · 3 years ago
That would be nice, and maybe I should have clarified why I asked the question. I was asked to add a new large feature, and some bugs popped up along the way. I thought better testing could have helped, and then I thought it would possibly help to list the requirements as well so I can determine which tests to write/perform. And really I thought I could have been writing those myself - PO tells me what is needed generally, I try to determine what's important from there.

Or maybe I just need to do better testing myself? There are no code reviews around here, nor much emphasis on writing issues, nor any emphasis on testing that I've noticed. So it's kind of tough figuring out what I can do.

ruh-roh · 3 years ago
This is good advice for multiple reasons.

One I haven't seen mentioned yet - When Product is accountable & responsible for testing the outputs, they will understand the effort required and can therefore prioritize investment in testable systems and associated test automation.

When those aspects are punted over to architects/developers/QA, you'll end up in a constant battle between technical testing investments and new features.

deathanatos · 3 years ago
I don't disagree with you. In fact, I think it's just a restatement of the PO's job description.

But POs who are technical enough to understand the system well enough to know what its requirements are? Empirically, unicorns.

clavalle · 3 years ago
This is a LOT to put on a PO. I hope they have help.
sumedh · 3 years ago
This is why you need a QA team.
5440 · 3 years ago
I review software for at least 3-5 companies per week as part of FDA submission packages. The FDA requires traceability between requirements and validation. While many small companies just use Excel spreadsheets for traceability, the majority of large companies seem to use Jira tickets alongside Confluence. While those aren't the only methods, they represent about 90% of the packages I review.
robertlagrant · 3 years ago
Health tech - we also use this combo. The Jira test-management plugin Xray is pretty good if you need more traceability.
rubidium · 3 years ago
Xray and R4J plugins make it pretty nice in JIRA... as far as traceability goes it's MUCH more user friendly than DOORS.
scruple · 3 years ago
Exactly the same process for us, also in healthcare and medical devices.


spaetzleesser · 3 years ago
I would love to see how other companies do it. I understand the need for traceability but the implementation in my company is just terrible. We have super expensive systems that are very tedious to use. The processes are slow and clunky. There must be a better way.
gourneau · 3 years ago
We have been working on software for FDA submissions as well. We use Jama https://www.jamasoftware.com/ for requirements management and traceability to test cases.
sam_bristow · 3 years ago
I have also used Jama in a couple of companies. One for medical devices and one doing avionics. My experience is that it's quite similar to Jira in that if it's set up well it can work really well. If it's set up poorly it is a massive pain.
jhirshman · 3 years ago
hi, we're trying to build a validated software environment for an ELN tool. I would be interested in learning more about your experience with this software review if you could spare a few minutes -- jason@uncountable.com
stefanoco · 3 years ago
Zooming in on "requirements management" (and out of "developing test cases"), there are a couple of open source projects that specifically address this important branch of software development. I like both approaches and I think they suit different situations. By the way, the creators of these two projects are having useful conversations about aspects of their solutions, so you might want to try both and see which one leads from your point of view.

* https://github.com/doorstop-dev/doorstop

* https://github.com/strictdoc-project/strictdoc

Of course requirements can be linked to test cases and test execution reports, based on a defined and described process.

How to build test cases is another story.

zild3d · 3 years ago
I was at Lockheed Martin for a few years, where Rational DOORS was used. Now I'm at a smaller startup (quite happy to never touch DOORS again).

I think the common answer is that you don't use a requirements management tool, unless it's a massive system with systems engineers whose whole job is to manage requirements.

Some combination of tech specs and tests is the closest you'll get. Going back to review the original tech spec (design doc, etc.) of a feature is a good way to understand some of the requirements, but depending on the culture it may be out of date.

Good tests are a bit closer to living requirements: they can serve to document the expected behavior, and they check the system for that behavior.

jcon321 · 3 years ago
GitLab. Just use Issues; you can do everything with the free tier. (It's called the "Issues workflow" - GitLab goes a little overboard, though; I'd look at pictures of people's issue lists to get examples.)

My opinion would be to not use all the fancy features that automatically tie issues to merge requests, releases, epics, pipelines, etc. It's way too much for a small team that is not doing any kind of formal management.

Just use some basic labels, like "bug" or "feature", and then use labels to denote where issues are in the cycle, such as "sprinted", "needs testing", etc. You can use the Boards feature if you want something nice to look at, and you can even assign weights and estimates.

You can tie all the issues of a current sprint to a milestone, call the milestone a version or whatever, and set a date. Now you have a history of features/bugs worked on for each version.

In terms of testing, obviously automated tests are best and should just be built into every requirement. Sometimes, though, tests must be done manually; in that case, attach a Word doc or use the comments on an issue for the "test plan".

lovehatesoft · 3 years ago
If possible, could I get your opinion on a specific example? In my current situation, I was asked to add a feature which required a few (Java) classes. So -

* It seems like this would have been a milestone?

* So then maybe a few issues for the different classes or requirements?

* For each issue, after/during development I would note what tests are needed, maybe in the comments section of the issue? Maybe in the description?

* And then automated tests using junit?

jcon321 · 3 years ago
I don't know your deployment schedule or rules. I represent milestones as groups of independent issues (bug fixes or new features) that all go out together as a release. I don't use milestones as a group of issues that represent one requirement - that would be an epic. Epics are part of the paid version; however, there's no reason you couldn't use milestones this way.

If you have a requirement (no matter how big or small), I'd treat that as one issue (regardless of how many Java classes or lines of code need modifying). If the issue is complex, then within the issue's description you can use markdown (like checkboxes or bullet points) to identify sub-requirements. However, if you can break that large requirement into functional changes that could exist/be deployed separately, then I'd probably create multiple independent issues with some type of common identifier in the issue names (or use your interpretation of milestones and put all those issues into one).

If you use gitlab as your git repository then tying an issue to a merge request is easy and it would then show you the diff (aka all the changes to source code) that the issue required for implementation.

In terms of tests, same kind of answer - I don't know your rules. Every issue should have a test plan; using markdown in the issue's description is perhaps the easiest way to convey it. If you automate the test using JUnit, then the test plan may be nothing more than "make sure test xyz from JUnit passes"; if it's a manual test, then the issue's description can have a list of steps in markdown.

lmeyerov · 3 years ago
Each issue can be anything from a one-line fix to a week-long or even two-week thing. But by that point it should probably be a meta-issue that ties together other issues, or an issue with inline subtasks (GitHub has checkboxes).

Features cut across code, so there's no 1-1 mapping with classes. Tests are generally self-documenting and land alongside the feature they are for. You can document them further, but likely either in a comment in the issue/PR if technically interesting, or in a separate ~wiki doc as part of a broader specification.
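One lightweight way to make that self-documentation explicit, sketched with JUnit (the issue number, CsvExporter, and Record are made-up examples, not anyone's real tracker or API): name the requirement and its tracker issue in the test itself, so the suite doubles as a traceability record.

    import static org.junit.jupiter.api.Assertions.*;

    import java.util.List;
    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    class CsvExportTest {

        @Test
        @DisplayName("Issue #123: exported CSV starts with a header row")  // hypothetical issue
        void exportedCsvStartsWithHeaderRow() {
            // CsvExporter and Record stand in for whatever the issue's feature introduced
            String csv = new CsvExporter().export(List.of(new Record("a", 1)));
            assertTrue(csv.startsWith("name,count"));
        }
    }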

Ideally each commit is valid and passes tests (see "conventional commits"), and each issue/PR has accompanying tests, whether for a new feature or a bugfix. Particular test frameworks change every year.