I'm a junior dev who recently joined a small team that doesn't seem to have much in place for tracking requirements and how they're tested, and I was wondering if anybody has recommendations.
Specifically, I would like to track what the requirements/specifications are and how we'll test to make sure they're met, which I imagine could be a mix of unit and integration/regression tests? Honestly, if this is even the wrong track to take, I'd appreciate feedback on what we could be doing instead.
I used IBM Rational DOORS at a previous job and thought it really helped with this, but with a small team I don't think it's likely they'll spring for it. Are there open source options out there, or something else that's easy? I thought we could maybe keep track in a spreadsheet (to mimic DOORS?) or some other file, but I'm sure there would be issues with that as we added to it. Thanks for any feedback!
Agile iteration is just as much about how you carve up work as it is about how you decide what to do next. For example, you could break up a task into the cases it handles:
> WidgetX handles foobar in main case
> WidgetX handles foobar when exception case arises (More Foo, than Bar)
> WidgetX works like <expected> when zero WidgetY present
Those could be 3 separate iterations on the same software, each fully tested and integrated individually and accumulated over time. And the feedback loop can come internally, as in "How does it function alongside all the other requirements?" and "Is it contributing to problems achieving that goal?"
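To make that concrete, each of those cases could land as its own automated test in the iteration that delivers it. Here is a minimal JUnit 5 sketch; WidgetX, WidgetY, Foobar, and the Result values are made-up stand-ins (with throwaway stubs so the example compiles), not anything from the original post:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class WidgetXFoobarTest {

        // Throwaway stubs standing in for the real production classes.
        enum Result { OK, DEGRADED, NOOP }
        record WidgetY() {}
        record Foobar(boolean moreFooThanBar) {}
        static class WidgetX {
            private final List<WidgetY> widgetYs;
            WidgetX(List<WidgetY> widgetYs) { this.widgetYs = widgetYs; }
            Result handleFoobar(Foobar f) {
                if (widgetYs.isEmpty()) return Result.NOOP;        // zero WidgetY present
                return f.moreFooThanBar() ? Result.DEGRADED : Result.OK;
            }
        }

        @Test
        void handlesFoobarInMainCase() {
            assertEquals(Result.OK,
                    new WidgetX(List.of(new WidgetY())).handleFoobar(new Foobar(false)));
        }

        @Test
        void handlesFoobarWhenMoreFooThanBarExceptionCaseArises() {
            assertEquals(Result.DEGRADED,
                    new WidgetX(List.of(new WidgetY())).handleFoobar(new Foobar(true)));
        }

        @Test
        void behavesAsExpectedWithZeroWidgetYPresent() {
            assertEquals(Result.NOOP,
                    new WidgetX(List.of()).handleFoobar(new Foobar(false)));
        }
    }

Each test ships with the iteration that makes it pass, so the suite accumulates alongside the requirements.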
We largely used waterfall in GEOINT and I think it was a great match; our processes started to break down and fail when the government started to insist we embrace Agile methodologies to emulate commercial best practices. Software capabilities of ground processing systems are at least somewhat intrinsically coupled to the hardware capabilities of the sensor platforms, and those are known and planned years in advance and effectively immutable once a vehicle is in orbit. The algorithmic capabilities are largely dictated by physics, not by user feedback. When user feedback is critical, e.g. UI components, by all means, be Agile. But if you're developing something like the control software for a thruster system, and the physical capabilities and limitations of the thruster system are known in advance and not subject to user feedback, use waterfall. You have hard requirements, so don't pretend you don't.
https://www.mindprod.com/jgloss/unmain.html
If you're working within a safety-critical industry and want to do Agile, you'll typically break down high-level requirements into software requirements while you are developing, closing/formalizing the requirements just moments before freezing the code and the technical file / design documentation.
It's a difficult thing to practice agile in such an industry, because it requires a lot of control over what the team is changing and working on, at all times, but it can be done with great benefits over waterfall as well.
I've always wanted to break from that approach toward something a little more nimble, probably through tooling, but I can't see Agile working in functional safety without some very specific tools to assist, which I have yet to see formulated and developed for anything at scale. Also, there are key milestones where you really need to have everything resolved before you start the next phase, so maybe sprints, I don't know.
The thing about doing waterfall/V-model is that, if done correctly, there is little chance you get to the final Pre-Start Safety Review/FSA 3, or whatever you do before introducing the hazard consequences to humans, and discover a flaw that kicks you back 6 or 12 months in the design/validation/verification process. This while everyone else stands around and waits, because they are ready and their bits are good to go, and now you are holding them all up. Not a happy day if that occurs.
FS relies on a high degree of traceability and on testing the software, in its entirety, as it will be used (as best as possible).
So I'm not sure how Agile could work in this context, or at least beyond the initial hazard and risk/requirements definition life cycle phases.
FS is one of those things where the progress you can claim is really only as far as your last lagging item in the engineering sequence of events. The standard expects you to close out certain phases before moving on to subsequent ones. In practice it's a lot messier than that unless extreme discipline is maintained.
(To give an idea of how messy it can get in reality, and how you have to find ways to meet the traceability expectations, sometimes in retrospect: on the last FS project where I was responsible for the design, we were 2.5 years in and still waiting for the owner to issue us their safety requirements. We had to run on a guess and progress speculatively. Luckily we were 95%+ correct with our guesses when reconciled against what finally arrived as requirements.)
But normally, racing ahead on some items is a little pointless and likely counterproductive, unless you're just prototyping a proof-of-concept system/architecture or doing a similar activity. You just end up repeating work, and then you also have extra historical info floating around and there's the possibility that something that was almost right but is no longer current gets sucked into play, and so on. Doc control and revision control are always critical.
Background: I am a TUV certified FS Eng, I have designed/delivered multiple safety systems, mainly to IEC 61511 (process) or IEC 62061 (machinery).
I like issue tracking that is central to code browsing/change request flows (e.g. GitHub Issues). These issues can become code change requests to the requirements-testing code, then to the implementation code, and then be accepted and become part of the project. As products mature, product ownership folks must periodically review and prune existing requirements they no longer care about, and devs can then refactor as desired.
I don't like overwrought methodologies built around external issue trackers. I don't like tests that are overly concerned with implementation detail or don't have any clear connection to a requirement that product ownership actually cares about. "Can we remove this?" "Who knows, here's a test from 2012 that needs that, but no idea who uses it." "How's the sprint board looking?" "Everything is slipping like usual."
- we track work (doesn't matter where); each story has a list of "acceptance criteria", for example: 'if a user logs in, there's a big red button in the middle of the screen, and if the user clicks on it, then it turns green'
- there's one pull request per story
- each pull request contains end-to-end (or other, but mostly e2e) tests that prove that all ACs are addressed; for example, the test logs in as a user, finds the button on the screen, clicks it, then checks whether it turned green (see the sketch after this list)
- even side effects like outgoing emails are verified
- if the reviewers can't find tests that prove that the ACs are met, then the PR is not merged
- practically no manual testing as anything that a manual tester would do is likely covered with automated tests
- no QA team
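To make the first AC above concrete, the proof the reviewers look for could be an end-to-end test along these lines. This is only a sketch using JUnit 5 and Selenium; the URL, element ids, credentials, and exact color strings are invented, and the parent comment doesn't say which browser-automation stack is actually in use:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    class BigRedButtonAcceptanceTest {

        private WebDriver driver;

        @BeforeEach
        void logInAsUser() {
            driver = new ChromeDriver();
            driver.get("https://app.example.test/login");             // invented URL
            driver.findElement(By.name("username")).sendKeys("demo"); // invented test account
            driver.findElement(By.name("password")).sendKeys("demo");
            driver.findElement(By.id("login")).click();
        }

        @Test
        void bigRedButtonTurnsGreenWhenClicked() {
            // AC: after login there is a big red button in the middle of the screen
            WebElement button = driver.findElement(By.id("big-button")); // invented id
            assertEquals("rgba(255, 0, 0, 1)", button.getCssValue("background-color"));

            // AC: clicking it turns it green
            button.click();
            assertEquals("rgba(0, 128, 0, 1)", button.getCssValue("background-color"));
        }

        @AfterEach
        void tearDown() {
            driver.quit();
        }
    }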
And we have a system that provides us with a full report of all the tests and the links between tests and tickets.
We run all the tests for all the pull requests; that's currently something like 5000 end-to-end tests (that exercise the whole system) and many more tests of other types. One test run for one PR requires around 50 hours of CPU time to finish, so we use pretty big servers.
All this might sound a bit tedious, but it enables practically CI/CD for a medical system. The test suite is the most complete and valid specification of the system.
(we're hiring :) )
I don't know what kind of application you produce, but how do you automate user interactions?
Also, I am unsure about the whole "testing part", especially running all the tests for each PR on typical projects.
The PO has to make the hard decisions about what to work on and when. He/she must understand the product deeply and be able to make those calls. The PO should also be able to test the system in order to accept the changes.
Furthermore, you don't really need endless lists of requirements. The most important thing to know is what you have to work on next.
Moreover, if your PO can't define the goals, and what needs to be tested to get there, well you have a problem. Assuming the team is committed to some form of Agile and you have such a thing as a PO.
However, I also disagree with the main thrust of this comment. A PO should have responsibility, sure. But if that gets translated into an environment where junior devs on the team are expected not to know the requirements, or not to be able to track them, then you no longer have a team. You have a group with overseers and minions.
There's a gray area between responsibility and democracy. Good luck navigating.
In some work environments, there may be unspoken requirements, or requirements that the people who want the work done don't know they have.
For example, in an online shopping business the head of marketing wants to be able to allocate a free gift to every customer's first order. That's a nice simple business requirement, clearly expressed and straight from the user's mouth.
But there are a bunch of other requirements:
* If the gift item is out of stock, it should not appear as a missing item on the shipping manifest
* If every other item is out of stock, we should not send a shipment with only the gift.
* If we miss the gift from their first order, we should include it in their second order.
* The weight of an order should not include the gift when calculating the shipping charge for the customer, but should include it when printing the shipping label.
* If the first order the customer places is for a backordered item, and the second order they place will arrive before their 'first' order, the gift should be removed from the 'first' order and added to the 'second' order, unless the development cost of that feature is greater than $3000 in which case never mind.
* The customer should not be charged for the gift.
* If the gift item is also available for paid purchase, orders with a mix of gift and paid items should behave sensibly with regard to all the features above.
* Everything above should hold true even if the gift scheme is ended between the customer checking out and their order being dispatched.
* The system should be secure, not allowing hackers to get multiple free gifts, or to get arbitrary items for free.
* The software involved in this should not add more than, say, half a second to the checkout process. Ideally a lot less than that.
Who is responsible for turning the head of marketing's broad requirement into that list of many more, much narrower requirements?
Depending on the organisation it could be a business analyst, a product owner, a project manager, an engineer as part of planning the work, an engineer as part of the implementation, or just YOLO into production and wait for the unspoken requirements to appear as bug reports.
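Whoever does that translation, once the narrow requirements are written down, most of them are cheap to pin as automated checks. A rough JUnit 5 sketch for two of them, using an invented toy domain model (Item and Order here are placeholders, not anyone's real code):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    class FreeGiftRequirementsTest {

        // Toy stand-ins for the shop's real domain model, just to keep the sketch self-contained.
        record Item(String sku, BigDecimal price, int weightGrams, boolean isGift) {}
        record Order(List<Item> items) {
            BigDecimal customerTotal() {
                return items.stream().filter(i -> !i.isGift())
                        .map(Item::price).reduce(BigDecimal.ZERO, BigDecimal::add);
            }
            int chargeableWeightGrams() {   // basis for the customer's shipping charge
                return items.stream().filter(i -> !i.isGift()).mapToInt(Item::weightGrams).sum();
            }
            int labelWeightGrams() {        // printed on the shipping label
                return items.stream().mapToInt(Item::weightGrams).sum();
            }
        }

        private final Item book = new Item("BOOK-1", new BigDecimal("20.00"), 500, false);
        private final Item gift = new Item("GIFT-1", new BigDecimal("0.00"), 100, true);

        @Test
        void customerIsNotChargedForTheGift() {
            assertEquals(new BigDecimal("20.00"), new Order(List.of(book, gift)).customerTotal());
        }

        @Test
        void giftWeightExcludedFromShippingChargeButIncludedOnLabel() {
            Order order = new Order(List.of(book, gift));
            assertEquals(500, order.chargeableWeightGrams());
            assertEquals(600, order.labelWeightGrams());
        }
    }

The point isn't the toy model; it's that each bullet above can map to one small, reviewable test.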
Or maybe I just need to do better testing myself? There are no code reviews around here, or much emphasis on writing issues, or any emphasis on testing that I've noticed. So it's kind of tough figuring out what I can do.
One I haven't seen mentioned yet: when Product is accountable and responsible for testing the outputs, they will understand the effort required and can therefore prioritize investment in testable systems and the associated test automation.
When those aspects are punted over to architects/developers/QA, you'll end up in a constant battle between technical testing investments and new features.
But POs who understand the system deeply enough to know what its requirements actually are turn out, empirically, to be unicorns.
* https://github.com/doorstop-dev/doorstop
* https://github.com/strictdoc-project/strictdoc
Of course requirements can be linked to test cases and test execution reports, based on a defined and described process.
How to build test cases is another story.
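For a small team without a dedicated tool, one crude way to keep that requirement-to-test link is to tag tests with requirement IDs and let the test report double as the execution record. A JUnit 5 sketch (the REQ-* IDs, test names, and behavior are invented for illustration; neither doorstop nor strictdoc mandates this pattern):

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class RequirementTaggingExampleTest {

        @Test
        @Tag("REQ-0042") // invented requirement ID, as tracked in your requirements doc/tool
        void rejectsLoginAfterThreeFailedAttempts() {
            boolean lockedOut = true; // placeholder: a real test would drive the system under test
            assertTrue(lockedOut);
        }

        @Test
        @Tag("REQ-0043")
        void auditLogRecordsEachFailedLoginAttempt() {
            boolean recorded = true; // placeholder
            assertTrue(recorded);
        }
    }

Most JUnit 5 runners can then filter or report by tag (e.g. Maven Surefire's groups setting or Gradle's includeTags), which gives you a rough trace from requirement ID to passing tests.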
I think the common answer is that you don't use a requirements management tool, unless it's a massive system with Systems Engineers whose whole job is to manage requirements.
Some combination of tech specs and tests are the closest you'll get. Going back to review the original tech spec (design doc, etc) of a feature is a good way to understand some of the requirements, but depending on the culture it may be out of date.
Good tests are a bit closer to living requirements. They can serve to document the expected behavior and to check the system for that behavior.
My opinion would be to not use all the fancy features that automatically tie issues to merge requests, releases, epics, pipelines, etc. It's way too much for a small team that is not doing any type of management.
Just use some basic labels like "bug" or "feature", and then use labels to denote where items are in the cycle, such as "sprinted" or "needs testing". You can use the Boards feature if you want something nice to look at, and you can even assign weights and estimates.
You can tie all the issues of the current sprint to a milestone, call the milestone a version or whatever, and set a date. Now you have a history of the features/bugs worked on for a version.
In terms of testing, obviously automated tests are best and should just be built into every requirement. Sometimes, though, tests must be done manually, and in that case attach a Word doc or use the comments feature on an issue for the "test plan".
* It seems like this would have been a milestone?
* So then maybe a few issues for the different classes or requirements?
* For each issue, after/during development I would note what tests are needed, maybe in the comments section of the issue? Maybe in the description?
* And then automated tests using junit?
If you have a requirement (doesn't matter how big or small), I'd treat that as one issue (regardless of how many Java classes or lines of code need modifying). If the issue is complex, then within the issue's description you can use markdown (like checkboxes or bullet points) to identify subset requirements. However, if you can break that large requirement into functional changes that could exist/be deployed separately, then I'd probably create multiple independent issues with some type of common identifier in the issue names (or use your interpretation of milestones and put all those issues into one).
If you use GitLab as your git repository, then tying an issue to a merge request is easy, and it would then show you the diff (aka all the changes to source code) that the issue required for implementation.
In terms of tests, same kind of answer - I don't know your rules. Every issue should have a test plan; perhaps using markdown in the issue's description would convey that test plan the easiest. If you automate the test using junit, then the test plan is probably nothing more than "make sure test xyz from junit passes"; if it's a manual test, then the issue's description can have a list of steps in markdown.
Features cut across code, so there's no 1:1 mapping with classes. Tests are generally self-documenting and land alongside the feature they are for. You can document them further, but likely as either a comment in the issue/PR if technically interesting, or in a separate ~wiki doc as part of a broader specification.
Ideally each commit is valid and passes tests (see "conventional commits"), and each issue/PR has accompanying tests, whether for a new feature or a bugfix. Particular test frameworks change every year.