10-Step Software Development Process Recipe
Working with enterprise clients on larger projects requires long-term development, but it also requires some work alongside development to make sure the development is being done right. It is often the case that companies get stuck developing in the wrong direction, or that in the end the client does not even know what they got for their money.
Unfortunately, I learned about this the hard way (incidentally, the hard way is the best way to learn things, as the message really sticks, but it is also the most expensive way!). We have had a few projects where we would have meetings, agree on features, work on them, deliver, test, etc.
However, as the project went on for a long time, after a while we could not keep track of what was done, how it worked, what the technical decisions were, what the potential technical debt was, etc.
Being engineers, we debugged the development process, broke it into smaller pieces and handled each of them.
10-step software development process
We have come up with a 10-step software development process that we use to deliver. Please take the word “deliver” here in a broad sense, because it can literally mean delivering anything, but most often it means “deliver a feature”.
This process can be applied both to the waterfall model and to agile methodology. For the purposes of this post I’d prefer to keep it connected to agile, as that is more often the case these days.
1. Write a functional specification of what needs to be done and have it verified by the product owner.
2. Working with the development team, write a technical specification describing how this feature should be built.
3. Develop the feature.
4. Write automated tests; without them the feature is not complete.
5. Send a pull request for code review.
6. Deploy to development environment for QA testing.
7. Deploy to staging environment for QA and client testing.
8. UAT on staging environment.
9. Update functional specification and technical documentation.
10. Deploy to production environment.
The developers among you will notice that only a small part of actually delivering a feature is development (in our case, steps 3 and 4). Much of delivering a feature is about testing, deploying, getting the client’s acceptance and documenting.
We have created a checklist that you can use on your projects; feel free to download it here: TODO: DOWNLOAD
We can walk through these steps using a basic example.
Example feature: Pet management in an application about users’ pets
Let’s say we are building an application in which our users want to keep track of data about their pets. The data can be anything from height and weight to vet visit results; the specifics are not relevant for this example.
Let us also say that the user story that needs to be developed in this sprint is pet management. The user needs to be able to add pets, edit pets, delete pets, and perhaps search through their list of pets.
Let’s see how the process would be applied here.
1. Functional specification
Very often, a user story will be defined just like we defined it in the previous paragraph. And it is technically enough for a developer to build an implementation of it. But would it really be the implementation that the client needed?
The development team can always defend their point of view by saying that the task was not well defined and that a redo can be done in the next sprint, but… is it possible to prevent this in the first place?
If we were to write up a functional specification, we would need to answer all of the questions that this example raises, such as:
– What attributes does a pet have (name, age or date of birth and in which format, an image and if so, where it is stored, etc.)
– What actions will the user be able to perform (create, update, maybe delete, maybe search, etc.)
– Which user roles can perform which actions
– …
In our last project we’d do this for each user story at the beginning of the sprint to see which of these user stories we could even commit to in the given sprint. We would literally analyze a user story with our product owner until neither of us had any questions left.
There is no right or wrong format here – whatever works for you. It can be literally anything from wireframes to well-defined text instructions – but it has to be exact and cover as many scenarios as possible. The better you describe the feature in this phase, the easier it will be to develop, and the happier the end client will be with the outcome.
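To make this concrete, here is one way the answers to those questions might be captured as a simple data model. This is just an illustrative sketch in TypeScript; the field names, formats and roles are assumptions made for this example, not something the process prescribes.

```typescript
// Illustrative sketch only: a possible pet data model distilled from the
// answers to the functional specification questions above.

type UserRole = "owner" | "viewer";
type PetAction = "create" | "update" | "delete" | "search";

interface Pet {
  id: string;
  name: string;
  dateOfBirth: string;  // assuming the spec chose an ISO 8601 date over a plain "age"
  heightCm?: number;
  weightKg?: number;
  imageUrl?: string;    // assuming images are stored externally and only the URL is kept
}

// "Which user roles can perform which actions", captured as data
const permissions: Record<UserRole, PetAction[]> = {
  owner: ["create", "update", "delete", "search"],
  viewer: ["search"],
};
```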
2. Technical specification
Once we know what needs to be built from step one, the development team takes over. In discussion, the development team should come up with a way to actually deliver this. I emphasize “in discussion” – it is never a good idea for only one person (e.g. the team lead) to do this on their own, because four eyes see better than two. Also, even the mighty Dr. House needed team members to bounce ideas off of. And, in the end, we have found it is a good thing to have everyone involved in this step, as they feel more involved and connected with the feature itself.
The outcome of this discussion should be an ordered list of tasks that are well defined, so that any team member can take them on without additional clarification. Again, the format here is not that relevant, but each task should be something that can be built on its own and make sense in isolation, and it should be clearly described from the “how to do it” point of view.
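As a purely illustrative sketch of what such tasks might translate into, here is a possible endpoint skeleton for the pet management story, assuming a Node.js/Express stack (the process itself does not prescribe any technology); the routes and handler bodies are hypothetical stubs.

```typescript
import express, { Request, Response } from "express";

const router = express.Router();

// Each route roughly corresponds to one self-contained task from the breakdown.

// Task: create a pet
router.post("/pets", (req: Request, res: Response) => {
  res.status(201).json({ todo: "create a pet from req.body" });
});

// Task: edit a pet
router.put("/pets/:id", (req: Request, res: Response) => {
  res.json({ todo: `update pet ${req.params.id}` });
});

// Task: delete a pet
router.delete("/pets/:id", (req: Request, res: Response) => {
  res.status(204).end();
});

// Task: search/list pets
router.get("/pets", (req: Request, res: Response) => {
  res.json({ todo: "return pets matching req.query" });
});

export default router;
```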
3. Develop the feature
Make the magic happen with your fingertips. If some task is not defined well enough, feel free to revisit step 2 and open more technical discussions. However, something that is not technically specified should not be developed. It should first be discussed again, and the conclusions need to go into the specification.
4. Automated testing
The level of automated testing can vary from feature to feature and from project to project, but automated testing should be in place. It is 2024 and having no automated testing is… well, just wrong.
No feature is ready for review without automated tests.
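What those tests look like depends entirely on the stack; as a minimal sketch, assuming a Jest-style test runner and a hypothetical in-memory PetService, tests for the pet management feature could look something like this.

```typescript
import { describe, expect, it } from "@jest/globals";

// Hypothetical in-memory service, only here to make the example self-contained.
class PetService {
  private pets = new Map<string, { id: string; name: string }>();

  create(name: string) {
    if (!name.trim()) throw new Error("Pet name must not be empty");
    const pet = { id: String(this.pets.size + 1), name };
    this.pets.set(pet.id, pet);
    return pet;
  }

  delete(id: string): boolean {
    return this.pets.delete(id);
  }
}

describe("pet management", () => {
  it("creates a pet with a valid name", () => {
    const service = new PetService();
    expect(service.create("Rex").name).toBe("Rex");
  });

  it("rejects an empty pet name", () => {
    const service = new PetService();
    expect(() => service.create("   ")).toThrow();
  });

  it("deletes an existing pet", () => {
    const service = new PetService();
    const pet = service.create("Milo");
    expect(service.delete(pet.id)).toBe(true);
  });
});
```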
5. Code review
Code reviews are important. Ideally done by a person with a high level of OCD.
Code reviews are there to catch obvious faults. These can be anything from bad code structure, bad coding style and not following agreed conventions, to missing edge cases or validations (e.g. checking for possible null values). A decent code reviewer should catch all of these.
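As an illustration (with names made up for this example), this is the kind of subtle edge case a reviewer should flag: the first version quietly returns NaN when a pet has no date of birth.

```typescript
interface PetRecord {
  name: string;
  dateOfBirth?: string; // optional: some pets may have no known date of birth
}

// Before review: quietly returns NaN when dateOfBirth is missing
function petAgeUnsafe(pet: PetRecord): number {
  return new Date().getFullYear() - new Date(pet.dateOfBirth!).getFullYear();
}

// After review: the missing value is handled explicitly
function petAge(pet: PetRecord): number | undefined {
  if (!pet.dateOfBirth) return undefined;
  return new Date().getFullYear() - new Date(pet.dateOfBirth).getFullYear();
}
```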
Reviewing code is also a useful skill in itself, because it sharpens your debugging ability. By looking at someone else’s code you immediately start thinking about places where it could be improved or where it could possibly fail. This practice trains your perception and makes you more efficient at both coding and debugging.
6. QA testing on development environment
Ideally the team will have an easy way to deploy the new feature to a development environment, and this is where QA tries to tear it apart. What do I mean by “tear it apart”? There is a joke:
A QA person walks into a bar. Orders a beer. Orders 3 beers. Orders 67890987653567 beers. Orders nothing. Orders -1 beers. Orders a lizard. Walks out without paying.
Even after all your automated tests have passed, you still need someone to click through all scenarios in different browsers, on different devices etc. Only after QA verifies that something is OK should it be shown to the client.
7. QA and client testing on staging
If QA has verified that the new feature works in the development environment, it should go to staging.
The staging environment should have a good representation of production data (not necessarily the same, but similar – perhaps an old backup with sensitive information obfuscated).
The QA should do their process on staging with the “real” data. If they verify it works, it can be presented to the client / product owner who can then click through (or have their team click through) the new feature.
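As a minimal sketch of the “obfuscated backup” idea, assuming a PostgreSQL database accessed through the node-postgres (“pg”) library and a hypothetical users table, a staging refresh script might anonymize sensitive columns like this.

```typescript
import { Pool } from "pg";

// Assumes STAGING_DATABASE_URL points at the staging database that was just
// restored from a production backup; table and column names are hypothetical.
async function obfuscateStagingData(): Promise<void> {
  const pool = new Pool({ connectionString: process.env.STAGING_DATABASE_URL });
  try {
    // Replace real e-mail addresses and names with deterministic placeholders
    await pool.query(
      `UPDATE users
          SET email = 'user_' || id || '@example.com',
              full_name = 'Test User ' || id`
    );
  } finally {
    await pool.end();
  }
}

obfuscateStagingData().catch((err) => {
  console.error("Obfuscation failed:", err);
  process.exit(1);
});
```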
8. UAT on staging environment
The client should be given enough time to test.
Depending on how you work, development may take place in one sprint and the client then tests in the next sprint while the developers work on other features (which I don’t prefer). I prefer it when testing is done in the same sprint, so that at the end of the sprint we can do user acceptance testing from the client’s side.
I often see even experienced companies leave this to chance. “We gave them access, we told them to test, they did or didn’t…” And, again, this is a valid statement. However, in my opinion, the company delivering the software should push their clients to test it, and should get an (ideally written) UAT document stating that the client has tested the feature and it works as expected.
Insisting on a confirmation like this, from my experience, shows that you are a serious software development company and you care that what you deliver is in fact what the client needs.
Also, having a document like this can be very useful for any kind of disputes that might happen across the course of a longer project.
Again, the format itself is not important. In our last project we used the minutes from the sprint review meeting. There we would state that the team developed features X, Y, Z, and that the client tested them and agrees they work as expected. At the end of the review the minutes were sent to all parties involved, and that was enough.
But, as informal as it may be, it is highly advisable to have some sort of written confirmation from the client that they have tried the feature out and that it works as expected.
9. Update the functional specification and technical documentation
The project will be using different tools. Some of the functional specification will live on user stories in whatever project management tool is being used (Azure DevOps, JIRA, Trello, GitHub Projects…). Some of it can end up in pull request descriptions or comments. The same goes for the technical specification.
Before a feature is merged to production, you should take the time to update both of these documents. This ensures that the functional specification is always up to date and, at any given time, shows what the features of your software are. In the same way, the technical specification will always show how it works under the hood.
10. Deploy to production
If all the steps above were followed to the letter, then there should be no problem with this one. In theory. And, as we all know, theory and practice are the same thing… in theory.
If we are talking about a B2B application, I love to have some test entity (or entities) which QA can use to try things out in production. If we are talking about a B2C application, I prefer QA to have their own user accounts so they can click around in production.
Developing a feature vs. delivering a feature
Over the years I’ve learned that delivering a feature is so much more than just developing it. This is why I sometimes smile at young developers when they say “That can be done in 2 days tops!”. What they are thinking of is the development. However, they often forget (or don’t know about) all the work that needs to happen before and after.
A friend of mine got hired as a Chief Delivery Officer in an IT company. We used to joke about her position, until we saw how hard she worked overseeing that company’s process (which was similar to this one) and making sure that all the i’s are dotted and all the t’s are crossed before something is shipped.
Delivering a feature >> developing a feature.
Is this too much?
This is a valid question and a tough one to answer.
The clients will often say yes, because they need to be conservative with the budget (especially startups and smaller companies). The developers will often say yes because “clean code is self-documenting”, “everything is in GitHub PRs” etc.
Realistically, not all features on all projects can go through all of these steps. Ideally they should. If they do, you will have slower development, but a more robust software product that is well documented, easy to onboard to, well tested etc. If they don’t, you might end up with technical debt and with bugs in production that can lead to losing clients.
In our experience, having a process like this one has proven extremely useful, and we try to stick to it as much as possible.
Summary
Key takeaways:
– delivering a feature is much more than just developing a feature
– delivering a feature is a process that we have described in 10 steps
– estimates on features should include the whole process of feature delivery, not only feature development
– not all of the steps are needed on every project or feature, but keeping them can provide a more robust, better maintained solution, while skipping them can lead to technical debt, bugs or losing clients