Plutora Blog - Release Management
Software Development Life Cycle (SDLC): Making Sense of the Different Methodologies
Some of you, myself included, may be surprised to hear that there’s more to the SDLC than just waterfall, agile, and DevOps. Yes, it’s true!
Though other methodologies haven’t received as much press recently, it’s still valuable to understand them and know what advantages and disadvantages exist for each. That’s because ultimately these methodologies represent additional tools in your toolbelt. And with knowledge, you’ll have the ability to solve a wider variety of problems.
In today’s post, we’ll take a look at different methodologies and their application. So let’s get started with ol’ faithful, the waterfall model.
The Waterfall Model
The waterfall model was first described by Dr. Winston W. Royce in a 1970 paper. In it, Dr. Royce shared a template that people could use to manage the development of large software systems. His ideas came from years of experience working on spacecraft software and ensuring everything was on time, within budget, and operational.
Sending ships into space didn’t require market research, customer interviews, or anything else that agile methodologies excel at. The requirements didn’t change and were known up front. And though a few prototypes or early testing could have been beneficial, the programs thrived in a waterfall environment.
So let’s take a closer look as to how the waterfall model works. In this model, the software development process is broken down into a sequential flow through various phases. Waterfall requires that each step completes before going on to the next step. Seems pretty rigid, right?
Though the steps can vary, we typically see requirements, design, implementation, verification, and maintenance, with each phase flowing into the next.
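The gating rule at the heart of waterfall can be sketched in a few lines. This is a minimal illustration, not a real project-management tool; the phase names and the `is_phase_complete` callback are assumptions made for the example.

```python
# Illustrative sketch of waterfall's phase gating: later phases
# cannot start until every earlier phase has completed.

PHASES = ["requirements", "design", "implementation", "verification", "maintenance"]

def run_waterfall(is_phase_complete):
    """Advance through the phases strictly in order. Stop at the first
    incomplete phase, mirroring waterfall's no-skipping rule."""
    finished = []
    for phase in PHASES:
        if not is_phase_complete(phase):
            return finished, phase  # blocked here; later phases can't start
        finished.append(phase)
    return finished, None  # everything shipped

# Example: only the first two phases are done, so the project
# is blocked at implementation.
done = {"requirements", "design"}
finished, blocked = run_waterfall(lambda phase: phase in done)
```

Notice that a change in requirements after the `requirements` phase has "completed" has no place in this flow, which is exactly why waterfall pushes such changes into a separate change control process.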
Though the waterfall model receives a lot of hate, certain problems are best solved with this methodology. For example, projects where the end product has been clearly defined or projects that involve hardware upgrades can benefit from a waterfall approach.
But as you may expect, there are disadvantages to this rigidity. First, we must assume that requirements can be frozen in place. This becomes more and more difficult with our ever-changing technical landscape. Additionally, when changes do come up, they end up pushed into a change control process that can cause delays. Also, the fact that testing doesn’t happen until near the end doesn’t sit well with me. Years ago, we’d end up with a hard target date to reach, but the first four phases would run over. Then testing would end up with a smaller and smaller window of time to verify the software works as expected.
The V-Model
The V-model shares qualities with the waterfall model. Essentially, we take the waterfall model, add a few testing steps, and then flip part of it up.
Additionally, we once again don’t start the next phase of the SDLC until the previous phase has completed. This methodology works well for projects that require a lot of control and have clearly defined and unchanging requirements. Typically, it’s too heavy-handed for small projects.
On the plus side, this model emphasizes the need for good testing before the product release. However, we’re once again using a model in which changes in requirements result in heavy process and delays.
The Prototyping Model
With the two previous models, the application doesn’t get early feedback from QA, customers, or any other stakeholders. However, with prototyping, the team focuses on creating early models of the software in order to receive feedback. The prototype typically lacks much of the functionality of the final product but provides the customers with an idea of the future software. Then the team gathers feedback to implement the final product.
Two types of prototyping methodologies exist. First, the throw-away prototyping model will discard any prototyped code before writing the real software product. On the other hand, the evolutionary prototyping model builds on the prototype, turning it into functioning and polished software over time.
One of the problems with the prototyping model involves perception. When the customer sees what seems like mostly working software, they may expect that the project is near completion. However, in reality, the software may still need to be completely written from scratch. That can take a lot of time.
The Spiral Model
The spiral method blends together parts of the waterfall model and either the iterative or prototyping approach. This model uses the same phases as waterfall but separates them with additional planning, risk assessment, and prototyping or iterations. It provides safety for large and complex systems.
These projects typically have a high cost, but they manage risk through their reliance on iterations of the software.
Iterative and Incremental Method
The iterative model feels like mini waterfalls chained together. Instead of designing and gathering the requirements for the whole application up front, we slice the project into functional portions that each progress through the waterfall steps.
This may seem similar to agile, but it has a few differences. First, the iterative approach doesn’t involve external customers during the various phases. Additionally, the scope of each iteration or increment is fixed, similar to requirements in waterfall.
You may also notice similarities to the prototyping model. However, with both iterative and agile, the focus is on delivering fully functional slices of software and not just prototypes.
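The "mini waterfalls" idea can be sketched as a loop over fixed-scope slices, each of which runs through every phase before the next slice begins. The slice names and phase list below are hypothetical, chosen only to make the shape of the model concrete.

```python
# Illustrative sketch of the iterative/incremental model: each
# functional slice has a fixed scope and passes through all phases,
# producing a fully working increment before the next slice starts.

PHASES = ["requirements", "design", "implement", "test"]

def build_incrementally(slices):
    """Run each slice through every phase in order and return the
    shipped increments with a log of the work performed."""
    shipped = []
    for feature in slices:
        log = [f"{phase}:{feature}" for phase in PHASES]
        shipped.append((feature, log))  # a complete, tested increment
    return shipped

increments = build_incrementally(["login", "search"])
```

The key contrast with agile is visible in the structure: the slice list is fixed before the loop starts, and no external customer feedback alters it between iterations.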
The Agile Method
The agile method started with the creation of the “Manifesto for Agile Software Development” (now widely known as the Agile Manifesto) and its twelve principles.
With this methodology, we focus on getting complete yet simple slices of functionality to our customers. We value communication, people, and early feedback. And documentation usually takes a backseat to new features. With agile, changes in requirements don’t cause problems; in fact, they present opportunities. They’re an expected part of the development of software.
However, with the way iterations fall, we typically can’t estimate when a large project will finish. And scaling agile tends to be difficult, though scaled agile frameworks do provide relief. Additionally, just implementing agile on a development team doesn’t complete the transformation to an agile organization. In fact, it requires organizational changes to allow for the self-sufficiency of agile development teams.
DevOps
And now let’s welcome the newest methodology on the block: DevOps. DevOps brings development skills together with operations. This collaboration and sharing of responsibilities helps ensure that the product operates well in production.
How does DevOps do all this? It puts a large focus on automation and total team ownership. With this methodology, we not only work on the functional requirements but we also automate operational requirements like monitoring and validating systems. We automate not only CI/CD but also the creation of production-like systems for development and testing. And we make tools so that software delivery teams can self-service their infrastructure needs.
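Automated monitoring is one of those operational requirements. As a minimal sketch, the check below polls service health endpoints and reports their status; the service names, URLs, and timeout are all hypothetical, and a real team would wire something like this into alerting rather than run it by hand.

```python
# Illustrative sketch: a minimal automated health check, the kind of
# operational task a DevOps team scripts rather than performs manually.

import urllib.request

# Hypothetical services and health endpoints.
SERVICES = {
    "api": "http://localhost:8080/health",
    "web": "http://localhost:3000/health",
}

def check_service(url, timeout=2.0):
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, ...
        return False

def run_checks(services, checker=check_service):
    """Map each service name to its current health status."""
    return {name: checker(url) for name, url in services.items()}
```

Passing the checker in as a parameter keeps the script testable without live services, which is itself a small example of the automation-friendly design DevOps encourages.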
The difficulty in making DevOps work revolves around experience. Most development teams have little experience with application or infrastructure monitoring. They don’t always automate as much as possible. And they might not feel confident in being responsible for the day-to-day operations of the software.
Do We Have to Choose Just One?
At this point, you may think, “Well, this is all great. But I need my teams all to use the same methodology. Otherwise, how will I be able to coordinate larger initiatives across the organization?”
Here’s the problem. If all we have is agile, then that’s what we’ll use. But agile isn’t always the right solution. And we end up forcing all of our miscellaneous pegs into round holes. It usually ends up working—sort of.
So look at each software project and choose the best methodology based on the requirements and experience of the team. And then build off of your decisions to find what works best for your organization.