If you’ve been involved in software development projects, you’ve likely come across the waterfall methodology, also known as the traditional software development methodology. The waterfall methodology has been in use on software projects for over 30 years and counting. It is the fundamental model that underlies most software projects; other methodologies build on most of its phases. Its widespread use does not, however, mean that it has been satisfactory.
The waterfall methodology is often deemed inappropriate, especially in unpredictable and complex organizations. This article focuses on the shortcomings that make the waterfall methodology unsuitable in agile environments. Understanding these shortcomings will help the analyst enjoy its benefits wherever it is applied (and yes, it does have benefits).
The waterfall methodology structures software development in six phases: the Requirements Definition Phase, Specification Phase, Design Phase, Implementation and Testing Phase, Integration and Testing Phase, and Maintenance Phase. With the waterfall methodology, one phase has to be completed before the next begins, with the output of one phase serving as input to the next. Phases do not overlap in the waterfall model. Backtracking is frowned upon because it introduces project delays and high costs, and can ultimately lead to project failure.
The waterfall methodology is easy to understand and use, and can be very effective for small projects where requirements are understood and the product definition is stable.
The fundamental principles of the waterfall methodology are:
1. You always know your goal at the beginning of the project.
2. You can proceed along a straight line towards your goal.
3. You can deliver a complete and correct system.
In reality, however, there is no guarantee that any of these assumptions will hold true. This leads us to the fallacies of the waterfall methodology.
1. Fallacy of Developing Systems with Limited User Involvement
After the initial user involvement in the requirements definition phase, the user has to wait until the end of the life cycle to see any working software. This drastically increases the risk of non-acceptance and project delays.
How can an accurate specification be possible without continuous user involvement? In some cases, customers only really know what they want when they see what they don’t want. Even if they answer the questions we ask, they may lack information on the context in which the information is being sought, causing them to provide less than is actually needed. Responsibility for the development of a new system should never be delegated to technical experts (or analysts) alone, since most of the decisions that need to be made during the project are organizational, not technical.
2. The Fallacy of Accurate Specifications
As discussed in previous posts, the analyst does not always begin a project with clearly stated objectives. Analysts are often confronted with messy situations. They often have to make sense of what the problem is before providing recommendations on how it can be solved. Analysts may also be faced with vague, ambiguous or conflicting requirements that need clarification. Even where the business need has been defined, there’s no guarantee that the analyst will come up with complete and accurate requirements that can be frozen. In addition, no one in the organization, including the analyst, can accurately predict the effectiveness of the “proposed new state” at the beginning of the project (unless, of course, a similar situation has been experienced in the past). Due to these limitations, it is often necessary to iterate between the development phase and the analysis phase. This reality is contrary to the waterfall principle that analysts should specify requirements completely before development begins.
3. Fallacy of the Linear Sequence of Development Tasks
Because specifications can never be 100% accurate the first time, it follows that development tasks will not proceed in a linear fashion either. In most cases, after coding begins, the analyst needs to go back to the users, ask more questions and communicate new feedback or changes to the developers.
4. Fallacy of the Complete System
For systems to maintain their usefulness, they need to be changed even after implementation to address users' changing demands. This implies that the system can never really be “complete”. If systems are not designed to accommodate change, simple changes become overwhelming very quickly. Most waterfall systems are not easy to maintain or change due to the huge amount of documentation that comes with them.
Reference: Developing Information Systems - Concepts, Issues & Practice by Chrisanthi Avgerou and Tony Cornford