August - 8th agile principle
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
This principle sheds light on what some would call the dark side of software development. It leads us to discuss team velocity, deadlines, time pressure, overtime, unsustainable work habits and technical debt. There are a few points of interest in the phrasing. The principle states that “agile processes promote sustainable development”, not that “agile processes require or enforce sustainable development”. The implication is that you can very well apply agile processes in an unsustainable way. Obviously, you’re doing something wrong in this case. We will discuss how and why this happens. Furthermore, the principle mentions three roles which are orthogonal to the three roles in Scrum (product owner, scrum master, and developers). The dimension of progress or “pace” is seen from an outside angle, as it affects users, developers, and sponsors in similar ways.
Before we delve further into the interpretation of this principle, we should agree on what maintaining a constant pace means. This is not as straightforward as it sounds. It implies that we can somehow quantify and judge development progress. This in turn ties into the 7th principle that we discussed earlier. To quantify progress, we need to measure the velocity of the development team, or better: how much time it takes to create a releasable version of a certain feature with a defined amount of complexity. Scrum teams assign story points to features. The story points express the degree of complexity. If we add up the story points of all completed features within a single iteration, we obtain the Sprint velocity. It is a measure of the total implementation complexity that is accomplished in a single iteration. Ideally, the Sprint velocity remains roughly the same over subsequent iterations. In reality, it rarely does. We have to explain why and make some adjustments.
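The arithmetic behind sprint velocity can be sketched in a few lines. This is a minimal illustration, assuming a plain mapping of completed stories to their story-point estimates; the story names and numbers are invented.

```python
# Hypothetical completed stories for one iteration, mapped to their
# story-point estimates (all names and numbers are illustrative).
completed_stories = {
    "login form": 3,
    "password reset": 2,
    "audit log": 5,
    "export to CSV": 8,
}

# Sprint velocity: the total implementation complexity (story points)
# accomplished in a single iteration.
sprint_velocity = sum(completed_stories.values())
print(sprint_velocity)  # 18
```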
There are two “natural” (mathematical) reasons for variability in velocity. The first is that the capacity (working time) is not constant. The number of hours a team allots to a single iteration varies because of holidays, sick leave, varying team size and other such reasons. Normalising velocity by actual capacity (story points per person-day rather than per sprint) provides a more realistic metric. The other reason is that story point estimates carry a degree of uncertainty. The uncertainty is high for some types of to-dos, such as user stories with ambiguous wording or bug tickets. Overall, the uncertainty tends to balance out over a large number of to-dos, because some will be estimated too high, others too low. However, if you are interested in measuring performance in a reliable way, this source of variability should be removed. The initial story point estimates should be adjusted after the work is completed and the uncertainty has been eliminated. This is best done ticket by ticket.
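The capacity adjustment can be sketched as follows. This is a minimal example, assuming capacity is measured in person-days and that the adjustment is simply velocity normalised by capacity (story points per person-day); all figures are invented.

```python
# Two hypothetical sprints with different capacities. Raw velocity
# drops in the second sprint, but only because capacity dropped.
sprints = [
    {"velocity": 40, "capacity_person_days": 50},  # full team available
    {"velocity": 24, "capacity_person_days": 30},  # holidays, sick leave
]

# Capacity-adjusted velocity: story points per person-day, which makes
# sprints of different capacity comparable.
for s in sprints:
    s["adjusted"] = s["velocity"] / s["capacity_person_days"]

# Both sprints delivered 0.8 story points per person-day: the drop in
# raw velocity is entirely explained by reduced capacity.
print([s["adjusted"] for s in sprints])  # [0.8, 0.8]
```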
There are also “unnatural” reasons for variability in velocity. The obvious one is people working overtime. This typically happens in high-pressure situations, when a deadline approaches or when things go wrong and require urgent fixes. Overtime increases capacity and therefore output. It does not increase efficiency, though; on the contrary, it decreases efficiency. If people work longer hours, they will eventually fatigue. Fatigued people work more slowly, make more mistakes, and their work results suffer in consequence. If people work overtime for longer periods of time, these effects compound: both team efficiency and work quality drop noticeably. So overtime, and especially sustained overtime, is not sustainable and should be avoided. Another reason for (negative) variability is capacity draining. If team members are withdrawn from the committed scope to work on other things, to sit in meetings, or to prepare reports, this creates a capacity drain that is almost impossible to measure. If it happens regularly, the team cannot maintain a constant velocity.
Another source of variability is so-called impediments, which are well defined in the Scrum methodology. An impediment can be anything that keeps a team, or a team member, from being productive. For example, a team member with a broken keyboard constitutes an impediment, albeit one that is easy to fix. In practice, impediments are often more involved. For example, a team member could have misinterpreted a task due to ambiguous wording and spent time solving the wrong problem. Another team member could be struggling with an unfamiliar technology and in need of guidance. In order to maintain a constant pace, it is important to recognise and remove such impediments as soon as possible. Agile teams therefore follow practices that allow them to communicate, identify, and remove impediments. If a team encounters a high number of impediments during a work iteration, this indicates an underlying problem that should be addressed in the retrospective meeting. The goal is to recognise and remove impediments quickly in the short term and to reduce their frequency in the longer term. Otherwise, it is not possible to maintain a constant velocity.
By far the most common reason for not being able to maintain a constant velocity is technical debt. It is probably more common than all sorts of impediments and other reasons combined. Why technical debt? Because it is like throwing a spanner in the works. It slows down development, complicates maintenance, and prevents innovation. If you plot velocity as a function of time, v = f(t), over the entire life-cycle, v should be almost constant (a horizontal line), whereas technical debt gives it a negative slope. Since technical debt needs time to accumulate, velocity is usually normal at the beginning of a project and then starts to decrease steadily if nothing is done. It is important to note that allowing software entropy to accumulate directly contradicts agile practices. It also makes it impossible to adhere to the 8th principle. How does this happen? Quite simply, the complexity of feature implementation increases little by little, because the amount of accidental complexity, i.e. the type of complexity that does not contribute to functionality, builds up over time. A tell-tale sign is that story point estimates increase over the years. If a story was estimated at 2 story points a year ago and is now estimated at 4, 5, or 8 story points, then perhaps the underlying reason is technical debt.
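The shape of the velocity curve v = f(t) can be illustrated with a toy model. This is purely a sketch under invented parameters (a base velocity eroded linearly by a "debt drag" term), not an empirical claim about real projects.

```python
def velocity(sprint: int, base: float = 20.0, debt_drag: float = 0.5) -> float:
    """Toy velocity after `sprint` iterations: a base velocity reduced
    linearly by accumulated technical debt (never below zero)."""
    return max(base - debt_drag * sprint, 0.0)

# Without debt the curve is a horizontal line; with debt it slopes down.
healthy = [velocity(t, debt_drag=0.0) for t in range(10)]
indebted = [velocity(t, debt_drag=0.5) for t in range(10)]

print(healthy[0], healthy[9])    # 20.0 20.0
print(indebted[0], indebted[9])  # 20.0 15.5
```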
How can this be prevented? The solution is as easy to formulate as it is difficult to put into practice: continuous refactoring. It is much easier to refactor software at the moment a feature that requires design changes is introduced than later down the road. Immediate refactoring may increase the amount of work for that particular feature and possibly delay its release. But it is the only sane way to keep your technical balance out of debt. In the long run, continuous refactoring pays off doubly, because it does not just ensure a constant pace of development. It also ensures that maintenance and technical innovation remain as pain-free as possible.