October - 10th agile principle

2019-10-22

Simplicity – the art of maximising the amount of work not done – is essential.

The tenth agile principle is perhaps the most dialectical of all. It is frequently misunderstood and even more often applied incorrectly. To get to the point straightaway, “maximising the amount of work not done” is not about being lazy or leaving things out, but about doing things in the most efficient way possible. The 10th principle is about simplicity in the sense of Occam’s razor, a principle of problem solving. Occam’s razor states that of many alternative solutions to a given problem, the simplest should be chosen. This implies that the solution is properly reduced to its essentials. To quote Albert Einstein: “Everything should be made as simple as possible, but not simpler.” In the software development literature, this is also known as the KISS principle, an acronym for “keep it simple, stupid”. It is a design philosophy that favours simple solutions over complex ones. Because software is complex by nature, this is not an easy task. The difficult part is to judge at what point a solution becomes too complex or too simple, that is, over-engineered or under-engineered. It is hard to strike the right balance, and it takes plenty of knowledge and training.


Why is this principle often misunderstood? Because too many people believe that achieving simplicity is simple. This error of thought includes the idea that it is OK to take shortcuts. However, this is quite far from the truth. Achieving simplicity is hard in almost any area of life, whether it’s art, architecture, science, or engineering. It requires a complete understanding of the problem and of all alternative solutions. Simplicity is also inextricably linked to elegance. “Elegance is achieved when all that is superfluous has been discarded and the human being discovers simplicity and concentration.” (Paulo Coelho) As software engineers, we know that complexity is easy. It comes uninvited and at no cost. Any non-trivial software tends to increase in complexity over time. As new features are added, not only does the problem space grow, but the interactions of new features with existing ones also add to the complexity. For example, think of a class that reads data from a database. Let’s say you add a new event handler to this class. Upon closing the connection, you now also have to take care of deallocating any resources used by the event handlers, in addition to the resources used by the connection itself.
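To make this interaction concrete, here is a minimal Python sketch; all names (DatabaseReader, fetch_rows, handle, release) are hypothetical and only serve to show how one added feature forces changes in an existing one:

```python
class DatabaseReader:
    """Reads rows from a database connection (purely illustrative)."""

    def __init__(self, connection):
        self._connection = connection
        self._handlers = []  # new feature: registered event handlers

    def on_row_read(self, handler):
        # Each handler may hold resources of its own
        # (buffers, file handles, subscriptions, ...).
        self._handlers.append(handler)

    def read_all(self):
        for row in self._connection.fetch_rows():
            for handler in self._handlers:
                handler.handle(row)

    def close(self):
        # The new feature interacts with the existing one: closing the
        # reader must now release the handlers' resources as well,
        # not just those of the connection.
        for handler in self._handlers:
            handler.release()
        self._connection.close()
```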

The advantage of simplicity is self-evident. Simple programs are easier to understand and therefore easier to maintain. They don’t cost a lot of effort and money to implement or to change. However, this doesn’t imply that simple programs are easy to write. When confronted with a new problem, the first solution that comes to mind is often not the best one. Sometimes it takes multiple attempts to find an efficient solution. Think of sorting algorithms, for example. A conceptually simple solution for sorting a series of data is insertion sort, which is acceptable if the series is small. But for large data sets, its quadratic time complexity becomes a problem. So another algorithm with O(n log n) complexity, such as heap sort, is required to finish sorting within an acceptable time. Heap sort is less straightforward, as it makes use of a special data structure, the heap. If the data is so large that it doesn’t fit into memory, yet another, more complicated solution is required; for example, we could build an index while loading chunks of data from disk. From this example it becomes clear that a simple solution can only be used for a correspondingly simple type of problem. More advanced (and increasingly complex) solutions are required for harder problems.
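As a rough sketch of the two approaches in Python (the heap comes from the standard-library heapq module; the function names are my own):

```python
import heapq

def insertion_sort(items):
    """Conceptually simple, but O(n^2): fine only for small inputs."""
    result = list(items)
    for i in range(1, len(result)):
        value = result[i]
        j = i - 1
        while j >= 0 and result[j] > value:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = value
    return result

def heap_sort(items):
    """O(n log n) via a heap; less obvious, but it scales to large inputs."""
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
print(heap_sort([5, 2, 4, 1, 3]))       # [1, 2, 3, 4, 5]
```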

Stemming complexity in software requires concerted action and continuous effort. The key is to know what to omit and what not to. It’s the kind of skill that comes with experience and that distinguishes junior from senior engineers. The most difficult decisions tend to be about architecture at the highest level of abstraction. In a data-centric architecture, for example, this could be the choice of data store, involving considerations such as performance, fault-tolerance, consistency, availability, and partition-tolerance. In a layered monolithic architecture, it could be the data modelling, system boundaries, silos, and the details of the layering. In a microservices architecture, it could be the service granularity and the deployment platform. The next lower level of abstraction is the design level. This typically concerns abstractions for composing functionality from individual classes and modules. Engineers can make use of a catalogue of well-tested and well-described design patterns. However, this sounds easier than it is: for each pattern there are use-case constraints and language-specific caveats. Using design patterns effectively requires a great deal of experience with their implementations in the specific language and platform one is working in. The least difficult level of abstraction is that of individual classes or modules. Ironically, most violations of the simplicity principle are found at this level. This is partly because senior engineers look less frequently into this level of detail. The main reason, however, is the accumulation of software entropy that occurs when a class or module is altered repeatedly.

One of the most frequent and persistent contraventions of the simplicity principle is the violation of YAGNI (“you aren’t gonna need it”). A YAGNI situation arises when a programmer decides to create an implementation that solves more than just the problem stated in the requirements, and that implementation comes at the cost of higher complexity. A typical example would be the creation of an interface that is only ever implemented by a single class, or programming an algorithm in a more generalised way than required. This is often done in anticipation of future requirements. It also happens (a lot) when programmers apply design principles such as SOLID or Domain-driven design in a dogmatic way, i.e. for the sake of design rather than for the sake of function. It could be said that this anti-pattern is always the consequence of a certain mindset: overzealousness or dogmatism. The programmer endeavours “to do things right”, but loses sight of the immediate task objectives. In a team that uses agile methods, this mindset is almost always counterproductive. The agile approach dictates that new features are targeted at a “minimum viable product” (MVP) implementation with just enough functionality to satisfy the requirements. By definition, the MVP is open for later modification and extension.
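A minimal sketch of the single-implementation interface mentioned above; the repository names are made up for illustration only:

```python
from abc import ABC, abstractmethod

# Over-engineered: an abstract interface introduced "just in case",
# even though only one implementation exists or is planned (YAGNI).
class InvoiceRepository(ABC):
    @abstractmethod
    def find_by_id(self, invoice_id: int) -> dict: ...

class SqlInvoiceRepository(InvoiceRepository):
    def find_by_id(self, invoice_id: int) -> dict:
        return {"id": invoice_id}  # placeholder for a real query

# Simpler: a single concrete class is enough for now; the abstraction
# can be extracted later, when a second implementation actually appears.
class InvoiceStore:
    def find_by_id(self, invoice_id: int) -> dict:
        return {"id": invoice_id}  # placeholder for a real query
```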

Finally, I want to mention automation, because automation is a direct path to “maximising the amount of work not done”. Obviously, this is achieved by delegating work to a machine instead of executing it by hand. One would assume that this is second nature to software developers. Yet in reality, the lack of automation in software development boggles the mind. This lack is especially present in three areas: testing, code integration and deployment. Testing is perhaps the sorest spot. How many times have you been waiting for QA to catch up with development? The problem is exacerbated with shorter release cycles, because short cycles do not permit extensive manual testing. While manual testing can probably never be eliminated completely, a lot of it can. Automated tests and monitoring go a long way in spotting problems, but only if they are implemented in a disciplined way and with a high degree of coverage and quality. TDD has the double benefit of creating automated tests and coded specifications at the same time. Code integration and deployment are another big opportunity for automation. There is no point in manually merging features and releases over and over again. There is no point in moving bits and parts to production by hand. It’s a waste of time and it is error-prone. The objective should be continuous integration if not continuous deployment. With state-of-the-art CI/CD systems and some investment in streamlining these workflows, it is possible to save hundreds of hours. Not to mention the stress involved in manual deployments. So, go ahead and automate everything.
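As a small illustration of what such a coded specification can look like, here is a pytest-style test module; apply_discount is a toy stand-in for real production code, not taken from any actual project:

```python
# test_pricing.py -- a coded specification in the TDD sense (pytest-style).
# It runs automatically on every commit instead of being checked by hand.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Toy implementation standing in for the production code under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_applies_ten_percent_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_zero_discount_keeps_price():
    assert apply_discount(99.99, 0) == 99.99

def test_rejects_invalid_percentage():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```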
