This post is a follow-up to Comparing DAD to the Rational Unified Process (RUP) – Part 1. In that post I described in some detail why Disciplined Agile Delivery (DAD) is not “Agile RUP”. DAD is quite different in both approach and content. There are however some very good principles that the Unified Process (UP) incorporates that are not part of mainstream agile methods. This post describes what parts of the UP made it into the DAD process decision framework.
DAD suggests a full-lifecycle approach similar to RUP's. DAD recognizes that, despite some agile rhetoric, projects do indeed go through specific phases. RUP explicitly has four phases: Inception, Elaboration, Construction, and Transition. For reasons that I described in the last post, DAD does not include an explicit Elaboration phase. However, the Elaboration milestone is still present in DAD, as I will describe shortly. As the DAD basic lifecycle diagram shows, DAD has three of the four RUP phases.
- The Inception phase. An important aspect of DAD is its explicit inclusion of an Inception phase where project initiation activities occur. As Scott Ambler says in one of his posts, “Although phase tends to be a swear word within the agile community, the reality is that the vast majority of teams do some up front work at the beginning of a project. While some people will mistakenly refer to this effort as Sprint/Iteration 0 it is easy to observe that on average this effort takes longer than the general perception (the 2009 Agile Project Initiation survey found the average agile team spends 3.9 weeks in Inception)”. So in DAD’s Inception phase (usually one iteration) we do some very lightweight visioning activities to properly frame the project. The milestone for this phase is to obtain “Stakeholder consensus” on how to proceed. In the book we describe various strategies for getting through the Inception phase as quickly as possible, what needs to be done, and how to obtain stakeholder consensus.
- The Construction phase. This phase can be viewed as a set of iterations (Sprints in Scrum parlance) to build increments of the solution. Within each iteration the team applies a hybrid of practices from Scrum, XP, Agile Modeling, Agile Data, and other methods to deliver the solution. DAD recommends a risk-value approach to prioritizing work in the early iterations, which draws from the RUP principle of mitigating risk as early as possible in the project by proving the architecture with a working solution. We therefore balance delivering high-value work with delivering work related to mitigating these architectural risks. Ideally the stories/features we deliver in the early iterations provide functionality related to both high business value and risk mitigation (hence DAD’s “risk-value” lifecycle). It is worthwhile to have a checkpoint at the end of the early iterations to verify that our technical risks have indeed been addressed. DAD has an explicit milestone for this called “Proven architecture”. This is similar to the RUP Elaboration milestone, without risking the confusion that the Elaboration phase often caused for RUP implementations. All agile methods seek to deliver value into the hands of the stakeholders as quickly as possible. In many if not most large enterprises, however, it is difficult to actually deliver new increments of the solution at the end of each iteration. DAD recognizes this reality and assumes that in most cases there will be a number of Construction iterations before the solution is actually deployed to the customer. As we make clear in the book, although this is the classic DAD pattern, you should strive to release your solution on a much more frequent basis in the spirit of achieving the goal of “continuous delivery”. The milestone for the end of Construction is that we have “Sufficient functionality” to deploy to the stakeholders. This is the same milestone as RUP’s Construction milestone.
During the Construction phase it may make sense to periodically review the progress of the project against the vision agreed to in Inception and potentially adjust course. These optional milestones in DAD are referred to as “Project viability”.
- The Transition phase. DAD recognizes that for sophisticated enterprise agile projects, deploying the solution to the stakeholders is often not a trivial exercise. To account for this reality DAD incorporates the RUP Transition phase, which is usually one short iteration. As DAD teams, as well as the enterprise overall, streamline their deployment processes, this phase should become shorter and ideally disappear over time as continuous deployment becomes possible. RUP’s Transition milestone is achieved when the customer is satisfied and self-sufficient. DAD changes this to “Delighted stakeholders”. This is similar to lean’s delighted customers, but we recognize that in an enterprise there are more stakeholders to delight than just customers, production support for instance. One shortcoming of RUP’s Transition phase is that it is not clear when during the phase deployments actually take place. Clearly stakeholders aren’t delighted and satisfied the day the solution goes “live”. There is usually a period of stabilization, tuning, training, etc. before the stakeholders are completely happy. So DAD has a mid-Transition milestone called “Production ready”. Some people formalize this as a “go/no go” decision.
So in summary, DAD frames an agile project within the context of an end-to-end risk-value lifecycle with specific milestones to ensure that the project is progressing appropriately. These checkpoints provide specific opportunities to change course, adapt, and progress into the next phases of the project. While the lifecycle is similar to that of RUP, as described in Part 1 of this post, it is important to realize that the actual work performed within the iterations is quite different and far more agile than on a typical RUP project.
At Scott Ambler + Associates we are getting a lot of inquiries from companies seeking help to move from RUP to the more agile yet disciplined approach that DAD provides.
We recently started a discussion group on LinkedIn called, you guessed it, Disciplined Agile Delivery (DAD). You’re welcome to join and get involved in the conversation.
Last week I was in Moscow to do a workshop on DAD. Askhat Urazbaev, known for starting the first Agile User Group in Russia, attended. He asked some good questions, including “Why is it called the Disciplined Agile Delivery framework? Are you suggesting that existing agile techniques are not disciplined?” I have heard this question a lot. As we describe in our book, existing agile methods such as Scrum and XP clearly require discipline to be effective, in fact more discipline than traditional approaches. However, this discipline is focused on practices used within the team to improve quality and meet the commitments made to the customer. For example, it certainly requires discipline to do test-driven development and continuous integration, to optimize team performance, and to recognize and deal with technical debt via refactoring.
In DAD, we support all these practices, but in addition we suggest that discipline needs to extend to other areas such as:
- giving adequate attention to forming an overall project vision before beginning Construction iterations
- framing the project within a lifecycle
- agreeing on appropriate lightweight milestones
- building enterprise awareness, not just team awareness
- adopting agile metrics and governance at the enterprise level
This week Scott and I are speaking at Agile East in Orlando, and I just attended an excellent talk by Jim Highsmith regarding adaptive leadership on agile projects. He referred to mainstream agile as “Agile 101” and to addressing some of these larger issues as “Mature Agile”. This is very similar to the concept that we are trying to get across with the term Disciplined. Mainstream agile methods address the discipline required to deliver value via Construction iterations (or, with lean, without iterations). DAD extends that discipline to the full lifecycle and the enterprise.
We have written a number of posts on this blog in the “Discipline” category that you may find interesting which discuss some of these topics in more detail.
Early in the lifecycle, during the Inception phase, disciplined agile teams will invest some time in initial requirements envisioning and initial architecture envisioning. One of the issues to consider as part of requirements envisioning is identifying non-functional requirements (NFRs), also called quality of service (QoS) or simply quality requirements. The NFRs will drive many of the technical decisions that you make when envisioning your initial architectural strategy. These NFRs should be captured somewhere (in a previous blog I explored the options available to you) and implemented during Construction. It isn’t sufficient to simply implement the NFRs; you must also validate that you have done so appropriately. In this blog posting I overview a collection of agile strategies that you can apply to validate NFRs.
A mainstay of agile validation is the philosophy of whole team testing. The basic idea is that the team itself is responsible for validating its own work; team members don’t simply write some code and then throw it over the wall to testers to validate. For organizations new to agile this means that testers sit side-by-side with developers, working together and learning from one another in a collaborative manner. Eventually people become generalizing specialists, T-skilled people, who have sufficient testing skills (among other skills).
Minimally your developers should be performing regression testing to the best of their ability, adopting a continuous integration (CI) strategy in which the regression test suite(s) are run automatically many times a day. Advanced agile teams will take a test-driven development (TDD) approach, where a single test is written just before writing just enough production code to fulfill that test. Regardless of when the development team writes its tests, either before or after writing the production code, some tests will validate functional requirements and some will validate non-functional requirements.
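To make the TDD point concrete, here is a minimal sketch in Python. The password policy is a hypothetical security NFR invented for illustration, not an example from the DAD book: the test is written first, then just enough production code to make it pass.

```python
# A minimal TDD sketch (hypothetical example, not from the DAD book): the test
# below was written first, then just enough production code to make it pass.
# The password policy stands in for a simple security NFR.

def is_strong_password(password: str) -> bool:
    """Production code written only after the test existed."""
    return (
        len(password) >= 8
        and any(c.isdigit() for c in password)
        and any(c.isalpha() for c in password)
    )

def test_password_policy():
    # Security NFR: at least 8 characters, mixing letters and digits.
    assert is_strong_password("s3cretpass")
    assert not is_strong_password("short1")        # too short
    assert not is_strong_password("lettersonly")   # no digit

test_password_policy()
print("password policy tests pass")
```

On a real team this test would live in the automated regression suite and run on every CI build, so the NFR is re-validated many times a day.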
Whole team testing is great in theory, and it is a strategy that I wholeheartedly recommend, but in some situations it proves insufficient. It is wonderful to strive to have teams with sufficient skills to get the job done, but sometimes the situation is too complex to allow that. Some types of NFRs require significant expertise to address properly: NFRs pertaining to security, usability, and reliability, for example. Validating these types of requirements, or worse yet even identifying them, requires skill and sometimes specialized (read: expensive) tooling. It would be a stretch to assume that all of your delivery teams will have this expertise and access to these tools.
Recognizing that whole team testing may not sufficiently address validating NFRs, many organizations will supplement their whole team testing efforts with parallel independent testing. With this approach a delivery team makes its working builds available to a test team on a regular basis, minimally at the end of each iteration, and the testers perform the types of testing on it that the delivery team is either unable or unlikely to perform. Knowing that some classes of NFRs may be missed by the team, independent test teams will look for those types of defects. They will also perform pre-production system integration testing and exploratory testing, to name a few. Parallel independent testing is also common in regulatory compliance environments.
From a verification point of view some agile teams will perform either formal or informal reviews. Experienced agilists prefer to avoid reviews due to their inherently long feedback cycle, which increases the average cost of addressing found defects, in favor of non-solo development strategies such as pair programming and modeling with others. The challenge with non-solo strategies is that managers unfamiliar with agile techniques, or perhaps the real problem is that they’re still overly influenced by disproved traditional theories of yesteryear, believe that non-solo strategies reduce team productivity. When done right non-solo strategies increase overall productivity, but the political battle required to convince management to allow your team to succeed often isn’t worth the trouble.
Another strategy for validating NFRs is code analysis, both dynamic and static. There is a range of analysis tools available to you that can address NFR types such as security, performance, and more. These tools will not only identify potential problems with your code; many of them will also provide summaries of what they found, metrics that you can leverage in your automated project dashboards. This strategy of leveraging tool-generated metrics is a technique which IBM calls Development Intelligence, and it is highly suggested as an enabler of agile governance in the DAD framework. Disciplined agile teams will invoke code analysis tools from their CI scripts to support continuous validation throughout the lifecycle.
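As a toy illustration of tool-generated metrics feeding a dashboard, the sketch below uses Python's standard ast module to compute a simple docstring-coverage figure. The metric and sample source are purely illustrative; a real team would rely on dedicated static-analysis tools.

```python
# A toy static-analysis check, purely illustrative: parse source with Python's
# ast module and report a docstring-coverage metric of the kind an automated
# project dashboard might track. Real teams would use dedicated analysis tools.
import ast

SOURCE = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''

def docstring_coverage(source: str) -> float:
    """Fraction of functions in the source that carry a docstring."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    if not funcs:
        return 1.0
    documented = sum(1 for f in funcs if ast.get_docstring(f) is not None)
    return documented / len(funcs)

print(f"docstring coverage: {docstring_coverage(SOURCE):.0%}")
```

Wired into a CI script, a check like this can fail the build when the metric drops below an agreed threshold, which is the essence of continuous validation.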
Your least effective validation option is end-of-lifecycle testing; in the traditional development world this would be referred to as a testing phase. The problem with this strategy is that you in effect push significant risk, and significant costs, to the end of the lifecycle. It has been known for several decades now that the average cost of fixing defects rises the longer it takes you to identify them, motivating you to adopt the more agile forms of testing that I described earlier. Having said that, I still run into organizations in the process of adopting agile techniques that haven’t really embraced agile, and as a result they still leave most of their testing effort to the least effective time to do such work. If you find yourself in that situation, you will need to validate NFRs in addition to functional requirements.
To summarize, you have many options for validating NFRs on agile delivery teams. The secret is to pick the right one(s) for the situation that you find yourself in. The DAD framework helps to guide you through these important process decisions, describing your options and the trade-offs associated with each one. For a more detailed discussion of agile validation techniques you may find my article Agile Testing and Quality Strategies to be of value.
Non-functional requirements, also known as quality of service (QoS) or technical requirements, are typically system-wide; thus they apply to many, and sometimes all, of your functional requirements. Part of ensuring that your solution is potentially consumable each iteration is ensuring that it fulfills its overall quality goals, including applicable NFRs. This is particularly true with life-critical and mission-critical solutions. Good sources for NFRs include your enterprise architects and operations staff, although any stakeholder is a potential source for NFRs.
Chapter 8 in the Disciplined Agile Delivery book, written by Mark Lines and myself, overviews several strategies for capturing and then implementing NFRs. As your stakeholders tell you about functional requirements they will also describe non-functional requirements (NFRs). These NFRs may describe security access rights, availability requirements, performance concerns, or a host of other issues, as we saw in my blog regarding initial architecture envisioning. There are three basic strategies, which can be combined, for capturing NFRs:
- Technical stories. A technical story is a documentation strategy where the NFR is captured as a separate entity that is meant to be addressed in a single iteration. Technical stories are in effect the NFR equivalent of a user story. For example “The system will be unavailable to end users no more than 30 seconds a week” and “Only the employee, their direct manager, and manager-level human resource people have access to salary information about said employee” are both examples of technical stories.
- Acceptance criteria for individual functional requirements. Part of the strategy of ensuring that a work item is done at the end of an iteration is to verify that it meets all of its acceptance criteria. Many of these acceptance criteria will reflect NFRs specific to an individual usage requirement, such as “Salary information is read-only accessible by the employee”, “Salary information is read-only accessible by their direct manager”, “Salary information is read/write accessible by HR managers”, and “Salary information is not accessible to anyone without specific access rights”. So in effect NFRs are implemented because they become part of your “done” criteria.
- Explicit list. Capture NFRs separately from your work item list in a separate artifact. This provides you with a reminder for the issues to consider when formulating acceptance criteria for your functional requirements. In the Unified Process this artifact was called a supplementary specification.
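The acceptance-criteria strategy lends itself to executable checks. The sketch below is hypothetical (the salary_access function, role names, and IDs are all invented to echo the salary example above); it shows each criterion expressed directly as an assertion.

```python
# A hedged sketch of NFR-driven acceptance criteria as executable checks.
# The salary_access function and role names are hypothetical, echoing the
# salary example above; each criterion becomes a direct assertion.

def salary_access(viewer_role, viewer_id, employee_id, manager_id):
    """Return 'read', 'read/write', or None per the access-rights NFR."""
    if viewer_role == "hr_manager":
        return "read/write"
    if viewer_id == employee_id:
        return "read"
    if viewer_role == "manager" and viewer_id == manager_id:
        return "read"
    return None

# Acceptance criteria, one assertion each:
assert salary_access("employee", "emp1", "emp1", "mgr1") == "read"         # the employee
assert salary_access("manager", "mgr1", "emp1", "mgr1") == "read"          # direct manager
assert salary_access("hr_manager", "hr1", "emp1", "mgr1") == "read/write"  # HR manager
assert salary_access("employee", "emp2", "emp1", "mgr1") is None           # everyone else
print("all salary-access criteria pass")
```

Because the criteria run as part of the regression suite, the NFR is re-verified every time the work item's "done" state is checked.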
Of course a fourth option would be to not capture NFRs at all. In theory I suppose this would work in very simple situations, but it clearly runs a significant risk of the team building a solution that doesn’t meet the operational needs of the stakeholders. This is often a symptom of a team working with only a small subset of its stakeholder types (e.g. working with end users but not operations staff, senior managers, and so on).
So what are the implications for implementing NFRs given the three previous capture strategies? Although in the book we would make this sort of comparison via a table to improve consumability, in this blog posting I will use prose due to width constraints. Let’s consider each one:
- Technical stories. The advantages of this approach are that it is a simple strategy for capturing NFRs and that it works well for solutions with a few NFRs or simple NFRs. But, the vast majority of NFRs are cross-cutting aspects to several functional stories and as a result cannot be implemented within a single iteration. This strategy also runs the risk of teams leaving NFRs to the end of the construction phase, thereby pushing technical risk to the end of the lifecycle where it is most difficult and expensive to address.
- Acceptance criteria. This is a quality-focused approach which makes the complexity of an individual functional requirement apparent, working well with test-driven approaches to development. NFR details are typically identified on a just-in-time (JIT) basis during Construction, fitting in well with a disciplined agile approach. But, because many NFRs are cross-cutting, the same NFR will be captured for many functional requirements. This strategy requires the team to remember and consider all potential NFR issues (see the figure in my previous posting) for each functional requirement. You will still need to consider NFRs as part of your initial architecture efforts, otherwise you risk a major rework effort during the Construction phase because you missed a critical cross-cutting concern.
- Explicit list. This strategy enables you to explore NFRs early in the lifecycle and then address them in your architecture. The list can be used to drive identification of acceptance criteria on a JIT basis. But, NFR documents can become long for complex systems (due to the large number of NFRs). This can be particularly problematic when you have a lot of NFRs that are specific to a small number of functional requirements. Teams lacking in discipline may not write down the non-functional requirements and trust that they will remember to address them when they’re identifying acceptance criteria for individual stories.
The advice that Mark and I give in the book is that in most situations you should maintain an explicit list and then use that to drive identification of acceptance criteria as we’ve found that it’s more efficient and lower risk in the long run. Of course capturing NFRs is only one part of the overall process of addressing them. You will also need to implement and validate them during construction, as well as address them in your architecture.
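A minimal sketch of that combined approach, with hypothetical NFR names and wording: keep an explicit NFR list, then cross-reference it on a JIT basis to derive acceptance criteria for each story.

```python
# A minimal sketch of the combined strategy, with hypothetical NFR names:
# keep an explicit NFR list, then cross-reference it on a JIT basis to derive
# acceptance criteria for each story.

NFR_LIST = {
    "security": "access restricted to authorized roles",
    "performance": "responds within 2 seconds under normal load",
    "availability": "unavailable no more than 30 seconds per week",
}

def acceptance_criteria_for(story, applicable_nfrs):
    """Turn the applicable NFRs into acceptance criteria for one story."""
    return [f"{story}: {NFR_LIST[key]}" for key in applicable_nfrs]

for criterion in acceptance_criteria_for(
        "Display employee salary", ["security", "performance"]):
    print(criterion)
```

The explicit list acts as the single source of truth; each story only records which entries apply, avoiding the duplication problem noted above for the pure acceptance-criteria strategy.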
An important issue which relates to NFRs such as consumability, supportability, and operability is that of deliverable documentation. The start of the project is the best time to identify the required documentation that must be created as part of the overall solution. This potentially includes operations manuals, support manuals, training materials, system overview materials (such as an architecture handbook), and help manuals, to name a few. These deliverable documents will be developed and kept up to date via the continuous documentation practice.
In my next blog posting in this series, I will describe strategies for verifying non-functional requirements.
An important aspect of Disciplined Agile Delivery (DAD) is its explicit inclusion of an Inception phase where project initiation activities occur. Although phase tends to be a swear word within the agile community, the reality is that the vast majority of teams do some up front work at the beginning of a project. While some people will mistakenly refer to this effort as Sprint/Iteration 0, it is easy to observe that on average this effort takes longer than a single iteration (the 2009 Agile Project Initiation survey found that the average agile team spends 3.9 weeks in Inception, and the November 2010 Agile State of the Art survey found that agile teams have Construction iterations of a bit more than 2 weeks in length).
Regardless of terminology, agile teams are doing some up front work. Part of that initial work is identifying an initial technical architecture, typically via some initial architecture envisioning (http://www.agilemodeling.com/essays/initialArchitectureModeling.htm). Because your architecture should be based on actual requirements, otherwise you’re “hacking in the large”, your team will also be doing some initial requirements envisioning (http://www.agilemodeling.com/essays/initialRequirementsModeling.htm) in parallel. Your architecture will be driven in part by functional requirements but more often by the non-functional requirements, also called quality of service (QoS) or simply quality requirements. Some potential quality requirements are depicted in the figure below (this figure is taken from the Disciplined Agile Delivery book but was first published in Agile Architecture Strategies).
Some architects mistakenly believe that you need to do detailed up front modeling to capture these quality requirements and then act upon them. Not only is this untrue, it also proves to be quite risky in practice; see my discussion about Big Modeling Up Front (BMUF) for more details. Disciplined agilists instead will do just enough initial modeling up front and then address the details on a just-in-time (JIT) basis throughout Construction. Of course it’s important to recognize that “just enough” will vary depending on the context of the situation; teams finding themselves at scale will need to do a bit more modeling than those who don’t. It’s also important to recognize that to address non-functional requirements throughout Construction you need to have more than just architectural modeling skills. This topic will be the focus of my next blog posting in this series.
Recently at the Scott W. Ambler + Associates site we received a series of questions from someone who wanted to better understand how architecture issues are addressed on agile project teams. It seemed to me that the questions were sufficiently generic to warrant a public response instead of a private one. So, over the next few days I’m going to write several blog postings here to address the issues that were brought up in the questions. It’s important to note that I will be answering from the point of view of Disciplined Agile Delivery (DAD), and not agile in general. Other agile methods may provide different advice than DAD does on this subject, or no advice at all in some cases.
The goal of the first blog posting in this series is to address several potential misconceptions that appeared in the email. I want to start here so as to lay a sensible foundation for the follow-on postings.
Partial Misconception #1: Agile can be prefixed in iteration 0 by architectural design
I’ve named this a “partial misconception” for a few reasons:
- Disciplined agile teams do some up-front work. This is called the Inception Phase in DAD, although other methods may refer to it as iteration/sprint 0, warm up, initiation, or other names. Up-front work is an explicit part of DAD.
- Iteration 0 isn’t an accurate term. Although I have used this term in the past when discussing project initiation, the reality is that the average agile team spends about a month doing project initiation activities whereas the average iteration length is two weeks. So, Inception really isn’t a proper iteration.
- Inception is more than just architecture. Several activities typically occur at this point in time, particularly initial architecture envisioning, initial requirements envisioning, initial release planning, and putting the team together to name a few things.
Partial Misconception #2: On principle, Agile is against “big” anything
This is also a “partial misconception” for several reasons:
- There is in fact a lot of agile rhetoric against big artifacts. It’s very easy to find agile writings about the challenges with big requirements up front (BRUF), big modeling up front (BMUF) in general, and detailed up front planning for instance.
- Disciplined agile is against needless waste, not “big” things. Many traditional modeling and planning practices prove to be quite wasteful in practice. A serious cultural challenge that the traditional community has is that they are afraid to throw out the bathwater because they assume that the baby will go with it. I believe that Disciplined Agile Delivery (DAD), and Agile Modeling before it, make it quite clear that it’s possible to gain the benefit of thinking before doing without taking on the very serious problems around doing too much thinking before doing. So, have the discipline to keep the thinking “baby” yet discard any needless documentation “bathwater”.
- In rare situations it’s appropriate to create “big” artifacts. Disciplined agilists aim for sufficient artifacts, the size of which will depend on the context of the situation that your team finds itself in. In a recent article for Dr. Dobb’s Journal, Disciplined Agile Architecture, I explicitly explored how initial architecture envisioning on an agile project may result in “big” artifacts in some situations. These situations are very rare, mind you, once you ignore cultural imperatives to create big artifacts driven by people who still haven’t made the jump to a disciplined agile approach, but they do happen. One of the strengths of the DAD process decision framework is that it is goal driven, not prescriptive, and explicitly explores the tradeoffs surrounding the amount of detail to capture and when to do so.
Partial Misconception #3: Refactoring system architecture beyond mid-implementation is much more expensive than refactoring components
Once again, this is a partial misconception. I suspect part of the problem is a lack of understanding of what refactoring is really all about, a recurring problem with experienced traditionalists, and part a lack of understanding of how architecture is addressed by disciplined agile teams. Some thoughts:
- Refactorings are simple, not difficult. The goal of refactoring is to make SMALL changes to your design that improve the quality without changing the semantics of the design in a practical manner. This is true of code refactorings, database refactorings, user interface refactorings, and other types of refactorings. Small changes are inexpensive to make given the appropriate skills, tools, and organizational environment.
- Architectural rework (not refactoring) is often difficult. Rework, or rewrites, are very large changes the goal of which is typically to replace large portions of your solution. Yes, the later in the lifecycle such rework occurs, the more expensive it is likely to be, because you’ve built more on top of the architecture that is now being reworked. This is a general issue, not just an agile one.
- Disciplined agile teams get going in the right direction to begin with. The practice of initial architecture envisioning, which we describe in detail in Chapter 9 of Disciplined Agile Delivery, aims to think through the architectural strategy before getting into construction.
- Disciplined agile teams prove their architecture works early. The first construction milestone, prove the architecture, reduces the risk of architectural rework. The goal is to prove that the architecture works by building a working end-to-end skeleton of the solution which implements critical/difficult technical requirements. This is an agile “fail fast” strategy, or as we say in DAD a “succeed early” strategy, that reduces technical risk on your project. As an aside, including explicit light-weight milestones such as this is one of many agile governance aspects built right into DAD.
- Disciplined agile teams have an architectural role. This role is called Architecture Owner and one of the responsibilities of the person in this role is to guide the team in architectural issues throughout the entire DAD lifecycle.
- There are no guarantees. No matter how smart your approach, there’s still a chance that rework can happen. For instance, you can be mid-way through a project and the vendor of a major architectural component of your solution decides to withdraw it from the market. Or the vendor goes out of business. Or perhaps your firm is taken over by another firm and the new owners decide to inflict, oops I mean bless you with, their architectural strategy. Stuff happens. Once again, this is a general issue, not specifically an agile one.
- Quality decreases the cost of rework. Disciplined agilists will write high-quality code, with a full regression test suite in place, at all times during Construction. It’s easier to rework high quality artifacts compared with low quality artifacts, so if you get stuck having to perform rework at least the pain is minimized. My article Agile Testing and Quality Strategies overviews many techniques.
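The "prove the architecture" idea can be sketched as a walking-skeleton test: one thin end-to-end path through each architectural layer. The layer classes below are stubs invented for illustration, not code from the DAD book.

```python
# A walking-skeleton sketch of the "prove the architecture" milestone: one thin
# end-to-end path through stubbed architectural layers (layer names hypothetical).

class Database:
    """Stubbed persistence layer."""
    def save(self, record):
        self.last = record
        return True

class Service:
    """Stubbed business layer."""
    def __init__(self, db):
        self.db = db
    def register(self, name):
        return self.db.save({"name": name})

class Api:
    """Stubbed presentation layer."""
    def __init__(self, service):
        self.service = service
    def post(self, payload):
        return 201 if self.service.register(payload) else 500

# One critical path exercised end to end through all three layers:
api = Api(Service(Database()))
assert api.post("new user") == 201
print("end-to-end skeleton works")
```

Once each stub is replaced by the real component, the same end-to-end check keeps demonstrating that the architectural strategy holds together.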
In short, disciplined agile teams do what they can to avoid architectural rework in the first place: they have an explicit architecture owner role focused on architectural issues throughout the entire lifecycle, they identify a viable architectural strategy early in the project, they prove that the architectural strategy works early in Construction, and they produce high-quality artifacts throughout the lifecycle that are easier to rework if needed. Combined with continuous documentation practices and a focus on producing artifacts which are just sufficient for the situation at hand, this proves to be far more effective than traditional strategies that assume you require large up-front investments in “big” artifacts, that rely on validation techniques such as architecture reviews instead of the far more concrete feedback of working code, and that often leave quality strategies to the end of the lifecycle (thereby increasing the cost of any rework).
I plan two follow-on blog postings in this series, one exploring how initial architecture envisioning works and one about how to address initial quality requirements (also called non-functional requirements or quality of service requirements) on disciplined agile projects. Stay tuned!
At Scott W. Ambler + Associates we offer a one-day workshop entitled Agile Architecture: A Disciplined Approach that you should consider if you’re interested in this topic. We also offer coaching and mentoring services around agile architecture.