If you’re working on a distributed creative team, especially one spread across time zones, today’s post from Steve McConnell is a great reminder that you’re not alone in your struggles:
The article is right: there’s currently no substitute for travel at those kinds of intervals. “The half-life of trust is 6 weeks” rings true.
Even in normal times, this is a heavy cost to bear, both for the company and for the people on the road. I’ve been at companies that have handled this “ok”, but I’ve never seen it be 100% positive and pleasant — this is a very hard problem. There’s a reason why Microsoft largely kept everyone on the same campus for almost 20 years up to the late 90s — and a reason why you should think about keeping things simple that way as long as you can, too.
If staying single-site as long as possible is the first line of defense, the second line of defense is trying our best to minimize and partition multi-site development in careful ways — e.g. into distinct products, projects, or features — to reduce trust issues and the cost of communication. But this can only go so far toward avoiding trust issues entirely. At some point (likely sooner rather than later), the worlds must meet, rules must be followed, decisions must align, and work on the product will overlap.
So, then, how do we design our processes and communication to be multi-site friendly? What processes and culture do we insist stay common or allow to be flexible? How do we maintain the trust to coordinate the things that we must? How can we hire more forgiving personalities for whom trust and camaraderie come more easily?
There are certainly lessons to be learned from highly distributed open source projects (in particular, the tools they use and the ways they use them), but also cautionary tales of the borderline chaos that can ensue when the ties that bind are so loose and light. And I’m waiting for a good tell-all to be written on Google and some of the other more recent companies who have embraced highly distributed organizations.
Going back to Steve’s post and how there’s no substitute today for spending time in person: we can only hope someone eventually finds a way to make multipoint video conferencing and techniques of remote socialization and team-building much more effective. It’d be great to not consume the time and energy of flying all over the planet — but that day doesn’t yet appear to be here.
Perhaps we could start with some company-sponsored network gaming to have some fun and get to know each other better? … of course, we must then decide which of the Bangalore, San Francisco, or Cambridge teams we’ll ask to get up at 7am to play. Hmmm.
Derivation of the essential development workflow by induction
1. Imagine that you have produced a high-integrity, high-capability design that is suitable for your problem domain. The design is of high quality in every way that matters to you: reliability, usability, security, response time, power consumption, whatever. You have reliable metrics to assess all of your relevant quality attributes. It does not matter how you created this design. It could be a rigid phase/gate process, it could be a million monkeys. It only matters that you are in possession of such a design.
2. Starting with your finished and delivered system, add one feature or improve one performance specification in a way that preserves all of the quality attributes that you could measure in the original system, plus any new attributes that are relevant to your change. Deploy the newly enhanced system and validate. This is your essential development workflow.
3. Repeat steps 1 and 2, but with one fewer initial feature, or one relaxed performance attribute.
Any sequential development workflow can be pipelined
1. Take all of the steps from your essential development workflow and arrange them in dependency order. Work serially through the steps until a result is produced. If the result is not satisfactory, then repeat the process and apply what you have learned until a satisfactory result is produced. It does not matter if this process is not efficient.
2. Allocate sufficient resources so that two such design increments, following the essential workflow, can execute in parallel without overlapping resources. Create a build/integration process that allows each feature to develop in isolation and integrate through to deployment and validation. Integrations will necessarily be serialized, creating a pipeline. This is your essential development process.
3. Map the value stream of your development process. Add sufficient capacity to each pipeline to realize flow. Add or remove stages to the pipeline to reduce backflows or otherwise reduce cycle time. Substitute new practices or stages as the situation demands and capability allows. Find bottlenecks and relieve them. Share resources between pipelines to improve utilization. Add or remove pipelines as necessary to match demand.
4. Repeat step 3, forever.
Swimlanes are used to track parallel work streams within a common resource. The most typical use might be grouping composite work items under a parent. Another typical use is an expedited lane for emergency work items that can skip queues and preempt other work. Swimlanes generally indicate some kind of branching. That can be either work item branching or workflow branching.
Kanban systems can be used when the work generally follows some kind of common workflow. For many software development projects, this is an easy constraint to satisfy. Most software involves some kind of problem definition, some kind of design activity, and some kind of verification. Most software development also involves the use of a limited set of technology applied to a limited set of systems within a limited problem domain, so that most of the work done falls into a few general types.
Most is not all, however, and sometimes work will appear that doesn’t fit neatly into any well-defined type. Or maybe it does have a type that we haven’t defined yet. If the work is non-value-added, we can chalk it up to overhead. If the work is value-added, we will want to track it like all of the other value-added work. A useful trick for managing that kind of work is to reserve a swimlane for ad-hoc workflow:
If you find yourself using the ad-hoc swimlane frequently, you might want to map the value stream of some of the work items to see if you can discover some latent or emerging workflow.
The simplest kind of kanban system is the CONWIP system, for CONstant Work In Process. The simplest kind of CONWIP system is no more than our fundamental kanban element:
The simplest CONWIP cardwall is a classic Agile cardwall with a limit on work-in-process:
An equally intuitive interpretation of CONWIP defines capacity simply as the number of people available to work, so that each person is the kanban:
CONWIP is a rule about work items, not a rule about workflow. We are free to define any workflow we like as long as we observe the global limit. This can be a helpful approach when we want to observe the flow of work, but expect a lot of cycling between states, perhaps in an exploratory design mode:
The earliest Scrumban design was just such a CONWIP workflow, which we can represent directly in a simple cardwall. Here we have no limits on any specific column, but the total number of work items is limited by the yellow kanban cards, which are returned to the “Free” box when the task they contained is complete:
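A CONWIP rule like this, a single global limit with any workflow allowed inside it, can be sketched in a few lines. This is a hypothetical sketch; the class and state names are mine, not from the post:

```python
class ConwipBoard:
    """Cardwall where the only rule is a global WIP limit."""

    def __init__(self, limit):
        self.limit = limit          # total kanban tokens available
        self.items = {}             # work item -> current state

    def pull(self, item, state="in-process"):
        """Start a new work item only if a free token exists."""
        if len(self.items) >= self.limit:
            return False            # no free kanban: the pull is refused
        self.items[item] = state
        return True

    def move(self, item, state):
        """Within the limit, any workflow (even cycling) is allowed."""
        self.items[item] = state

    def complete(self, item):
        """Finishing an item returns its token to the 'Free' box."""
        del self.items[item]

board = ConwipBoard(limit=2)
assert board.pull("A")
assert board.pull("B")
assert not board.pull("C")   # refused: constant WIP reached
board.complete("A")
assert board.pull("C")       # a token was freed, so the pull succeeds
```

Note that `move` never checks anything: the workflow itself is unconstrained, which is exactly the CONWIP property.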
Breaking out a workflow implies a sense of sequence. If there is no real sequence, we could define a global “busy” state and reduce specific activities to a checklist. Exposing the workflow to visual control might suggest a need to level resource utilization or that we value reducing backflows as an improvement goal. Pooling the WIP limit suggests that the visual control is enough feedback to help the team achieve leveling. My personal preference is to work in this way where possible.
If a person can be the kanban, then how about a pair? If we combine pairing and kanban with a workflow or checklist, then we get something like Arlo Belshee’s Naked Planning:
I think the most interesting kind of CONWIP system is the Bucket Brigade. Everybody is allowed one card at a time, which can either be moving upstream or downstream. People can either pair (at the cost of some queueing) or work solo as much as they prefer. Constant WIP, zero queueing, full utilization, soft specialization, balanced workflow…what’s not to love?
If we can assign tasks and limit work by pairs, then why not do this with whole teams? That is essentially the approach of Microsoft’s Feature Crews with their quality gates.
Some value streams are too long or too complex to effectively manage with a single WIP limit. Many development teams exist within some larger organization that requires coordination between teams and competition for resources.
Where do features come from? Smaller shops may interact directly with the customer, but larger shops may have a more complex process for dealing with a large number of customers or with highly sensitive customers. If we peer inside the black box of the Product Owner, we might discover a whole team of people working to understand customers and define requirements: business analysts, product managers, usability researchers, product designers. Work-in-process on the business analysis side can just as easily go off the rails as development work. Have you ever seen an epic requirements specification or a bottomless product backlog and wondered where it came from? Your product owner might represent another group of people who feel pressure to produce and look busy. Value stream thinking encourages us to take an interest in what those people are up to and why.
Where do features go after we’ve built them? A large enterprise may have complex deployment requirements that involve integrating code into a manufacturing process or provisioning a datacenter. This work probably involves a different team than the development team, but they are still part of the value stream and their throughput affects everybody. Operations teams often have to deal with long lead times and different natural batch sizes than the development teams that feed them. Each group can benefit from understanding the status and availability of the other.
A small team may be able to self-regulate with visual control, but a long value stream may need more explicit control. Kanban gives us an easy solution by chaining together pooled segments. Within each segment, we manage by visual control, but between segments, we manage by kanban:
CONWIP systems allow us to regulate team workflow with a lighter touch. Pooling kanban across closely related functions and zooming out by one level of scale makes it easier to think about using kanban to manage large systems.
Part 1 of a three-part series
Any kanban-controlled workflow system can be described by combinations and variations[1] of a basic pattern:
Sometimes we can simplify the diagram by replacing the kanban backflow with a simple capacity parameter[2], but often it is better to show the flow of kanban explicitly. Many of the software development kanban systems we’ve seen are simple workflow systems composed by chaining this basic element:
I would like to think that any professional software engineer would be able to think up more interesting workflows than just a linear cascade. Then again, I would also like to think that any professional software engineer would understand the value of keeping things simple. I have personally come to prefer a more symmetrical call-stack style of flow for software development, because I believe that any person who requests custom work should also be responsible for approving the completion of that work. Consumers pull value from producers, not the other way around:
Petri nets are ideal for describing workflow systems because they are a) concurrent; b) formal, simulable, and sometimes even verifiable; and c) relatively easy for humans to read. Any Petri net that can be drawn without crossing edges can easily be made into a “card wall” for visual control[3]:
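The basic kanban element can itself be written down as a tiny Petri net interpreter. This is a minimal sketch under my own naming (not any established Petri net library): places hold tokens, and a transition fires only when every one of its input places is marked:

```python
def enabled(marking, transition):
    """A transition is enabled when all input places hold a token."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Consume one token from each input place, emit one to each output."""
    inputs, outputs = transition
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# A single kanban-controlled stage: work pulls a free kanban to start,
# and completing the work returns the kanban (the backflow).
start    = (["todo", "kanban"], ["in-process"])
complete = (["in-process"], ["done", "kanban"])

m = {"todo": 2, "kanban": 1, "in-process": 0, "done": 0}
m = fire(m, start)            # one item starts; the kanban is consumed
assert not enabled(m, start)  # WIP limit of 1: the second item must wait
m = fire(m, complete)         # item finishes; the kanban flows back
assert enabled(m, start)      # now the next item can be pulled
```

The “kanban” place is what makes this a pull system: without it, `start` could fire as often as there is work in `todo`.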
Sometimes a different workflow is needed, depending on the kind of thing being made:
Some tasks can be done in parallel by specialized resources:
When we split tokens, we may need to keep track of their common ancestor so that we can merge them again. Colored Petri nets let us associate composite work items across branches:
Sometimes a large work item can be decomposed into smaller work items of a similar type. We might think of a branching workflow to model this, but that is hard to do if we don’t know how many component work items will be created. Petri nets allow us to take another approach by generating new tokens in-place and then executing them concurrently on the same workflow branch:
When all of the unit work items are complete, they are integrated into their parent work item:
While that might look a little complicated, in practice it’s as simple as the “2-tier” style (or n-tier) cardwall that is often used for project management:
A state transition is a black box that may have some internal process. We might expose that process with a hierarchical model. Alternately, we might want to collapse extraneous diagram detail into a single supertransition. Hierarchy is a simple syntax extension to any workflow model.
Feedback should be considered implicit to any creative process, but it can complicate these models without much benefit to understanding[4]. In practice, kanban systems regulate feedback very well, because the limits serve as a ratchet function that gracefully responds to feedback and damps oscillation. A process operating at capacity will not accept new work, and a process operating over capacity will also not accept new work. Again, it’s awkward to model “over capacity”, so we have to be mindful to treat our models for what they are: models.
Once we understand some of these basic design elements, we can use them to describe or design a wide variety of product development processes. A computer scientist armed with Petri nets and a bit of knowledge about queueing, networks, and processor scheduling has some wicked tools at their disposal for Value Stream Analysis.
[1] Some variations involve rules about queue placement and the timing of the kanban backflow. GKCS and EKCS are examples in the literature. I wrote about some of that here.
[2] Whether or not you can simplify in this way depends on which queueing rules are used.
[3] I debated using the blink tag for this point.
[4] You can usually cheat by adding an “escape” transition to send all feedback to the beginning of the model and allow it to repropagate downstream without friction. Feedback is easier to account for in matrix representations than in graphic representations. Feedback in a dependency matrix looks like row elements or “rabbit ears” on the “wrong” side of the diagonal.
J.D. Meier is hosting a couple of introductory articles on Lean software development on his Shaping Software blog. I admire J.D. not only for his prodigious writing and encyclopedic knowledge, but also for his insatiable curiosity and practical wisdom. A true methodologist.
Domain knowledge (problem and solution) is the raw material from which information systems are constructed: the source of the value stream. Well, that and computers.
The software development pull system representations we have seen so far are not the end. There will be a good deal more evolution of that line of thinking.
Motherhood and apple pie
A staple of software engineering research is the effectiveness of design reviews and code inspections for discovering defects. Methodologists love inspections, but they seem to be difficult to sustain in practice. I’ve seen a few typical reasons for this:
- Inspection is a specific skill that requires training and discipline. Naive, unstructured “code review” is worse than useless and eventually self-destructs.
- Inspection is quick to be dropped under acute schedule pressure, and slow to restart as a habit once it has been broken.
- Inspection works well for frequent small batches and badly for infrequent large batches.
Reason 1 is a matter of skill, and can be solved with education. Reasons 2 and 3 are process issues.
The “inspection gap” illustrates a curious aspect of human nature. There are certain behaviors that a group of people will agree should be practiced by its members. Individual members of the group, when asked, will say that they believe that members of the group should practice the behavior. But then, in practice, those same individuals do not practice that behavior, or practice it only inconsistently. If you point this out to them, they may agree that they should do it, or even apologize for not doing it, and then continue to not do it anyway.
In my mind, this is a good part of what Lean thinking has to offer. Lean methods like Visual Control recognize this aspect of human nature and provide people with enough structure and context to act in a way that is consistent with their own beliefs. If people using a Lean process agree that code inspections are a good idea, then it will not be hard to get them to agree to incorporate inspections into the process in a way that is hard to neglect. Lean strives to make it easier to do the right thing than do the wrong thing. Lean helps people align their actions with their values.
One practice that works well in most workflow systems is the simple checklist. Human attention is a delicate thing. People get distracted, make mistakes, and overlook things even when they know better. A checklist is a simple device to keep your intentions aligned with your actions. Doctors who use checklists deliver dramatically improved patient outcomes. Would you get on an airliner with a pilot who didn’t use a pre-flight checklist? Would you get on an airliner controlled by software that was written without using checklists?
Checklists and kanban are highly complementary because you can attach a checklist directly to a kanban ticket and make the checklist part of the completion transaction. Checklists improve confidence and trust, and expose tacit knowledge. Checklists relieve anxiety and reduce fear. Can you think of any part of your development process where you’d sleep better at night knowing that all of the important questions were answered correctly by somebody you trust?
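That completion transaction can be sketched directly. In this hypothetical sketch (the ticket and checklist items are invented for illustration), a kanban ticket refuses to complete until every attached checklist item has been checked off:

```python
class Ticket:
    """A kanban ticket with an attached completion checklist."""

    def __init__(self, title, checklist):
        self.title = title
        self.checklist = {item: False for item in checklist}
        self.done = False

    def check(self, item):
        self.checklist[item] = True

    def complete(self):
        """The checklist is part of the completion transaction."""
        unchecked = [i for i, ok in self.checklist.items() if not ok]
        if unchecked:
            raise RuntimeError(f"cannot complete, unchecked: {unchecked}")
        self.done = True

t = Ticket("Add login page",
           ["unit tests pass", "code inspected", "docs updated"])
t.check("unit tests pass")
t.check("docs updated")
try:
    t.complete()                # refused: inspection not yet done
except RuntimeError:
    pass
t.check("code inspected")
t.complete()                    # all items checked, transaction succeeds
assert t.done
```

The point is that neglecting the inspection is no longer a silent omission; the process itself pushes back.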
Checklists work well for individual activities that do not require specific sequencing, but they don’t work as well for activities that require collaboration from people who have competing commitments. We can raise the stakes for everybody if we elevate our checklist item to the workflow and subject it to the pull discipline. That makes your problem everybody’s problem and gives your peers sufficient incentive to collaborate.
Inspections are a typical example at the scale of a single developer, but there are other practices and scales that we might consider. Failure Mode and Effects Analysis (FMEA) is another highly effective technique that many people agree with in principle but find difficult to implement in practice. FMEA is a systemic method and often targets components or subsystems that are much larger than “user story” scope. Security lifecycle and regulatory compliance activities may also fall into this category. An advantage of using composite workflow is that you can schedule activities that apply to different scales of work.
Process retrospectives can also be attached to workflow in this way. Compared to a more open-ended periodic retrospective, a workflow-bound retrospective asks a more specific question: How could we have created this work product more effectively? Such a workflow-based retrospective directly implements Deming’s Plan-Do-Study-Act cycle.
Are there any practices you would like to see your team use consistently, but have trouble fitting in to your schedule?
The Feature Crew model is important to the Lean software development discussion because it is another major variety of kanban system, and is probably the most successful application of pull scheduling in software engineering to date. Feature Crews are a strong and direct expression of the Lean principles of pull, flow, and value.
Value is represented in Feature Crews by the feature. A feature is roughly defined as a unit of customer-valued functionality that can be built and fully integrated within an interval of a few weeks. The feature is the fundamental unit of scheduling in Feature Crews. Features are generally derived from a user-centered analysis practice like Personas/Scenarios, and they should be bundled into customer- and business-valued packages for integration and deployment. Defined in this way, features are roughly equivalent to the Minimum Marketable Feature (MMF) concept. Personally, I define features exactly as MMFs and therefore I define:
The Feature Crew process is a one-piece-flow pull system for Minimum Marketable Features.
The namesake attribute of a Feature Crew is the cross-functional workcell. A crew should contain most of the capability that it needs to fully specify, design, verify, and integrate a complete product feature. Typically that means a Program Manager (in the Microsoft sense of the title), a handful of developers, and a couple of testers. Depending on the type of feature, there may also be a product designer or other specialist, but we’ll see how such a resource might be shared across workcells.
One of the primary problems that Feature Crews address is the difficulty of maintaining the integrity of very large code bases under development (imagine 1000 developers coding against a 10,000,000 line system). FC poses the problem as the tension between a) keeping the main branch as current as possible, and b) keeping the main branch as robust as possible. The FC solution is to make features an atomic transaction. A feature is either 0% complete or 100% complete, and a feature is not 100% complete until it can be demonstrated that it satisfies the same quality criteria as the rest of the main branch.
Features-in-process are not allowed on the main branch. The FC alternative is branch-by-feature. A crew takes a branch when it takes possession of the feature kanban. The crew is responsible for forward-integrating any changes that are checked into main while their feature is in process. That is, if another crew integrates and breaks your feature-in-process, it’s your responsibility, not theirs. When your feature is finally complete AND you have integrated with all changes on main AND you pass all of the quality gates, THEN you can reverse integrate your feature into the main branch, and everybody else will have to forward integrate your changes.
From a Lean perspective, even without the scale issue, there is an argument to be made for atomic MMFs and branch-by-feature. MMFs are business-valued. User stories are merely user-valued. Customers demand features. Users merely request them. Allowing features-in-process on the main branch exposes the product to inventory and market risk. A Lean deployment model should be transactional at the granularity of business value, not user value. The purpose of the MMF practice is to bring those values as close together as possible.
One Feature, One Crew
A feature is an atomic customer-valued work item (value). A Feature Crew is a dedicated cross-functional team (flow). We need one more thing to implement pull, a rule: One Feature, One Crew. A Feature Crew works on one and only one feature until the feature is fully integrated into the main branch. When such a feature is finally accepted, then capacity becomes available to begin another feature. Such a rule about the relationship between work-in-process and available resources is a kanban rule, thus Feature Crew is a kanban system, where each token represents the capacity of one workcell and is exchanged for exactly one feature.
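The One Feature, One Crew rule lends itself to a direct sketch. In this hypothetical sketch (crew and feature names are invented), each crew is a kanban token, and a feature can start only by consuming a free one:

```python
class FeatureCrews:
    """Pull scheduling where each crew is a kanban token."""

    def __init__(self, crews):
        self.assignment = {crew: None for crew in crews}

    def pull(self, feature):
        """A feature starts only when a free crew token exists."""
        for crew, current in self.assignment.items():
            if current is None:
                self.assignment[crew] = feature
                return crew
        return None                     # all crews busy: the feature waits

    def integrate(self, crew):
        """Reverse integration completes the feature and frees the crew."""
        self.assignment[crew] = None

crews = FeatureCrews(["crew-1", "crew-2"])
assert crews.pull("search box") is not None
assert crews.pull("spell check") is not None
assert crews.pull("print preview") is None   # no capacity: pull refused
crews.integrate("crew-1")
assert crews.pull("print preview") is not None
```

Work-in-process can never exceed the number of crews, which is exactly the kanban property described above.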
What happens to a Feature Crew when its feature is complete? There are a couple of variations. One style keeps the crew intact and reschedules it as a unit. Another returns the crew to a resource pool from which new crews can be allocated. Neither pure strategy is necessary. I like mostly durable teams with light rotation of individuals.
While things like unit tests and customer acceptance tests are necessary to meet the quality criteria of a multi-million-line codebase, they are certainly not sufficient. A Minimum Marketable Feature, taken as a whole, has properties that are more than the sum of its component tasks. There are certain kinds of design and quality control activities that have a larger natural granularity than user stories.
Quality Gates ensure that all of the systemic work gets done that does not fit naturally into the functional development process. A good deal of security and reliability engineering has nothing to do with the intended functionality of a feature, and everything to do with how components of a system will behave in the presence of other components of that system under a wide range of operating conditions. Quality Gates also facilitate the sharing of specialized resources, like a security engineer or a system architect, that are impractical to include on every crew.
Feature Crew is only a process framework
The Feature Crew model treats both features and workcells essentially as black boxes. Like Scrum, the Feature Crew method is not prescriptive about workflow. What happens inside the crew is somewhat up to the crew. One could imagine that a crew implements a mini-phase/gate (and some do, though we know that isn’t wise). A crew could choose to implement an off-the-shelf process like Cleanroom or XP for their internal workflow, and many do. Some crews will use an ad-hoc local body of practice, or git-r-done cowboy craft. The quality gates set the bar; how you meet the bar is your concern.
Since Feature Crews address a different scale than Scrum, we can even combine Feature Crews with Scrum. If a crew takes on a 6-week feature, that crew could then overlay a 1-week timebox within those 6 weeks and decompose the feature into Scrum-like work items and goals, which are then implemented according to local custom. Again, this is not uncommon practice.
Since we already understand Feature Crews as a type of kanban system, and we see how we can overlay Scrum as a secondary planning process, it follows that we can use something like Scrumban as the internal scheduling process. We don’t have to do this, but doing so buys us some symmetry between the layers of planning hierarchy, and allows us to share metrics, tools, and terminology between workers and management. The cumulative flow report for your team is of a similar kind to the cumulative flow report for the project as a whole.
Feature Crews are an effective method of capacity calculation, but they are also a blunt instrument for that purpose. Making a team self-contained can result in boom-and-bust duty cycles for individual team members. The beginning of a feature branch may be heavy on analysis and high-level design, and the end of a feature branch may be heavy on system testing and bug fixing. A UI designer might find herself very busy for the first couple of weeks and mostly idle for the last couple of weeks. Such a team may be able to learn a certain amount of task leveling, but only so much before running into other problems.
The complementary dysfunction of individual bursting is poor total resource utilization. If every crew employs their own UI designer at 50% capacity, then you’ve hired twice as many designers as you really need to get the work done. As much as we love flow, that is a high price to pay for it. Introducing a second level of kanban granularity gives us access to a finer set of controls that we can use to dial in a better balance between availability and utilization.
Nested Scrumban is enough to give us a more consistent process between the whole and its parts, but it doesn’t help us directly with our utilization problem. For that, we will appeal to a little bit of queueing theory. For some development functions we may be able to share resources between workcells with the same low delay as more dedicated resources, but at much higher utilization. Software development costs are overwhelmingly dominated by labor costs, so paying at least some attention to labor utilization is worth our consideration. Lean cares a lot about labor utilization; it just cares about it in the right order.
Given these tools, we can design a hybrid feature crew / matrix organization. Some resources can be feature-aligned and dedicated to their workcells. Other resources can be function-aligned and pooled across workcells. A product group with 10 workcells probably doesn’t need 10 security engineers, 10 user researchers, 10 architects, and 10 database administrators. But there is some right ratio of each of these functions to each other, and those ratios can be determined by value stream analysis, theory of constraints, and other heuristics.
What is the Toyota Production System? When asked this question most people (80 percent) will echo the view of the average consumer and say: “It’s a kanban system”; another 15 percent may actually know how it functions in the factory and say: “It’s a production system”; only a very few (5 percent) really understand its purpose and say: “It’s a system for the absolute elimination of waste.”
Some people imagine that Toyota has put on a smart new set of clothes, the kanban system, so they go out and purchase the same outfit and try it on. They quickly discover that they are much too fat to wear it! They must eliminate waste and make fundamental improvements in their production systems before techniques like kanban can be of any help. The Toyota production system is 80 percent waste elimination, 15 percent production system, and only 5 percent kanban.
This confusion stems from a misunderstanding of the relationship between basic principles of production at Toyota and kanban as a technique to help implement those principles.
– Shigeo Shingo, A Study of the Toyota Production System
- Start every limit at 1. Add tokens 1 at a time until one person is always busy, then apply Theory of Constraints.
- Start every limit at an arbitrarily large value, like 10. Subtract tokens 1 at a time until flow is observed. Then start looking for a way to remove 1 more.
- Create a Value Stream Map and measure the time-on-task distribution of each activity. Use Little’s Law to calculate the corresponding queue sizes.
Notice that “match the number of people currently available” is not one of the ways. You’re trying to discover how many people you need, not how many you have.
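The Little’s Law calculation in the third method is simple arithmetic: average work-in-process equals average throughput times average cycle time. A sketch with illustrative numbers (the figures are made up, not measurements from any real team):

```python
import math

def wip_limit(throughput_per_week, avg_cycle_time_weeks):
    """Queue size implied by Little's Law (L = lambda * W),
    rounded up to a whole number of kanban tokens."""
    return math.ceil(throughput_per_week * avg_cycle_time_weeks)

# e.g. the team completes 4 items/week and an item spends an average
# of 1.5 weeks in the "develop" activity:
assert wip_limit(4, 1.5) == 6

# a slower activity needs fewer tokens if items pass through quickly:
assert wip_limit(3, 1.2) == 4
```

Measuring the time-on-task distribution per activity, as the value stream map suggests, gives you a defensible starting limit instead of a guess.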
I’m at the Lean&Kanban conference in Miami. Mike Cottmeyer has done a nice play-by-play of the conference so far. Overall, I thought today’s speakers were very good.
I should be more enthusiastic, but I bit down on a stone in my food and broke a tooth yesterday.
I’ve talked a bit before about how kanban systems facilitate basic Statistical Process Control by creating an event-rich environment that produces meaningful and timely data about team performance. Benjamin Mitchell has written a helpful article on using control charts to understand variation in user story completion using a kanban system.
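One common choice for such a control chart is the individuals (XmR) chart, which derives natural process limits from the average moving range. A sketch with made-up cycle-time data (the numbers are illustrative, not from Benjamin’s article):

```python
def xmr_limits(values):
    """Centre line and natural process limits for an XmR chart."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(a - b) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

cycle_times = [3, 5, 4, 6, 4, 12, 5, 4]   # days per story (illustrative)
lcl, mean, ucl = xmr_limits(cycle_times)

# Every point falls inside the natural process limits, so even the
# 12-day story is routine variation rather than a special cause.
assert all(lcl < x < ucl for x in cycle_times)
```

The value of the chart is exactly this distinction: it tells you which story completions warrant investigation and which are just the process being itself.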
I wrote quite a lot of material about Lean software development from 2003-2005, while I was still at Microsoft. Some of that was published inside Microsoft, but none of it externally. This is one example. Some parts appear unfinished, but I left it as-is.
:Summary: Analyze, design, construct, integrate, verify, and deliver one feature at a time until the customer stops asking for features.
Don’t treat software like a product
How do you measure software value? Traditional methods treat software like a manufactured product, as if it were an automobile or a toaster, delivered in one big chunk, maybe with a few optional features. But is that accurate? How many features of Word do you use in any session? How many features will you ever use? And why do you have to pay for all of those features that you will never use? No, the product view of software is not all that helpful, and it forces consumers and providers into an adversarial contract negotiation relationship where the customer must either specify exactly what they want in advance, or must accept whatever the producer is offering in large monolithic units.
An alternate metaphor for software value is a refined substance, like gasoline or ice cream. The raw material is customer requirements, domain knowledge, time, and energy; and this is transformed into manifest behavior of a computing system. This view suggests that software has no more customer value than the sum of the scenarios that it supports. Then there is no bright line that defines “complete”, there is only relatively more or less value delivered. When I fill up at the gas station, I need more than one gallon and less than 100 gallons, and when the pump tells me I have enough, I expect to pay the exact value of the utility I expect to receive from the product. Software can be delivered in this way: keep giving me more until I have enough, then stop and charge me for what I have consumed.
Or it could be like telephone service: I don’t know how much I’ll use, but there’s a range that I’ll probably keep to. I don’t know when I’ll ask for service, but when I do, I have some expectations of the performance of the service. We agree that you will supply me with on-demand service at a guaranteed level of performance, and I will pay a fee for that service. Maybe that fee is pay-as-you-go, and maybe it’s a subscription. I expect you to give me either option.
Software should be the same way. I don’t want to tell you what I want the software to do. I want to tell you what I want to do right now, and then I want you to enable that as quickly as possible in a way that is least distracting to me. I don’t want to specify what I might want to do next year. I’ll figure that out next year, and then I will expect you to enable that as quickly as possible. As the relationship matures, you may start to anticipate what sort of things I might want, and then suggest them at an appropriate time in a way that is not annoying. Sometimes, I might like your suggestion and then ask you to deliver that, but you had better not burden me with the cost of your attempts to anticipate my wishes.
Ironically, Lean Manufacturing treats manufacturing production more like a fluid, with streams and flows. If we want to consider software value to be more fluid, then perhaps Lean can supply us with the right concepts and terminology to enable that mode of production. A suitable name for such a solution might be Feature Factory.
Q: How do you deliver value to a customer when the customer doesn’t know exactly what he wants, and you don’t know exactly how to build it? (see Five Orders of Ignorance)
Q: How do you avoid delivering features that the customer doesn’t really need or want?
A: Continuous delivery. Analyze, design, construct, integrate, verify, and deliver one feature at a time until the customer stops asking for features.
How might the Team Software Process apply to this solution?
The advantage of the Team Software Process is the high quality of the code it produces. How much of the Team Software Process can be applied to such a demand-driven model? How much of TSP actually depends on batching and queueing feature requests by workflow phase?
Start with TSP as defined, eliminate non-value-add work products, and eliminate gaps in value streams.
[TSP exchanges one type of waste, defects, for another, copious bureaucracy. ]
To eliminate gaps in value streams, we might:
* integrate phases into workflows everywhere possible
* compress non-integrated phases
* substitute short, fixed-length iterations for long, fixed-scope iterations
* substitute feature-oriented task scheduling for component-oriented
* substitute feature-oriented proxy estimation for component-oriented
* implement continuous flow of customer-valued features to production code
* integrate security and reliability analysis into value stream
* implement pipelined continuous integration
To eliminate non-value-add work products, we might:
* substitute measurement for estimation
* implement numerical specification of requirements
* substitute executable tests for requirements documentation
* substitute executable tests or automated design analysis for design documentation
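The “executable tests for requirements documentation” substitution can be sketched in a few lines (the discount rule and function name are invented for illustration): the requirement is stated as assertions the build must pass, rather than as prose that can silently go stale.

```python
# Hypothetical requirement, stated as code instead of prose:
# "discounts greater than 50% require manager approval."
def requires_approval(discount_pct):
    return discount_pct > 50

# The executable "requirements document" is a set of assertions;
# if the behavior drifts, the build fails instead of the document lying.
assert requires_approval(60)
assert not requires_approval(50)
```

The same move works for numerical specifications: a performance budget or capacity requirement becomes a test with a threshold, and the threshold is the specification.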
This looks like a lot of change. However, the core workflow of TSP, as represented by the process scripts, remains largely intact. There are a few minor changes in workflow steps, but most of the changes are in sequencing.
step 1: reduce TSP iteration scope to a single feature
step 2: factor tactical vs strategic planning decisions, and assign strategic planning to a new fixed-duration replanning iteration
step 3: optimize single-feature workflow according to the opportunities created by the dramatically reduced scope
Can this still be considered TSP?
Well, I’m not particularly interested in labels, and the customer probably isn’t either. How many customers actually care about your ISO9000 certification, if it even means anything at all? TSP is a means to an end. The customer’s end is high-quality product, where quality is defined only in the customer’s terms. The customer’s terms likely do not include any notion of defects-per-KLOC. The producer’s end is profit and reputation, which comes from managing cost and consistency. If TSP, or something related to it, helps us realize our ends, then (and only then) it is meaningful.
Jurgen Appelo has put up a survey of Agile practices. While I consider myself and this blog to be decidedly post-Agile, I also love survey data and the concerns of practitioners. Plus, critical analysis of the survey itself makes for great sport.
Lean Development for Lean Times
Agile Vancouver is a non-profit user group run by volunteers.
We are affiliated with the Agile Alliance and host lively regular monthly meetings to promote the state-of-the-art wherever we can.
On Tuesday April 21st, we are hosting an afternoon event on the application of Lean Principles to software development. This workshop brings together leading thinkers from Lean Production and Lean software. On the agenda, we have three interactive sessions:
- An Introduction to Lean Product Development - Katherine Radeka
- Scrumban: Lean Thinking for Agile Process Evolution - Corey Ladas
- The Lean Startup: a Disciplined Approach to Imagining, Designing, and Building New Product - Eric Ries
The event will run from 2:30pm to 7:00pm at the Plaza 500 (the same venue as previous Agile Vancouver conferences). Please register to attend this event as space will be limited. There is a $25 fee to secure your registration for the event. If you have any questions about registration, please contact us at firstname.lastname@example.org.
- Corey Ladas is a master of applying Kanban systems to Agile projects and the editor of Lean Software Engineering blog
- Katherine Radeka is an expert in the application in Lean principles to product development with a track record of successful products and successful product development transformations
- Eric Ries is the former CTO of IMVU and champion of the Lean Startup
Lean & Kanban 2009 Conference
Quite simply the greatest ever assembled group of experts in Lean software development will be convening in Miami in May to explore the frontiers of agile and lean in software development. This is your chance to be a part of the new wave in agile development and management practices.
Speakers include: Alan Shalloway, Dean Leffingwell, Peter Middleton, James Sutton, Corey Ladas, Karl Scotland, Ami Rathore, Sterling Mortensen, Aaron Sanders, Rob Hathaway, Alisson Vale, Max Keeler, Linda Cook, Eric Landes, Eric Willeke, Chris Shinkle and David Laribee.
The final conference agenda has been released. You can download it here, along with the program (draft) released yesterday.
The new conference format provides Day 1 - May 6th - as a Lean Day and Day 2 - May 7th - as a Kanban specific day. Day 3 will be open space and lightning talks.
After March 16th, early bird registration will be $700 until April 16th, after which the full $800 price will apply.
Register now at http://www.leankanbanconference.com/
Agile Florida special rate
Members of an Agile Florida User’s Group can make use of the Super Early Bird price as a special discount until April 16th. Membership in an agile group in Florida and Florida residency (based on credit card details) are required to qualify.
Register now at http://www.leankanbanconference.com/
Lean Day Only Registration - May 6th
For those who only want to attend the Lean sessions at the conference, we are offering a special one day rate that will include the evening reception. The price for this is $335. Register now at http://www.leankanbanconference.com/
Kanban Day Only Registration - May 7th
For those who only wish to attend the Kanban case study presentations, we are offering a special one day rate of $295. Register now at http://www.leankanbanconference.com/
Please bear in mind that the numbers at the event are strictly limited. Please register early to avoid disappointment.
I made a simulation of an XP-like process. I won’t go as far as to say it’s exactly XP, but it’s a reasonable approximation. The first one is an unconstrained workflow with a single customer. It is a PIPE2 simulation, and the file is here. Multiple customers (or nested stories) would require a colored Petri net, which PIPE2 doesn’t support.
One of the keys to understanding the model is the bidirectional edge between the customer and write a story. The team keeps writing stories until the customer doesn’t want any more stories and accepts a final build. Other keys are the bidirectional edge out of accept story and the inhibitor arcs into accept build. Those things give the model most of the “iterative goodness” you’d expect.
Not surprisingly, the unconstrained workflow rarely reaches a checkpoint where the customer has nothing in process, so the deliver build transition rarely triggers. If we add a stories in process kanban limit, then there is also much more reengagement with the customer as a consequence. That model is here.
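The reengagement effect of the kanban limit can be illustrated with a toy calculation (this is deliberately not the Petri net itself; the batch-draining behavior is a simplification of the token game): with a stories-in-process limit, the team drains a small batch, reaches the checkpoint where the customer has nothing in process, and pulls the customer back in, so smaller limits mean more checkpoints.

```python
def reengagements(total_stories, wip_limit):
    """Count checkpoints where the customer has nothing in process,
    i.e. points where 'deliver build' can fire and the customer reengages."""
    checkpoints = 0
    remaining = total_stories
    while remaining > 0:
        remaining -= min(remaining, wip_limit)  # drain one kanban-limited batch
        checkpoints += 1
    return checkpoints

# unconstrained: one big batch, one checkpoint at the very end
unconstrained = reengagements(20, 20)  # 1 reengagement
# kanban-limited: many small batches, frequent reengagement
limited = reengagements(20, 2)         # 10 reengagements
```

The real model adds stochastic firing and the inhibitor arcs, but the monotone relationship between the WIP limit and customer contact is the same.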
I would be interested to hear of any improvements to the model.
- Kanban regulates work, not workers.
- Every process is a queue, not just the buffers. Workflow buffers can be in time, space, or capacity. Each is a tool, none are “correct.”
- Behold! The mighty Invariant. http://tinyurl.com/dyhelw
- A Minimum Marketable Feature is the smallest unit of work that has recognizable value to the customer. If a Minimum Marketable Feature could be made any smaller, then either it wasn’t Minimum, or it no longer has value to the customer. The Minimum Marketable Feature is the most natural unit of scheduling for Lean and Evolutionary Development. The Minimum Marketable Feature is the most valuable product of Rolling Wave Planning. A Minimum Marketable Feature can be decomposed into User Stories, Use Cases, BDD Scenarios, etc. for detailed work scheduling. Minimum Marketable Features can be staggered and overlapped for production leveling of skills and roles. A Sprint Goal is a substitute for having a real business-valued goal. A Minimum Marketable Feature is the real thing.
- The “Feature Crew” process is a kanban system for Minimum Marketable Features. A Feature Crew system can scale up to hundreds of people.
- Alan Shalloway on future of Lean software development: http://tinyurl.com/b3q6f5
- In Evolutionary Design, the same requirement may be implemented more than once. This is not rework, because no mistake has been made.
- Bottom up: A Design Parameter is “what the system does.” A Functional Requirement is “why the system does it.” Top down: A Functional Requirement is “what the system should do.” A Design Parameter is “how the system will do it.”
- I find greater value in the ideas of Deming, Ohno, and Goldratt. I am not the only person who feels this way.
- One man’s good engineering practice is another man’s criminal negligence.
- If continuous testing and continuous integration are good, then continuous planning might also be good: http://is.gd/m98c
- My opinion of Peter Middleton and James Sutton’s book “Lean Software Strategies” only grows over time. http://is.gd/lFgb
- The purpose of business is not to give programmers something to do.