by Marco Melas
It all started with what became notorious as the "mandatory sprint" at PARSHIP's product development department. The cross-functional teams at PARSHIP had used Scrum for three and a half years and were really good at its mechanics.
But then a few things changed: three 3-week sprints in a row were busted because of ever-changing requirements. Instead of waiting until the beginning of the next sprint, Product Owners (POs) were frequently forced by business needs (read: CxOs and market requirements) to replace and change User Stories (US) the team had already committed to. Of course Scrum has means of handling situations like this: do a "spike", swap a US for another with the same estimated size, or shorten the sprint length.
One example of the problems we had with Scrum and the changing business behavior: team A needed to stop working on a user story and instead start working on a different story that team B had already estimated to be approximately the same size. But as every team had its own baseline, team A had to do its own estimation.
So the estimation team B had created was wasted effort, as it was no longer necessary or useful.
We could discuss standardization and the sense or nonsense of applying it to knowledge workers. But that would be another story, and I prefer to continue with this case study.
As the commitments of the past three sprints for these four teams were changed, swapped, and not met, and frustration was running high, we created a lot of formal Scrum process overhead - and would have continued to do so even if we had shortened the sprint duration.
Then the "business people" (read: CxOs) came and said: "Hey guys, we have this fabulous idea. Once this is completed, everything will turn out right and we will start the 'afterburner' for our business."
So this is how our mandatory sprint started. It was called "mandatory" even at the time because everything inside this sprint had to be done - it was mandatory. The team committed to the stories but refused to commit to the usual sprint time. So even the word "sprint" didn't fit anymore, as it wasn't a timebox either. In the end it felt like working on a "death march" project.
So what happened? Thanks to high morale and the professionalism of the development team, we finished the "mandatory sprint" after six weeks of non-stop work, including some weekends and holidays.
The features were a success, but more features and new stuff were already in the pipeline.
With this background information you as a reader certainly have ideas about what could have gone better, where we possibly made mistakes, etc.
Most of our ideas involved convincing "upstream parties" to change. On the other hand, we could simply accept things as they were and change ourselves - look for a way that "we" could cope with the new reality. After all, the "perception of reality" may differ from person to person, but reality stays the same. Our reality was that business needs changed frequently and that this didn't fit our current development process. It made everyone involved unhappy.
Given this situation, we - the whole team - decided that it was about time to change how we handle our development process.
This is our starting point. (“Start where you are.”)
A couple of months earlier we had introduced the Kanban Method (I'll simply write "Kanban" from here on) to the operations team, and feedback on this change method was really good. This IT-Ops team at PARSHIP is responsible for internal administration and live operations. One of my bigger learnings before introducing new stuff was: fear of change can kill any initiative, no matter how sensible or good it may be. Thanks to good preparation and stakeholder management, we were able to make Kanban a success within the operations team. But this was a small team of five people. Smaller groups are easier to understand, and it is much simpler (but never easy!) to "dig into" personal fears.
So how can we adapt this for 25 people?
At that time the PARSHIP product development team looked like this: we had four cross-functional teams of 4-6 people with the following skill sets: Java development, frontend development (HTML, CSS, and JS) and QA. Each team worked as a rather autonomous domain team. Therefore each team was responsible for its own way of estimating, and each team had its own baseline. Furthermore we had two Scrum Masters who by that time were already "Delivery Managers", as they did so much more than simply making sure the team stuck to the Scrum rules and taking care of impediments. Each Delivery Manager took care of two teams, and each team had at least one PO for input. We have a separate operations team taking care of PARSHIP live operations. Therefore we still have a formal handover process for each upcoming release.
We Delivery Managers sat down and created a one-day workshop based on the principles of David J. Anderson's book "Kanban: Successful Evolutionary Change for Your Technology Business".
We wanted to address the following:
The business's expectations of how we handle development have changed and no longer match our own. We can't continue as we always have, so what can we change so that business needs and development capability stay in sync?
Before I move on to explaining how we built our workshop, let me describe how we addressed this fear of change.
As soon as we had our first rough version of the workshop and the big goal described above, I presented it to the management team: "Listen guys, I know that you know that this mandatory sprint thing didn't work out that well. We developed an idea of how to make things better. I would like to share this with you and get feedback." And I got feedback. People understood that we were addressing a pain they felt but were not able to articulate. They were happy about our approach and backed us up all the way. We integrated their feedback into our workshop. And then we went for round number 2.
Because the introduction of Kanban has some impact on how POs need to handle prioritization, we decided to do a second round of the improved workshop with them. Besides, the POs were in the thick of it: they gather business requirements and analyze benefit and risk, and they are responsible for deciding the roadmap. (Coaching on "minimum viable product" (MVP), "minimum marketable feature" (MMF), "cost of delay" (CoD) etc. was not our intention, as we wanted to take one step at a time and saw the highest risk in continuing with plain Scrum.) The intention was to show the POs their new responsibilities and possibilities, and to check whether our workshop modifications worked well enough. We received valuable feedback from their perspective, too, and integrated it into our workshop afterwards. With the POs the emphasis lay on keeping existing roles, titles and responsibilities. ("Respect the current process, roles, responsibilities & titles.")
Steps 1 and 2 were two-hour sessions, not the full-blown workshop. You will understand why when I explain the workshop.
In the meantime the Delivery Managers talked to the "opinion leaders" inside the teams. We explained that we were working on something that would help us get better, but that it would not be the solution to all our current problems. Further, we explained that we understood the pain of the past sprints and wanted to help prevent frustrating experiences like that in the future by introducing Kanban.
They understood and were happy to support our effort in the upcoming Kanban workshop.
The moment of truth had come. We invited the whole team to a day-long workshop to address the pain points of the past sprints. As good Scrum people do, the teams had already held their retrospectives and identified their pain points. Since it was already clear that we would have this workshop, we asked the teams not to come up with team-internal solutions during their retrospectives but instead to bring their ideas to the workshop, where everyone would be present.
As the word "Kanban" was on everybody's mind (the preparations for the workshop weren't "secret", and I had encouraged the DMs to talk about some aspects already, e.g. during lunch), we started the workshop with some theory: what is Kanban, where does it come from, and why are we talking about it? This part we internally called "expectation management". Regarding the whole "Scrum vs. Kanban" debate you hear and read about, we explained that there is no real "vs.", as you are comparing apples to oranges.
Secondly, we collected a list of things that didn't run as smoothly/well/clearly/painlessly as desired. This was the aforementioned link to the retrospectives. We made clear that Kanban cannot solve every problem on that list. Legacy code will not disappear if we "do Kanban". Pushy POs will not become super friendly with Kanban, etc. BUT Kanban will give the team a whole new level of transparency, and with it a whole new level of control.
This step was crucial to us, as we explained that if the team decided to try Kanban at the end of the workshop, it would be an experiment. We suggested three months for this experiment. At the end of this period we would check whether things on that long list had improved, and if not, ask why not. (It will be no surprise if I reveal already that after these three months the teams decided to continue with our adoption of Kanban.)
With this we made very clear where Kanban can help us and where not.
Then we spent some more time explaining the "J-curve effect". The J-curve effect shows that with each change introduced to a team, its workflow, etc., there is a dip in performance, transparency or other capability that makes things worse (chaos). After that, hopefully, the reasoning behind the change spreads (transforming idea) and things get better (integration), ending with a better situation than before the change was introduced. This effect should be measurable.
You can find a discussion of this topic, for example, on the kanbandev mailing list. Our reasoning for mentioning it was expectation management: changes need some time before they have an effect. We continued by explaining that one big change is a revolution, while many small changes are an evolution. The latter is what we were aiming at. We avoided Japanese terms here, but you may know this as "kaizen culture".
We continued the workshop with a simple game to visualize the problem of multitasking, the Name Game. Then we explained the usage of boards and the sense of limiting your WIP, and ended with a few rounds of the Pizza Game. The company sponsored real pizza for lunch, and afterwards we started the last theory session:
How can Kanban help us? According to "start where you are" we discussed the possibilities of introducing Kanban elements to our scrum-process.
We were careful to emphasize what exactly Kanban changes: WIP limits instead of sprint commitments, and workflow visualization instead of Jira taskboard views. The rest stays the same - if we decide so. Here is an excerpt of what we explained and what the team decided to do.
| What we did until now | What we will try from now on |
| --- | --- |
| Timeboxed iteration prescribed (Sprint) | No more timeboxed Sprints |
| Team commitment per Sprint | Commitment per pulled Story |
| WIP limit indirectly per Sprint | WIP limit directly per step in workflow |
| No new Stories during a Sprint | New Stories can be pulled any time the WIP limit allows |
| A Scrum board is clean after each Sprint | A Kanban board is persistent and shows the flow |
| Estimations per Story | No more estimations |
The result was our version of Scrum-Ban:
The next step for the teams was to design their own workflows. Before they started with this task, we gave them one rule:
The flow of tasks needs to go from left to right; no pulling back of tasks that are in progress.
With this we wanted to help them understand that the flow of tasks ("stop starting and start finishing") is part of adopting Kanban.
Additionally we made clear that whatever the teams come up with, they need to have the following two columns in their workflow:
Please be aware that in no way did we have - or do we have - any intention of standardizing workflows. We just wanted to make sure the start and the end of the flow are clearly visible.
As long as we do one joint release, the "DoD reached" column collects these joint stories.
Then the teams came up with their workflows. They presented their board designs to each other and discussed whether to adopt things they saw on the boards of the other teams.
In the evening the teams and their POs began filling the empty boards with stories.
The workshop had ended, the experiment began.
For the "Management and Product Owner Buy-In" sessions we skipped the Pizza Game and didn't create a board for visualization. This is why these workshops were much shorter than the final team workshop.
Our experiment started as expected: with problems.
We in Delivery Management had already discussed a lot while creating the workshop. Therefore we were prepared to give advice and coaching as the teams came up with their problems. The difficulty for the DMs lay in sticking to the coaching role (e.g. making suggestions and stating opinions) instead of telling the team what to do. I think this is one of the keys to creating a sustainable culture of self-aware, self-organized teams.
I wouldn't say that we DMs always succeeded. But through this experience we learned, too.
As we kept the daily stand-up routine, this was the place where discussions concerning workflow and our adoption of Kanban started. But the stand-up was also attended by POs and members of other business units. Therefore we decided to discuss changes to the workflow etc. after the stand-up.
After 3.5 years of "Scrum stand-ups", the Scrum way of doing stand-ups was deeply ingrained: what did I do yesterday, what do I plan to do today, and which impediments do I have? We felt that these questions no longer reflected the way we were working. Still, some people saw value in giving the team a status of their work, although it was clearly visible on the board. In order to respect this need, we now ask the following questions:
One team decided to stick to the scrum questions.
What is slack time and how do we handle it? As the concept of slack time was new to some people, a lot of questions boiled up. In order to give guidance to the teams, we created a small guide together:
How do you choose a limit, and how do you behave when a limit is reached? The initial setting of WIP limits was something we DMs had discussed a lot even before talking to the teams. As there was no real data we could use for the decision, we opted for (2 per person) − 1. We suggested this to the teams, and they started the experiment with our suggestion without further ado.
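To make the rule concrete: under the assumption that "(2 per person) − 1" means two work items per team member, minus one for the whole team, a minimal sketch looks like this (the function name is my own, not from any tool we used):

```python
def initial_wip_limit(team_size: int) -> int:
    """Starting WIP limit: two work items per person, minus one.

    The "minus one" nudges at least two people to collaborate on one
    item instead of everyone working strictly alone.
    """
    return 2 * team_size - 1

# For our cross-functional teams of 4-6 people:
for size in (4, 5, 6):
    print(size, initial_wip_limit(size))  # limits of 7, 9 and 11 work items
```

The point is not the exact number but having a defensible starting value that the teams can then adjust based on what the board shows.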
As we hadn't quite mastered continuous deployment yet, we needed to decide when to create our release packages. Before, the end of the sprint marked this time; now we didn't have this marker. I discussed this with the team lead of the POs, and we decided that the best people to decide when to release are the POs. They know exactly what they expect of the user stories and can therefore best decide when enough "business value" is available to justify our "release costs". So far so good, but what if we are working on a big feature? What if the mechanics of MVP, MMF, CoD etc. are not part of the POs' thinking? After some time we created a "twelve day" rule: if the POs do not decide to release after twelve days of development, we release anyway. Why twelve days? Because experience had shown that by that time our average release cost and our average business value are approximately at their optimum (read more about this in Don Reinertsen's book "The Principles of Product Development Flow: Second Generation Lean Product Development", in the chapter on the economics of queues).
I think 11 or 13 days would have been just as good; I liked the number 12. With this rule we make sure that, as long as we don't have continuous deployment, we will not build up so much complexity in our code that we become too afraid to release it.
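The intuition behind a fixed release cadence can be sketched as a U-curve calculation in the spirit of Reinertsen's queueing economics. The numbers below are purely hypothetical, not PARSHIP's actual figures; the point is the shape of the curve, not the values:

```python
def cost_per_day(interval_days: int, release_cost: float, delay_cost_per_day: float) -> float:
    """Toy model of total cost per day for a given release interval.

    release_cost: fixed transaction cost of one release (packaging, regression test).
    delay_cost_per_day: cost of holding finished work unreleased; on average a
    finished story waits about interval/2 days, so this term grows with the interval.
    """
    transaction = release_cost / interval_days        # amortized release cost
    holding = delay_cost_per_day * interval_days / 2  # average cost of delay
    return transaction + holding

# With these made-up numbers the optimum lands at 12 days, and the curve is
# flat near the bottom - which is why 11 or 13 days would be almost as good.
costs = {d: cost_per_day(d, release_cost=720.0, delay_cost_per_day=10.0)
         for d in range(4, 21)}
best = min(costs, key=costs.get)
print(best)
```

The U-shape comes from the two opposing terms: releasing rarely amortizes the transaction cost but delays value; releasing constantly does the reverse.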
Work on our sprints did not really end with the sprint timebox but continued inside a so-called release sprint. Here all features were collected, combined, and put through a regression test. This took about two weeks, as it involved a lot of manual testing, and it ran in parallel to the next development sprint, which started right after the previous development sprint ended.
This is no problem as long as different people work on the release sprint and the development sprint. Unfortunately, that wasn't our model: we had the same people working on the new sprint and on the release sprint. Multitasking at its best. Our QA people especially were annoyed by this, as they were the ones who did most of the multitasking.
After starting with PARSHIP Scrumban, the teams evoked an initiative we later named "prescribed swarming": as soon as enough business value was available within the DoD column, the teams started to swarm on regression testing and "stabilizing" the build. The pendulum swung to the opposite side: instead of multitasking to keep up some "flow", everyone simply interrupted their current work item and swarmed on testing. This was clearly visible, as team members didn't pull new tasks and the board became emptier the closer the next release came into view. It reminded me of the Scrum task boards where, at the end of the sprint, all tasks had to be on the right side of the board. Besides stopping the flow, this behavior had one major downside: the POs had to stand by idle as long as the team worked on the release. The upside was that the teams didn't simply test manually but created a large set of automated tests. After five of these "prescribed swarmings" we were down from ten days to two days of regression testing, and we stopped the prescribed swarming after that. Management could have intervened but didn't, as the benefit of "having a two-day regression test anytime you want" vs. "planning ten days of manual testing with the whole team" was a no-brainer. So what did we learn? All teams added a column for creating automated acceptance tests before working on the features (TDD). Another learning was that the teams used swarming to fix a major problem we had in our process: test automation.
Another way of looking at this is through the resulting transaction costs: how much does our release "cost", and how much value do we deliver? Based on these simple but sometimes hard-to-determine facts, we knew that our transaction costs during a prescribed swarming were very high. We saw this as an investment to reduce the transaction costs of all following releases with the help of fully automated regression tests.
Although the teams worked as hard as before, the POs became rather uneasy about progress. They understood that teams pulled new stories as soon as their WIP limit allowed, but the POs' perception was that too few stories were pulled. So we started to analyze this perception. Our findings were: stories were pulled rather often, but as the POs refilled the blank spaces in their selected column almost immediately, the perception was: no flow! We suggested refilling the empty spaces during the PO-internal stand-up in the morning, and we provided weekly statistics of how many stories and bugs were pulled. This helped a lot to get the perception of reality and reality itself in sync. Still, this is a quantitative view of the facts. Another aspect was that some stories were quite huge - more epics than simple user stories. So the teams and the POs agreed on a "five day rule": as soon as a team thinks that a user story will take longer than (approx.) five days of cycle time, they discuss with the PO, before starting work, how to cut it into a smaller, more suitable package. This already addresses the aforementioned MVP, MMF and CoD issue but is by itself not enough to resolve it. It does help the development team create flow and some sort of cadence, though.
This is just a selection of the problems we encountered. I chose these examples to show you the spectrum of problems and the one thing the solutions to all of them had in common: Kanban made these problems visible. (Some may argue Kanban was the reason for the problems; apart from choosing a WIP limit, I strongly doubt this - the problems were always there, but they were finally made visible, so that we had to think about sustainable solutions.) And it was always the team that discovered the problem, addressed it, and thought of a solution. This was no jumpstart behavior but something the teams had to learn and are still learning. IMO the Delivery Management was a key factor in coaching, aiding discussions, creating transparency, etc.
After three months the whole team met up again in order to discuss whether and how to continue with our experiment.
We asked what had changed since the introduction of Kanban and how the change was perceived. Overall the perception was that we had improved on the following points:
Every single person was convinced that we had made progress in how we worked together!
We concluded the session with a simple question: can we agree to end the "experiment" and continue with Kanban? Given the great feedback we had received, I knew this was a rhetorical question, but we wanted to make sure that everybody agreed. To no one's surprise, assent was unanimous.
But even with all the things we achieved in the past months we still have more challenges to tackle.
As some of you may already have guessed, I am not convinced that the way we approach user stories is the best. We still have quite large chunks of customer value that need a lot of time before they are released and usable.
I think models like Minimum Marketable Feature and/or Minimum Viable Product (depending on what we are working on) can still help us a lot.
We still take on a lot of risk, and reducing the batch size in coding alone does not fully help the business. Changing mindsets is not always easy and needs to be done carefully and respectfully.
This is something we have already begun working on. Effective swarming is only possible if at least some basic knowledge outside one's area of specialization is present. A carpenter can only help an electrician if he has enough basic knowledge to avoid getting fried and to actually be of use. Likewise, a cross-functional development team can only swarm effectively if, for example, a developer knows how to create and/or follow the test procedure the team has agreed upon. So instead of having only experts with deep domain knowledge (in the letter analogy: "I"), we need experts who additionally have some knowledge of the fields of their fellow team members to their left and right (in the letter analogy: "T"). Some teams have already started this, and I'm looking forward to seeing more of it.
As we are using "offline" boards, it is quite some work to get access to KPIs like average lead time, average cycle time, etc.
The Delivery Managers are thinking hard about a method to get easy access to these numbers while keeping the benefits of the physical board. We have experimented with a few digital tools but decided to count manually until these tools work more reliably.
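Counting manually boils down to simple date arithmetic on the cards. A minimal sketch with hypothetical card data (the dates and field names are made up for illustration, not taken from our boards):

```python
from datetime import date
from statistics import mean

# Hypothetical cards transcribed from a physical board: when a story was
# committed (entered the input queue), when work on it started, and when
# it reached the "DoD reached" column.
cards = [
    {"committed": date(2013, 3, 1), "started": date(2013, 3, 4), "done": date(2013, 3, 9)},
    {"committed": date(2013, 3, 2), "started": date(2013, 3, 6), "done": date(2013, 3, 13)},
    {"committed": date(2013, 3, 5), "started": date(2013, 3, 6), "done": date(2013, 3, 11)},
]

# Lead time: commitment to done; cycle time: start of work to done.
avg_lead_time = mean((c["done"] - c["committed"]).days for c in cards)
avg_cycle_time = mean((c["done"] - c["started"]).days for c in cards)
print(round(avg_lead_time, 1), round(avg_cycle_time, 1))  # averages in days
```

Keeping only these three dates per card is enough to track both KPIs without giving up the physical board.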
Then there is this amazing thing that happened while we were introducing Kanban in engineering. After some time, more and more people came up to us and asked: "Hey, what are you doing with all these boards? Please explain - the teams seem to like it, and it looks very interesting."
After a short time, more and more boards popped up on their own. I had my hands full explaining to some of my coworkers that you can't copy and paste Kanban from one team to another. Even our CEO came up to me and asked whether I could introduce this method of working to the other teams.
So what I did was explain, with the help of the "cargo cult" example, how important it is to first understand the principles and rules of Kanban before "starting" with a board. Our CEO and I agreed that I would start within our Brand & PR team by understanding how they work and, based on that, helping the team to evolve.
In the meantime my fellow Delivery Managers kept coaching and helping our four engineering teams.
The Brand & PR team opted for a shallow implementation of Kanban (see "Depth of Kanban"). They built a board to focus on transparency of status and prioritization, and they limit their WIP with the help of avatars. They established two stand-ups per week and decided to have a Delivery Manager facilitate retrospectives every two months.
I have moved on and am working together with the sales team now. More departments are waiting for “their turn”.
As I write this, I am becoming more confident that I need to elaborate more on our Delivery Management.
All of the above wouldn't have been possible without the great teamwork of all participants. From top management through middle management to no management, we shared the spirit that we need to change things if we want to grow and improve. Of course, growing is often associated with growing pains.
Everything I have described - and much more - took place in a time period of about five to six months. Some of the questions and problems were obvious and some were hidden behind more distracting facts.
Many companies choose to get an external consultant for this kind of work. We at PARSHIP decided to have us Delivery Managers do "the job". Delivery Management combines different roles - Scrum Master, Agile Coach, Project Manager, and Flow Manager. We take the necessary bits and pieces of these roles and combine them into a more generalist (and neutral) approach to helping the business evolve. And besides: we are part of the company. It is our business as much as it is the business of the people we are helping. Sometimes this can be a hindrance; in our case it worked out fine and helped a lot in solving "impediments".
…maybe this is a good reason for another article about Delivery Management?
PARSHIP (www.parship.com), a subsidiary of the Georg von Holtzbrinck publishing group, is Europe's leading online matchmaking agency for single people with high standards. The company launched its pioneering service in Germany on Valentine's Day 2001, opening up a new market with its innovative approach. With its scientifically based matching system, PARSHIP has to this day supported millions of European singles in their search for love and a long-term relationship, establishing itself over the past years as the No. 1 online matchmaking agency across the European market.
A team of 160 employees puts its energies into bringing happy couples together and establishing new benchmarks for the online dating business as a whole. While the international customer services team (PARSHIP Service GmbH) takes care of members' questions and concerns, numerous psychologists and sociologists across Europe work constantly at optimising the scientific PARSHIP Principle®.
PARSHIP is active in 13 countries: Austria, Belgium, Denmark, France, Germany, Ireland, Italy, Mexico, Netherlands, Sweden, Spain, Switzerland and the United Kingdom. The international headquarters are in Hamburg, Germany.
For contact and further insights into PARSHIP please contact
Marc Schachtel (CTO)
20095 Hamburg / Germany
Phone +49 40 - 460026 - 514
A pragmatic optimizer his whole life, Marco has managed projects since 1999 and has been a practitioner of agile and lean methods since 2008. Knowing the strengths and risks of classic project management, he has been combining his PM experience with lean and agile practices ever since, complementing each with common sense. He helped introduce Scrum to the product development team, coaches the coaches, and helps the whole company evolve.
Since August 2013, Marco has headed Delivery Management at ZANOX.de AG.
Thanks to Alexander Fedtke, Udo Carls and Arne Roock for their feedback on this Case Study!