Delivery increments and the Procrustean time box

Iteration-based delivery processes like Scrum usually deliver in two-week batches, but our stories often don’t fit this structure. We ‘solve’ this by arbitrarily cutting stories up or stretching them across multiple iterations.

As Scrum is probably the best known iteration-based process and I’ve seen it implemented in multiple teams, I’ll use it as an example.

What does Scrum look like?

To demonstrate the issues with Scrum, it’s worth detailing the process:

  • each batch of work or ‘sprint’ has set-up ceremonies at the start (sprint planning) and closing ceremonies at the end (sprint review and retrospective)
  • there are Product Backlog Items (PBIs or ‘stories’) that are moved from a Product Backlog into a Sprint Backlog during sprint planning; the team commits to completing all the stories in the Sprint Backlog
  • once the sprint has begun, the team gets on with the work, focusing on meeting the commitment
  • at the end of the sprint the delivered product is reviewed, and the retrospective allows feedback on (and ongoing improvement of) the process

This sounds pretty straightforward, but stories can often get quite big – even the small ones. And that’s where it starts to come unstuck.

The problem with PBIs

If a PBI is too big to fit into a sprint, it causes problems. Usually, when PBIs are moved into the sprint backlog, they haven’t been thought through well enough – whether from a user experience, business logic or implementation perspective.

Getting this thinking into a PBI is often called ‘elaboration’ and is generally a prerequisite of a PBI being accepted into a sprint. As practice has evolved, there’s often a per-PBI mini-waterfall process where a story goes through various stages of elaboration by the Business Analysts, User Experience Designers, User Researchers and Technical Architects.

This has two effects:

 

  • developers get further from the end-users as they cease to be fed stories and start to get requirements
  • the process of elaboration in conjunction with the development work breaks the time-box limits – there’s simply not enough time in a single sprint for a UX, UR, BA and then a TA to look at and elaborate on a PBI

Organisations have various strategies for dealing with this. Some opt for untracked elaboration, where the non-developer delivery roles pick and choose stories to work on.

Most organisations go for a two-sprint cycle, with elaboration done in sprint n-1 and stories developed and shipped in sprint n. Some even end up with a three- or four-sprint cycle with elaboration (n-1), development (n), QA (n+1) and deployment (n+2), giving each story an overall lead time of six or eight weeks.

This can work, but expedited tickets often force specialised hot-fix processes, and the structure gives every story a fixed multi-week lead time, whatever its size.
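The fixed lead time falls out of simple arithmetic: each stage in the cycle occupies a whole sprint. A minimal sketch (the stage names and the two-week sprint length are illustrative assumptions):

```python
SPRINT_WEEKS = 2  # the usual two-week time box

def lead_time_weeks(stages):
    """Each stage occupies a whole sprint, whatever the story's size."""
    return len(stages) * SPRINT_WEEKS

# Two-sprint cycle: elaboration then development
two_sprint = lead_time_weeks(["elaboration", "development"])  # 4 weeks
# Four-sprint cycle: elaboration, development, QA, deployment
four_sprint = lead_time_weeks(
    ["elaboration", "development", "qa", "deployment"])       # 8 weeks
```

However small the story, it still pays the full per-stage toll.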

Is there a better way?  

Yes – and it’s called Kanban.

In a well-structured Kanban process there’s still a backlog, and PBIs are still sized to the smallest size that will usefully deliver business value. There’s still a commitment point, but it isn’t necessarily fixed to a delivery point; instead, the committed backlog is refreshed at more frequent intervals. When using Kanban, all stages of delivery are mapped and tracked and, crucially, each PBI works through delivery at its own pace, with progress tracked statistically.

There are a number of benefits from this but I’m particularly calling attention to the individual tracking of PBIs through the process. This allows them to be the size they need to be rather than the size of the box they’re put in.

 

Kanban board delivery increments
An example of a well-structured Kanban process
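One way to track PBIs statistically is to record when each ticket was committed and when it was done, then forecast from percentiles of the observed cycle times. A minimal sketch with made-up dates (the ticket names, dates and 85th-percentile choice are illustrative assumptions):

```python
import math
from datetime import date

# Hypothetical (committed, done) dates pulled from a Kanban board
tickets = {
    "PBI-1": (date(2019, 3, 1), date(2019, 3, 4)),
    "PBI-2": (date(2019, 3, 1), date(2019, 3, 12)),
    "PBI-3": (date(2019, 3, 5), date(2019, 3, 7)),
    "PBI-4": (date(2019, 3, 6), date(2019, 3, 20)),
}

# Days each PBI took to cross the board, at its own pace
cycle_times = sorted((done - committed).days
                     for committed, done in tickets.values())

# An 85th-percentile forecast: "85% of stories finish within N days"
idx = math.ceil(0.85 * len(cycle_times)) - 1
forecast = cycle_times[idx]
```

The forecast comes from observed flow rather than from the size of the box a story was put in.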

Who is ‘the stretcher’?

In Greek mythology, Procrustes (or ‘the stretcher’) invited passers-by to spend the night in his iron bed. Because nobody fitted it perfectly, he physically altered his guests, amputating feet or stretching bodies to make them fit.

What has this got to do with PBIs and Scrum?

By squeezing work into a reductive category, or enlarging it to fill the time available, you’re trying to control or predict the outcome of the unknown. This can have disappointing or unwanted consequences. Don’t be the stretcher – let your stories be the size they need to be.

 

Featured Post

Creating user-tested designs: a whistle stop tour

At Fimatix, we help our clients deliver while constantly learning, adapting and having fun. From digitising public services on large government projects to making sure our clients are meeting the Digital Service Standard, user needs are taken into account from start to finish.

An essential part of each and every project we work on is user research.

Why user research is crucial

When it comes to creating the best experience for your customers or audience, user research is a great way to check your service is helping users do the tasks they need to do. User research is led by user needs, helping the service you’re offering improve (and keep improving) based on those needs.

In order to create user tested designs from the user research you’ve carried out (which can be used by teams in the future), you need to develop a lean process. We’ve written an in-depth white paper all about it, but for a whistle stop tour, you’ve come to the right place.

How can you make sure user research is effective?

To get the most out of user research, you need efficient designs for user testing. Efficient designs will:

  • incorporate user needs
  • keep desired service goals in mind
  • have actual business change at their heart

The designs can be low or high fidelity, depending on which stage of the project you’re in. Once the designs are ready, they’ll need to be tested by the end users and updated based on the feedback gathered.

The process here is straightforward – user needs are adapted into designs, tested with users and ongoing feedback is implemented. From our experience, we know there can be stumbling blocks along the way, which is why we’ve developed a workflow to make sure we’re as efficient as possible during this stage.

The Fimatix workflow

We visualise our workflow using a Kanban board that’s both on the wall showing the high-level view and then captured in Jira – along with extra detail and conversations on the tickets.

  • User researchers come up with the user needs
  • User needs are discussed with designers and the product owner

If there’s a separate content designer, it’s wise to involve them and draw up a collaborative picture with them too. This conversation is to make sure the designers and product owners understand the user needs, with the designers being kept informed from the beginning.

  • Designer and content designer come up with a design (ideally a low fidelity prototype)

A prototype is a virtual simulation showing the interactions between the users and the designed interface. The user researcher takes these prototypes to the users and conducts a testing phase. Here users prod and poke, making sure it works as planned. This is followed by another conversation between the designers, product owners and user researchers – an opportunity to gauge reactions, and a really important checkpoint.

This internal checkpoint makes sure user needs are captured in the design and gives the product owner an opportunity to review it, making sure it doesn’t deviate from the overall service goal.

A decision is also made during this conversation as to whether the design is ready for wider user testing or needs further iteration.

  • User researcher takes prototypes to user testing

The findings of the user testing are then talked through with the product owner and designer.

  • Changes are adapted into design prototypes

The changes are attached to the relevant user stories to be further developed by the business analysts.

Every project is different, so each element of the workflow adapts to the needs of the project. If you want to learn more about Fimatix and how we could help you with your digital transformation project, get in touch at contactus@fimatix.com.


Why coding should be mandatory in schools

2018 is an important year.

It marks the centenary of women (some, not all) having the right to vote.

And it’s also the Year of Engineering – a government campaign recognising the importance of engineering across the UK to inspire and encourage young individuals to get involved.

Currently, only 17% of people working in technology in the UK are female, with just 7% of female students choosing computer science as an A level.

 

Hello world

After leaving school in 1988, I had zero exposure to computers, other than seeing my friend make ‘Hello world’ appear on a screen. I thought it was amazing.

From this I:

  • took A level maths and statistics
  • went on to take a social science degree
  • did a master’s in Tourism Management
  • had a work placement that turned out to need a database building so ….
  • taught myself how to code – on my first day!
  • got a job at Reuters (I was basically a human API)
  • realised that technology was shaping everything
  • started applying for developer graduate schemes
  • was accepted onto a Royal Bank of Scotland graduate scheme as an analyst coder
  • discovered a love for all things tech

I’ve never looked back.

 

Pictfor: Women in Tech roundtable

As a woman working in tech in 2018, it’s important to discuss, explore and discover the challenges women in technology face.

With this in mind, I was delighted to be a part of Pictfor’s roundtable discussion at the House of Commons – Women in Tech: How can we promote further diversity in the sector?

The day featured fantastic talks from women such as Dr Ruth McKernan CBE and Ivana Bartoletti. We touched on subjects such as:

  • promoting diversity in each sector
  • social media etiquette
  • workplace culture
  • equal pay


Mandatory coding in schools

The idea I brought to the session was simple: coding should be mandatory for everyone in education up to the age of 16.

More often than not, young individuals (girls especially) are quick to dismiss the idea of coding before they know if they like it and what it involves.

 

Why?

If coding was mandatory in schools across the UK, girls would at least be familiar with code and understand the basics. Then, at the age of 16, they’d be able to decide to continue or not.

But it’s more than just familiarity. This would hopefully increase diversity in the tech sector.

If more young people (males and females) are confident about using code, they’re more likely to consider or choose a technical career in future.

 

Make ‘mandatory’ fun

As we’re now in the Fourth Industrial Revolution, coding is arguably just as important as maths, English and PE.

Although compulsory, PE is made enjoyable through group activities and sport, in contrast to academic subjects such as maths and English. With this in mind, I feel coding could be handled in a comparable way.

Delivering coding classes in a similar way to PE could open the doors for industries and/or volunteers to get more involved in teaching code. This could be done via an online or streaming approach supplemented by volunteers. By doing it this way, students are regularly kept up-to-date with what’s new in code.  

It also avoids forcing teachers, who might not be confident in the subject, to teach coding classes. And schools won’t have to spend their (already tight) budgets training staff to get them to an acceptable level to teach it.

The Fourth Industrial Revolution

How can you get involved?

As cliché as it sounds, the future of coding depends on the next generation. It’s now our job to help encourage young people to take an interest in tech – especially young women.

So, where do we start?

I run Women in Agile – a London based meetup giving women working in agile environments the chance to share success stories, challenges and opportunities.

If you want to swap and discuss ideas with other women who work in similar roles, let’s get together.

 


Key issues in tech ethics

There’s definitely something about attending meetings at the Houses of Parliament. The corridors of power oozing history, the sense of events happening around you and the hint of a promise of a smidge of influence. So having the opportunity to opine on “What are the most pressing issues in tech ethics?” was too good an opportunity to miss.

The big agenda

To try and be representative, I asked my Facebook, LinkedIn and Fimatix communities the question and got a very interesting range of replies. The sheer range of these issues felt important in its own right.

Here are just some of the topics that came up:

  • As you’d expect after the Facebook / Cambridge Analytica revelations, targeted political advertising on social media and data privacy had lots of mentions
  • Fake News
  • The other obvious one was the impact of automation and robotics both in terms of software algorithms and autonomous devices e.g. driverless cars
  • There was also a lot of concern about hidden (or subconscious) bias in algorithms. This was a significant discussion at the meeting.
  • Digital literacy – ensuring that people understand what the possibilities and issues are. In the discussion we talked about data consumers becoming data citizens
  • There were concerns about regulation keeping up with innovation.
  • Profit motive v human needs i.e. is disruption always good?
  • The impact on the environment of tech e.g. mining rare earths and the constant replacement cycle of tech hardware.
  • Over-reliance on technology, to the detriment of human interaction, resilience and relationships
  • Global tech companies paying their fair share
  • How do we equip our young people with the skills to benefit from the digital economy, e.g. software apprenticeships?
  • You can’t have a discussion about tech ethics without mentioning the possibility of Skynet and the advent of battlefield robots
  • The other major concern was the opportunities for social engineering by governments through mandating digital first. The China Social Credit System was highlighted by several people. The reason I’ve used the Wired link above is that it does highlight that this is also possible in western societies.
Cambridge Analytica used personal information harvested from more than 50 million Facebook profiles without permission.
How does this translate?

My personal priorities are summed up by the word transparency. We need to see how algorithms are written to be able to challenge bias. We also need to see what data is being collected and how it is being used. Society also needs to see how money is being made and what purposes profit and taxes are being put to.

What emerged is that the All-Party Parliamentary Group on Data Analytics is planning to run a commission (an engagement exercise) on tech ethics. The meeting was a round table to frame the themes that this commission would look at, with the aim of recommending changes to legislation and regulation.

The themes that emerged are:

  1. Trust in the way that software is being built and used, particularly from consumers
  2. Avoiding bias, particularly in algorithms, and ensuring diversity
  3. Public understanding and skills (sort of summed up by the phrase moving people from consumers to citizens)
  4. Data and AI opportunities and risks including civil liberties
  5. The boundaries of acceptable use, particularly of AI

Changing the standard

My thoughts during the meeting were that there were some immediate steps that could be taken to improve things without reports or additional primary legislation.

  1. Fund and staff the regulators properly – particularly the Information Commissioner’s Office to ensure GDPR is implemented and policed properly
  2. We need knowledge about tech embedded in government. Make the changes to the civil service pay grades that have been talked about for years so that the regulators and the public sector can recruit and retain the right level of tech expertise.
  3. Enforce the existing laws and set up a dedicated online police force to prosecute fraud and hate crime. The model could be similar to the British Transport Police or parts of the Environment Agency, where the industry pays a levy to fund policing.
  4. We should apply the offline rules to online business: Uber has been ruled to be a taxi company, so Facebook is a publishing company, Airbnb is a hotel company and Google is a monopoly (like Microsoft in the 90s). Facebook would take its responsibilities much more seriously if it were fined for publishing hate crime and child pornography. Applying the age ratings for content on YouTube properly would make a big difference. Those of us in tech know that it really isn’t as hard as “big tech” makes out, but it is potentially costly.

My top priority for legislation beyond this would be to force some key algorithms to be made open source – I’m thinking of the monopoly ones, or those that provide a fundamental utility, e.g. internet search.

Tell us your thoughts

Do you agree that those outlined above are the most pressing issues in tech ethics? And what’s the best way to ensure these ethics are followed? We would be very interested in everyone else’s thoughts on this and happy to represent these views to the commission if / when it gets going.

The event was one of a series organised by the Parliamentary Internet, Communications and Technology Forum (Pictfor). To read more about the event, please see Pictfor’s website: http://pictfor.org.uk/blog/


Getting started with agile teams at scale: tip 1

1. One single backlog

Just to be clear from the outset, this blog is about when you’re dealing with more than one Scrum team. Let’s assume you’ve decided what product you are going to deliver (in an agile way), you are using Scrum as your empirical process framework and you now know you need to add to the single Scrum team that has forged the way ahead on the product.

Pretty soon you’ll get to the question: ‘well, if we have more than one team, won’t we need more than one backlog?’ Wouldn’t it be sensible to think that each team should have its own backlog and its own product owner?

Scaling Scrum teams – do we need more than one backlog?

The short answer is ‘no’. Let’s explore why.

Firstly, when scaling in a Scrum environment, the question of ‘how many backlogs?’ should be addressed from the very start – it will save you a lot of pain. Trust me.

There are many reasons why you might think having a separate backlog for each team is a good thing. These may include:

  • Each team will have its own dynamic and should be empowered to deliver as independently as possible.
  • Each team needs to size and commit to things that only they are on the hook for getting to Done, as agreed with the PO.

All of these examples are relevant and very valid, particularly in one-team Scrum. However, and this is the key thing, having more than one team introduces more variability, complexity and inter-team dependencies. In fact, it introduces so much more complexity that you need fresh thinking to address multi-team Scrum.

Don’t just take my word for it; both Scrum organisations (Scrum Alliance and Scrum.Org) offer frameworks for dealing with Scrum at scale.

  • Scrum Alliance offers LeSS (Large-Scale Scrum) as its chosen partner for scaling, which, as part of an initial set of scaling rules, recommends one backlog per product ‘that defines all of the work to be done on the product’[1].
  • Scrum.org, with its Nexus framework, advocates that ‘there is a single Product Backlog for the entire Nexus and all of its Scrum Teams’.[2]

So that’s grand – the two heavyweights of the Scrum community say you should have one backlog at scale.[3] So what? How will it help?

For me it helps on several levels.

It creates and consolidates a view that there is one product, and the items required to produce it need to be prioritised within the same context for the appropriate team to build them. In the grand scheme of things, one item is more valuable to the Product Owner than the next, so that is what is prioritised across the entire product (systems thinking, in other words).

Scrum Masters may have encountered instances where some teams working on the same product with individual backlogs have prioritised lower priority product items because the backlogs are silos of functional items prioritised at the team level. That results in what Larman would call a ‘local optimisation’[4] and therefore not optimised for the system. Having one backlog for the product is a way to avoid this kind of issue.
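The single-backlog pull model can be sketched in a few lines. The items, values and team names below are made up for illustration; the point is that whichever team is free takes the highest-value remaining item, so nothing lower-value jumps the queue:

```python
# One value-ordered backlog for the whole product
backlog = [
    ("checkout flow", 100),
    ("search filters", 80),
    ("audit export", 40),
]

def pull_next(backlog):
    """A free feature team takes the highest-value remaining item."""
    return backlog.pop(0)[0]

assignments = {team: pull_next(backlog) for team in ("team-a", "team-b")}
# "audit export" waits: no team starts it while higher-value work remains
```

With per-team silo backlogs, team-b might have started ‘audit export’ instead – Larman’s local optimisation in miniature.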

Equally, when teams are refining backlog stories, they will have a pretty good idea of which team will pick up a given story as it gets refined. If the ‘integrated increment’ (Nexus) is to be fully integrated and delivered by multiple teams, they need to understand any dependencies and co-ordinating activities that should happen during the two-week sprint. They have an opportunity to call this out in the cross-team planning session: Planning One (LeSS) or Nexus Sprint Planning (Nexus).

Of course, having to integrate the code on a regular basis will reinforce the team dependencies and equally, Scrum Masters working across the teams will also enhance the co-ordination (at least this is how the LeSS framework does it).

Most importantly, there is a flow gain from the approach. As the teams are multi-skilled non-component based functional teams, we can reduce variability and increase flow ‘because a user story can flow to any one of several available teams.’[5]

Questions about sizing are red herrings in this debate; sizing is not the point here. The point is cross-team collaboration and co-ordination, flow at scale, and prioritisation of the most valuable items in a scaled environment. One rule, however, should be that team velocity is not compared across teams in this approach. Furthermore, you can add a short cross-team refinement session where items are sized by representatives from each of the teams, before being more fully refined by the team most likely to take the item on in the sprints.

Neither does the multiple Scrum team approach devalue the ‘empowered teams’ Scrum philosophy. There is nothing different here – they are still Scrum teams, however they now have sight of what other feature teams in the system are working on and can plan and agree inter-team product dependencies accordingly.

So, in summary, the tip for this month is: have one backlog for each broad and wide product. How you plan for that also needs some adaptation of single-team Scrum ceremonies – but more about that next time…


Agile Women UK

We recently took over the sponsorship and running of Agile Women UK.

Agile Women UK is a place for women working in agile environments to share success stories, challenges and opportunities.

Sound like something you’d like to be a part of?

Let’s get together regularly and swap ideas with other women who work in similar roles and can relate to shared experience.

Our first event was held at the Groucho Club with great feedback – there’ll be more to come very soon.

Agile women UK meet up at The Groucho Club, London.

SDLC: The price of disagreement

When teams fail to agree and follow a common approach to developing and releasing software, it often results in delays, duplication and a mountain of technical debt.

This post explores the challenges behind one of the most important decisions your delivery team or programme will make: how to agree and follow the SDLC.

Are we talking about ‘Software’ or ‘Service’ development?

Actually, it’s both. SDLC commonly stands for the Software Development Life Cycle or the Service (System) Development Life Cycle, depending on profession and experience.

The lifecycle covers the end-to-end process of developing, releasing and maintaining code into a live environment in support of a service.

And, its primary purpose is to make sure teams follow a common, consistent approach to delivery at every stage.

What is the SDLC made up of?

When you boil it down, the SDLC itself consists of four main elements:

  • A set of processes – things that need to be done before you can progress to the next stage of the lifecycle
  • Some essential artefacts/documents – things that need to be produced at each stage of the lifecycle
  • A bunch of environments – to support development of code and (latterly) maintenance of software within that stage
  • Some governance – both governance of the SDLC (as a thing) and governance within SDLC (review, sign off points etc)

A deeper dive of these elements is another blog post, but it’s useful to list these four here, just so we understand one another.
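To make the four elements concrete, here is a hedged sketch of how SDLC stages might be described as data, with a single gate check across processes, artefacts and governance. All stage, artefact and gate names are illustrative assumptions, not a prescribed standard:

```python
# Each stage bundles the four elements: processes, artefacts,
# environments and governance
SDLC = [
    {"stage": "development",
     "processes": {"code review", "unit testing"},
     "artefacts": {"draft release notes"},
     "environments": {"dev"},
     "governance": {"peer sign-off"}},
    {"stage": "pre-production",
     "processes": {"integration testing", "performance testing"},
     "artefacts": {"test report"},
     "environments": {"staging"},
     "governance": {"release board approval"}},
]

def may_progress(stage, done, produced, approved):
    """A release leaves a stage only when its processes are complete,
    its artefacts exist and its governance gates are signed off."""
    return (stage["processes"] <= done
            and stage["artefacts"] <= produced
            and stage["governance"] <= approved)
```

For example, a release with code review and unit testing done and release notes drafted still can’t progress until the peer sign-off gate is approved.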

The Software/Service Development Life Cycle

Why is it important?

Following a common SDLC is critical to the success of any digital programme as it:

  • supports continuous delivery of code into the live environment as quickly as possible
  • enables safe, sustainable releases into live production that mitigate risk to existing live operations
  • ensures each release has an agreed level of service, technical and business support

Getting these three right really does matter.

That’s probably enough of the basics.

‘This all sounds perfectly sensible. What’s the problem?’

Like all good ‘sensible’ approaches, people have a wonderful knack of dropping spanners in the mix (yep, that’s you too).

So given we’ve already stated the fundamental importance of getting a working path to production in place, it’s worth exploring some of the challenges that large, digital programmes commonly face while trying to set this up.

  1.    ‘You don’t want to do it like that…’

In the (increasingly distant) days of the single supplier, said supplier would be reasonably expected to lay out how the SDLC was going to work, and everyone would be expected to follow it. That was that. Worry about the value later.

In the post-GDS landscape, we bring in different suppliers to provide expertise at specific points along the lifecycle. This is the right thing to do (you can read up on the logic in another post), but it brings its own challenges.

An absolute cornerstone to establishing a healthy SDLC is recognising and respecting the contributions different communities of practice make at specific points along the lifecycle. This is not just about engagement, but inclusion (the SDLC can only be as strong as its least collaborative link). Architecture, DevOps, Operations, Security and representation from the development teams make up the core SDLC community. But other communities, e.g. testing, will also have a valid voice.

It’s likely these communities will be represented by a variety of suppliers. They will have their views of what good looks like and what works best, based on their experiences and roles.

But, no one has a monopoly on SDLC wisdom. The trick is to collectively look at your specific programme with fresh eyes and gain consensus about a sensible approach that delivers value at every stage of the lifecycle. And, you should do this in the early stages of the programme.

It’s not going to be perfect, so be honest about your limitations as they present themselves and agree how you’ll work together to resolve them.

  2.    ‘I want speed, you want safety. What gives?’

The SDLC must:

  • support quick delivery of code
  • make sure it’s safe for the business to accept and support – checking and assurance is a critical part of the process

This creates a natural professional tension between the development and operations community because:

  • any checking or assurance activity that development teams consider unnecessary may be considered a blocker to speedy delivery
  • any compromise to checking or assurance activity, where not agreed, may undermine the integrity of the live service

But, this tension is a good thing and should be embraced. It forces collaboration. And, if done really well, means that people have to adopt a genuine awareness about how and where their role contributes to the end to end lifecycle.

Remember, good will is the most under-recognised currency on any programme. So proactively forge close ties with people you are going to be relying on when the pressure cranks up.

  3.    Ownership is going to change as the life cycle matures

A bit of a grey area this one. While the case for a dedicated SDLC community to collaboratively deliver a path to production is relatively easy to make, someone will need to have the final say. So, who has ultimate responsibility for the SDLC?

Eventual ownership will lie with the business. However, in the earlier stages of the programme, particularly before any major releases have occurred, the programme lead is likely to be the natural owner.

They will be closer to the day to day evolution of the programme as the communities of practice work towards creating a path to production. But, foresight should be given regarding the ‘transition of ownership’ as environments become live, service support kicks in and frequent releases become the norm.

Some practical things to consider:

  • Start with a set of SDLC principles. You’re going to disagree on some of the details down the line, so agreeing your principles upfront will provide invaluable points of reference to help resolve or avoid conflict.
  • Agree a common language to describe the SDLC and its components. People will bring different terms and phrases to the table. It doesn’t really matter which ones you use, just as long as they are understood and used consistently.
  • Once you’ve agreed your actual approach, trust the process. Large programmes will naturally be bumpy at times, but unspoken workarounds are the devil – they can break cross-working relationships and can cause serious failure demand (particularly if built on top of each other).
  • Implement an SDLC MVP as soon as you can. Months of endless discussion will result in stagnating code (from a release point of view) and a purely theoretical SDLC that isn’t improving based on feedback – it’s just developing based on people’s opinions.
  • Check in regularly. The SDLC is a living, breathing beast. So make sure the contributing communities of practice have a regular conversation to see how you are doing and support continuous improvement.
  • If the approach isn’t working, call it out early. Then change it. Make sure the relevant parties are all aware of the changes though!
  • Have a plan to improve and evolve. The SDLC is made up of several moving parts that will take time to establish. Iterating will be a necessity.
  • Finally, always think about the bigger, end to end SDLC picture. You are only one part in creating and maintaining a successful path to production, so understand what role you play and how you contribute to making it work.



Agile Business Conference 2016 presentation

Agile Business Conference 2016 presentation is now available to view and download here.


Scaling agile: What can I do before adding more people / cost?

When scaling agile, it’s important to keep front of mind that we are spending other people’s money. We should always think about delivering value for money from the outset and be transparent with ourselves and our stakeholders about value. A sponsor will ask ‘Are we getting value for money?’ and/or ‘Can’t we go faster?’. These are legitimate challenges, and as responsible professionals we need to have explored the options before adding more people and therefore cost.

So before you add people, what can you do to deliver more, faster?

To get to an effective delivery, here are the things I look at. After the first point, they are in no particular order as the priority will differ depending on the circumstances. I’m assuming there’s already a team in place and they’re already delivering.

Culture – Is there an open, supportive, learning culture?

If this isn’t in place, your team is neither as efficient nor as effective as it can be. Even hints of a closed, niggly culture will mean people are not looking to learn, and motivation and productivity will wither.

What’s worse is that scaling even a mediocre culture will exaggerate the flaws exponentially, and the negative traits will overwhelm anything positive. Your good people will leave and your initiative will start to fail.

User stories – Are your stories really ready to be played?

Particularly in the early days of delivery, there’s a tendency to underestimate the level of information needed to write the code and tests that deliver a user story. The concept of a user story as an “invitation to a conversation” can also be used as an excuse for not writing down the key aspects of delivering a feature.

The dialogue triggered by writing acceptance criteria, producing designs, estimating and defining performance is critical to both effective and efficient delivery.

So create a “definition of ready” and make sure that all your stories meet it before they are played. Then improve it as part of your regular retrospectives; there’s no such thing as a perfect story.
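A definition of ready is, at heart, just an explicit checklist. As a hypothetical sketch (the checklist items and function names below are illustrative, not taken from any real team’s definition), it can even be encoded so that a story is checked mechanically before it is played:

```python
# Hypothetical sketch: a "definition of ready" as an explicit checklist.
# The items mirror the dialogue mentioned above (acceptance criteria,
# designs, estimates, performance); a real team would choose its own.
READY_CHECKLIST = [
    "acceptance criteria written",
    "designs produced",
    "estimated by the team",
    "performance needs defined",
]

def is_ready(story: dict) -> bool:
    """A story is ready only when every checklist item is ticked."""
    return all(story.get(item, False) for item in READY_CHECKLIST)

story = {
    "acceptance criteria written": True,
    "designs produced": True,
    "estimated by the team": True,
    "performance needs defined": False,
}
print(is_ready(story))  # → False: performance needs not yet defined
```

Improving the definition in a retrospective then becomes a one-line change to the checklist, which keeps it visible and cheap to evolve.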

Prioritisation – Are you really working on the right things in the right order?

This is an interesting one, as the simplistic Agile mantra is that you should always be working on the feature most valuable to the organisation at any time. In theory this focuses on being effective; however, this mantra can drive inefficiencies.

From the perspective of a user, there will often be groups of features that only make sense delivered together or in a specific order. From a technical perspective, there is usually a set of foundations that are better put in place early to limit the amount of refactoring that would have to be done if they are implemented later. In both cases the high-value features might be more efficiently delivered later.

As usual, the key here is balance. Use the high-value features as a goal or mission, and recognise that putting some foundations in place is important to minimise “throw away” work. This requires careful facilitation of business, delivery and technical perspectives.

Frameworks – Have you got the right frameworks in place?

This is a relatively obvious one. In any software delivery, many of the concepts will have already been delivered by someone else as reusable components, and creating these again is wasteful. Most of the common ones will already be available in, for example, the .NET and Java frameworks.

For the new features that you are developing, if you find common patterns build them into the frameworks so that the next time they are needed development is accelerated.

For a one-team delivery, switching technical frameworks might be manageable; however, once you have many teams there is a massive switching cost.

As part of your Discovery / Foundations, pick a framework that works for your product, stick to it and extend it as necessary. Avoid mixing or switching frameworks, as this generates confusion and context switching.

This goes a little against the concept of emergent architecture but again balance / pragmatism is important.

Environments – Is your build, test and deploy pipeline mature?

Again, this might seem like an obvious one, but every large programme I’ve seen has had problems getting continuous integration and continuous delivery sorted, mainly because it is complex and there are important (but unexciting) operational aspects – such as security, logging, audit, user support, back-up and recovery – to get right.

There’s little more frustrating for delivery people than demonstrating code to users who want to use it but can’t. It’s one of those foundations mentioned earlier: you have to get your build and deploy pipeline in place before you can deliver working code. In extremis, stop writing code and send everyone not involved in building the pipeline on holiday until it works.
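As a minimal sketch of what “pipeline in place” means – assuming a plain shell runner, with stage names and environments that are purely illustrative – even a single script that fails fast at any stage is a start:

```shell
#!/bin/sh
# Hypothetical minimal pipeline: build, test, then deploy, stopping at the
# first failure. Stage names and environments are illustrative only.
set -e

build()     { echo "build: compile and package"; }
run_tests() { echo "test: run automated tests"; }
deploy()    { echo "deploy: release to the $1 environment"; }

build
run_tests
deploy staging
deploy production
```

Real pipelines belong in a proper CI/CD tool, but the discipline is the same: every change runs the same ordered stages, and nothing reaches an environment without passing the stages before it.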

“Right-plating” – Do you have an appropriate definition of done for the product / service you are delivering?

In the drive to get software live, particularly for an alpha or pilot, it is very tempting to cut corners in non-functional and operational acceptance testing. Conversely, an operations function, typically coming from an ITIL mindset, will look to over-document from habit and training.

The phrase “an appropriate level of rigour” is key here.

Your definition of done must include non-functional requirements such as maintainability, performance and security, but have you created a cottage industry of operational acceptance documentation?  Can you automate production of the documentation that is really needed?

Delivery Organisation / Process – Are you set up the right way for your product / service?

Adhering evangelically to just one Agile/Lean methodology is a recipe for poor productivity. The core of Agile is the same in all methods but, for anything other than the simplest deliveries, you need lean principles, the software engineering disciplines of XP, the management disciplines of DSDM and the focus on culture in Scrum. The blend will depend on your circumstances, but you will need some of all of them. Note you don’t need SAFe at this stage (if at all).

Make sure the team is completely focused on delivery. Minimise the number of people who are not dedicated to the team, and make sure that when they are with the team they are not distracted by their other responsibilities. This is particularly true of Product Owners, given their key role in delivery. I think POs having some operational contact is good, as they stay up to date with the current user journeys and needs; the issues come when they are still responsible for operations, because a business-as-usual problem can completely derail delivery.

Develop and maintain a disciplined rhythm

Context switching ruins productivity.  Unregulated feedback is just noise.

The issue is that a delivery team cannot just focus on the current context, i.e. the next one or two sprints. The team also has to listen to and act on feedback from users, and users need a forward view of what will be delivered when, so that they can plan business change. At a minimum, key members of the team, such as the Product Owner and Tech Lead, need to be in all these conversations; ideally you want everyone to contribute.

To minimise the productivity hit of context switching, each context needs its own forum. These forums need to be prepared and run well, and everyone, particularly senior stakeholders, needs to understand where their input contributes rather than generating noise. See Davina’s blog on setting an analysis rhythm to see what we mean by this in a practical way.

Governance

As we are spending someone else’s money, some governance is needed but excessive bureaucracy is not.  Effective decision making, particularly about spending, is critical to efficient delivery.

Talk to the sponsor about applying the GDS governance principles:

  1. Don’t slow down delivery
  2. Decisions when they’re needed, at the right level
  3. Do it with the right people
  4. Go see for yourself
  5. Only do it if it adds value
  6. Trust and verify

Make sure that your team can be trusted by “demonstrating control”. By this I mean that they are transparently demonstrating full Agile disciplines and are able to evidence that they are spending the budget wisely. If you are trustworthy you will be trusted.

Capability – Do you have people with the right skills, attitude and experience?

This is an area that many Agilists find difficult because they believe, as I do, that the retrospective prime directive applies in nearly all cases: no one sets out to do a bad job, and everyone is doing the best they can in their circumstances.

However, this can’t gloss over the fact that some people are more productive in their role than others. It is also a fact that many good delivery people dislike “free-riders”, who tend to be productivity hoovers, and an organisation tolerating this will tend to lose good people to those that don’t.

A brutal hire and fire culture also kills productivity, so a balance has to be struck.

My balance on this is that if you have a weaker member of the team who acknowledges that they need to learn, they should be given the opportunity and encouragement to do so.  If they don’t recognise it or aren’t prepared to learn then they need to be managed out.

For me, attitude to feedback and willingness to learn are key for both contractors and permanent staff, but clearly the level of tolerance of poor productivity should be lower for contractors.

In all circumstances as a leader in Agile delivery you have to be actively monitoring and managing the capability of the team.  All the research into the success or failure of programmes and projects, no matter which methodology is used, highlights that the key success factor is good people.

Do the hard work at the beginning to get the right people with the right attitude. Make sure that your hiring process is rigorous and repeatable. Get candidates to prove their skills at interview through relevant exercises, e.g. you want to see developers code and user researchers eliciting feedback from users.

Scale to make the organisation resilient and able to manage the complexities.

Mature the first team first, then add people

So, in summary, make sure that your first team is optimal before adding to it. You may find that you don’t need another one.

If you do need more than one team, getting the first one productive first means you have also defined what “good” looks like in your context and can scale with some confidence.

Thanks to my colleagues Praveen Karadiguddi, Ciaran Ryan, Rudiger Wolf, Davina Sirisena and Hugh Ivory for feedback on the draft.


Why scale agile?

I’ve just googled “why scale agile”. All the higher-ranked entries focus on the “how” and “what” of scaling agile and are typically trying to sell SAFe – highlighting the marketing genius that is Dean Leffingwell.

The highest-ranked answer that is relevant is a 3-part blog from “Adventures with Agile” that is erudite but doesn’t really answer the question. For balance, my last public foray into this topic, with the GDS governance team, wasn’t as simple and to the point as it could have been either.

So, let me be as clear and as simple as possible:

Why scale agile? To deliver more (product) faster.  

This is the primary answer: if you can deliver the outcome you are looking for with one team in an acceptable timescale, then you don’t need to scale. Even if you can’t deliver the outcome in the timescales, throwing more people at the problem won’t necessarily reduce the time to deliver but will increase cost, probably significantly.

Why scale agile? To transform the organisation.

The secondary answer to the question, to fully transform the organisation’s ways of working, follows on from the first: you won’t be able to transform into a digital/agile/lean organisation without proving that Agile is the only way to achieve their strategic aims, that it delivers significant benefits and that it delivers in their context.

This is the first in a series of blog entries on practical scaling of Digital/Agile initiatives based on the author’s experience of running and coaching large Agile programmes in both the public and private sectors. Take a look at the second:

Scaling Agile: What can I do before adding more people / teams / cost?

 


Collaborating for organisational transformation

 

Agile is more than a set of methods, practices and behaviours. Agile is an enabler for transforming organisations, as relevant in the public sector as it is in the lean start-up.

Agile transformation requires new approaches across a number of dimensions:
• Delivery (engineering): where the push for Agile normally starts, from practitioners influenced by education, social media, peers and pragmatism
• Governance (management): once the delivery teams start doing things differently, ways of governing and assuring are challenged to remain fit for purpose
• Organisation (culture): the changes driven from the Delivery and Governance dimensions will highlight issues with our organisation’s culture which need to be understood and addressed

Those of you in Delivery Teams using Agile to build products and services will often feel frustrated, hamstrung by the mechanisms and structures that delay your progress. You should realise that the leaders in your organisation want the same thing as you – delivery of value early and often. They just want to protect their investment, and they look for assurance about that. You can help by explaining how iterative and incremental delivery, with frequent demonstration of product, protects their investment. Providing easy access to your information radiators, and inviting them to visit you often, will help.

Those of you in Leadership positions want your organisations to be agile and adaptive, to react to the forces of change – reduced budgets, new legislation, and better informed customer and citizen needs. You may be frustrated by the pace and cost of change. You will be concerned about the risk of wasted investment, and the consequences of that in terms of investor, regulator and media scrutiny.

So you look for assurance and appropriate governance to safeguard your investment. If you have good Agile delivery teams, their very approach is safeguarding your investment. They will ask you to empower your best, most visionary people to work with them to deliver what you really need. They will ask for time to explore, make mistakes, learn. They will ask for your patience – don’t expect the false certainty of a two year plan – let them know your desired outcome, give them space to figure it out. Visit them as often as you can – they’ll welcome you.

Those of you responsible for governing and assuring are caught in the middle of this drive for agility and adaptability. You are expected to be the brokers between the Sponsors and the Delivery Teams, facilitating the means for ensuring that money is invested appropriately, and is being used effectively.

You will need to create the conditions whereby leaders can:
• make decisions about the most important things to do
• allocate skilled, knowledgeable and empowered people to the delivery teams
• come and see the progress for themselves

Delivery teams will expect that the information they generate as they work should be sufficient to demonstrate progress and control.
And everyone will expect you to ensure that governance approaches add value and don’t slow down delivery (easier said than done).

In a nutshell, the potential for Agile to enable organisational transformation can only be fully realised when all of you (Leaders, Managers and Delivery Teams) align your behaviours as you mature from focusing on the Delivery dimension (using Agile practices to deliver a specific product or service) through to the Organisation dimension (harnessing agility to create an adaptive, learning, evolving organisation).

Agile transformation
A challenging journey, but the potential rewards are great.

Ten years of Government Digital

Ten years ago today, the initial Beta of the National Packaging Waste Database (NPWD) went live for the first time. It was a collaboration between the Packaging Federation, the Environment Agency (EA) and the Oxford-based software house Solution 7. The beta was a raging success, with over 80% uptake, and reduced the time to produce the key Q1 numbers by six weeks. The full case study is here.

It was my first full-cycle Agile project management gig, and Ben Bradshaw’s endorsement of it in Parliament, “an unusual piece of government IT in that it has been successfully delivered on time and to budget”, still fills me with pride. What we didn’t know in 2006 is that NPWD would align so closely to Digital by Default (DbD), to the point of being a precursor for it.

The key principle behind Digital by Default, focus on the user need, was at the heart of the project and the resultant service. NPWD manages the evidence of recycling of packaging waste and was initiated by, and largely directly paid for by, the industry users that had to prove that they had met their recycling obligations. Users, both industry and EA, were represented on the board, and the project team consulted with recyclers, exporters, compliance schemes, the regulators and the producers on every feature delivered.

DbD services must also be transformational. NPWD was born out of the need to move away from a paper-based process that was regularly the subject of fraud (the evidence of recycling was, almost literally, a blank cheque), requiring significant changes in the regulations, e.g. moving to password-based “electronic signatures”, which were very radical at the time.

NPWD is still going strong; industry users rebelled against its planned replacement last year for not meeting their needs and being too expensive: the EA costs for running regimes like this are passed on to those being regulated.

At the time, we did an informal Digital by Default service assessment, and NPWD came out quite well. The only points where it might have failed are the use of open source technology, as it uses MS .NET and SQL Server; the use of GDS design patterns, as they didn’t exist at the time; and possibly point 12, about a service simple and intuitive enough that users can complete it first time. Our point here was that making it simpler would need a change in the underpinning EU regulations, and the users are professionals who have to understand the regime.

Looking back to 2006, there were other pockets of what is now seen as good digital practice emerging across government, e.g. at DVLA and Companies House.

So UK Government Digital is at least 10 years old. When its history is written, as it surely will be, which will be the first service that could have been classed as DbD? Was it the National Packaging Waste Database?

Perhaps we could generate a candidate list in the comments and ask the now sadly departed GDS visionary team to judge.
