
Azure DevOps “Out of the Box” – Getting Started with Customizations

New to Azure DevOps? Here are a few customization recommendations that require minimal experience and deliver maximum value. User Stories are an essential part of delivering with agile methodologies, and Azure DevOps provides a basic template for creating a User Story: title, description, and acceptance criteria. However, there are a few additional fields the author of user stories can capture to get the most out of their agile journey, such as MoSCoW priority, Precedence, and Size Estimate.

In addition, there is a Marketplace (i.e., a library) of Azure DevOps Extensions that can enhance your users’ DevOps experience. This post covers the recommended extensions to apply to “Out of the Box” implementations of Azure DevOps.

Azure DevOps “Process” Updates: New Fields

Adding fields to a User Story is very simple, as long as you have access to do so. Upon opening your Azure DevOps (ADO) project, select “Project Settings”, and the “Project details” page should appear. Select the “Process” defined for that project, e.g., “Scrum”. Depending upon which Process type is selected, “Scrum” or “Agile”, you will see “Product Backlog Item” or “User Story”. Both may be used interchangeably. Note that only “inherited” processes can be modified, and only by the “Project Collection Administrators” group.

Process Change: Work Item Types

A list of Work Item Types appears. Select “User Story” or “Product Backlog Item”, and the layout of the work item will be displayed. Now you can add fields by selecting the “New Field” button.
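If you would rather script this than click through the UI, the REST API can create fields too. Below is a minimal sketch in Python using the requests library; the organization, project, PAT, and exact api-version are placeholders/assumptions, and note that a field created this way still needs to be added to the work item type’s layout (via the Process settings page or the Processes REST API).

```python
import requests

# Placeholders: your organization, project, and a personal access token
# (PAT) with "Work Items (Read & Write)" scope.
ORG = "your-organization"
PROJECT = "YourProject"
PAT = "your-pat-here"

# "Fields - Create" endpoint; the exact api-version may differ in your org.
url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/fields?api-version=7.1"

# Custom fields in inherited processes get a "Custom." reference name.
field = {
    "name": "Precedence",
    "referenceName": "Custom.Precedence",
    "description": "Execution order within the backlog, numbered by 10s",
    "type": "integer",
    "usage": "workItem",
}

response = requests.post(url, json=field, auth=("", PAT))
response.raise_for_status()
print(response.json()["referenceName"])  # -> Custom.Precedence
```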

User Story – MoSCoW for MVP

For a Minimum Viable Product (MVP), where is the line drawn to get the product “out the door”? A methodology called MoSCoW helps here; the capitalization is important, and the letters stand for:

  • “Must Have” – we aren’t going to production without it.
  • “Should Have” – borderline must-have, but it could fall off the MVP list if there is pressure to reduce scope to meet timelines, for example.
  • “Could Have” – a story identified but not prioritized for the currently targeted MVP.
  • “Won’t Have” – identified and then forgotten; it will never reach prod.
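When post-processing an exported backlog outside Azure DevOps, the MoSCoW categories can be treated as an ordered scale. A small hypothetical sketch in Python (this is not an Azure DevOps API, just an illustration of the ordering):

```python
from enum import IntEnum

class MoSCoW(IntEnum):
    """Ordered MoSCoW categories: lower value = higher priority."""
    MUST_HAVE = 1    # we aren't going to production without it
    SHOULD_HAVE = 2  # borderline must-have; may fall off under pressure
    COULD_HAVE = 3   # identified but not prioritized for the current MVP
    WONT_HAVE = 4    # identified, then deferred indefinitely

# Hypothetical exported backlog rows: (title, MoSCoW category).
backlog = [
    ("Export report to PDF", MoSCoW.COULD_HAVE),
    ("User login", MoSCoW.MUST_HAVE),
    ("Password reset email", MoSCoW.SHOULD_HAVE),
]

# The MVP cut line: everything at SHOULD_HAVE or above.
mvp = [title for title, prio in sorted(backlog, key=lambda r: r[1])
       if prio <= MoSCoW.SHOULD_HAVE]
print(mvp)  # -> ['User login', 'Password reset email']
```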

User Story – Precedence (Prioritization)

This field is reminiscent of the original BASIC programming language, which used line numbers 10, 20, 30, etc., for execution sequence. As in BASIC, implement precedence by 10s so there is room later on to fit in additional work items.
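Once a custom Precedence field exists, a WIQL query can return the backlog in execution order. A minimal sketch; the Custom.Precedence reference name assumes the field was created as shown earlier, and organization/project/PAT are placeholders:

```python
import requests

ORG, PROJECT, PAT = "your-organization", "YourProject", "your-pat-here"

# WIQL query: order active stories by the custom Precedence field.
wiql = {
    "query": """
        SELECT [System.Id], [System.Title]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'User Story'
          AND [System.State] <> 'Closed'
        ORDER BY [Custom.Precedence]
    """
}

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.1"
response = requests.post(url, json=wiql, auth=("", PAT))
response.raise_for_status()

# The WIQL response returns id/url pairs; fetch full fields separately.
for item in response.json()["workItems"]:
    print(item["id"], item["url"])
```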

Priority within the Sprint for a given team member

How should someone on the implementation team prioritize their work? This is especially important if the team runs out of time in a sprint; ordering by priority ensures the highest business or technology value is produced first.

Priority within a Sprint for all team members

Collectively, with input from the product owner or team tech lead, these are the most important work items to deliver within a sprint.

User Story: Size Estimate (paired with Story Points)

Relative, standardized effort estimations are essential so that everyone on the implementation team is “on the same page” when sizing user stories. Although “Story Points” is an “Out of the Box” field for User Stories, a “Size Estimate” field is not. Relative effort estimations I’ve used before are tee-shirt sizes (X-small, small, medium, large, X-large), which can be correlated to Story Points to attempt to quantify the effort in days.
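One possible correlation between the two fields, sketched in Python; the numbers are illustrative (they mirror the rough points-to-days scale discussed later in this post), not a standard:

```python
# Hypothetical mapping: tee-shirt Size Estimate -> Story Points -> rough days.
SIZE_TO_POINTS = {"XS": 1, "S": 3, "M": 5, "L": 8, "XL": 13}
POINTS_TO_DAYS = {1: 1.0, 3: 3.0, 5: 5.0, 8: 7.5, 13: 10.0}

def estimate_days(size: str) -> float:
    """Translate a tee-shirt size into an approximate effort in days."""
    return POINTS_TO_DAYS[SIZE_TO_POINTS[size]]

print(estimate_days("M"))  # -> 5.0 (one business week)
```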

User Story: Lead Developer

A custom “Lead Developer” field is valuable for quickly identifying who performed the work. The current “Assigned To” person may not be the developer who implemented the User Story; by the end of the workflow, it’s most likely a QA tester or the Product Owner accepting the story.

This could be helpful if you want to track each developer’s progress either by the SUM of Story Points or the COUNT of Stories.
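For example, given rows exported from a work item query (via Excel or the REST API), the rollup is a few lines of Python; the names and numbers here are made up:

```python
from collections import defaultdict

# Hypothetical exported rows: (Lead Developer, Story Points).
stories = [
    ("Ada", 5), ("Ada", 3), ("Grace", 8), ("Grace", 5), ("Linus", 13),
]

points = defaultdict(int)  # SUM of Story Points per developer
count = defaultdict(int)   # COUNT of Stories per developer
for dev, sp in stories:
    points[dev] += sp
    count[dev] += 1

for dev in sorted(points):
    print(f"{dev}: {count[dev]} stories, {points[dev]} points")
```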

Risks to Complement Issues

If you’re tracking “Issues,” an “Out of the Box” Azure DevOps work item, then why not add a custom object in the “Process” section called “Risk,” along with any fields you would like to track on that custom Risk object?

Azure DevOps Extensions

Analytics

Created by Microsoft, this extension may or may not already be rolled into the core Azure DevOps product. It’s ideal if you want to externalize in-depth reporting using Microsoft Power BI.
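Power BI connects to the Analytics OData feed directly, but the same endpoint is easy to smoke-test from code. A hedged sketch; the OData version segment (here v3.0-preview) and available columns can vary by organization:

```python
import requests

ORG, PROJECT, PAT = "your-organization", "YourProject", "your-pat-here"

# Analytics OData feed: one row per work item, filtered to User Stories.
url = f"https://analytics.dev.azure.com/{ORG}/{PROJECT}/_odata/v3.0-preview/WorkItems"
params = {
    "$select": "WorkItemId,Title,StoryPoints,State",
    "$filter": "WorkItemType eq 'User Story'",
}

response = requests.get(url, params=params, auth=("", PAT))
response.raise_for_status()
for row in response.json()["value"]:
    print(row["WorkItemId"], row["State"], row.get("StoryPoints"))
```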

Open in Excel

Created by Microsoft DevLabs, this extension may or may not already be rolled into the core Azure DevOps product.

Azure DevOps Office® Integration 2019

The best tool for importing and exporting work items between Azure DevOps and MS Excel. It can be downloaded from Microsoft.

Delivery Plans

Created by Microsoft, this extension may or may not already be rolled into the core Azure DevOps product. It’s the closest thing I’ve seen (for free) to a graphic depiction of delivery timeframes in a Gantt-like chart. You can’t print or export it, however, which is a massive inhibitor to sharing your timelines with stakeholders outside the ADO universe.

Estimate

Created by Microsoft DevLabs, this extension may or may not already be rolled into the core Azure DevOps product. It’s Planning Poker in Azure Boards. I enjoy Planning Poker, but this integration may be more convenient because it can save the Story Point values directly to the User Stories. Also, note that some corporate environments BLOCK “Planning Poker” sites at the firewall due to the words in the URL.

Feature timeline and Epic Roadmap

This Azure DevOps extension by Microsoft DevLabs is a close second to the “Delivery Plans” visualization of deliverables. Again, no export or print capabilities.

Retrospectives

This extension is a “Must Have” for all teams leveraging the Scrum Retrospectives session. This extension, built by Microsoft DevLabs, is highly configurable and is ideal for remote teams unable to perform this activity in person.

Recipe for Optimization: Waterfall, Agile, and Scrum

Many firms try to graduate from Waterfall to Agile without completing the journey. The team may be embedded in an organization whose leadership has strong ties to traditional project plans with milestones. How can three schools of thought coalesce into an SDLC where all sides (mostly) buy into the resulting process?

The challenge with integrating new tools and process updates is to make sure there are no gaps in the new, incremental process. The more changes in people, processes, and technology, the greater the need to independently assess the target state SDLC.

Capability Maturity Model (CMM)

The Capability Maturity Model (CMM) is a development model created in 1986 after a study of data collected from organizations that contracted with the U.S. Department of Defense, who funded the research. The term “maturity” relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes.

The model’s aim is to improve existing software development processes, but it can also be applied to other processes.

Capability Maturity Model (CMM) Wikipedia

Tools Help Shape and Reinforce Product Life Cycle

Process Requirements: Epics, Features, and User Stories

From a top-down perspective, a discrete hierarchy of requirement elements helps logically organize the product requirements and much more. An Epic is the highest level of requirements definition: a theme of Features bundled together, e.g., for a major release. Features are the next level of requirements definition and are associated with Epics as children. User Stories are the detailed-level requirements and are usually formulated as a narrative. Similar to use cases, there are personas or actors that operate on the product/system and drive the implementation of a Feature. Successfully defined user stories have “Acceptance Criteria” against which the QA team and/or Product Owner declares the User Story has been implemented according to spec.
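The hierarchy is easy to picture as a simple data model. A toy Python sketch (names are illustrative, not an Azure DevOps schema):

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    title: str                      # narrative: "As a <persona>, I want ..."
    acceptance_criteria: list[str]  # what QA / the Product Owner verifies

@dataclass
class Feature:
    title: str
    stories: list[UserStory] = field(default_factory=list)

@dataclass
class Epic:
    title: str  # a theme of Features bundled together, e.g. a major release
    features: list[Feature] = field(default_factory=list)

release = Epic("Reporting 2.0", [
    Feature("Export engine", [
        UserStory(
            "As an analyst, I want to export a report to PDF",
            ["PDF downloads in under 5s", "Layout matches the on-screen view"],
        ),
    ]),
])
print(release.features[0].stories[0].acceptance_criteria)
```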

Tools for Managing Requirements Implementation

Many SDLC requirements management products, such as Microsoft Azure DevOps and Atlassian JIRA, allow you to define a product backlog of Features and User Stories to be implemented by an implementation team. In addition, the QA implementation team members can create test coverage, i.e., associating Test Cases with each of the User Stories, to be executed once the user story has entered (or in parallel with it entering) some form of “Test Ready” state. Finally, the implementation team may create Tasks as children of a User Story to help granularly track the implementation, such as Database Tasks, UI Tasks, or Interface Tasks.

Agile Manifesto on Documenting Requirements

The Agile Manifesto reinforces the “right” amount of documentation:

Working software over comprehensive documentation

That is, while there is value in the items on the right, we value the items on the left more.

The Agile Manifesto

Classically, in a Waterfall SDLC, we await completed documentation such as the finalized Business Requirements document and technical specifications. Leveraging an Agile approach, a Sprint can incorporate incremental business requirements definition and iterate with evolving documentation. In addition, User Stories capture the requirement in a practical way, where we can see the Persona travel through the User Story, ultimately meeting the “Acceptance Criteria.”

There’s Nothing like a Good Gantt Chart

Gantt charts give visual timelines for tasks and milestones, showing dependencies between tasks, while predecessor definitions dynamically push out dependent work items. Typically, classic waterfall maps out milestones going well beyond the near term. Agile may look toward the delivery of one or two sprints ahead, with sprints varying between one and six weeks each. In some instances, teams applying the Scaled Agile Framework (SAFe) may instantiate a Program Increment, which attempts to plan eight or more weeks ahead.

There are several ways to overlay classic Gantt chart visuals on the product backlog delivery timeframes. Depending on the toolset you use, such as Microsoft Azure DevOps or Atlassian JIRA, these visuals may be provided “out of the box,” by leveraging 3rd-party extensions, or even by exporting the product backlog data to be reported on using a 3rd-party tool such as Microsoft Power BI.

Burndown Delivers Value

Neophytes to Agile may not initially be exposed to Burndown Charts. Scrum masters, akin to project managers, attempt to measure the health of initiatives using Key Performance Indicators (KPIs) and, in the case of Agile and Scrum, leverage sprints, story points, and average sprint velocity.

Burndown Release Chart
  • “Story Points Remaining” – All of the user stories contain “Story Points,” derived from collective, relative effort estimations. Each person on the team sizes each story relative to other stories previously estimated, using a consistent scale such as the Fibonacci Sequence. All implementation team members estimate each story and reveal their answers at the same time; then a consensus is reached for the story. Story Points Remaining is an aggregate of points for a defined major/minor release.
  • “Items Not Estimated” – stories in the “initiative” product backlog that have not yet been estimated. These can skew the burndown’s estimated completion date/sprint because their points are still unknown, i.e., the “Projected Completion” will not be accurate.
  • “Total Scope” – the total number of story points for the “initiative,” regardless of user story completion status. There may be an upward tick in Total Scope over the course of the initiative, as we are agile and able to accommodate changes or increases in scope.
  • “Remaining” – the bar chart that shows a downward trend in the remaining scope for the initiative. Remaining may also tick upward as “Items Not Estimated” become estimated.
  • “Burndown” – should be a downward trend; based on the tool that derives this graph, it may predict the projected completion of the initiative using several factors, including average velocity per sprint (a small sketch of that math follows this list).
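The projection math behind the chart is simple enough to sanity-check by hand. A sketch with made-up numbers:

```python
import math

# Hypothetical release burndown inputs.
total_scope = 240                     # story points estimated so far
completed = 150                       # points accepted to date
sprint_velocities = [21, 18, 24, 20]  # points earned in recent sprints

remaining = total_scope - completed
avg_velocity = sum(sprint_velocities) / len(sprint_velocities)

# Projected sprints to completion; "Items Not Estimated" would push this out.
sprints_left = math.ceil(remaining / avg_velocity)
print(f"{remaining} points remaining, average velocity {avg_velocity:.2f}, "
      f"projected completion in {sprints_left} sprint(s)")
```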

Daily Scrum v. Daily Status – Removing Blockers

Daily, weekly, and biweekly status update sessions with the implementation team are no match for Daily Scrum sessions, which primarily focus on Blockers. Blockers may be Issues that impede progress on the implementation of User Stories. We all focus on unblocking team members so they can implement stories and the team can earn Story Points.

Collective, Relative, Effort Estimations

The classic developer SWAG for effort estimations is “two weeks,” which may have no basis in reality. Performing relative effort estimations allows the team to apply a reproducible methodology: we compare the size of a change relative to other changes we have made to the system. Any scale will do, so long as you apply the method consistently. For example, you can use tee-shirt sizes: Extra Small (XS), Small (S), Medium (M), Large (L), or Extra Large (XL).

Some teams use a sequence of numbers, most notably the Fibonacci Sequence: 1, 2, 3, 5, 8, 13, 21, 34, and so on. With many of my teams, we use 1, 3, 5, 8, 13, and 20, a “modified” Fibonacci Sequence for 3-week sprints. If user stories are the team’s discrete unit of requirements to implement, each story can have “Story Points,” populated using this sequence. Your team can equate:

  • 1 – one day or less; ideal for a small change or spike
  • 3 – three days or less for a change to implement
  • 5 – one business week
  • 8 – a week and a half
  • 13 – two weeks
  • 20 – three weeks
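One way to make the consensus step concrete: collect everyone’s simultaneous estimates and snap the result to the team’s scale. A hypothetical sketch (real teams resolve large spreads by discussion, not arithmetic):

```python
# Modified Fibonacci scale used above for 3-week sprints.
SCALE = [1, 3, 5, 8, 13, 20]

def consensus(estimates: list[int]) -> int:
    """Average the simultaneously revealed estimates, then snap to the
    nearest value on the team's scale."""
    avg = sum(estimates) / len(estimates)
    return min(SCALE, key=lambda points: abs(points - avg))

# Five team members reveal their estimates at the same time.
print(consensus([5, 8, 5, 13, 8]))  # -> 8
```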

When deriving “Story Points,” the implementation team must agree that story points are inclusive of system integration testing.

Perception – Stakeholder Point of View

Stakeholders want a holistic view of project/product health. Actually, that is just some stakeholders; others may just want to know how many open Bugs currently exist with a severity of one. The Scrum Master can develop dynamic reports and dashboards in Azure DevOps and other tools for whoever wants a peek into the product/project health.

Charts help communicate a message and help shape our point of view. Different project stakeholders have different needs and perspectives. Both Agile principles and Waterfall methodologies have inspired visual mediums that reflect the Key Performance Indicators (KPIs) of a project or product evolution.

Agile, what have you done for me lately?

At the end of each sprint, during the Scrum Sprint Close ceremony, the implementation team members demonstrate/discuss each of their completed user stories. The Product Owner (PO) accepts or reopens each user story based upon whether its Acceptance Criteria have been met. Each accepted user story “earns” its Story Points for the team, and the points accumulated in a sprint constitute the team’s velocity for that sprint.

There are lots of ways the Sprint Close can go “Pear Shaped”.

  • “Acceptance Criteria” were not as detailed as required; the user story results were not entirely what the Product Owner expected.
  • The implementation team took on too many stories and was not able to start/complete the projected stories for the sprint.
  • By failing to deliver the Story Points committed at Sprint “Open/Planning,” the team’s average sprint velocity will likely go down.

As a team, make sure you are prepared for the Sprint Close by performing Product Backlog Refinement days beforehand to confirm things like “Acceptance Criteria” verbiage with the implementation team and the Product Owner. Work in Progress (WIP) limits can help the team focus on its bandwidth by constraining how many user stories the team works on at one time, thus minimizing over-promising to the Product Owner.

Waterfall Gates Persist

  • User Acceptance Testing – the business team(s) insist on validating anything before it goes into the production environment.
  • Approvals from Internal Teams – conformity to organizational architecture standards, for example, must be approved when changes to the target-state architecture are proposed.

Questions and Comments Appreciated

Please let me know if I missed any other Agile, Scrum, and Waterfall areas that can cohabitate/coalesce into a cohesive SDLC.

Delineation of Work Items, Segregated by Tech Stack

Building any multitiered solution is not just creating a User Interface to render the data; there is most likely a service tier that fetches data from a database and serves that data up to the UI to be rendered. So how do you derive work items in your product backlog? One User Story with multiple child tasks, one task per tech stack tier (UI, service tier, and database)? Or three user stories, one per tech stack tier?

User Stories Defined, Per Tech Stack Tier

There are clear advantages to representing most work items as User Stories, such as deriving story points, determining the team’s average velocity, and producing a more accurate burndown chart depicting a downward trend in scope as user stories are implemented.

Using child Tasks of user stories may obfuscate the total work required to implement the solution unless that work is baked into the parent story’s points. Tasks are typically tracked in terms of hours, while user story points are calculated/derived from a collective, relative effort estimation, e.g., the Fibonacci sequence (1, 3, 5, 8, 13, 20, …); many teams overlay this scale to fit their sprint duration.

Feature and Story Planning – At a Glance

To organize each feature and its correlated user stories, teams may use a prefix in the title of the user stories, such as [UI] or [DB]. At a glance, a product owner or the implementation team can see whether a given feature has all the stories required to implement it. For example, if a new report needs to be created, the feature may require [UI], [API], and [DB] stories.
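The prefix convention is also easy to audit programmatically. A hypothetical sketch that flags features missing one of the expected tiers:

```python
# Hypothetical backlog rows: (feature, story title with tier prefix).
stories = [
    ("New sales report", "[UI] Render the report grid"),
    ("New sales report", "[DB] Add reporting views"),
    ("New sales report", "[API] Expose the report endpoint"),
    ("Audit logging",    "[DB] Create audit tables"),
]

REQUIRED_TIERS = {"[UI]", "[API]", "[DB]"}

tiers_by_feature: dict[str, set[str]] = {}
for feature, title in stories:
    prefix = title.split()[0]  # e.g. "[UI]"
    tiers_by_feature.setdefault(feature, set()).add(prefix)

for feature, tiers in tiers_by_feature.items():
    missing = REQUIRED_TIERS - tiers
    if missing:
        print(f"{feature}: missing {sorted(missing)} stories")
# -> Audit logging: missing ['[API]', '[UI]'] stories
```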

Drawbacks – Accepting a User Story as Complete

If you segment your product backlog user stories by tech stack, you may need to wait until all related stories (UI, API, and DB) have been implemented before the feature can be accepted end to end. For example, if your API and DB stories are developed but the User Interface (UI) is not, your QA/testing may not start until the UI story has been deployed. Of course, your tester could test the API using testing tools like SoapUI.