Live Project Bug Tracking, Test Metrics, and Test Sign-Off

This is the concluding part of our “Software Testing training on a live project” series.

It covers defects, along with a few remaining topics that mark the completion of the Test Execution phase of the STLC.

In the previous article, while Test Execution was going on, we encountered a situation where the expected result of the test case was not met. Also, we identified some unexpected behavior during Exploratory Testing.

What happens when we encounter these deviations?

We obviously have to record them and track them to make sure that these deviations get handled and eventually fixed on the AUT.

#1) These deviations are referred to as Defects/bugs/issues/incidents/errors/faults.

#2) All of the following cases can be logged as defects:

  • Missing requirements
  • Incorrectly working requirements
  • Extra requirements
  • Reference document inconsistencies
  • Environment-related issues
  • Enhancement suggestions

#3) Defect recording is mostly done in Excel sheets or using a Defect Management software/tool. For information on how to handle defects via such tools, refer to the corresponding tool tutorials.


How To Log The Defects Effectively

We will now try to see how to log the defects we encountered in the previous article in an Excel sheet. As always, choosing a standard format or template is important.

Typically, the following columns are a part of the Defect Report:

  • Defect ID: For unique identification.
  • Defect Description: This is like a title to describe the issue briefly.
  • Module/section of the AUT: This is optional; it adds clarity by indicating the area of the AUT where the problem was encountered.
  • Steps to Reproduce: The exact sequence of operations to be performed on the AUT to recreate the bug is listed here. If any input data is specific to the problem, that information is to be entered as well.
  • Severity: To indicate the intensity of the issue and, eventually, the impact it might have on the functioning of the AUT. The guidelines on what values to assign to this field can be found in the test plan document, so please refer to the Test Plan document from article 3.
  • Status: Will be discussed further in this article.
  • Screenshot: A snapshot of the application showing the error as it happened.

These are some of the ‘must-have’ fields. This template can be expanded (e.g., to include the name of the tester who reported the issue) or contracted (e.g., by removing the module name) as needed.
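To make the template concrete, below is a minimal sketch (not part of the original template; the field names simply mirror the columns above, and the example values are purely hypothetical) of how such a defect record could be represented as a data structure:

```
from dataclasses import dataclass

@dataclass
class DefectRecord:
    defect_id: str                 # unique identification, e.g. "OHRM-001"
    description: str               # short title describing the issue
    module: str                    # optional: area of the AUT where the problem was seen
    steps_to_reproduce: list[str]  # exact sequence of operations, plus any specific input data
    severity: str                  # values and guidelines come from the Test Plan document
    status: str = "New"            # discussed further in this article
    screenshot: str = ""           # path or link to a snapshot showing the error
    reported_by: str = ""          # an optional extension, as mentioned above

# Hypothetical example entry (not an actual defect from the project):
defect = DefectRecord(
    defect_id="OHRM-001",
    description="Login fails with valid admin credentials",
    module="Login",
    steps_to_reproduce=["Open the login page", "Enter valid admin credentials", "Click Login"],
    severity="Critical",
)
```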

Following the above guidelines and using the template above, a sample Defect log/Report could look like this:

Sample Defect Report for OrangeHRM Live project:

Below is the sample Defect Report created in the qTest Test Management tool:

Defects are no good if we log them and keep them to ourselves. We have to assign them, in the right order, so that the concerned teams can act on them. The process of whom to assign them to and what order to follow can also be found in the test plan document. It is mostly similar to the defect cycle shown below:

Defect Cycle:

From the above process, it can be noted that bugs go through different people and different decisions on their way from being identified to being fixed. To track this, and to establish transparency about exactly what state a certain bug is in, a “Status” field is used in the bug report. The entire process is referred to as the “Bug Life Cycle”. For more information on all the statuses and their meanings, please refer to this Bug Life Cycle tutorial.
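To illustrate how the status field drives the bug life cycle, here is a minimal sketch (the status names and allowed transitions below are a common pattern, not a definition from this project; a real team takes them from its test plan or defect tool workflow):

```
from enum import Enum

class Status(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    WORK_IN_PROGRESS = "Work in Progress"
    RESOLVED = "Resolved"
    UNABLE_TO_REPRODUCE = "Unable to reproduce"
    REOPENED = "Reopened"
    CLOSED = "Closed"

# Which status changes the workflow allows (hypothetical; defined per project).
ALLOWED_TRANSITIONS = {
    Status.NEW: {Status.ASSIGNED},
    Status.ASSIGNED: {Status.WORK_IN_PROGRESS},
    Status.WORK_IN_PROGRESS: {Status.RESOLVED, Status.UNABLE_TO_REPRODUCE},
    Status.RESOLVED: {Status.CLOSED, Status.REOPENED},
    Status.UNABLE_TO_REPRODUCE: {Status.CLOSED, Status.REOPENED},
    Status.REOPENED: {Status.WORK_IN_PROGRESS},
    Status.CLOSED: set(),
}

def can_move(current: Status, new: Status) -> bool:
    """Return True if the workflow allows moving a defect from current to new."""
    return new in ALLOWED_TRANSITIONS.get(current, set())

print(can_move(Status.NEW, Status.ASSIGNED))     # True
print(can_move(Status.CLOSED, Status.RESOLVED))  # False
```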

A Few Pointers While Bug Tracking

  • When we are new to a team/project/AUT, it is always best to discuss an issue we encounter with a peer, to make sure our understanding of what really constitutes a defect is correct.
  • Provide all the information that is needed to reproduce the issue. A defect that comes back to the testing team with the status set to “Insufficient information” does not reflect positively on us. Check out this post – How to get all your bugs resolved without any ‘Invalid bug’ label.
  • Check whether a similar issue was raised earlier before creating a new one. ‘Duplicate’ issues are also bad news for the QA team (a simple duplicate check is sketched after this list).
  • If an issue surfaces randomly and we do not know the exact steps/conditions under which it can be reproduced, raise it all the same. At the risk of the issue being set to “Irreproducible/Insufficient information”, we still need to make sure that we report every possible glitch to the best extent we can.
  • The general practice is that the QA team members record their defects in an Excel sheet during the day and consolidate them at the end of the day.
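As a simple illustration of the duplicate check mentioned above, here is a sketch (the matching rule and sample descriptions are hypothetical; in practice most teams rely on the defect tool's search) that flags a new entry whose description closely matches an existing one:

```
from difflib import SequenceMatcher

def looks_like_duplicate(new_description, existing_descriptions, threshold=0.8):
    """Flag the new defect if its description closely matches an existing one."""
    for existing in existing_descriptions:
        if SequenceMatcher(None, new_description.lower(), existing.lower()).ratio() >= threshold:
            return True
    return False

# Hypothetical end-of-day consolidated log vs. a newly reported defect:
existing = ["Login fails with valid admin credentials",
            "Leave balance not updated after approval"]
print(looks_like_duplicate("Login failing with valid Admin credentials", existing))
```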

The Complete Defect Life Cycle

For our live project, if we were to follow the defect life cycle for Defect 1:

  • When I (the tester) create it, its status is “New”. When I assign it to the QA team lead, the status is still “New” but the owner is now the QA lead.
  • The QA lead will review the issue and, on determining that it is a valid issue, assign it to the Dev lead. At this stage, the status is “Assigned” and the owner is the Dev lead.
  • The Dev lead will then assign this issue to a developer who will work on fixing it. The status will now be “Work in Progress” (or something similar to that effect), and the owner is the developer.
  • For Defect 1, the developer is not able to recreate the error, so he assigns it back to the QA team and sets the status to “Unable to reproduce”.
  • Alternatively, if the developer was able to work on it and fix the issue, the status would be set to “Resolved” and the issue would be assigned back to the QA team.
  • The QA team will then pick it up, retest the issue and, if it is fixed, set the status to “Closed”. If the issue still exists, the status is set to “Reopened” and the cycle continues.
  • Depending on various situations, the status can also be set to “Deferred”, “Insufficient information”, “Duplicate”, “Working as intended”, and so on by the developer (a small sketch after this list traces Defect 1 through these statuses).
  • This method of recording defects, reporting them, assigning them and managing them is one of the important activities performed by the QA team members during the test execution phase. It is done continuously until a particular test cycle is complete.
  • Once Cycle 1 is complete, the dev team will take some time to consolidate all the fixes and rebuild the code into the next version that will be used for the next cycle.
  • The same process continues for Cycle 2 as well. At the end of that cycle, there could quite possibly still be some issues “Open” or unfixed in the application.
  • At this stage, do we still continue with Cycle 3? If yes, when will we stop testing?
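Purely for illustration, the walkthrough above can be written down as a status/owner trace for Defect 1 (the statuses and owners come from the steps listed above; nothing here is an additional project detail):

```
# Defect 1's journey, as described in the walkthrough above: (status, current owner).
defect_1_history = [
    ("New", "Tester"),
    ("New", "QA Lead"),                   # assigned to the QA lead, status unchanged
    ("Assigned", "Dev Lead"),             # reviewed and found to be a valid issue
    ("Work in Progress", "Developer"),
    ("Unable to reproduce", "QA Team"),   # developer could not recreate the error
]

for step, (status, owner) in enumerate(defect_1_history, start=1):
    print(f"Step {step}: status={status!r}, owner={owner}")
```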

Exit Criteria for the OrangeHRM Live Project Testing

This is where we use what we call the “Exit Criteria”. These are pre-defined in the Test Plan document. It is essentially a checklist that decides whether we conclude the testing after Cycle 2 or go for another cycle. It looks like the checklist below when filled in, considering a few hypothetical answers to the following questions about the OrangeHRM project:

When we look at the above checklist, there are metrics and a sign-off mentioned there that we have not discussed before. Let us discuss them now.
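As a rough sketch only (the criteria and numbers below are hypothetical; the real checklist and thresholds live in the Test Plan document), evaluating exit criteria could look like this:

```
# Hypothetical exit criteria checklist for the end of Cycle 2.
criteria = {
    "All planned test cases executed": True,
    "Pass percentage at or above the agreed threshold": 147 / 150 >= 0.95,
    "No open Critical defects": True,
    "Remaining open defects reviewed and deferred with stakeholder agreement": True,
    "Test metrics and summary report shared": True,
}

if all(criteria.values()):
    print("Exit criteria met - conclude testing and prepare the sign-off.")
else:
    unmet = [name for name, met in criteria.items() if not met]
    print("Another cycle is needed. Unmet criteria:", unmet)
```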

Test Metrics

We have established that, during the Test Execution phase, reports are delivered to all the other project team members to give a clear idea about what is going on in the QA execution phase. This information is important for everyone to gain confidence in the overall quality of the end product.

Imagine I report that 10 test cases passed or that 100 test cases were executed. These numbers are just raw data and do not give a very good perspective on how things are going.

Metrics play a crucial role in filling this gap. Metrics are, in simple words, meaningful numbers that the testing team collects and maintains. For instance, saying that 90% of the test cases passed makes more sense than saying 150 test cases passed, doesn’t it?

There are various kinds of metrics collected during the test execution phase. Exactly which metrics are to be collected and maintained, and for what time periods, can be found in the test plan document.

The following are the most commonly collected test metrics for most projects:

  • Pass percentage of the test cases
  • Defect density
  • Critical defect percentage
  • Severity-wise defect count

Check out the Status Report attached to this article to see how these metrics are used.
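As a minimal sketch (all counts below are made up purely for illustration), these metrics are simple ratios over the raw execution and defect data:

```
# Illustrative raw numbers; real values come from the test execution and defect logs.
executed, passed = 150, 135
total_defects, critical_defects = 40, 6
severity_counts = {"Critical": 6, "Major": 14, "Minor": 20}
modules_tested = 8   # defect density can also be measured per requirement or per KLOC

pass_percentage = passed / executed * 100
defect_density = total_defects / modules_tested
critical_defect_percentage = critical_defects / total_defects * 100

print(f"Pass percentage: {pass_percentage:.1f}%")
print(f"Defect density: {defect_density:.1f} defects per module")
print(f"Critical defect percentage: {critical_defect_percentage:.1f}%")
print("Severity-wise defect count:", severity_counts)
```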

Test Sign-Off / Completion Report

Just as we need to inform all the stakeholders that testing has started, it is also the QA team’s responsibility to tell everyone that testing has been completed and to share the results. So, typically an email is sent from the QA team (usually by the Lead/QA Manager) indicating that the QA team has signed off on the product, attaching the test results and the list of open/known issues.

