Software Bug Tracking Increasing Efficiency And Reducing Project Costs

The cost of erroneous software can be as high as 50 percent of the investment in software development, so the potential to improve software quality and reduce project cost is enormous. Software bug tracking can be an effective means to achieve quality, i.e., error-free software, at less cost. However, bug tracking is commonly misunderstood, incorrectly implemented, and often seen as an impediment and a cost to the organization. In this paper, we discuss the quality costs of erroneous software and present an approach to implementing an effective software bug tracking system within an organization.

1.     Introduction: Bugs are commonly defined as "failure to conform to specifications," e.g., incorrectly implemented specifications or specified requirements missing from the software. However, this definition is too narrow. Discussions within the software development community consistently recognize that most failures in software products are due to errors in the specifications or requirements, accounting for as much as 80 percent of total bug costs. Other studies have shown that the majority of system errors occur in the design phase [3].

2.    Implementing an Effective Bug Tracking Process: Software quality assurance departments can play a catalytic role in implementing an effective bug tracking process. A survey conducted in 1994 by the Quality Assurance Institute found that a mere 38 percent of organizations had formal software bug management processes, and 25 percent of survey participants said their organizations lacked consistent testing standards and procedures. The survey also reported that although 60 percent of organizations had testing standards and procedures, some admitted these were out of date and not followed. Recent surveys, nonetheless, suggest that more companies are now striving to improve their software development process through early bug identification and minimized resolution time, hence reducing project cost.
Effective bug tracking begins with a systematic process. A structured tracking process starts by logging bugs, continues with investigating them, and then provides the structure to resolve them. Bug analysis and reporting offer a powerful means to manage bugs and bug depletion trends, and hence quality costs.

3.    Integrate Software Development and Bug Tracking: Traditional approaches place testing at the end of the development cycle, immediately before implementation. Typically, testers receive a low-quality product at the tail end of development, when there is tremendous pressure to deliver even if the software is plagued with bugs. For early bug detection and resolution to take place, bug tracking and software development efforts should begin simultaneously.

Doing so solves a multitude of problems downstream [6]. Bug tracking must be implemented throughout the development lifecycle; doing this consistently leads to fewer bugs at release, yet such organizational foresight is rare. The Sentry Group reported that 62 percent of all U.S. organizations do not have a formal quality assurance or test group. The report also added that a large majority of these organizations place a much higher priority on meeting schedule deadlines than on producing high-quality software [7].

4.    Different Phases of Bug Tracking:
a) Requirement Phase: Bug tracking focuses on validating that the defined requirements meet the needs and the users' expectations about functionality. Sometimes, system-specific constraints may force the removal of certain business requirements [8].
 
b) Design and Analysis Phase: Efforts should focus on verifying and documenting that the application design meets the business rules and field requirements as defined by the business or user requirements [2]. For example, does the design correctly represent the expected user interface? Will it enforce the defined business rules? Would a simpler design reduce coding time and the documentation of user manuals and training? Does the design have other effects on the reliability of the program?
c)  Programming Phase: Bug tracking must emphasize ensuring that the programs accomplish the application functionality defined by the requirements and design. For example, has a particular piece of code caused bugs in other parts of the application or in the database? Is a particular feature visibly wrong?
d)  Maintenance and Enhancement Phase: During the maintenance phase, effort is spent tracking ongoing user issues with the software. During enhancement phases (there could be multiple releases), bug tracking focuses on establishing that the previous release is still stable after the enhancements have been added.
5.     An Effective Bug Tracking Process: To merely integrate bug tracking into the development process is not enough. A clearly defined bug tracking process is needed to ensure bugs are handled in an organized manner from discovery through resolution. Components of this process are described in the sections that follow. This process is progressive—bug evaluation cannot be successfully performed if the earlier components (such as describing bugs and prioritizing bugs) were not implemented [10].
        a)    Bug Repository: Once a bug has been discovered, the important first step is to log the bug into a bug-tracking database or repository. When a bug is logged, it must be fully described so that it can be reproduced during debugging, prioritized based on its severity, and have resources assigned for its resolution.
Bugs have a number of other attributes that should be recorded, such as
•    Bug number.
•    Date.
•    The build and test platform in which it was discovered.
•    The application requirement or business rule to which it relates.
•    Any supplementary notes.
It is also important that the repository offer a means to track the "life" of the bug (its resolution status) and historically report on all bugs discovered and logged for the project. It pays to have this system online and available to all development staff so that the assigned parties can update a bug's resolution status as work progresses.
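As a minimal sketch, independent of any particular tool and with hypothetical field names, a bug record carrying the attributes above could look like the following:

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class BugRecord:
    """Minimal bug repository entry; the field names are illustrative only."""
    bug_number: int
    date_logged: date
    build: str                 # build in which the bug was discovered
    test_platform: str         # test platform in which it was discovered
    requirement: str           # related application requirement or business rule
    description: str           # detailed description so the bug can be reproduced
    severity: str              # e.g. "Critical", "Important", "Average", "Low"
    priority: str              # e.g. "Resolve Immediately", "Normal Queue"
    status: str = "Open"       # current resolution status
    notes: List[str] = field(default_factory=list)    # supplementary notes
    history: List[str] = field(default_factory=list)  # the "life" of the bug

    def update_status(self, new_status: str, comment: str = "") -> None:
        """Record a status change so the bug's history can be reported later."""
        self.history.append(f"{self.status} -> {new_status}: {comment}")
        self.status = new_status

Keeping such records in a shared, online repository is what enables the historical reporting and resolution tracking described above.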
b)    Bug Described: Your organization's bug reporting procedures should require that details about each software bug be recorded when the bug is discovered, including a description, symptoms, sequence of steps to re-create it, and severity [4]. Bugs are of various types:
•    Interface bugs include incorrectly working menu items, push buttons, and list boxes.
•    Navigational bugs could be described as a window not opening when moving from one interface screen to another.
•    Functionality bugs could be incorrect calculation of salaries in a payroll system.
Merely logging "Adding new customer window does not work" is not enough. A detailed description, such as "The `Save' button on the `Add New Customer' window does not work," gives the developer adequate information to go straight to the specific problem and repair it. This saves time and avoids unnecessary interruptions for the developer to research the bug, thus reducing overall project cost.
c)    Bug Prioritized: Once a bug is logged and described, appropriate resources must be allocated for its resolution. To do this, the bug must be analyzed and prioritized according to its severity. Each bug is given a priority based on its criticality. Usually, it is practical to have four priority levels:
•    Resolve Immediately.
•    High Priority.
•    Normal Queue.
•    Low Priority.
A misstatement of a requirement or a serious design flaw must be resolved immediately, before the developer translates it into code that is implemented in the software: it is much cheaper to amend a requirements document than to make program code changes. The wrong font size for a label, by contrast, may be classified as "Low Priority."
The critical path for development is another determinant of bug priority. If one piece of functionality must work before the next piece is added, any functional bugs in the first piece are given the "Resolve Immediately" priority level. For example, in one project a query engine retrieved transactions matching user-specified criteria, upon which further processing was performed. If the query engine had been buggy, no further development (or testing) would have been practical, so all functional bugs of the query engine were prioritized as "Resolve Immediately." The urgency with which a bug has to be repaired is derived from the severity of the bug, which could be defined as follows:
•    Critical.
•    Important.
•    Average.
•    Low.
     A bug that prevents the user from moving ahead in the application, a "show stopper," is classified as "Critical"; for example, performing an event causes a general protection fault in the application. Performance bugs may also be classified as "Critical" for software that must meet predetermined performance metrics. An overly long processing time may be classified as "Important" because, although it does not prevent the user from proceeding, it is a performance deficiency. If the user is able to formulate work-arounds, such bugs may be classified as "Average"; bugs of severity "Average" will be repaired when the higher-category bugs have been repaired and if time permits. Certain graphical user interface bugs, such as the placement of push buttons on a window, may be classified as "Low," since they do not impede the application functionality. Although bug priority indicates how quickly the bug must be repaired, its severity is determined by the importance of that aspect of the application in relation to the software requirements.
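As an illustrative sketch (the exact mapping is an assumption, not a prescribed rule), the four priority levels and four severity classes above, together with the critical-path consideration, could be encoded as follows:

from enum import Enum

class Severity(Enum):
    CRITICAL = 1    # "show stopper," e.g., a general protection fault
    IMPORTANT = 2   # e.g., an overly long processing time
    AVERAGE = 3     # a work-around exists
    LOW = 4         # cosmetic, e.g., placement of push buttons

class Priority(Enum):
    RESOLVE_IMMEDIATELY = 1
    HIGH_PRIORITY = 2
    NORMAL_QUEUE = 3
    LOW_PRIORITY = 4

def triage(severity: Severity, on_critical_path: bool) -> Priority:
    # Illustrative rule: severity drives priority, but any functional bug on the
    # development critical path is resolved immediately.
    if on_critical_path or severity is Severity.CRITICAL:
        return Priority.RESOLVE_IMMEDIATELY
    if severity is Severity.IMPORTANT:
        return Priority.HIGH_PRIORITY
    if severity is Severity.AVERAGE:
        return Priority.NORMAL_QUEUE
    return Priority.LOW_PRIORITY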
d)   Structured Resolution: The bug tracking system also must ensure that the bug progresses in an appropriate sequence from discovery through resolution. Each bug is given an appropriate status; for example, a new bug is given the status of "Open," and a bug under repair has the status of "Assigned." As repair work progresses, the status of each bug is updated to reflect its state in the resolution process. A bug that has been repaired is submitted to the testing team through formal change control to be verified again. Only if the fix passes the regression test will it be accepted and the bug assigned a status of "Closed." Other bug statuses could include "Deferred," if the bug is not to be fixed for the current release but may be resolved in a subsequent release, or "Enhancement," if a feature that is not part of the requirements has been suggested and may be reviewed as an enhancement for later releases.
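A minimal sketch of this resolution sequence as a transition table, using only the statuses named above (the exact set of allowed transitions is an assumption for illustration):

# Allowed bug status transitions, following the resolution sequence described above.
ALLOWED_TRANSITIONS = {
    "Open":        {"Assigned", "Deferred", "Enhancement"},
    "Assigned":    {"Closed"},      # a bug stays "Assigned" until its fix passes regression testing
    "Deferred":    {"Assigned"},    # picked up again in a subsequent release
    "Enhancement": {"Assigned"},    # reviewed as a feature for a later release
    "Closed":      set(),
}

def change_status(current: str, new: str) -> str:
    # Move a bug to a new status only if the transition follows the defined process.
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new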
e)     Communication: An effective bug tracking system must allow communication of the software's bugs, status, and changes to members of the development team and all others concerned. This has become an increasingly crucial element because people working on the same project may work not only in different parts of a building but even in a different state or for a different organization. Without an effective means to communicate bugs, bug tracking, and consequently achieving software quality, would be a nightmare. E-mail is an efficient vehicle for quickly informing software engineers and all concerned of bugs as they are discovered. Software engineers can then access an online bug repository as they receive E-mail on new and existing bugs. Similarly, E-mail also serves as a reply medium to inform testers that a bug has been repaired. Some bug tracking repositories, e.g., one set up in Lotus Notes, offer built-in communication features that can be used by both software engineers and testers.
Commercially available bug tracking software, e.g., Auto Tester and SQA Team Test, is more sophisticated in communicating bugs and their status to individuals or as a batch; it can also automatically inform the respective development staff and management of bugs as they are discovered. Although E-mail provides a means to convey information about bugs between the development and testing teams, regular formal bug tracking meetings also help keep a close eye on the number, types, and nature of bugs found, which may indicate how software quality is progressing through the resolution stage.
Bug analysis is discussed in more detail in the "Bug Evaluation and Analysis" component of this bug tracking process. Because the testing and development teams must work hand in hand toward achieving software quality, there must be continuous communication between them; informal or verbal communication alone is inadequate.
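As a hedged sketch of the e-mail notification step described above (the SMTP host and addresses are placeholders, not a prescribed setup), a bug tracking repository could notify the assigned engineer when a new bug is logged:

import smtplib
from email.message import EmailMessage

def notify_new_bug(bug_number: int, summary: str, assignee: str,
                   smtp_host: str = "mail.example.com") -> None:
    # Send a short e-mail so the assigned engineer can look up the bug in the
    # online repository and update its resolution status.
    msg = EmailMessage()
    msg["Subject"] = f"New bug #{bug_number}: {summary}"
    msg["From"] = "bug-tracker@example.com"   # placeholder sender address
    msg["To"] = assignee
    msg.set_content(
        f"Bug #{bug_number} has been logged: {summary}\n"
        "Please review it in the bug repository and update its status."
    )
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)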
f)     Continuous Bug Resolution: It costs much less to resolve bugs as soon as they are discovered. For example, in one current project a software product is undergoing two transformations: the entire application architecture is being revamped, and enhancements are being implemented for the next release. Revamping the architecture changes the fundamental "backbone" of the application, which is in itself a complex task [9]. This yields three categories of bugs:
•    The existing list of yet-to-be-resolved bugs from the original application.
•    Bugs that would come about as a result of revamping the architecture.
•    New bugs contributed by the new enhancements.
The project is therefore divided into smaller deliverables, and bug tracking is implemented for each deliverable. If resolution were delayed until later, the sheer complexity of the various deliverables would make resolving bugs inordinately challenging. At later stages, it would become a mammoth effort merely to identify and assign bugs to the respective software engineers. Additionally, once assigned bugs to repair, the engineers would have to remember what they implemented in the code perhaps months earlier, which incurs expensive investigation time. The best time to resolve bugs is when they are discovered. This is especially true in a RAD environment, where the application is developed through several iterations or builds. Each build adds an incremental amount of application functionality and related coding. Any bugs discovered in a particular build should be referred to developers immediately for resolution. The functionality added in the most recent build and the related program code are still fresh in the developers' minds, which leads to faster investigation of the root cause of the bugs and therefore more efficient resolution efforts. Deferring bug resolution until later in the development cycle wastes time and resources.
g) Bug Evaluation and Analysis: Most organizations consider it essential to constantly monitor and evaluate their performance, and this key practice is especially critical in bug removal. The overall success of your project largely hinges on effective bug resolution, so you need to know your bug removal status and the cost of achieving quality. For example, a bug trend analysis will indicate the number of bugs discovered over time. This analysis may even be further subdivided for bugs by status, functionality, severity, etc. Bug age analysis suggests how quickly bugs are resolved by category.
The type and extent of bug evaluation and analysis may be determined by the organization's cost objectives and delivery schedules. A few suggested analyses applicable to most software projects follow. To analyze bugs, the following measures (or those chosen as part of an organization's bug analysis strategy) need to be determined; a minimal sketch of how they can be computed appears after the list.
•    Bug status vs. priority.
•    Bug status vs. severity.
•    Bug status vs. application module.
•    Bug age.
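Assuming each logged bug carries a status, severity, module, and discovery date (as in the repository sketch earlier), two of these measures can be derived with a few lines; the sample records below are hypothetical:

from collections import Counter
from datetime import date

# Hypothetical bug records, as they might be exported from the bug repository.
bugs = [
    {"status": "Open",   "severity": "Critical", "module": "Query engine",   "logged": date(2024, 3, 1)},
    {"status": "Closed", "severity": "Low",      "module": "User interface", "logged": date(2024, 3, 5)},
]

# Bug status vs. severity (the same pattern works for priority or module).
status_by_severity = Counter((b["status"], b["severity"]) for b in bugs)

# Bug age: how long each unresolved bug has been open, in days.
bug_age_days = [(date.today() - b["logged"]).days for b in bugs if b["status"] != "Closed"]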
This information will not be available if the earlier steps of adequately logging bugs were not implemented as part of the bug tracking process. By comparing these measures from the current iteration with the results of the analysis of previous iterations, one can get an indication of bug trends, which are discussed further in the following two subsections.
Bug Evaluation. Although the evaluation of test coverage provides a measure of testing completion, an evaluation of the bugs discovered during testing provides the best indication of software quality. By definition, quality is an indication of how well the software meets a desired attribute, so in this context bugs are identified as "variance from a desired attribute." Bug evaluation may be based on methods that range from simple bug counts to rigorous statistical modeling. Rigorous evaluation can include forming a model (or setting goals) for the discovery rates of bugs and then fitting the actual bug rates observed during testing to the model. The results can be used to estimate the current software reliability and to predict how the reliability will grow if testing and bug removal efforts continue. However, given the field's current lack of a scientific model, of resources dedicated to performing such evaluations, and of tools to support them, an organization should carefully balance the cost of rigorous evaluation against the value it adds.
Bug Analysis. This means analyzing the distribution of bugs over the values of one or more parameters associated with a bug. Bug analysis provides an indication of the reliability of the software. Four main bug parameters are commonly used for bug analysis:
Status: the current state of the bug (open, being repaired, closed, etc.).
Priority: the relative importance of addressing and resolving this bug.
Severity: the relative impact of this bug to the end-user, an organization, third parties, etc.
Source: the part of the software (such as a module) or the requirement that this bug affects.
     Bug counts can be reported in two ways:
i) As a function of time, resulting in a bug trend diagram or report.
ii)  As a function of one or more bug parameters (such as severity or status), in a bug density report. These types of analysis provide a perspective on the trends or distribution of bugs that reveals the software's reliability. Bug trends follow a fairly predictable pattern in a testing cycle. Early in the cycle, the bug rates rise quickly. Then, in an adequately staffed test project, they reach a peak about midstream and fall at a slower rate over time. The project schedule can be reviewed in light of this trend. For instance, if the bug rates are still rising in the third week of a four-week test cycle, the project is clearly not on schedule. A rate of closing bugs that is too slow (judged against experience) might indicate a problem with the bug resolution process; for example, resources to fix bugs or to retest and validate fixes might be inadequate [1]. This is an important aspect of software project management: ensuring that software quality is progressing within the planned delivery schedule. Figure 3 displays bug status by software module; in each software module, a discovered bug is given a status of Open and assigned resources for fixing.
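A minimal sketch of these two report types, assuming each bug record carries a discovery date and the parameters listed earlier (the week-based grouping is an assumed choice):

from collections import Counter
from datetime import date

def bug_trend(bugs):
    # Bug trend: number of bugs discovered per ISO week of the year.
    return Counter(b["discovered"].isocalendar()[1] for b in bugs)

def bug_density(bugs, parameter):
    # Bug density: number of bugs per value of a chosen parameter (severity, status, ...).
    return Counter(b[parameter] for b in bugs)

# Hypothetical usage with records drawn from the bug repository:
sample = [{"discovered": date(2024, 4, 2), "severity": "Critical", "status": "Open"}]
print(bug_trend(sample))               # Counter({14: 1})
print(bug_density(sample, "status"))   # Counter({'Open': 1})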
6. Conclusion: Effective bug tracking contributes strongly to enhancing software quality and reducing development project costs. Using the broader definition of a bug ensures that not only resultant errors and nonconformance to requirements are discovered, but also variance from a desired attribute, including incomplete requirements. Searches for such bugs can then take place across all software development phases. By "shadowing" the software development process, bug tracking helps identify and report potential software problems early and acts as a catalyst for those problems to be addressed. By facilitating the discovery of bugs earlier in the development cycle, effective bug tracking is key to enhanced software quality and reduced overall project cost. Achieving this, however, requires a fundamental change in the ideology behind quality assurance and the software development process, as well as the introduction of the necessary tools to track and manage bugs. The bug tracking model discussed in this article will be useful for organizations moving in this direction. Careful planning and phased adoption of this model can make this approach a powerful software quality strategy.
7.   References:
1.    J. Anvik, L. Hiew, and G. C. Murphy (2006). "Who should fix this bug?" In ICSE '06: Proceedings of the 28th International Conference on Software Engineering, pages 361–370.
2.    J. Aranda and G. Venolia (2009). "The secret life of bugs: Going past the errors and omissions in software repositories." In ICSE '09: Proceedings of the 31st International Conference on Software Engineering.
3.    S. Artzi, S. Kim, and M. D. Ernst (2008). "ReCrash: Making software failures reproducible by preserving object states." In ECOOP '08: Proceedings of the 22nd European Conference on Object-Oriented Programming, pages 542–565.
4.    N. Bettenburg, S. Just, A. Schröter, C. Weiss, R. Premraj, and T. Zimmermann (2008). "What makes a good bug report?" In FSE '08: Proceedings of the 16th International Symposium on Foundations of Software Engineering, pages 308–318, November.
5.    S. Breu, J. Sillito, R. Premraj, and T. Zimmermann (2009). "Frequently asked questions in bug reports." Technical report, University of Calgary, March.
6.    P. Fritzson, T. Gyimothy, M. Kamkar, and N. Shahmehri (1991). "Generalized algorithmic debugging and testing." In PLDI '91: Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 317–326.
7.    P. Hooimeijer and W. Weimer (2007). "Modeling bug report quality." In ASE '07: Proceedings of the 22nd International Conference on Automated Software Engineering, pages 34–43.
8.    S. Just, R. Premraj, and T. Zimmermann (2008). "Towards the next generation of bug tracking systems." In VL/HCC '08: Proceedings of the 2008 IEEE Symposium on Visual Languages and Human-Centric Computing, pages 82–85, September.
9.    A. J. Ko and B. A. Myers (2008). "Debugging reinvented: Asking and answering why and why not questions about program behavior." In ICSE '08: Proceedings of the International Conference on Software Engineering, pages 301–310.
10.    B. Liblit, M. Naik, A. X. Zheng, A. Aiken, and M. I. Jordan (2005). "Scalable statistical bug isolation." In PLDI '05: Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation.