Industry Article – Nov 07 Issue
Quality: The Missing Link in Software Development
By Brad Johnson, Director of Product Marketing, Lifecycle Quality Management, Borland Software
Imagine if businesses across all industries – from pharmaceutical drug testing to automobiles to iPods – waited until their products came off the manufacturing line to determine how well they functioned. While we can argue that technological advances have moved us far beyond the days of a ‘trial and error’ approach, the reality is that software development continues to be more art than science. Consequently, project cancellations and failures have become ubiquitous, and cost overruns, schedule slippages, low quality and poor reliability have become disturbing norms in the software industry.
This isn’t to say that technology companies haven’t been hard at work developing tools and solutions designed to improve software application development processes and the state of software quality. In fact, it’s quite the contrary. Software development organizations have been and will continue to be under increased pressure to adopt more mature, comprehensive and proactive approaches to ensuring software quality. While traditional approaches to achieving quality relegate most software test and validation to late in the application lifecycle, many IT organizations are now placing an increased emphasis on testing more effectively and more frequently throughout the software delivery lifecycle.
Certainly, vendors offering automated software testing products have made strides in this area; however, the fact remains that many IT organizations still have little visibility into, or control over, what is actually happening during the development process. It’s 2007 – with all the knowledge, resources and technology available to us today, why are companies still struggling to consistently deliver high-quality software?
One reason is that many project teams still operate as individual ‘silos’, each using its own approach to quality. This results in disconnected practices that foster inconsistencies throughout the development process. Even worse, many development teams still rely on largely manual and homegrown approaches, such as documenting requirements and test plans in spreadsheets, digging through mountains of log files to isolate problems, and depending on disconnected bug-tracking systems to align QA and development teams. These manual, disconnected approaches add to the cost and complexity of quality practices, leading to the loss of quality-related information, duplication of effort, poor test coverage of critical functionality, limited visibility into overall application quality and, ultimately, unpredictable quality across projects. Companies need to strive to adopt standard processes for defining, measuring, managing and improving software quality – processes that begin at the start of a software project rather than depending on what happens at the end.
Instead of being based on guesswork, ‘Tribal Knowledge’ and anecdotal evidence, companies are now beginning to implement more efficient and effective software development processes that are based on metrics and objective measurements. What’s more, today we see that more companies are beginning to hold developers accountable for software quality in ways they weren’t before. Instead of leaving matters to the QA team to deal with during the test phase, many companies are distributing quality responsibility across the development lifecycle by providing the means for developers to address quality within their established processes and ultimately focus their energies on ‘getting software right’ from the start.
One successful approach to delivering quality software, called Lifecycle Quality Management (LQM), has emerged to help organizations infuse quality throughout the entire software development lifecycle. This approach encompasses four core quality processes – plan, verify and validate, improve, and manage:
Plan (Define and Prioritize)
Analyze and Classify Requirements
High-quality requirements should be delivered and prioritized based on business needs. This prioritization must then trace all the way down to the most granular unit- or object-level activities to ensure teams are united in their goals.
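As a rough illustration of this kind of requirement-to-test traceability, here is a minimal sketch (all IDs, names and priority values are hypothetical, not part of any specific LQM product):

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    priority: int                 # 1 = highest business priority
    traced_tests: list = field(default_factory=list)  # test IDs covering it

def untraced(requirements, max_priority=1):
    """Return high-priority requirements with no test tracing back to them."""
    return [r.req_id for r in requirements
            if r.priority <= max_priority and not r.traced_tests]

reqs = [
    Requirement("REQ-1", 1, ["TC-10", "TC-11"]),
    Requirement("REQ-2", 1),                  # high priority, no coverage yet
    Requirement("REQ-3", 3, ["TC-20"]),
]
print(untraced(reqs))   # the coverage gaps the team must close first
```

Even a simple report like this makes prioritization actionable: the team sees immediately which business-critical requirements have no verification activity attached.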
Perform Risk Analysis and Prioritize Quality Activities
Because risk is inherent in any software development project, teams that analyze those risks and develop their project plans and quality plans in light of them have the potential to minimize their impact.
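One common way to put such analysis on an objective footing is the classic risk-exposure calculation – likelihood of failure times cost of failure – sketched below with hypothetical project areas and estimates:

```python
def risk_score(likelihood, impact):
    """Classic risk exposure: probability of failure times its cost."""
    return likelihood * impact

# Hypothetical areas with estimated failure likelihood (0-1) and impact (1-10)
areas = {
    "payment processing": (0.3, 10),
    "report export":      (0.5, 3),
    "login":              (0.1, 8),
}

# Spend test effort on the highest-exposure areas first
ranked = sorted(areas, key=lambda a: risk_score(*areas[a]), reverse=True)
print(ranked)
```

The numbers are estimates, but ranking by exposure rather than by gut feel gives the quality plan a defensible ordering of activities.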
Define and Build Quality Plan
Without a map, it is hard to navigate new areas successfully. Organizations should take care to define application quality goals with agreed-upon criteria and measures that meet the application’s business needs.
Verify and Validate
Defined processes enable teams to function efficiently and effectively, because each member can see what is required of them in their project role. Organizations should also verify that development teams actually follow the selected processes and procedures; consistent reviews of the process lead to consistent improvements in efficiency.
Two heads are better than one when it comes to addressing most challenges. The same is true when it comes to ensuring software quality. Best practices for software quality include peer reviews of all types of work products (requirements, design, code, tests) to ensure they conform to customer requirements and organization standards.
Work Product Analysis
If quality requirements are not defined early, major architecture errors can be made, particularly when it comes to performance expectations. Aspects of the architecture should be examined and design choices made before much of the system coding begins. This type of early analysis – using static models or simulations – can provide assurance that a design can confidently be committed to code and that quality requirements can be met.
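A back-of-the-envelope example of this kind of pre-coding analysis, here using Little’s law to sanity-check a design choice against a hypothetical performance requirement (the load figures and pool size are illustrative assumptions):

```python
def concurrent_requests(arrival_rate, avg_response_time):
    """Little's law: in-flight work L = arrival rate (lambda) x time in system (W)."""
    return arrival_rate * avg_response_time

# Hypothetical quality requirement: 200 requests/sec at <= 0.5 s response time
needed = concurrent_requests(arrival_rate=200, avg_response_time=0.5)
worker_pool_size = 64   # proposed design choice

# If the pool cannot hold the expected in-flight load, revisit the design now,
# not after the system has been coded
print(needed, "in-flight vs pool of", worker_pool_size, "->",
      "revisit design" if needed > worker_pool_size else "ok")
```

A five-line model like this, run before any production code exists, can expose an architectural mismatch that would otherwise surface only in late performance testing.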
Create and Execute Test Cases and Suites
Creating appropriate, prioritized and efficient tests is the ‘art’ of the Quality Assurance professional. Harnessing the process of testing is often the difficult and mundane part. Effectively managing the creation, maintenance and execution of all manner of testing – manual or automated, functional or non-functional – thus becomes a challenge for every organization. However, this is where repeatability, visibility and economies of scale really enable consistent results from project to project.
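The idea of managing functional and non-functional tests as one repeatable run can be sketched with Python’s standard unittest module (the test cases and thresholds here are hypothetical stand-ins):

```python
import time
import unittest

class CheckoutFunctionalTests(unittest.TestCase):
    def test_total_includes_tax(self):
        subtotal, tax_rate = 100.0, 0.08
        self.assertAlmostEqual(subtotal * (1 + tax_rate), 108.0)

class CheckoutPerformanceTests(unittest.TestCase):
    def test_price_lookup_is_fast(self):
        start = time.perf_counter()
        _ = sum(range(10_000))          # stand-in for the real lookup
        self.assertLess(time.perf_counter() - start, 0.5)

def build_suite():
    """Assemble functional and non-functional tests into one managed run."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(CheckoutFunctionalTests))
    suite.addTests(loader.loadTestsFromTestCase(CheckoutPerformanceTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```

Because the suite is assembled programmatically, the same run can be repeated unchanged from build to build – the repeatability the article argues for.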
Improve (Analyze and Tune)
Analyze Results of Process Audits, Reviews and Tests
With every release, organizations must improve the quality of their software projects by analyzing the results of verification and validation activities and comparing them with the expected outcomes defined in the quality plan and in specific requirements. And it isn’t just about analyzing technology issues: analyzing the process itself leads to continuous efficiency gains.
Diagnose and Pinpoint Issues
In the short term, analyzing defects to determine their root causes may be perceived as time-consuming and costly, but it can save significant resources in the long run. Finding issues is only the preliminary goal of testing; fixing the problems completes the process.
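At its simplest, root-cause analysis means tallying where defects actually come from so the most frequent cause gets fixed at the process level first. A minimal sketch, with a hypothetical defect log:

```python
from collections import Counter

# Hypothetical defect log: (defect ID, root cause assigned during analysis)
defects = [
    ("D-101", "ambiguous requirement"),
    ("D-102", "missing unit test"),
    ("D-103", "ambiguous requirement"),
    ("D-104", "environment mismatch"),
    ("D-105", "ambiguous requirement"),
]

by_cause = Counter(cause for _, cause in defects)
# The most frequent root cause points to the process fix with the biggest payoff
print(by_cause.most_common(1))
```

In this made-up log, ambiguous requirements dominate – a signal to tighten requirement reviews rather than simply fix the five individual defects.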
Update Work Products and Re-Verify
The final phase of the improvement process is validating that improvements actually achieve their goals. The requirements, design, code or tests need to be repaired, and then reviewed or tested again to ensure that the repair correctly resolved the defect.
Manage (Track and Control)
Status Tracking and Reporting
Managers must have the necessary lifecycle quality information in hand at each stage of the SDLC to make the right decisions. Organizations must be able to obtain real-time reports on quality status and project progress, including information about the results of reviews and testing, coverage, and defect find-fix rates to enable efficient resource management and further understand release readiness.
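A defect find–fix rate, one of the metrics mentioned above, can be computed from nothing more than found/fixed dates. A minimal sketch with hypothetical defect records:

```python
from datetime import date

# Hypothetical defect records: (found_on, fixed_on or None if still open)
defects = [
    (date(2007, 10, 1), date(2007, 10, 3)),
    (date(2007, 10, 2), None),
    (date(2007, 10, 2), date(2007, 10, 5)),
    (date(2007, 10, 4), None),
]

found = len(defects)
fixed = sum(1 for _, fixed_on in defects if fixed_on is not None)
open_count = found - fixed

print(f"found={found} fixed={fixed} open={open_count} "
      f"fix rate={fixed / found:.0%}")
```

Tracked release over release, a fix rate that lags the find rate is an early, objective warning about release readiness – no anecdotes required.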
Control All Phases of Quality Management from Planning to Improving
As software travels through the delivery lifecycle, organizations need support for all of their quality processes, yet software development organizations have limited time and resources. To minimize costs and time to market, all resources need to be managed efficiently. Control systems enable organizations to effectively manage activities and assets that drive quality results.
The Future of Software Delivery
We are starting to see the shape of a very different future for software development – not only how it is created, managed and delivered, but how it is able to connect all stakeholders in an effort to improve quality and performance. By employing LQM strategies, stakeholders are able to increase control, predictability, visibility and efficiency over the entire software delivery process. LQM achieves this through a multi-pronged approach, which enables companies to infuse quality throughout the entire software development lifecycle. This approach enables teams to consistently deliver high-quality applications and services that meet business requirements, while systematically reducing costs, risk, defects, rework and time-to-market.
New advancements in LQM technology can help companies succeed at whatever ‘software delivery’ means for their organization – whether it’s service-enabling legacy applications, developing new applications, customizing packaged applications or some combination thereof. Let’s get good at software delivery. Let’s make it more of a managed business process that’s based on metrics and objective measurements, not just guesswork and anecdotal evidence. And finally, let’s find an established process, support and automate it with tools, and gain some confidence that our output is going to be reliable, predictable and of high quality.
Brad Johnson is Director of Product Marketing for Lifecycle Quality Management at Borland Software, a company dedicated to helping IT organizations transform software delivery into a managed, efficient and predictable business process. He is responsible for the product strategy and marketing of Borland’s Lifecycle Quality Management solution. In this role, Brad is dedicated to improving the project success rate for IT teams with a comprehensive quality management solution that supports quality early in the lifecycle with complete, testable requirements, helps developers build higher quality code, and leverages powerful test automation to improve efficiency and reduce costs. He brings over 13 years of IT experience to Borland and a proven track record of helping global enterprises continually improve the quality and value of their IT projects. Brad’s IT career experience has spanned development, operations, strategic alliances and sales. Prior to joining Borland, he held senior-level positions in Product Management and Product Marketing at Mercury Interactive and Compuware. Brad earned a Bachelor of Science in Business with a specialty in Management Information Systems from the University of Phoenix. For article feedback, contact Brad at email@example.com