I recently saw a funny picture of a distinguished looking gentleman with the following caption: "I don't always test my code, but when I do, I do it in production." I laughed when I first saw it as did most of my fellow IT folks, but after a few minutes the humor went away. The sad reality is that a lot of code never gets properly tested until it gets released into a production environment. And that is too late.
Worst Case Scenario
Almost 20 years ago I took a position as a developer at a firm that was distributing software on floppy disks and CDs. They thought they were on the cutting edge of technology because they had beaten all competitors in their space to market.
The problem was that when I stepped into that position, the ratio of customers to help desk calls was something like 1:1.5. For every 10 pieces of software we delivered, we had to handle 15 service calls. I spent 10 hours a day that first week taking calls on software I had never seen before.
Was the problem poor design or incompetent developers? No, the problem was that the product was not properly tested before it was released. In fact, we did not even have a separate testing or quality assurance team. The same people who wrote the code tested the product.
We ended up "standing down" the entire process until we were able to complete regression testing and discover all the bugs that needed to be fixed. Only then did we build and test our next release.
This brings me to the point of this little story: The reason software projects fail is that quality control is ignored, bypassed, or short-circuited. The testing team, which we will call the QA or Quality Assurance team, is a central component of any software development life cycle (SDLC). They need to be involved from day one, and they must be empowered to do their job properly.
Set Your Goal
In the end, every part of the project team has the same goal—delivering a quality project on time and on budget—yet the individual goals of team members are widely divergent.
The project manager and project owner are often more concerned with the time and budget part of the project. That leaves the quality component resting primarily in the hands of the QA team. It is the responsibility of that team to ensure that a fully functional product is released with a minimal number of defects.
The QA team must report back to a senior manager or VP with as much clout as the senior manager or VP sponsoring the project. Too often I hear QA leads say they need to ensure that the software passes testing by a certain date. That should not be the modus operandi. The QA team must establish their role in the SDLC early on. There are two major areas where the QA team can provide the assurance of quality—in the functionality of the product and in the performance and scalability of the product.
Functional Testing
Once the business requirements are set, the QA team needs to meet with the appropriate business analysts and translate the use cases into test cases. If there are only a couple of use cases—things like: 1) user logs on, 2) user does some stuff, 3) user logs off—they have their work cut out for them.
Lack of appropriate use cases is a signal that the project team does not know what they need to build. The creation of thorough, comprehensive use cases is the foundation of the testing process. The "log on" test case may be decomposed into dozens of test cases—based on user type, user location, type of machine used for log on, etc.
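To make that decomposition concrete, here is a minimal sketch using pytest parameterization. The user types, locations, client types, and the attempt_login() stub are purely illustrative assumptions, not drawn from any particular product.

```python
from dataclasses import dataclass
import itertools
import pytest

# Illustrative dimensions along which the single "user logs on" use case decomposes.
USER_TYPES = ["admin", "standard", "read_only"]
LOCATIONS = ["office_lan", "vpn", "public_internet"]
CLIENTS = ["desktop_browser", "mobile_browser"]

@dataclass
class LoginResult:
    succeeded: bool

def attempt_login(user_type: str, location: str, client: str) -> LoginResult:
    """Stand-in for the system under test; a real suite would drive the actual UI or API."""
    return LoginResult(succeeded=True)

# One "user logs on" use case expands to 3 x 3 x 2 = 18 distinct test cases.
@pytest.mark.parametrize(
    "user_type,location,client",
    list(itertools.product(USER_TYPES, LOCATIONS, CLIENTS)),
)
def test_login(user_type, location, client):
    result = attempt_login(user_type, location, client)
    assert result.succeeded, f"Login failed for {user_type}/{location}/{client}"
```

Each added dimension multiplies the matrix, which is exactly why the balance discussed next becomes an art rather than a checklist.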
Creating the right balance of test cases is an art. There is a fine line between "over testing" the obvious use cases and spending too much time and effort testing edge cases.
What is essential is that the test cases are created with the sole objective of fairly testing the product—not with the objective of quickly getting the product passed.
Empower QA
The QA team should provide its own input to the project plan, and they can only do that once they have a firm understanding of what they are testing. Only after they have identified the approximate scope and number of test cases can they provide a testing schedule.
That input should consist of something like: 1,200 test cases; three iterations; five days required for each iteration. The QA team cannot be tied to hard dates assigned by a PM and still be successful. If the first iteration of functional testing is scheduled for July 1 and on July 1 they have nothing to test, the completion date starts to slip.
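As a purely illustrative sketch of how that kind of estimate might be derived (the tester head count and per-tester throughput below are assumptions, not industry benchmarks):

```python
# Illustrative schedule estimate; every figure here is an assumption for this sketch.
TEST_CASES = 1_200
TESTERS = 6
CASES_PER_TESTER_PER_DAY = 40
ITERATIONS = 3

days_per_iteration = TEST_CASES / (TESTERS * CASES_PER_TESTER_PER_DAY)  # -> 5.0 days
total_testing_days = days_per_iteration * ITERATIONS                    # -> 15.0 days
print(f"{days_per_iteration:.0f} days per iteration, "
      f"{total_testing_days:.0f} testing days across {ITERATIONS} iterations")
```

The point is that the schedule falls out of the scope and the team's capacity, not the other way around.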
On the other hand, if the QA team does have something to test but begins testing and finds a 50 percent failure rate, they must be empowered to push back and reset the clock. I have seen too many projects where the development team does not bother to test basic functionality before the product is thrown over the fence to the QA team. It is the responsibility of the development team to smoke test their code for all basic functionality before QA is involved.
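A smoke test does not need to be elaborate. Something as small as the following sketch, run against the QA environment before hand-off, would catch a build whose basic functions do not work at all. The base URL and endpoint list are placeholders, not real addresses.

```python
import sys
import requests

BASE_URL = "https://qa-env.example.com"                      # hypothetical QA environment
SMOKE_ENDPOINTS = ["/health", "/login", "/search?q=test"]    # placeholder paths

def smoke_test() -> bool:
    all_passed = True
    for path in SMOKE_ENDPOINTS:
        try:
            response = requests.get(BASE_URL + path, timeout=10)
            passed = response.status_code == 200
            detail = f"HTTP {response.status_code}"
        except requests.RequestException as exc:
            passed, detail = False, str(exc)
        print(f"{path}: {'PASS' if passed else 'FAIL'} ({detail})")
        all_passed = all_passed and passed
    return all_passed

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```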
Iterate
Once the QA team starts to record defects discovered in testing, the development team begins the process of remediating those defects. It is critical that those defects remain open, and the fixes held back, until the next iteration of testing. Development teams have a habit of trying to do break-fix coding and deployment in the middle of a testing cycle. This is not acceptable.
Once a round of testing is accomplished, all defects are corrected, and the development team smoke tests those corrections. Only then is a new release of code compiled and deployed to the QA environment. This process continues until all defects are remediated or until the business owners agree that the test case does not represent a valid requirement of the project.
There are legitimate cases where implementing a particular piece of functionality that was part of the original requirements turns out not to be feasible. That is an acceptable and expected part of the SDLC that is flushed out during the testing process. What is not acceptable is arbitrarily removing test cases from the process simply because the product is not passing them.
Is it Scalable?
Performance testing is an interesting process. It must be conducted in an environment that duplicates the production environment as closely as possible. It is not acceptable to performance test in an environment that is one-fourth the size of production and then extrapolate the results by multiplying four-fold.
Very few software processes scale linearly. Creation of the right test cases for performance testing is critical. Performance test cases need to be patterned against anticipated user scenarios, but they also need to include every "type" of function.
The development team may steer QA away from a particular function that has 20-some inner joins on a database because they know it is an inherently inefficient process.
The performance testers need to understand the software well enough to properly build the right mix of test cases.
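One way to build that mix is to weight each transaction type by its anticipated share of real traffic, so that even the unpopular, expensive functions are exercised. A minimal sketch, with made-up transaction names and weights:

```python
import random

TRANSACTION_MIX = {         # anticipated share of user activity (illustrative only)
    "login":          0.10,
    "search":         0.45,
    "view_detail":    0.30,
    "complex_report": 0.05,  # the expensive multi-join query nobody volunteers to test
    "logout":         0.10,
}

def next_transaction() -> str:
    """Pick the next transaction for a virtual user according to the weighted mix."""
    names, weights = zip(*TRANSACTION_MIX.items())
    return random.choices(names, weights=weights, k=1)[0]

# Each virtual user in the load-test tool would call next_transaction() in a loop.
print([next_transaction() for _ in range(10)])
```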
Concurrent Users
Concurrency is one of the most misused and misunderstood terms in software use and requirements. The business requirements or non-functional requirements will provide the expected number of concurrent users.
We need to quickly define what that means: does 1,500 concurrent users mean 1,500 users logging on over the course of an hour, 1,500 logging on within a minute, or 1,500 users logged on simultaneously? Only once that is understood can the test cases and scripts be built. If the project is replacing an existing application, there should be metrics available to support the expected user numbers; if so, we can probably safely double that number for load testing. If the application is new, you will probably need to use a higher multiple.
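A quick back-of-the-envelope sketch shows how far apart those interpretations are once they are turned into load-test parameters; the figures are illustrative only.

```python
USERS = 1_500

# Three readings of "1,500 concurrent users" produce very different test profiles.
print(f"Spread over an hour : ~{USERS / 3_600:.2f} log-on requests per second")
print(f"Within one minute   : ~{USERS / 60:.0f} log-on requests per second")
print(f"Simultaneously      : {USERS} virtual users firing at once")
```

Each reading demands a different ramp-up profile in the scripts and a different amount of infrastructure just to generate the load.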
SLA
Then there are the SLAs. I saw a test plan for a web application that had an SLA of five seconds for page load for everything except log-in, which was given seven seconds. If the QA team is handed SLAs with unreasonably generous response times, it is their responsibility to question the business team.
There are lots of case studies and whitepapers available that provide generally accepted response times for most sorts of applications and functions. Users have little tolerance for high response times.
The performance team needs to help the business understand how response times translate to user acceptance or rejection. I have seen cases where a new bit of software was successfully deployed, but never used because it was too slow.
It is not enough to just perform tests—tests need to be interpreted and explained to the business owner. And then there is the really sordid part of performance testing. This occurs when a particular process or page provides results that are not even close to the SLA.
So, after three rounds of load testing with 3,000 users, the log-in page has an average response time of 9.78 seconds. Then someone decides that the "real" SLA for that process is 10 seconds. The QA team probably has to report success against the new SLA, but they still have an obligation to report that the SLA was changed in response to the poor performance results.
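When results like that are reported, percentiles serve the QA team better than a single average, because an average hides how many users actually breached the target. A small sketch with made-up sample data:

```python
import statistics

# Fabricated sample of page response times in seconds, for illustration only.
response_times = [4.2, 5.1, 6.8, 7.4, 8.0, 9.3, 9.9, 11.2, 12.5, 14.8]
SLA_SECONDS = 5.0

mean = statistics.mean(response_times)
p90 = statistics.quantiles(response_times, n=10)[8]            # 90th percentile
met_sla = sum(t <= SLA_SECONDS for t in response_times) / len(response_times)

print(f"mean {mean:.2f}s, 90th percentile {p90:.2f}s, "
      f"{met_sla:.0%} of requests met the {SLA_SECONDS}s SLA")
```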
Look at the Hardware
There are usually three parts to performance testing: load testing (where some multiple of the anticipated load is tested); stress testing (increase the number of users until the application "breaks" or falls over); and endurance testing (where the anticipated user load is tested for 12-24 hours).
The results are usually response times per page load or process, plotted against the number of users and time into the test. Response times are what we usually take to indicate success or failure, but we also need to look at what is happening to the servers during these tests.
If we have acceptable response times for all functions during a load test but discover that the application servers are all running at 85 percent capacity, there is a serious problem. Chances are the QA team will need to engage other teams during the test cycle to properly monitor resource use on the hardware and software components of the system.
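Collecting even basic utilization numbers during the test window makes that correlation possible. A minimal sketch, assuming the psutil package is available on the server being watched; the sampling interval and window are placeholders.

```python
import time
import psutil

SAMPLE_SECONDS = 5
WINDOW_SECONDS = 60       # illustrative; an endurance test would run for hours

samples = []
end = time.monotonic() + WINDOW_SECONDS
while time.monotonic() < end:
    # cpu_percent blocks for the interval and returns utilization over that span.
    samples.append(psutil.cpu_percent(interval=SAMPLE_SECONDS))

print(f"average CPU {sum(samples) / len(samples):.1f}%, peak CPU {max(samples):.1f}%")
# Peaks near 85 percent at the target load mean there is little headroom left,
# even if the response times still look acceptable.
```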
Friend or Foe?
The QA team and process may at times seem adversarial. If that is the perception, then one thing is certain—the SDLC process is broken. The QA team is the final (often the only) gatekeeper between the development team and the user. They will only be perceived as an adversary if the rest of the project team is more interested in getting the project done quickly than in delivering a quality product.
By the same token, the QA team should not be so tight with the rest of the project team that they become enablers in short-circuiting best practices. Software developers are often very smart people who like to operate outside of the process. That makes being a developer cool and fun, but it doesn't always mean that the end results are the best they can be.
Making sure they are is the job of the QA team. Good testing and good quality control are absolutely essential to delivering a good product. Respect your developers, but put your faith in your Quality Assurance team.
Please address comments, complaints, and suggestions to the author at prolich@yahoo.com.