
Hitting the right notes in QA



Jeff Feinman
August 15, 2009
If the software development process weren’t buoyed by testing and quality assurance metrics, there would be a great deal of guesswork taking place.

It would be like a symphony orchestra getting up on a stage without a single sheet of music and trying to play Bach or Beethoven.

Only with software, the symphony grows more complex each year. With a constantly evolving development process that includes different IDEs, system integrations and a seemingly endless stream of new features, it can be difficult to keep sight of the value behind the metrics.

According to multiple testing and quality assurance software providers, good metrics should provide adequate information on a defect’s effect on the overall costs of producing the software or the likelihood of the application failing. Good metrics should also offer an idea of how compliant a developer is with industry standards or how secure a developer’s code is.

Bill Curtis, senior vice president and chief scientist for application management company Cast Software, agreed that one of the most important metrics of an application’s health in the long term is the percentage of defects detected before a developer gets to test.

“That’s the best prediction of long-term improvement in the quality of your software. Defects cost much less to fix in the design and coding phase than in testing, and the earlier defects are found, the cheaper they are to fix,” he said.

Chris Wysopal, cofounder and CTO of Veracode, said that while things like authorization problems can certainly be tested for in the final build or when an application is deployed, fixing them at that stage is very expensive.

“The best time to look for authorization problems is at design time,” Wysopal said. “At that time, you can do threat modeling, which is inspection more than testing. Then you have defects that show up when you’re writing the code, like buffer overflow, and that could be found by static analysis at code design time.”

Threat modeling is the process of assessing and describing the attacks to which a piece of software is vulnerable, in order to eliminate potential threats.


