With agile, focus on application performance
March 21, 2013
Agile is neither a well-understood development methodology nor a well-followed best practice, though it has the potential to be both. Many organizations fail with agile because they do not incorporate teams from across the application life cycle.
In the July 2012 Market Snapshot Report from Voke’s Lisa Dronzek and Theresa Lanowitz, only 28% of survey respondents reported success with agile. The report asserts that agile appears to be shifting the software engineering process to one that is focused on development—“to the exclusion of QA and operations.”
When implemented correctly, agile encompasses the entire application life cycle. It unites business, development and infrastructure with a common focus on speed, quality and value. That focus culminates in application performance as experienced by the customer or end user. Because performance continues to be the most critical factor affecting adoption and use of an application, agile shops must focus more on improving speed, measuring quality, and delivering value.
Speed in agile means time-to-market, or time-to-acceptable-completion, of an application: the time it takes from idea conception to delivery of a product to the customer. “Acceptable completion,” or “doneness criteria,” is paramount. In an agile process with continuous development, testing and deployment, a deliverable is considered done only when it meets all doneness criteria, both functional and performance-related.
The agile process should build in performance management throughout each stage, from concept to support. This does not put speed at risk: build-test-deploy automation provides a reliable and fast way to verify features and functions early and often throughout the development life cycle. That automation, combined with an agile process that has earned acceptance across development and infrastructure, speeds delivery of the highest-priority features and functions to the customer.
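As a concrete illustration, the sketch below shows one way such a gate might look as a scripted pipeline step. It is a minimal example under stated assumptions, not a prescription: the staging URL, the 30-second timeout and the two-second latency budget are all illustrative values, not taken from any particular toolchain.

# A minimal sketch of a combined functional-and-performance gate in a
# build-test-deploy pipeline. The URL and budget below are illustrative
# assumptions.
import sys
import time
import urllib.request

SMOKE_URL = "https://staging.example.com/health"  # hypothetical deploy target
LATENCY_BUDGET_SECONDS = 2.0                      # assumed performance criterion

def check_deployment() -> bool:
    """Pass only if the endpoint responds correctly AND quickly enough."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(SMOKE_URL, timeout=30) as response:
            functional_ok = response.status == 200
    except OSError:
        return False  # unreachable or timed out: treat as a failure
    elapsed = time.monotonic() - start
    return functional_ok and elapsed <= LATENCY_BUDGET_SECONDS

if __name__ == "__main__":
    # A nonzero exit code fails this pipeline stage and blocks promotion.
    sys.exit(0 if check_deployment() else 1)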
While time-to-market can be measured from idea to completion in concrete terms, measuring quality in a delivered application is more ambiguous. Organizations often measure the number of defects found in development and compare it to the number of production incidents discovered by infrastructure (or worse, by customers).
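In code, that comparison amounts to little more than a ratio. The sketch below computes a “defect containment rate” from those two counts; the function name and the sample numbers are illustrative assumptions.

# Sketch of the comparison described above: defects caught in development
# versus incidents that escaped to production. Counts are hypothetical.
def defect_containment_rate(dev_defects: int, prod_incidents: int) -> float:
    """Fraction of all known defects caught before reaching production."""
    total = dev_defects + prod_incidents
    return dev_defects / total if total else 1.0

# Hypothetical quarter: 120 defects found in development, 15 in production.
print(f"containment: {defect_containment_rate(120, 15):.1%}")  # -> 88.9%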
On the surface, this is a good objective measurement. However, quality must also be weighed against subjective application failures. The customer’s perception of failure is arguably more important than counted defects when it comes to user adoption and acceptance.
Understanding the tolerance or patience of users for any particular transaction is critical. Performance failures should be considered functional failures. If a transaction takes 30 seconds to complete and users do not wait around for results to be delivered, the function itself has subjectively failed.
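A test that encodes this principle might look like the sketch below, in which a correct-but-slow transaction still fails the build. The submit_order function and the five-second tolerance are hypothetical stand-ins for a real transaction and a measured user-patience threshold.

import time

USER_TOLERANCE_SECONDS = 5.0  # assumed patience threshold for this transaction

def submit_order(order_id: str) -> str:
    """Hypothetical stand-in for the real transaction under test."""
    time.sleep(0.1)  # simulate work
    return "confirmed"

def test_order_submission_is_correct_and_fast_enough():
    start = time.monotonic()
    result = submit_order("A-1001")
    elapsed = time.monotonic() - start
    assert result == "confirmed"  # objective, functional check
    # Correct but slow still fails: a user who abandons the page never
    # sees the correct result, so the function has subjectively failed.
    assert elapsed <= USER_TOLERANCE_SECONDS, (
        f"took {elapsed:.1f}s, over the {USER_TOLERANCE_SECONDS:.0f}s tolerance")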
Establishing performance thresholds within the build-test-deploy process, and automating the pass/fail criteria for each stage of the life cycle, helps detect both subjective and objective failures earlier. Doing so extracts more value from your automated build-test-deploy system, buys more time for remediation of critical failures, and avoids potentially damaging customer experiences.
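One way to express such thresholds is as explicit, per-stage latency budgets that tighten as a build approaches production. In the sketch below, the stage names and millisecond values are illustrative assumptions, not recommended numbers.

# Sketch: per-stage performance thresholds that automate pass/fail.
# Stage names and millisecond budgets are illustrative, not prescriptive.
STAGE_BUDGETS_MS = {
    "build": 1500,   # coarse timing checks on developer-facing builds
    "test": 1000,    # integration environment under synthetic load
    "deploy": 500,   # production-like staging, strictest budget
}

def gate(stage: str, p95_latency_ms: float) -> bool:
    """Pass/fail criterion for one stage; False blocks promotion."""
    budget = STAGE_BUDGETS_MS[stage]
    passed = p95_latency_ms <= budget
    print(f"{stage}: p95={p95_latency_ms:.0f}ms, budget={budget}ms -> "
          f"{'PASS' if passed else 'FAIL'}")
    return passed

if __name__ == "__main__":
    # Hypothetical measured values; a failure here surfaces the problem
    # while there is still time to remediate it.
    for stage, measured in [("build", 900), ("test", 1200), ("deploy", 450)]:
        if not gate(stage, measured):
            raise SystemExit(f"{stage} gate failed; fix before promoting")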