In Agile development environments, the focus on system performance starts in the early development sprints, rather than pushing performance testing to the end (a hardening sprint) and waiting for the complete system to be ready.

There is no standard approach for specifying performance requirements or for carrying out performance tests. Nor are there standards for how to organize the Performance Testers / Engineering team: they can be embedded in the development sprint team (in which case it resembles the waterfall model, where performance testers have the last few days to do all the performance testing for the available use cases or features), run as a separate sprint team (performing performance validation for features released in the previous development sprint), etc.

Irrespective of how Performance Testers / Engineers are organized, do not worry if no Non-Functional Requirements (NFRs) are defined yet. Most of the time, non-functional requirements are not readily available; they usually evolve as system development progresses.

Here are the Key Tools that will help you

Tools for Server-side Performance Testing: Use open-source performance testing tools like JMeter / Grinder / Taurus, or commercial tools like HP LoadRunner, NeoLoad, BlazeMeter, etc., to carry out performance tests. Most of these tools support running performance tests continuously on a CI/CD platform.
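As an illustration, here is a minimal CI gate around a headless JMeter run, written as a Python sketch. It assumes jmeter is on the PATH and that results are written in JMeter's default CSV (.jtl) format; the test plan name and thresholds are placeholders to adapt to your own NFRs.

```python
# Minimal CI gate: run a JMeter plan headless, then fail the build if the
# error rate or 90th-percentile response time breaches a budget.
# Assumptions: jmeter is on the PATH, results use JMeter's default CSV
# format, and the plan name/thresholds below are placeholders.
import csv
import subprocess
import sys

PLAN = "checkout_smoke.jmx"   # hypothetical test plan
RESULTS = "results.jtl"       # JMeter results log (CSV)
MAX_ERROR_RATE = 0.01         # 1% error budget; adjust per NFR
MAX_P90_MS = 2000             # 2-second budget, per the guidance below

subprocess.run(["jmeter", "-n", "-t", PLAN, "-l", RESULTS], check=True)

with open(RESULTS, newline="") as f:
    rows = list(csv.DictReader(f))
if not rows:
    sys.exit("no samples recorded")

elapsed = sorted(int(r["elapsed"]) for r in rows)
p90 = elapsed[int(0.9 * (len(elapsed) - 1))]
error_rate = sum(r["success"] != "true" for r in rows) / len(rows)

print(f"samples={len(rows)} p90={p90}ms errors={error_rate:.2%}")
if error_rate > MAX_ERROR_RATE or p90 > MAX_P90_MS:
    sys.exit(1)  # non-zero exit fails the CI stage
```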

Tools for Browser (Client-side) Performance Testing: Use browser plugins like YSlow / PageSpeed, or tools like Dynatrace AJAX Edition, Fiddler, HttpWatch, WebPageTest, etc.

Tools for Server Resource Monitoring: Use native operating-system performance monitors like PerfMon (Windows) or the vmstat / sar commands (Unix flavours), and explore open-source infrastructure monitoring tools like Nagios / Prometheus, etc., by working closely with infrastructure teams.
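For a feel of what these monitors collect, here is a lightweight sampler built on the third-party psutil library. It is a sketch only, with arbitrary interval and counter choices; in practice you would deploy Nagios/Prometheus exporters on the web, app & DB hosts.

```python
# Lightweight resource sampler using the third-party psutil library
# (pip install psutil). A sketch only: real monitoring belongs in
# Nagios/Prometheus exporters on each host.
import psutil

def sample(interval_s: int = 5, count: int = 12) -> None:
    """Print CPU/memory plus disk and network deltas per interval."""
    last_disk = psutil.disk_io_counters()
    last_net = psutil.net_io_counters()
    for _ in range(count):
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval
        mem = psutil.virtual_memory().percent
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        disk_bytes = (disk.read_bytes - last_disk.read_bytes
                      + disk.write_bytes - last_disk.write_bytes)
        net_bytes = (net.bytes_recv - last_net.bytes_recv
                     + net.bytes_sent - last_net.bytes_sent)
        print(f"cpu={cpu:.0f}% mem={mem:.0f}% "
              f"disk_io={disk_bytes}B net_io={net_bytes}B")
        last_disk, last_net = disk, net

if __name__ == "__main__":
    sample()
```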

Tools for Code Profiling: Use tools like JProfiler / JProbe / JConsole, etc. (J2EE technology stack) and JetBrains dotTrace / ANTS Performance Profiler / .NET Memory Profiler, etc. (.NET technology stack) to carry out code performance analysis.
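The profilers above target Java and .NET; as a language-neutral illustration of the same workflow, here is how Python's built-in cProfile ranks the most time-consuming calls. The slow_report function is a made-up stand-in for a business transaction.

```python
# Analogous illustration with Python's built-in cProfile: rank the most
# time-consuming calls, as JProfiler/dotTrace would on their stacks.
import cProfile
import pstats

def slow_report() -> int:
    # stand-in for a slow business transaction under analysis
    return sum(i * i for i in range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
slow_report()
profiler.disable()

# Sort by cumulative time and show the top five offenders.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```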

Tools for Performance/Capacity Prediction: Use tools like JMT, PDQ, or in-house accelerator tools based on queueing network (QN) models. Strategies that map test results against SPEC / TPC industry benchmarks can also help as a starting point.
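To make the QN idea concrete, here is a back-of-the-envelope sketch of the queueing maths that tools like JMT and PDQ automate, using the standard M/M/1 utilisation and residence-time formulas; the throughput and service-time figures are invented for illustration.

```python
# Back-of-the-envelope queueing maths of the kind JMT/PDQ automate.
# For a single station with exponential assumptions (M/M/1):
#   utilisation    U = X * S        (utilisation law)
#   residence time R = S / (1 - U)
# Throughput X and service demand S below are invented for illustration.

def mm1_response_time(throughput_per_s: float, service_time_s: float) -> float:
    """Predicted residence time at one M/M/1 queueing station."""
    utilisation = throughput_per_s * service_time_s
    if utilisation >= 1.0:
        raise ValueError("station saturated: utilisation >= 100%")
    return service_time_s / (1.0 - utilisation)

# e.g. a 20 ms service demand: response time climbs steeply near saturation
for x in (10, 20, 30, 40, 45):
    r = mm1_response_time(x, 0.020)
    print(f"X={x:>2} req/s  U={x * 0.020:.0%}  R={r * 1000:.1f} ms")
```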

Here are the Key Activities to focus on

  • Instead of waiting for the web page or functionality to be available, always check whether you can carry out unit- or component-level performance tests. Start with API performance testing if the APIs are ready, or start with code performance analysis. Explore whether stubs can be used to test components in isolation at an early stage.
  • Focus on carrying out a 1-user performance test to measure transaction response times on the available test environment. If the response time at 1-user load is beyond an acceptable level (say, beyond 2-4 seconds), report such transactions as candidates for code-profiling analysis.
  • If the transaction response time for 1 user is within the acceptable range, carry out multi-user load tests at low load levels, say 10, 25, 50 users, etc. Compare transaction response times as the user load increases; transactions whose response times grow with load are good candidates for scalability analysis (a minimal single-user / multi-user sketch appears after this list). Make sure product owners are aware so that the development team fixes these issues before the next code drop, to meet the performance standards. Also, carry out validation tests without delay once fixes are done.
  • For transactions with high response times, perform code performance analysis using profiling tools to identify time-consuming method calls and the top CPU-intensive methods, and report them to the development team.
  • Perform memory-profiling analysis to check whether there are any memory leaks in the code (see the memory-snapshot sketch after this list).
  • The regression test suite should be run continuously (periodically) to ensure that newly integrated features do not introduce performance degradation. How often to run it should be decided based on the development plan. Dedicated effort is usually needed to keep the regression performance test suite evolving in parallel (new performance test scripts and rework, the workload model, etc.) so that it is ready for system-level performance tests.
  • Remember that your performance test script is also software. Follow coding best practices such as modularized code, naming conventions, versioning, brief comments in the script, etc. These practices reduce script development effort, which is where much of the effort is spent in Agile environments.
  • Measure the hardware footprint (CPU, memory, disk & network usage levels) on the web, application & DB servers in the performance test environment.
  • Understand the hardware differences between the performance test environment and production. Building capacity-prediction (QN) models, or exploring the right strategies for mapping results from the test environment to production, should ideally start from the initial sprint cycles. This helps maintain performance-metrics dashboards for both the test and production environments.
  • Test data is an important factor that should not be forgotten. Keep a constant focus on building the required test data from the early sprints for the available features. Explore automation tools for test data preparation by working closely with the development and functional test automation teams. Even if your early sprint-level performance or regression tests cannot use realistic test data volumes, at least the system-level performance tests should run against the expected/projected data volumes to produce realistic results.
  • Capture an HTTP Archive (HAR) file, which records the web browser's interaction with the site, as soon as web pages are ready, to measure the number of page components, page load time, the number of network calls for static & dynamic components, browser caching levels, etc. (a HAR-parsing sketch follows this list).
  • Measure query execution times using DB profilers and recommend query tuning & optimization activities (see the query-timing sketch after this list).
  • It is advisable to publish performance test results on web dashboards so that they are readily available to the entire project team for review & analysis. Explore and implement suitable data-visualization tools like Kibana, Grafana, etc.
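Below is a minimal sketch of the single-user / multi-user check described above, using only the Python standard library; the URL and load levels are invented, and in practice the same logic would live in your JMeter/Taurus scripts.

```python
# Single-user baseline, then low-level multi-user loads; rising medians
# flag a scalability candidate. Stdlib only; URL and loads are invented.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://test-env.example.com/checkout"  # hypothetical transaction

def timed_request(_=None) -> float:
    """Return the elapsed time of one request, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

# Step 1: 1-user baseline; beyond ~2-4 s, flag for code profiling.
baseline = statistics.median(timed_request() for _ in range(5))
print(f" 1 user  median: {baseline:.2f}s")

# Step 2: 10/25/50 concurrent users; compare medians as load grows.
for users in (10, 25, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(users)))
    print(f"{users:>2} users median: {statistics.median(times):.2f}s")
```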
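Next, the memory-snapshot sketch, using Python's built-in tracemalloc; on the Java/.NET stacks mentioned earlier, heap snapshots in JProfiler or .NET Memory Profiler play the equivalent role. The cache here is deliberately leaky to make the output obvious.

```python
# Compare heap snapshots before and after a burst of transactions; the
# top-growing allocation sites point at potential leaks.
import tracemalloc

leaky_cache = []  # grows forever: a simulated leak

def handle_transaction(payload: str) -> None:
    leaky_cache.append(payload * 1000)

tracemalloc.start()
before = tracemalloc.take_snapshot()

for i in range(10_000):
    handle_transaction(f"order-{i}")

after = tracemalloc.take_snapshot()
# Rank allocation sites by memory growth between the two snapshots.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```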
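The HAR capture can be summarised with a few lines of parsing. This sketch assumes a capture saved as page.har (a hypothetical filename) and uses the standard HAR fields (log.entries, response.content.mimeType, response.status).

```python
# Summarise page composition from a HAR capture saved as page.har
# (hypothetical filename) using the standard HAR fields.
import json

with open("page.har") as f:
    har = json.load(f)

entries = har["log"]["entries"]
static_types = ("image", "css", "javascript", "font")
static = sum(
    1 for e in entries
    if any(t in e["response"]["content"].get("mimeType", "")
           for t in static_types)
)
cached = sum(1 for e in entries if e["response"]["status"] == 304)
total_ms = sum(e["time"] for e in entries)

print(f"components={len(entries)} static={static} "
      f"dynamic={len(entries) - static}")
print(f"cumulative network time={total_ms:.0f}ms 304/cached={cached}")
```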
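Finally, the query-timing sketch, using the standard-library sqlite3 module as a stand-in; against your real database you would rely on its own profiler and EXPLAIN facilities. The orders table and data volumes are invented.

```python
# Time a query and inspect its plan with the stdlib sqlite3 module as a
# stand-in for a real DB profiler. Table and volumes are invented.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO orders (status) VALUES (?)",
    [("open" if i % 10 else "closed",) for i in range(100_000)],
)

query = "SELECT COUNT(*) FROM orders WHERE status = ?"
start = time.perf_counter()
conn.execute(query, ("open",)).fetchone()
print(f"execution time: {(time.perf_counter() - start) * 1000:.1f} ms")

# A full-table SCAN in the plan suggests adding an index on status.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, ("open",)):
    print(row)
```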

There are certainly additional skills expected of a Performance Tester / Engineer: working closely with the product development team to build performance into the system, rather than focusing only on the reactive performance assessment strategies typical of the waterfall development model.

Gear up!! Happy Performance Testing & Engineering!!