After years of reactive performance management practices in traditional waterfall software development environments, Agile & DevOps environments have given rise to truly proactive, early performance management practices.

As Agile / DevOps adoption grows, most organizations have embraced an early performance testing approach, in spite of the challenges of adapting to the new needs of the DevOps environment.

Agile versus DevOps Performance Testing in a Nutshell

Ideally, in Agile performance testing, the focus is on providing early (unit-level) performance measurements & performance feedback during every sprint (or sometimes a targeted sprint), without waiting for a hardening sprint to carry out system-level performance tests.

In DevOps performance testing, the focus is not just on providing early performance measurements, but on doing so rigorously for every available code release (continuously), comparing each build against the baseline build or the previous build's performance measurements through the power of automation, in close collaboration with development, testing & operations teams. The level of automation the team adopts determines how quickly early feedback becomes available and decreases the turnaround time to fix performance issues.
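
To make this concrete, here is a minimal sketch of such a build-over-build comparison as a CI gate. The metric file names, metric keys, and the 10% tolerance are illustrative assumptions (in practice the JSON would be exported from your load-testing tool's summary report), not prescriptions from this article:

```python
import json
import sys

# Hypothetical file names; in practice these come from your load-testing
# tool's exported summaries for the baseline build and the current build.
BASELINE_FILE = "baseline_metrics.json"   # e.g. {"login_p95_ms": 420, ...}
CURRENT_FILE = "current_metrics.json"
TOLERANCE = 0.10  # fail the build on more than a 10% regression

def load(path):
    with open(path) as f:
        return json.load(f)

def main():
    baseline = load(BASELINE_FILE)
    current = load(CURRENT_FILE)
    failures = []
    for metric, base_value in baseline.items():
        cur_value = current.get(metric)
        if cur_value is None:
            continue  # metric not measured in this build
        if cur_value > base_value * (1 + TOLERANCE):
            failures.append(f"{metric}: {base_value} -> {cur_value}")
    if failures:
        print("Performance regression detected:")
        print("\n".join(failures))
        sys.exit(1)  # non-zero exit fails the CI job
    print("No regressions beyond tolerance.")

if __name__ == "__main__":
    main()
```

Wired in as a post-test CI stage, the non-zero exit code is what turns a measured regression into quick feedback for the team.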

Key Differences in Early (Continuous) versus System Level Performance Testing

In the push to establish early performance testing methodologies, performance testers / engineers sometimes underestimate and neglect system-level performance tests. But system-level performance tests are still very important to certify the system for target scalability & capacity using realistic workload patterns.

It's important to understand that the test objectives & focus areas of early performance tests and system-level performance tests are completely different. However prevalent the early performance test analysis culture becomes, early & continuous performance tests run as part of a CI pipeline can never be considered a replacement for system-level performance tests.

Each numbered point below contrasts early (continuous) performance tests with system-level performance tests.

1. Early: the key objective is to measure the performance of identified units (APIs, microservices, DB queries, etc.), provide early feedback, and automate the performance measurements on nightly builds.
   System-level: the key objective is to measure system performance under a realistic workload on a production-like environment, in order to make judgments on scalability levels & server capacity requirements.

2. Early: performance test objectives or SLAs are available (sometimes only partially) as part of user stories.
   System-level: stringent performance test SLAs exist (and sometimes get re-validated using early, unit-level performance test measurements).

3. Early: measure performance for a single user & small concurrent user loads (on nightly builds continuously, or on a targeted sprint release).
   System-level: measure the system's performance, scalability, availability & capacity characteristics under the realistic usage pattern of the system.

4. Early: unit test early, whenever a build is available, and automate as much as possible; use stubs / service virtualization tools (a minimal stub sketch follows this list).
   System-level: ongoing system-level performance tests can be conducted (within or outside the CI pipeline) as new features are added; the regression test suite evolves continuously for carrying out system-level performance tests. Final system-level tests should ideally exercise 6-8 critical use cases with realistic workload characteristics, considering peak traffic hours.

5. Early: tests can be executed in one or more small-scale environments (Dev / QA / Staging).
   System-level: tests should be executed on a production-like environment (a dedicated performance test environment or a pre-prod environment).

6. Early: do not worry about realistic think times, realistic data volumes, hardware capacity, etc.
   System-level: create realistic usage patterns on production-like environment configuration / settings. Test data development evolves continuously, and realistic test data should be used at least during the last-stage system-level performance tests. Monitoring production user traffic can help identify realistic think times (randomize around the average think time value) & pacing times (since transaction throughput can be derived); a sketch of randomized think time and pacing follows this list.

7. Early: carry out short, crisp performance tests as part of the CI pipeline to measure performance degradation on every release and report issues quickly.
   System-level: conduct system-level tests on the integrated builds during every sprint (or targeted sprints); some tests can run as downstream jobs in the CI pipeline and some independently, depending on the maturity level / objectives of the CI/CD environment.

8. Early: it is mandatory to run early unit tests with server monitoring tools; APM tools are preferred, as they provide a detailed view for quick performance root-cause analysis. Don't over-measure (but in a continuous deployment environment, automate almost everything).
   System-level: it is mandatory to run system tests with performance diagnostic tools (preferably APM tools) on the production-like performance test environment.

9. Early: performance analysis should primarily focus on code performance optimization (identifying costly methods, CPU hotspots, memory leaks, query performance, etc.).
   System-level: performance analysis should focus on:
   - infrastructure performance analysis (by mapping test results from the TEST to the PROD environment, or by running a few tests on the PROD environment);
   - performance modeling analysis (QN, i.e., queueing networks) to predict system performance at untested load levels;
   - network latency across locations, system scalability assessment & capacity planning.

10. Early: ML/AI techniques can help with build-wise performance comparison & trend analysis (a minimal trend-analysis sketch follows this list).
    System-level: ML/AI techniques can help with quick performance diagnosis & capacity planning.
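
For point 4 above: when a downstream dependency is unavailable or unstable during early tests, a stub can stand in for it. Here is a minimal sketch using only Python's standard library; the endpoint, payload & 50 ms latency are illustrative assumptions (dedicated service virtualization tools offer far richer behavior):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import time

# A throwaway stub standing in for a downstream dependency that is not
# yet built (or too unstable) during early performance tests.
CANNED_RESPONSE = json.dumps({"status": "OK", "balance": 1042.50}).encode()
SIMULATED_LATENCY_S = 0.050  # assumed 50 ms, matching the dependency's SLA

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_S)  # emulate the dependency's response time
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

    def log_message(self, *args):
        pass  # keep the load test's console output clean

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```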
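For point 6 above: a sketch of how randomized think time and fixed pacing are typically combined in a virtual-user loop. The average think time and pacing interval are assumed values you would derive from production traffic monitoring, not figures from this article:

```python
import random
import time

AVG_THINK_TIME_S = 8.0  # assumed average think time, derived from production traffic
PACING_S = 30.0         # assumed pacing: one iteration per 30 s per virtual user

def think():
    # Randomize around the average (here +/-50%) so virtual users
    # do not fire requests in lockstep.
    time.sleep(random.uniform(AVG_THINK_TIME_S * 0.5, AVG_THINK_TIME_S * 1.5))

def run_iteration(do_business_transaction):
    start = time.monotonic()
    do_business_transaction()
    think()
    # Pacing: wait out the remainder of the interval so each virtual user
    # completes exactly one iteration per PACING_S, fixing the throughput.
    elapsed = time.monotonic() - start
    if elapsed < PACING_S:
        time.sleep(PACING_S - elapsed)
```

Note that it is the pacing, not the think time, that pins transaction throughput to the target rate; the randomized think time only shapes how requests are distributed within each iteration.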
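For point 10 above: "ML/AI" for build-wise comparison can start as simply as regression-based trend detection over the nightly measurements. A minimal ordinary-least-squares sketch follows; the sample numbers and alert threshold are invented for illustration:

```python
from statistics import mean

# Hypothetical per-build p95 response times (ms), oldest to newest,
# e.g. collected by the nightly CI job described earlier.
p95_by_build = [410, 415, 405, 422, 430, 441, 452]

def slope(ys):
    # Ordinary least-squares slope of metric vs. build index: a crude but
    # useful trend signal (ms of degradation per build).
    xs = range(len(ys))
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

trend = slope(p95_by_build)
if trend > 5:  # assumed alert threshold: >5 ms degradation per build
    print(f"Upward latency trend: +{trend:.1f} ms per build - investigate")
else:
    print(f"Latency trend stable ({trend:+.1f} ms per build)")
```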

For more details, check out my Neotys PAC virtual conference presentation slides at https://www.neotys.com/performance-advisory-counci...

Happy Testing & Engineering!!