Challenges in Gathering Performance Requirements

Application performance is important. Good performance means revenue, assured business, and repeat business. Performance is about people. In today's competitive world, application performance can be a 'make or break' criterion.

The lack of performance requirements, or the lack of 'good' performance requirements, is one of the leading reasons application performance does not get the attention it deserves.

Gathering performance requirements is often a challenging task. In general, gathering even functional requirements is not easy, owing to well-known issues such as the subjectivity of natural language, language barriers, and communication gaps. Writing good performance requirements demands solid technical skills along with functional knowledge.

Performance is a non-functional attribute of software. When we think of system performance, we typically try to specify how the system is expected to behave under production operating conditions: the number of users it can support, the number of transactions it can process, the transaction mix, and the size and configuration of the required infrastructure. Refer to Figure 1 below for some typical application performance issues seen in production.

Figure 1: Typical performance issues that worry any project team

Before exploring how to gather good performance requirements, let us first understand a few important aspects:

  • What is a 'good' performance requirement? 
  • Which stakeholders should be involved in performance requirement gathering? 
  • What types of performance requirements are there? 
  • What key performance metrics should be part of the performance requirements for typical software? 

What is a good performance requirement? 

A good performance requirement adds value to the end-user experience, and it is testable.

Let us start with an example where a user says they want a 'fast' website. This requirement is subjective and hence not testable. If we prompt the user with guiding questions such as 'how fast?', we may get additional information, say, that every web page should load in 3 seconds or less. This brings in some objectivity, but the requirement is still not completely testable.

Performance is typically expressed over a set of statistical observations: no web application or process can guarantee exactly the same response time on every request, so response times will vary. A better way of expressing the requirement is that web pages should load in under 3 seconds 90 percent of the time (i.e., the 90th percentile figure). This makes the requirement testable. We have used response time here because it is the performance metric most visible to end users. The workload model under which the response time is measured is also an important part of a performance requirement.
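To make the percentile idea concrete, here is a minimal sketch of how such a requirement could be checked against measured response times; the sample timings and the 3-second SLA are illustrative assumptions, not data from a real system:

```python
# Check a percentile-based response-time SLA against measured samples.
# Sample timings and the 3-second SLA are illustrative assumptions.
import math

def percentile(samples, pct):
    """Nearest-rank percentile: value at position ceil(pct/100 * n)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

response_times = [1.2, 2.8, 1.9, 2.95, 2.1, 2.6, 1.7, 2.9, 2.2, 2.4]  # seconds
sla_seconds = 3.0

p90 = percentile(response_times, 90)
print(f"90th percentile: {p90:.2f}s ->", "PASS" if p90 < sla_seconds else "FAIL")
```

In a real test, the samples would come from a load-test tool's results rather than a hard-coded list, but the pass/fail check stays the same.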

Stakeholders involved in performance requirement gathering 

  • End users of the application – The customer is king: if end users perceive the application's performance as poor, it will negatively impact application usage. 
  • Development team – Most of the time, performance requirements are written for the final product with an end-to-end view, and they do not describe how those end-to-end requirements break down into unit-level performance requirements. This means that when developers write code, they have no idea whether the requirement will be satisfied until the product is complete, with all higher-level modules and components in place, and is tested against the end-to-end performance requirements. Take the example of an end-to-end transaction expected to complete in, say, 3 seconds (90th percentile value). The transaction contains a web service call, which in turn fires a database query and returns the result to the end user. This implies that the web service call and the database query cannot take more than, say, 1 second each, keeping 1 second of headroom for other delays such as network latency. So the developers of the web service and the database query need to know this 1-second response-time budget while writing code (a sketch of such a budget breakdown follows this list). 
  • Architecture team – The technical architect needs to validate that the proposed architecture can feasibly meet the expected performance requirements. While performance requirements are being discussed, the architect also needs to balance performance against other non-functional attributes such as security and usability, because security measures can negatively impact performance and vice versa. 
  • Operations support team – This team needs to ensure that the application infrastructure can be maintained under the given performance requirements. Typical examples are unstable applications that need continuous monitoring, applications that hang or crash frequently due to performance issues and need repeated restarts, or an archiving process that slows down other background jobs and in turn degrades online users' experience. 
  • DBA team – The majority of performance bottlenecks lie in the database layer, so the involvement of the DBA team is vital in performance requirement discussions and gathering. 
  • Performance test team – This team needs to ensure that it tests the application against good, testable performance requirements and that the workload model is near-realistic, so that testing is meaningful. 
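As promised above, here is a minimal sketch of a response-time budget breakdown. The component names and the 1-second splits come from the example in the development-team bullet and are illustrative assumptions:

```python
# Break an end-to-end response-time SLA into unit-level budgets.
# Component names and the budget split are illustrative assumptions.

END_TO_END_SLA = 3.0  # seconds, 90th percentile

budgets = {
    "web service call": 1.0,
    "database query": 1.0,
    "network and other overhead": 1.0,  # headroom
}

# Guard against budgets that silently exceed the end-to-end target.
assert sum(budgets.values()) <= END_TO_END_SLA, "budgets exceed the end-to-end SLA"

for component, budget in budgets.items():
    print(f"{component}: <= {budget:.1f}s")
```

Keeping such a table in the project wiki lets each developer see the budget for the component they own, long before end-to-end testing starts.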

Types of performance requirements 

We can categorize performance requirements into two broad buckets: explicit and implicit.

  • Explicit performance requirements are those expected by end users (or domain consultants); they are more visible and can be directly seen or perceived. A key example of an explicit requirement is response time. 
  • Implicit performance requirements are internal to the project team and are fundamental to achieving the explicit ones. Some implicit requirements are derived by the performance test team using performance laws such as Little's law (see the sketch after this list). Examples of implicit requirements are throughput and infrastructure constraints/utilization (CPU and memory utilization). For infrastructure-related implicit requirements, it is good to follow organization-level guidelines; for example, an acceptable threshold of 70 percent CPU and memory utilization is a reasonable guideline. 
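Here is the sketch referenced above: a minimal illustration of Little's law, N = X × (R + Z), where N is the number of concurrent users, X the throughput, R the response time, and Z the user think time. The input numbers are illustrative assumptions:

```python
# Derive an implicit throughput target from Little's law: N = X * (R + Z).
# N = concurrent users, X = throughput, R = response time, Z = think time.
# All input numbers below are illustrative assumptions.

concurrent_users = 500   # N: peak concurrent users (explicit requirement)
response_time = 3.0      # R: 90th percentile response-time target, seconds
think_time = 7.0         # Z: average user think time, seconds

throughput = concurrent_users / (response_time + think_time)  # X, txn/sec
print(f"Required throughput: {throughput:.1f} transactions/second")
```

With these assumed inputs, the explicit "500 users, 3-second response time" requirement implies a throughput target of 50 transactions per second, which the performance test team can then verify.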

Good performance requirements make use of performance metrics, which make them more precise, objective, and testable (a small computed example follows the list). Some examples of performance metrics are:

  • Response time – the time taken to complete a transaction or action 
  • Throughput – the number of transactions (or bytes) processed by the server per unit of time 
  • Infrastructure utilization – the percentage of CPU or memory utilized 
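As a small computed example, these metrics fall out of a load test roughly as below; all numbers are illustrative assumptions from a hypothetical test run, and the 70-percent threshold is the organization-level guideline mentioned earlier:

```python
# Compute throughput and check utilization against an assumed 70% threshold.
# All numbers are illustrative assumptions from a hypothetical test run.

test_duration_s = 60.0    # steady-state measurement window, seconds
transactions_done = 3000  # transactions completed in that window
avg_cpu_percent = 62.0    # average CPU utilization observed
avg_mem_percent = 71.0    # average memory utilization observed

throughput = transactions_done / test_duration_s
print(f"Throughput: {throughput:.0f} transactions/second")

THRESHOLD = 70.0  # organization-level utilization guideline
for name, value in [("CPU", avg_cpu_percent), ("memory", avg_mem_percent)]:
    status = "OK" if value <= THRESHOLD else "over threshold"
    print(f"{name} utilization: {value:.0f}% ({status})")
```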

Based on my experience, the following approach helps in gathering and writing good performance requirements:

  • Involve all the stakeholders. 
  • The best way to involve all stakeholders is to set up a small workshop (preferably face-to-face) or a meeting in the planning/initial phase of the project. 
  • Encourage all participants to join; if that is not feasible, have a representative join the workshop. 
  • Share the agenda well in advance with all stakeholders. 
  • Keep the session time-bound. 
  • Ask questions so that all participants get on the same page; we often get clarifications on issues (historical or raised during the workshop) that would otherwise not have been possible. 
  • Use diagrams and sketches freely to explain the application architecture, the performance test architecture, or a viewpoint during discussion. Visualization aids understanding and helps reach conclusions. 
  • Leverage a questionnaire with predefined questions covering all aspects of performance requirement gathering (infrastructure, workload model, architectural decisions, etc.). 
  • Use any available historical data to support discussions or decisions, especially while preparing the workload model. 
  • Minute the key discussion points and conclusions. 

A snapshot of one such sample performance questionnaire is shown in Table 1 below. A full questionnaire can be exhaustive, with subsections covering the various aspects of performance: questions about the infrastructure, about the user base and its geographic spread, about projected increases in user load over time or due to seasonal demand, and so on.

| Sr No | Question | Answer (sample) | Guidance for the answer |
| --- | --- | --- | --- |
| 1 | Is the application already in production, or is it a new rollout? | The application has been in production for the last 6 months, but we are now facing performance issues. | Please provide details if it is already in production. |
| 2 | If already in production, do we have any performance metrics available from production? | We do not have adequate monitoring in production. | Share any response-time or system-utilization details from production. |
| 3 | Is the application internal-facing or external-facing? | The application is both internal- and external-facing. | If external-facing, the performance SLAs need to be more stringent. |
| 4 | How many users will be accessing the application during peak hours? | Around 500 users. | A guesstimate will do, but please provide conservative numbers. |
| 5 | What is the business criticality of the application? | The application has 'Medium' business criticality. | Please refer to the guidelines on business criticality while answering this question. |

Table 1: Sample questions in a typical 'Performance Questionnaire' 

Practically, in a large number of organizations, performance requirements are simply not available, because performance is an overlooked criterion. In such cases it becomes even more challenging to implement the above process, or to gather performance requirements at all. Here, we can use SLAs (Service Level Agreements) based on rules of thumb to arrive at reasonably good performance requirements.

Let us take the example of an internal application used for inventory management.

Table 2 below lists typical 'unit-level transactions' for the application.

| Sr No | Unit-level transaction description | SLA based on rules of thumb, in seconds (90th percentile) |
| --- | --- | --- |
| 1 | Load/refresh a page | <= 2 |
| 2 | Submit a form | <= 3 |
| 3 | Generate a report for < 1,000 records | <= 5 |
| 4 | Generate a report for 1,000 to 10,000 records | <= 7 |
| 5 | Generate a report for > 10,000 records | <= 10 |
| 6 | Simple search query to the database | <= 3 |
| 7 | Complex search query involving more than one search criterion | <= 5 |

Table 2: Sample unit-level performance SLAs

Now, a 'business process' may comprise one or more unit-level transactions from the table above, so we can simply add the respective SLAs to arrive at the SLA at the 'business process' level.
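A minimal sketch of this composition, reusing the thumb-rule SLAs from Table 2; the particular business process chosen is an illustrative assumption:

```python
# Compose a business-process SLA by summing unit-level SLAs (from Table 2).
# The chosen business process is an illustrative assumption.

unit_slas = {
    "load page": 2,
    "submit form": 3,
    "simple search query": 3,
}

# Hypothetical process: open the inventory page, run a simple search,
# then submit a stock-update form.
process = ["load page", "simple search query", "submit form"]

process_sla = sum(unit_slas[step] for step in process)
print(f"Business-process SLA: <= {process_sla} seconds (90th percentile)")
```

For this assumed three-step process, the composed SLA works out to 8 seconds at the 90th percentile.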

A similar table can be prepared within any organization and agreed upon by consensus. The thumb-rule SLA numbers will vary based on factors such as the business criticality of the application, the application domain, and the type of application (internal or external). The remaining implicit requirements, such as throughput and infrastructure utilization (CPU and memory percentage), can be worked out using performance laws (Little's law, the Forced Flow law, etc.). It is recommended to break end-to-end requirements down to module or unit level, so they can be validated early rather than only at the end.
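As a minimal sketch of how such laws can be applied (the visit counts and service times below are illustrative assumptions): the Forced Flow law, X_k = V_k × X, derives per-resource throughput from system throughput, and the Utilization law, U_k = X_k × S_k, converts that into a utilization estimate to check against the 70-percent guideline:

```python
# Estimate per-resource utilization via the Forced Flow law (X_k = V_k * X)
# and the Utilization law (U_k = X_k * S_k).
# All inputs are illustrative assumptions.

system_throughput = 50.0  # X: transactions/second (e.g., from Little's law)

resources = {
    # resource: (visits per transaction V_k, service time per visit S_k, seconds)
    "app server CPU": (1.0, 0.010),
    "database CPU":   (2.0, 0.006),
}

THRESHOLD = 0.70  # 70% utilization guideline from the text

for name, (visits, service_time) in resources.items():
    resource_throughput = visits * system_throughput   # X_k
    utilization = resource_throughput * service_time   # U_k
    status = "OK" if utilization <= THRESHOLD else "over threshold"
    print(f"{name}: {utilization:.0%} utilization ({status})")
```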

Application performance is a 'make-or-break' criterion in today's competitive world, and a key ingredient of a good performance strategy is good performance requirements. Gathering and articulating performance requirements is a challenging task. This article suggests a methodical approach to gathering them (a workshop); alternatively, a team can work out SLAs based on rules of thumb and customize them for its organization. Breaking end-to-end performance requirements down to the module, unit, or architecture-tier level is strongly recommended.

Know our Super Writer:


Vikrant Joshi

Lead Trainer, Founder at Varcos

Vikrant Joshi has 20+ years of rich experience in QA, both IT and non-IT. He has worked with top-notch IT companies such as Infosys, Accenture, TechM, and TATA Motors. He was Vice President at the Deutsche Bank Technology Center when he decided to leave the corporate IT sector last year to pursue his passion for teaching. He has played roles at all levels of testing, from Test Analyst to Head of a Practice. He was instrumental in building Performance Test CoEs at Infosys and has been leading performance and automation test practices for the last decade.

Currently he is Lead Trainer and Founder at Varcos, where he teaches courses on performance testing using tools like LoadRunner and JMeter, and on test automation using Selenium with Java.

