Performance testing in days gone by was a relatively straightforward task: gather the performance requirements, identify the business processes to be included in tests, and then plan, construct and execute the performance tests that simulate the desired load. If you could additionally provide insight into which resource was constrained, all the better. That all worked fine in typical client/server and monolithic applications.
Then along came “Just-in-Time” compilers, and root cause analysis became a little trickier. Performance testers quickly found an answer to the proverbial needle in the haystack: using APM tools during performance tests. However, the more distributed architectures became, the more moving parts needed to be monitored and analysed. In other words, the proverbial haystack became multiple haystacks, and finding that needle grew progressively more difficult.
As devices, both desktop and mobile, became more capable, more of the presentation-layer intelligence and processing moved to the user’s device. Bandwidth in general has also increased dramatically for businesses and home users alike, and developers quickly found ways to make good use of the additional resources by adding more features to enhance the user’s experience. Moving to the cloud has also helped to provide greater compute resources, be these in the form of virtual servers or containers that can be spun up at peak times and turned down when not required.
The distributed computing architecture of today, of course, also lends itself to API integrations with other apps in an organisation, as well as with any third-party apps that add value to the overall system.
Lastly, Agile and DevOps development have vastly changed how we bring apps into production, requiring many more but smaller increments of code changes to be tested.
Unfortunately, while app architectures have significantly increased in complexity, many application owners are oversimplifying the performance impact, believing that the resource gains mentioned above will make up for it. In cloud-based systems, some even believe that the ability to spin up and turn down additional capacity is an adequate safety net for all eventualities.
In a nutshell, for too many organisations, performance engineering has become a tick-box exercise: they know it needs to be done, but without a clear understanding of why and how.
Performance first
An organisation that is serious about delivering a responsive user experience considers performance as part of the application’s requirements long before a line of code is written or an architecture is decided upon. Such basic performance requirements are not technical in nature and should form part of the business requirements of a system. They should, for example, clearly outline who and where the users of the application are, how users are connected, how many users will interact with the application at normal and peak times, and what scale of growth is expected for that user base.
Such business requirements then form clear input to the technical requirements of the system, ensuring they are considered in the system specification. This approach also informs the project’s budgetary requirements when it comes to staffing for performance engineering, the performance engineering process to be followed, and any tooling requirements.
As an example, for a DevOps-style project, it should ensure that performance engineering is considered early in the development life cycle, even during development itself. It would guide a shift-left performance engineering process that determines which modules of code, for example REST APIs, can be performance-tested long before they are consumed by a user interface.
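To make that concrete, below is a minimal sketch (in Python, and not any particular vendor’s tooling) of the kind of shift-left check that could run in a build pipeline: it drives a single REST endpoint with a few concurrent virtual users and fails the build if the 95th-percentile response time exceeds a budget. The endpoint URL, user count and latency budget are illustrative assumptions.

    # Illustrative shift-left API load check; endpoint, load and budget are assumptions.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    ENDPOINT = "https://test.example.com/api/orders"  # hypothetical REST module under test
    CONCURRENT_USERS = 20
    REQUESTS_PER_USER = 25
    P95_BUDGET_MS = 500  # illustrative latency budget agreed in the requirements

    def one_user():
        # Each simulated user issues a series of requests and records timings in milliseconds.
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            response = requests.get(ENDPOINT, timeout=10)
            response.raise_for_status()
            timings.append((time.perf_counter() - start) * 1000)
        return timings

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(lambda _: one_user(), range(CONCURRENT_USERS))
        all_timings = [t for user_timings in results for t in user_timings]

    p95 = statistics.quantiles(all_timings, n=100)[94]  # 95th percentile
    print(f"p95 latency: {p95:.0f} ms over {len(all_timings)} requests")
    assert p95 <= P95_BUDGET_MS, "API module breaches its latency budget - fail the build"

A check like this does not replace full end-to-end performance testing, but it catches regressions in individual modules long before a user interface exists to drive them.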
The technical specification would furthermore provide clarity on which performance metrics are valuable. Firstly, is user experience in terms of actual click-to-eyeball response time a key metric, or is server turnaround, as measured by traditional performance test tools, sufficient? Is it important to know which Internet service providers users are coming from, what connectivity they have, and how this affects their user experience? Does a performance engineer need to care about waterfall charts of the pages rendered to users and how these differ between browsers and browser versions, or between mobile devices and their underlying operating systems?
Knowing the location and connectivity of users then guides performance engineers to include network emulation in performance test exercises where required. It also calls for service virtualisation, so that modular testing can happen while parts of the application are not yet in place, or while their availability during tests is not what it would be in production.
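As an illustration only, the following sketch virtualises an imagined downstream payments service: it returns canned responses after a configurable artificial delay, so that modules which depend on it can still be performance-tested under realistic conditions before the real service is available. The framework, endpoint name and latency figure are assumptions made for the example.

    # Illustrative service-virtualisation stub for an assumed /payments dependency.
    import time

    from flask import Flask, jsonify

    app = Flask(__name__)
    SIMULATED_LATENCY_SECONDS = 0.2  # stand-in for the real service's typical response time

    @app.route("/payments/<payment_id>", methods=["GET"])
    def virtual_payment(payment_id):
        time.sleep(SIMULATED_LATENCY_SECONDS)  # emulate network and processing delay
        return jsonify({"paymentId": payment_id, "status": "SETTLED"})  # canned response

    if __name__ == "__main__":
        app.run(port=8081)  # dependent modules are pointed at this port during tests

Commercial service-virtualisation tools add record-and-playback, varying payloads and fault injection, but even a stub this small keeps modular performance testing moving when a dependency is unavailable.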
Finding bottlenecks earlier, however, does not in itself fix the problem. Performance engineering should also have the skills and tooling in place to quickly identify the root cause of a bottleneck, working closely with development and operations teams for rapid resolution and retesting.
Lastly, no amount of performance testing is a 100% guarantee for production, and hence the same metrics deemed important in test should be available in production, including all user experience metrics as well as deep-dive diagnostics capabilities. Production statistics can then also be used to provide invaluable insights into performance test and environment designs based on real usage. Performance engineering then knows how users navigate through the application, how many users use which parts of it, the frequency and times of day, week or month at which they do so, and what the response metrics are, given user connectivity and the production environment’s capacity.
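As a hedged sketch of what that can look like, the snippet below records in production the same per-transaction response-time metric that would have been tracked in test, using illustrative Flask request hooks and plain logging in place of whatever APM or metrics pipeline an organisation actually runs.

    # Illustrative production instrumentation; the app, routes and logging sink are assumptions.
    import logging
    import time

    from flask import Flask, g, request

    app = Flask(__name__)
    logging.basicConfig(level=logging.INFO)

    @app.before_request
    def start_timer():
        g.start = time.perf_counter()

    @app.after_request
    def record_response_time(response):
        elapsed_ms = (time.perf_counter() - g.start) * 1000
        # In practice this would feed an APM or metrics pipeline; logging keeps the sketch self-contained.
        logging.info("transaction=%s status=%s response_time_ms=%.1f",
                     request.path, response.status_code, elapsed_ms)
        return response

    @app.route("/orders")
    def orders():
        return {"orders": []}  # placeholder endpoint for illustration

With the same per-transaction timings available in test and in production, the comparisons described above become routine rather than a forensic exercise.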
Clearly, there is much more to performance engineering than initially meets the eye. Yes, placing more focus on developing skills, aligning processes and acquiring tooling does not come cheap, but the cost of a lengthy project delay and the associated opportunity cost far outstrip the cost of implementing enterprise-grade performance engineering practices.
Why then is it that so many organisations still see performance engineering as a “tick-box” exercise?
For more details or to enquire about our consulting services, please feel free to contact IT Ecology.
About IT Ecology
Founded in 2004, IT Ecology made it its mission to bring technical testing and monitoring competencies to the sub-Saharan African market, excelling at delivering against unique customer requirements. Our low staff turnover ensures that vital experience and IP remain in the organisation while fostering a culture of learning for newer team members. Coupled with a can-do attitude, our team has delighted our customers again and again, exceeding expectations. Customers call upon IT Ecology as the solution thinkers and advisers.
For more information, visit www.itecology.co.za or visit the company’s page on LinkedIn.
- This promoted content was paid for by the party concerned