Improving core banking implementation with a viable performance test approach

Banks and financial institutions are evolving to become more customer-friendly. They no longer rely on complicated applications, bulky systems, or multiple platforms to store customer and bank data, simply because these no longer serve the purpose. Instead, they opt for core banking solutions, which bring all data onto a uniform platform and make it easier to maintain in a strategic, organized way.

Core banking implementation is a transformation journey for financial institutions, yet 7 out of 10 banks face significant challenges at different stages of the implementation project. Many of these challenges concern non-functional aspects such as stress, performance, and security penetration testing. Consider the consequences of skipping these mandatory checks: on certain days a bank may see a sudden spike in traffic, placing extreme load on the system and causing a performance failure.

Several past incidents have left customers affected and disappointed by system performance failures. No bank can guarantee normal functioning under a massive surge in digital transaction volumes. Banks have realized the importance of testing the non-functional aspects of their systems, and today it is an integral part of any large- or mid-sized banking transformation project.

Before going further, it is worth clarifying what performance testing means for a bank or financial institution. This article serves as a reference for improving core banking implementation with appropriate testing methodologies and a test automation approach. I will also explain why it is a critical exercise that cannot be put off until the last minute.

The important aspects of non-functional testing

A statistical report published by Gartner puts the average cost of IT downtime at around $5,600 per minute, which works out to over $300,000 per hour. That figure excludes regulatory penalties and reputational damage, so organizations deliberate carefully over how to prevent downtime in production. Load testing helps an organization confirm that its systems can adequately handle the expected usage load before they reach production.

Knowing the objectives of performance testing

Performance testing measures system behavior and response during peak activity hours and ensures the system remains consistent even under high load. It also verifies that performance does not deteriorate over time under average load and continuous usage. It determines system sustainability; any performance bottleneck detected during testing must be reported and documented immediately. Performance testing further validates that load is distributed uniformly across the different architectural layers of the product, that the system supports access by multiple users at the same time, and that it scales to accommodate additional concurrent users.
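To make this concrete, here is a minimal sketch of a concurrent load probe using only the Python standard library. The endpoint URL, user count, and request count are illustrative assumptions, not values from any real project; a production-grade harness would use a dedicated load testing tool.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "https://bank.example.com/api/balance"  # hypothetical endpoint
VIRTUAL_USERS = 50       # concurrent sessions to simulate
REQUESTS_PER_USER = 20   # requests issued by each session

def run_user(user_id: int) -> list[float]:
    """Issue a series of requests and return per-request latencies in seconds."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
                resp.read()
        except OSError:
            pass  # a real harness would record the failure separately
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        per_user = pool.map(run_user, range(VIRTUAL_USERS))
    samples = [t for user in per_user for t in user]
    print(f"requests: {len(samples)}, mean latency: {sum(samples) / len(samples):.3f}s")
```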

Performance testing scope

Performance testing is an essential process across a core banking implementation project and its production lifecycle. It must cover all types of processing activity, including online transaction processing from system interfaces and the various channels, same-day uploads, end-of-cycle batches, and data migration from legacy systems.

There are two methodologies for testing core banking implementations: automated load testing and business simulation. This article focuses on the first.

Automated load testing

Automated load testing proceeds in four phases.

  1. Designing phase

In the designing phase, the team gathers the requirements and studies them thoroughly to understand the scope and functionality of the application. They examine the performance requirements from a business viewpoint and analyze business volume metrics and historical data. Based on these requirements, the team finalizes the performance testing goals and objectives and defines what results will count as acceptable. The automated load tests are then designed, along with an appropriate action plan. A sketch of how such goals might be captured follows.
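As a hedged illustration (the transaction names and thresholds below are invented for the example, not recommended values), the goals and acceptance criteria produced by this phase might be captured as data so that later runs can be checked against them:

```python
from dataclasses import dataclass

@dataclass
class PerformanceGoal:
    transaction: str
    peak_tps: float        # required throughput at peak, transactions/sec
    p95_latency_ms: float  # 95th-percentile response time budget
    max_error_rate: float  # tolerated fraction of failed requests

GOALS = [
    PerformanceGoal("funds_transfer", peak_tps=300, p95_latency_ms=800, max_error_rate=0.001),
    PerformanceGoal("balance_enquiry", peak_tps=1200, p95_latency_ms=300, max_error_rate=0.001),
]

def meets_goal(goal: PerformanceGoal, tps: float, p95_ms: float, error_rate: float) -> bool:
    """Return True if a measured run satisfies the stated goal."""
    return (tps >= goal.peak_tps
            and p95_ms <= goal.p95_latency_ms
            and error_rate <= goal.max_error_rate)
```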

  2. Building phase

Channel and batch test scripts are prepared for the different business scenarios common to the user interface, and these scripts are used to simulate load across the application tiers. An initial sanity check of the application is conducted after the data is migrated onto the performance test platform. Any remaining historical data still to be created is injected into the system. During the build phase, the team also configures the monitoring tools that will gather system performance metrics during the testing window. The sketch below shows one way scripts might be grouped into batches.
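The batch names, scenario functions, and registry below are assumptions made for illustration, not part of any specific tool:

```python
from typing import Callable

# Registry mapping a batch name to the test scripts filed under it.
SCENARIO_BATCHES: dict[str, list[Callable[[], None]]] = {}

def scenario(batch: str):
    """Decorator that files a test script under a named batch."""
    def register(fn: Callable[[], None]):
        SCENARIO_BATCHES.setdefault(batch, []).append(fn)
        return fn
    return register

@scenario("online_channels")
def login_and_view_balance() -> None:
    ...  # drive the UI or API for this user journey

@scenario("end_of_cycle")
def interest_accrual_batch() -> None:
    ...  # trigger the batch job and wait for completion

def run_batch(batch: str) -> None:
    """Execute every script registered for the given batch."""
    for script in SCENARIO_BATCHES.get(batch, []):
        script()
```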

  3. Executing and diagnosing

The performance test is executed to validate environmental configurations and application performance behavior. The executing and diagnosing stage yields an optimized environment for the final measurement run. Testing is repeated to evaluate how the application behaves, across three iterative stages; between stages, the team applies tuning and re-runs the test. A sketch of this loop follows the three stages below.

  • Stage 1

System behavior is recorded while executing specific functions at peak load. Any errors or flaws noticed in the environmental or application configurations are reported immediately to the respective stakeholders so they can tune the setup and obtain maximum throughput.

  • Stage 2

In stage two, system behavior is recorded while simulating an integrated business scenario. As in stage one, any errors or flaws in the environmental or application configurations must be reported to the respective stakeholders so they can tune the setup and obtain maximum throughput.

  • Stage 3

Stage three is a final round of simulation in which all refinements, fine-tuning, and fixes from the earlier test rounds are applied and validated. Load testing is executed at peak load to ensure the resilience and stability of the system through volume, endurance, and stress runs.
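The iterate-tune-re-run pattern behind these three stages can be summarized in a short sketch. `run_load_test` and `apply_tuning` are placeholders standing in for project-specific tooling, not real APIs:

```python
def run_load_test(stage: int) -> list[str]:
    """Execute the stage's load profile and return any findings (placeholder)."""
    return []  # a real run would return observed bottlenecks or config flaws

def apply_tuning(findings: list[str]) -> None:
    """Report findings to the responsible stakeholders and apply fixes (placeholder)."""
    for finding in findings:
        print(f"reported for tuning: {finding}")

for stage in (1, 2, 3):
    findings = run_load_test(stage)
    while findings:  # re-run the stage until it passes cleanly
        apply_tuning(findings)
        findings = run_load_test(stage)
    print(f"stage {stage} complete")
```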

  4. Measuring and evaluating

In the final measuring and evaluating phase, the system metrics are captured and measured, and the metrics are validated post-run. These results are used to prepare the final performance test report.
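As a small worked example of this step, the raw latency samples gathered during a run can be reduced to the figures a performance test report typically quotes. The sample numbers below are fabricated purely for illustration:

```python
import statistics

latencies_ms = [112, 98, 145, 230, 101, 187, 95, 410, 120, 133]  # example samples

# quantiles(n=100) returns 99 cut points; index 94 is the 95th percentile.
p95 = statistics.quantiles(latencies_ms, n=100)[94]

print(f"mean: {statistics.mean(latencies_ms):.1f} ms")
print(f"p95:  {p95:.1f} ms")
print(f"max:  {max(latencies_ms):.1f} ms")
```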

Improving core banking implementation with Yethi’s testing services

At Yethi, we follow a strategic test objective: capture non-functional requirements (NFRs), set up the testing environment, script use cases, build scenarios, execute the tests, and prepare performance test documents based on reporting and analysis. We gather and analyze NFRs, perform a feasibility study, and identify performance test tools. We set up server tier deployment, populate the target database, arrange external systems and licenses, and plan the performance test strategy. We develop load test scripts, design load test scenarios, create test data, and identify and build volume, soak, and stress scenarios. We determine and define the injector profile for injector deployment and timelines, and we execute sanity, volume, isolation, stress, soak, and load balancing tests.

When it comes to reporting and analysis, we collect data samples, determine test outcomes by comparing them against expected performance, and maintain result reports and dashboards for all types of tests. We focus on the following four test types (a configuration sketch follows the list):

  • Baseline Test – Measures the current performance metrics
  • Load Test – Creates demand on the system and measures its response
  • Stress Test – Determines the stability of the system by testing beyond normal operational capacity
  • Soak Test – Runs the system at high levels of load for prolonged periods
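One way to express these four test types in a harness is as load profiles, sketched below. The user counts and durations are illustrative assumptions, not recommendations:

```python
from dataclasses import dataclass

@dataclass
class LoadProfile:
    name: str
    virtual_users: int   # concurrent sessions to simulate
    duration_min: int    # how long to hold the load

PROFILES = [
    LoadProfile("baseline", virtual_users=1, duration_min=10),    # current metrics, no contention
    LoadProfile("load", virtual_users=200, duration_min=30),      # expected peak demand
    LoadProfile("stress", virtual_users=1000, duration_min=30),   # beyond normal capacity
    LoadProfile("soak", virtual_users=200, duration_min=480),     # high load, prolonged period
]
```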

Our performance test management rests on two pillars: test execution and analysis, and transaction capture and analysis. We capture and analyze transactions from the application under test (AUT) by pulling data from the web server, app server, and database server, and we execute tests and analyses across the various server systems through load injection and KPI monitoring. Our test management module focuses on creating and executing performance test scenarios covering different end-user activities against the AUT.

We offer load injection and KPI monitoring as dedicated services that generate requests against the AUT, simulating concurrent virtual users executing the specified use cases. Through these services, we collect performance metrics from all metrics collection agents and store them in a performance metrics repository; load test results gathered by the controller are stored in the result repository database. In this way, we execute performance testing for the application under test and its components. A sketch of such a metrics repository follows.
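As a final sketch, the metrics repository could be as simple as a table that the controller writes KPI samples into. SQLite stands in here for whatever datastore is actually used, and all names are assumptions made for illustration:

```python
import sqlite3
import time

conn = sqlite3.connect("performance_metrics.db")  # hypothetical repository
conn.execute("""
    CREATE TABLE IF NOT EXISTS kpi_samples (
        run_id TEXT, server TEXT, metric TEXT, value REAL, ts REAL
    )
""")

def store_sample(run_id: str, server: str, metric: str, value: float) -> None:
    """Persist one KPI sample pulled from a metrics collection agent."""
    conn.execute("INSERT INTO kpi_samples VALUES (?, ?, ?, ?, ?)",
                 (run_id, server, metric, value, time.time()))
    conn.commit()

# Example: the controller recording a web-server KPI during a run.
store_sample("run-001", "web-server-1", "requests_per_sec", 412.0)
```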