Live Load Test Dashboard
The Live Load Test Dashboard provides a comprehensive, live view of test execution once a load test is initiated. It enables teams to monitor performance as traffic is being generated and to analyze detailed results immediately after completion. This dashboard serves as the central analysis workspace for understanding system behavior under load.
All charts and graphs include interactive controls such as Reset Zoom, Download Chart as Image, and Expand Chart for deeper inspection and reporting. Additionally, tables and logs provide export options, allowing users to download execution data and log details for offline analysis and sharing.
1. Test Run Overview
The Test Run Overview provides a high-level snapshot of the executed load test, combining test configuration details with runtime execution information. It helps teams quickly understand what was executed before diving into detailed performance metrics.
Test Information
The Test Information section summarizes the core configuration of the load test:
Test Name - The name assigned to the load test.
Total Virtual Users - The total number of users configured to simulate concurrent traffic.
Duration - The total configured execution time of the test.
Region - The AWS region where the load generators were deployed.
Generators - The number of load generator nodes used to execute the test.
This section provides immediate context about the scale, location, and structure of the executed test.
Test Run Details
Below the Test Information section, the Test Run Details provide execution-specific metadata, including:
Start Time & End Time - The exact execution window of the test run.
Total Virtual Users - The actual number of users applied during the run.
Region & Instance Type - Infrastructure details of the load generators.
Storage - Allocated storage for the test execution.
Status - Current or final execution state (e.g., Completed, Running, Failed).
Executed At - Timestamp indicating when the test was triggered.
2. Execution Overview
The Execution Overview provides high-level performance metrics across the entire run:
Peak Virtual Users - Maximum number of concurrently active virtual users reached during the test.
Duration - Total execution time.
Sample Count - Total number of requests processed.
Avg Response Time - Mean response time across all samples.
P90 Response Time - 90th percentile response latency.
Failure Rate - Percentage of failed requests.
Throughput - Requests processed per second.
In addition, visual graphs display:
Active Users Over Time - Ramp-up behavior and steady-state load.
Overall Samples Over Time - Request rate trends during execution.
These insights help quickly identify bottlenecks, spikes, and stability issues.
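To make these aggregate figures concrete, the sketch below derives them from raw per-request samples. This is an illustration only; the `Sample` fields are hypothetical and do not reflect the dashboard's actual export schema.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    elapsed_ms: float  # response time of one request
    success: bool      # whether the request succeeded

def summarize(samples: list[Sample], duration_s: float) -> dict:
    """Derive Execution Overview figures from raw samples."""
    times = sorted(s.elapsed_ms for s in samples)
    n = len(times)
    failures = sum(1 for s in samples if not s.success)
    # P90: the response time under which 90% of requests completed
    p90 = times[min(n - 1, int(0.9 * n))]
    return {
        "sample_count": n,
        "avg_response_ms": sum(times) / n,
        "p90_response_ms": p90,
        "failure_rate_pct": 100.0 * failures / n,
        "throughput_rps": n / duration_s,
    }

stats = summarize(
    [Sample(100, True), Sample(200, True), Sample(300, False), Sample(400, True)],
    duration_s=2.0,
)
```

Throughput here is simply sample count divided by wall-clock duration, which matches the requests-per-second reading on the overview.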
3. Request Summary
The Request Summary tab provides API-level performance analysis:
Response Time by API - Trend graph displaying response times across endpoints, with selectable metrics such as Average, P90, P95, P99, Minimum, and Maximum, enabling detailed percentile-based performance analysis.
Throughput by API - Trend graph showing the throughput (processing rate) per endpoint, enabling visibility into request volume distribution and endpoint-level load patterns over time.
API Response Time Distribution - Breakdown by latency buckets across API endpoints.
HTTP Response Codes - Visual distribution of status codes (e.g., 200, 400, 500).
Performance Summary Table
The Performance Summary Table provides a consolidated view of performance metrics for each API or transaction executed during the load test. It enables teams to quickly compare behavior across endpoints and identify bottlenecks or failure patterns.
Sample Name - Identifies the specific API or transaction executed during the test.
Total Requests - The total number of requests processed for the given sample.
Failed Requests - The number of requests that resulted in errors or unsuccessful responses.
Average Response Time - The mean response time across all requests for the sample.
Minimum Response Time - The fastest recorded response time.
Maximum Response Time - The slowest recorded response time.
P90 Response Time - The 90th percentile latency, i.e., the response time under which 90% of requests completed.
Error Rate (%) - The percentage of failed requests relative to total requests.
Throughput - The processing rate, typically measured in requests per second for the given sample.
The table supports advanced filtering to refine analysis:
Transactions/APIs Filter - Users can toggle between viewing transaction-level metrics or API-level metrics.
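The table's per-row aggregation can be sketched as a simple group-by over raw samples. The `(name, elapsed_ms, ok)` record layout below is an assumption for illustration, not the product's data model.

```python
from collections import defaultdict
from statistics import mean

def summary_rows(records):
    """Group (sample_name, elapsed_ms, ok) records into per-API summary rows."""
    by_name = defaultdict(list)
    for name, ms, ok in records:
        by_name[name].append((ms, ok))
    rows = {}
    for name, entries in by_name.items():
        times = sorted(ms for ms, _ in entries)
        failed = sum(1 for _, ok in entries if not ok)
        n = len(times)
        rows[name] = {
            "total": n,
            "failed": failed,
            "avg_ms": mean(times),
            "min_ms": times[0],
            "max_ms": times[-1],
            "p90_ms": times[min(n - 1, int(0.9 * n))],
            "error_rate_pct": 100.0 * failed / n,
        }
    return rows

rows = summary_rows([
    ("GET /users", 120, True),
    ("GET /users", 180, True),
    ("POST /orders", 400, False),
    ("POST /orders", 300, True),
])
```

Comparing rows side by side this way is what makes endpoint-level outliers (high P90, elevated error rate) stand out.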
4. Error Summary
The Error Summary tab focuses on failure diagnostics:
Error Count by Status Code - Time-based visualization of 4xx and 5xx errors.
Error Rate vs Active Users - Correlation between load intensity and failures.
Error Statistics Table - Breakdown of error types (e.g., Bad Request, Unauthorized, Internal Server Error).
Error Summary by Sampler - Endpoint-level failure breakdown across status codes.
This section enables rapid root cause analysis by identifying which APIs failed and under what load conditions.
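The 4xx/5xx grouping behind these charts amounts to classifying each response's status code. A minimal sketch (the input is just a list of status codes, an assumption for illustration):

```python
from collections import Counter

def error_breakdown(status_codes):
    """Tally HTTP status codes into the classes charted over time."""
    classes = Counter()
    for code in status_codes:
        if 400 <= code < 500:
            classes["4xx"] += 1   # client errors, e.g. Bad Request, Unauthorized
        elif code >= 500:
            classes["5xx"] += 1   # server errors, e.g. Internal Server Error
        else:
            classes["ok"] += 1
    return dict(classes)

counts = error_breakdown([200, 200, 404, 401, 500, 503, 200])
```

Plotting these tallies per time window, next to the active-user count for the same window, is what reveals whether failures track load intensity.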
5. Resources
The Resources tab monitors infrastructure-level metrics of the load generators:
CPU Usage
Memory Usage
Network Traffic
Disk Usage
These metrics help determine whether performance degradation originates from the application under test or from load generator resource constraints.
6. UI Performance (If UI Automation is Enabled)
When UI automation is orchestrated alongside load testing, the dashboard includes a UI Performance tab featuring:
Web Vitals
LCP (Largest Contentful Paint) - Measures the time taken for the largest visible content element (such as an image or heading) to fully render in the viewport.
INP (Interaction to Next Paint) - Measures the latency between a user interaction (click, tap, keypress) and the next visual update on the screen.
CLS (Cumulative Layout Shift) - Quantifies the visual stability of a page by measuring unexpected layout shifts during loading.
FCP (First Contentful Paint) - Indicates when the first piece of content (text, image, canvas, etc.) becomes visible to the user.
TTI (Time to Interactive) - Represents the time required for the page to become fully interactive and responsive to user input.
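These vitals are commonly judged against published "good" thresholds (web.dev guidance: LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1, FCP ≤ 1.8 s; the TTI figure follows Lighthouse scoring). The sketch below flags regressions against those thresholds; it is a hedged illustration, not part of this dashboard's internals.

```python
# "Good" thresholds per web.dev guidance; the TTI value follows
# Lighthouse scoring and is an assumption in this sketch.
GOOD_THRESHOLDS = {
    "LCP": 2500,  # ms
    "INP": 200,   # ms
    "CLS": 0.1,   # unitless layout-shift score
    "FCP": 1800,  # ms
    "TTI": 3800,  # ms
}

def flag_vitals(measured: dict) -> dict:
    """Return True for each vital that falls within the 'good' range."""
    return {name: value <= GOOD_THRESHOLDS[name] for name, value in measured.items()}

verdict = flag_vitals({"LCP": 2100, "INP": 350, "CLS": 0.05, "FCP": 1900, "TTI": 3000})
```

Here INP and FCP would be flagged as outside the "good" range, pointing to interaction latency and initial-render slowness under load.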
Overall Performance Summary
The Overall UI Performance Summary Table presents a transaction-level breakdown of critical UI flows executed during the test. It provides detailed latency metrics for each UI transaction, including:
Transaction Name - The name assigned to the UI transaction.
Average Time - Mean execution time across iterations.
90th Percentile Time - Time within which 90% of executions were completed, highlighting tail latency.
Standard Deviation - Variability in execution time, indicating consistency.
Minimum Time - Fastest observed execution.
Maximum Time - Slowest observed execution.
This table enables teams to identify slow or unstable UI transactions under load and correlate them with backend performance behavior.
The Execution Iterations & Artifacts section includes:
Iteration Status (Completed / Failed) and Triggered Time
Transaction-Level Timings
Console Logs
Playwright Traces
Video Recordings & Screenshots
This ensures full traceability of user journey validation under load.
7. Logs
The Logs section provides detailed execution-level visibility into the load test by displaying real-time JMeter logs generated during the test run. It captures system-level and script-level events such as environment initialization, configuration loading, JVM settings, property assignments, execution context details, and runtime information for controller and worker nodes. Users can switch between different nodes (e.g., Controller or Worker instances) to inspect distributed execution logs, making it easier to troubleshoot failures and validate configuration settings.
Export Options
Test results can be downloaded as:
Result CSV
HTML Report (ZIP)
These exports allow offline analysis and reporting.
Purpose of the Live Load Test Dashboard
The Live Load Test Dashboard enables teams to:
Monitor load test execution live.
Identify performance bottlenecks instantly.
Correlate backend metrics with UI performance (if enabled).
Analyze errors with endpoint-level granularity.
Validate infrastructure resource utilization.
Perform deep debugging using logs and traces.
It provides a unified, actionable view of system behavior under stress, bridging load generation, application performance, and user experience validation within a single analytical interface.