diff --git a/assets/images/analytics/build-insights-page-1.png b/assets/images/analytics/build-insights-page-1.png new file mode 100644 index 000000000..2e04db964 Binary files /dev/null and b/assets/images/analytics/build-insights-page-1.png differ diff --git a/assets/images/analytics/build-insights-page-2-tab-1-insights.png b/assets/images/analytics/build-insights-page-2-tab-1-insights.png new file mode 100644 index 000000000..31b383fd0 Binary files /dev/null and b/assets/images/analytics/build-insights-page-2-tab-1-insights.png differ diff --git a/assets/images/analytics/build-insights-page-2-tab-2-tests.png b/assets/images/analytics/build-insights-page-2-tab-2-tests.png new file mode 100644 index 000000000..97629fc30 Binary files /dev/null and b/assets/images/analytics/build-insights-page-2-tab-2-tests.png differ diff --git a/docs/analytics-build-comparison.md b/docs/analytics-build-comparison.md deleted file mode 100644 index c5853403b..000000000 --- a/docs/analytics-build-comparison.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -id: analytics-build-comparison -title: Compare your builds and analyze the results with Insights -sidebar_label: Builds Insights -description: Analytics - Builds Comparison for analyzing the past results with the latest test runs -keywords: - - analytics - - build insights - - build compare - - Perfecto test insights alternative - - perfecto test insights - - test observability -url: https://www.lambdatest.com/support/docs/analytics-build-comparison/ -site_name: LambdaTest -slug: analytics-build-comparison/ ---- - - - ---- -import NewTag from '../src/component/newTag'; - - -## Understanding the Build Comparison - -Build Comparison is a sophisticated analytics feature that revolutionizes how QA teams analyze and understand their test results. Imagine having the ability to look at two different snapshots of your test suite side by side, much like comparing two versions of a document to spot changes. This feature serves as your command center for understanding how your tests perform over time, helping you make informed decisions about your software releases. - -In the world of continuous integration and delivery, understanding test results isn't just about knowing what passed or failed today. It's about understanding patterns, trends, and the overall health of your test suite. Build Comparison addresses this need by creating a comprehensive view of your test execution history, making it as easy to spot a regression as it is to notice a sunny day turning cloudy. - -Traditional methods of comparing test results often involve manually scanning through multiple reports or juggling between different tabs and windows. This process is not only time-consuming but also prone to human error. Build Comparison eliminates these challenges by bringing all the necessary information into one cohesive view, similar to how a weather forecaster can see multiple weather patterns on a single radar screen. - -## How Does It Work? - -The Build Comparison feature operates like a sophisticated microscope for your test results, allowing you to zoom in and out on different aspects of your test execution data. Let's walk through each component: - -### Search and Selection Process -When you first enter the Build Comparison interface, you'll find an intuitive search system that works similar to how you might search for a book in a digital library. Simply enter the build name you're interested in, and the system will present you with matching results. 
Each build entry is rich with information, including: - -The build duration, which tells you exactly how long the tests took to run, displayed in a clear "hours:minutes:seconds" format. For example, "2:45:30" would indicate a build that took 2 hours, 45 minutes, and 30 seconds to complete. - -The execution timestamp, showing not just when the build ran, but contextual information like "3 hours ago" or "Yesterday at 2:30 PM," making it easy to understand the timeline at a glance. - -The name of the team member who initiated the build, helping maintain accountability and enabling quick communication if questions arise about specific test runs. - -### Analysis Components -The heart of Build Comparison lies in its analysis capabilities, which work together like different instruments in an orchestra to create a complete picture of your test execution: - -**Real-time Visualization System** -The feature processes and displays data instantaneously, much like a heart rate monitor in a hospital. When you select different builds or apply filters, the visualizations update immediately, showing you the impact of each change. Charts and graphs pulse with life as they reflect your test execution data, making it easy to spot patterns and anomalies. - -**Smart Filtering Mechanism** -Think of the filtering system as your personal test result assistant. It allows you to slice and dice your data in meaningful ways: -- Date ranges help you focus on specific time periods, such as last week's releases or yesterday's test runs -- Browser and OS filters let you isolate platform-specific issues -- Resolution filters help identify display-related problems -- Custom tags enable you to group related tests together, creating logical test suites for analysis - -## What Are All The Insights I Can Get? - -The Build Comparison feature is like having a team of expert analysts at your fingertips, each specializing in different aspects of test execution analysis. Here's what you can learn: - -### Test Result Distribution Analysis -Understanding your test result distribution is similar to reading a health report for your application. The feature provides: - -A comprehensive breakdown of test statuses, showing you exactly how many tests passed, failed, or were blocked. This information is presented both numerically and visually, making it easy to grasp the overall health of your test suite at a glance. - -Trend analysis that works like a fitness tracker for your tests, showing you how your test health changes over time. For example, you might notice that your pass rate has been steadily improving over the last five builds, or that a particular type of failure has become more frequent recently. - -### Performance Metrics Deep Dive -The performance metrics section acts like a sophisticated diagnostic tool for your test execution: - -Build duration trends are tracked and analyzed, helping you spot if your test suite is gradually taking longer to execute. For instance, you might notice that what used to be a 30-minute test run is now taking 45 minutes, prompting investigation into possible causes. - -Execution time comparisons allow you to see if specific tests are becoming slower or faster. This is particularly valuable when optimizing your test suite for speed and efficiency. - -## Value Proposition - -The true value of Build Comparison lies in how it transforms the way teams work with test results. 
Let's explore the benefits for each stakeholder: - -### For QA Teams: A New Era of Efficiency -QA teams using Build Comparison find themselves working smarter, not harder. Instead of spending hours manually comparing test results, they can now: - -Identify patterns in test failures within minutes rather than hours. For example, a QA engineer might quickly notice that a particular test fails only when run on Chrome browsers, leading to faster problem resolution. - -Track test stability over time with the same ease as checking a stock market trend. This helps identify flaky tests that need attention before they become major issues. - -### For Development Teams: Accelerated Problem Resolution -Developers benefit from Build Comparison through: - -Immediate visibility into how code changes impact test results. When a developer pushes new code, they can quickly see if it caused any existing tests to fail, similar to having a safety net that catches problems before they reach production. - -Historical context that helps understand if a current failure is new or recurring. This context can save hours of debugging time by pointing developers in the right direction from the start. - -### For Organizations: Tangible Business Impact -At the organizational level, Build Comparison delivers value through: - -Accelerated release cycles, as teams spend less time analyzing test results and more time improving product quality. This acceleration can mean the difference between releasing weekly instead of monthly. - -Improved resource utilization, as team members can focus on solving problems rather than finding them. This efficiency can lead to significant cost savings and better allocation of human resources. - -Build Comparison isn't just a feature - it's a transformation in how teams understand and work with test results. By providing clear, actionable insights and saving valuable time, it helps organizations deliver higher quality software faster and more confidently than ever before. - diff --git a/docs/analytics-build-insights.md b/docs/analytics-build-insights.md new file mode 100644 index 000000000..9c4194025 --- /dev/null +++ b/docs/analytics-build-insights.md @@ -0,0 +1,294 @@ +--- +id: analytics-build-insights +title: Build Insights - Analyze your test builds and get build level insights +sidebar_label: Builds Insights +description: Analytics - Builds Insights for analyzing test results and build health over time +keywords: + - analytics + - build insights + - build compare + - Perfecto test insights alternative + - perfecto test insights + - test observability +url: https://www.lambdatest.com/support/docs/analytics-build-insights/ +site_name: LambdaTest +slug: analytics-build-insights/ +--- + + + +--- +import NewTag from '../src/component/newTag'; + + +## Overview + +Build Insights is your build-level health dashboard. It shows how stable each build is, how long it took, and which tests are causing problems so you can decide quickly whether a build is safe to promote or needs more work. + +With Build Insights, you can view all your unique builds in a centralized list, then drill down into individual build details to explore comprehensive metrics and test-level insights. The feature is designed to be intuitive and accessible, whether you're a QA engineer analyzing test results or a team lead tracking overall build health. + +## Build Insights Flow + +Build Insights organizes your test data into two main views: + +1. 
**Build Insights Page** – scan all builds and spot risky ones using high-level metrics. +2. **Build Details Page** – open a specific build to understand *why* it looks good or bad, using detailed charts and test-level data. + +## Page 1: Build Insights - List of All Unique Builds + +Use this page to monitor all builds at a glance and decide which ones need attention. + +Build Insights - List of All Unique Builds + +### Search Functionality + +Use the search bar to quickly find a specific build (for example, by suite name like **Smoke**, **Regression**, or **Nightly**) instead of scrolling through the full list. + +### Build Information Table + +The main table displays your builds with the following columns: + +#### Build Name Column + +Each build entry shows: +- **Build Name**: The full name of the build (e.g., `PROD_Analytics_Playwright_Smoke_2025-12-02`) +- **Duration**: How long the build took to execute, displayed in a readable format (e.g., "27m 59s", "2h 15m") +- **Execution Timestamp**: The date and time when the build was executed (e.g., "02/12/2025, 12:58:46") +- **Project/Tag**: The associated project or tag name (e.g., "atxSmoke") +- **Build Tags**: Visual tags associated with the build (e.g., "atxSmoke_build", "playwright_build") + +#### Last Build Summary Column + +Use this column to quickly judge the latest run of a build: +- **Total**: Total number of tests executed +- **Passed**: Number of tests that passed (displayed in green) +- **Failed**: Number of tests that failed (displayed in red) +- **Others**: Number of tests in other statuses like blocked, skipped, etc. (displayed in grey) + +#### Result History Column + +Use this to understand how reliable a build has been over time (not just in the last run): +- **Donut Chart**: A circular chart showing the overall pass/fail ratio for the build +- **Bar Chart**: A series of 10 vertical bars representing the last 10 build executions, with: + - Green segments indicating successful runs + - Red segments indicating failed runs + - The height of colored segments showing the proportion of pass/fail results + +#### Duration History Column + +A line graph showing how build duration has changed over time for the last 10 builds. Use it to: +- Spot builds that are gradually getting slower. +- Detect sudden spikes that may indicate performance regressions or environment issues. +- Compare duration trends between builds when optimizing your pipeline. + +### Navigation + +Use the pagination controls at the bottom of the table to navigate through multiple pages of builds. Click "Previous" or "Next >" to browse through your build history. + +## Page 2: Build Details - Individual Build Analysis + +Open this page when you want to understand *why* a build looks healthy or unhealthy. The page is split into two tabs: **Insights** (build-level metrics) and **Tests** (test-level details). + +### Navigation and Breadcrumbs + +At the top of the page, you'll see: +- A back arrow to return to the Build Insights list +- The build name as a breadcrumb path (e.g., `PROD_Analytics_Playwright_Smoke_2025-12-02`) + +### Filters and Sharing Options + +Use filters to narrow analysis to exactly the slice you care about (for example, only Chrome failures on macOS). You can also share the build details page with your team. + +**Available Filters:** +- **Browser**: Filter by browser type (Chrome, Firefox, Safari, Edge, etc.) +- **Status**: Filter by test status (Passed, Failed, Error, etc.) +- **OS**: Filter by operating system (Windows, macOS, Linux, etc.) 
+- **Project**: Filter by project name +- **Build Tags**: Filter by build-specific tags +- **Test Tags**: Filter by test-specific tags +- **Choose Custom Tags**: Select from your custom tag definitions + +**Sharing Options:** +- Click the share icon next to the filters to share the current view with your team +- Generate shareable links to the build details page (note: filter settings are not preserved in shared links) + +## Tab 1: Insights + +Use the **Insights** tab to understand the overall health and performance of the selected build before you dive into individual tests. + +Build Insights - Insights Tab + +### Key Metrics Summary + +At the top of the Insights tab, you'll see a summary row displaying: +- **Total Unique Tests**: The number of unique test cases in the build +- **Total Tests**: The total number of test executions (including reruns) +- **Passed**: Count of passed tests (with green indicator) +- **Failed**: Count of failed tests (with red indicator) +- **Error**: Count of tests that errored (with dark red indicator) +- **Others**: Count of tests in other statuses (with grey indicator) + +### Monthly Progress Bar + +Use this bar to track how the build has behaved over the last 2 months: +- Green segments represent periods where most tests passed. +- Red segments highlight time ranges with frequent failures. +- Together they help you see whether the build is stabilizing or becoming riskier over time. + +### Build History Chart + +This stacked bar chart shows how many tests passed, failed, or errored in each execution of the build: +- **Y-axis**: Number of tests +- **X-axis**: Timestamps of build executions +- **Color-coded segments**: + - Green: Passed tests + - Red: Failed tests + - Dark Red: Error status + - Yellow: Idle timeout or other statuses +- **Legend**: Color-coded legend below the chart explains each status type + +Use it to: +- Identify trends in success/failure rates across runs. +- Quickly see when a spike in failures started. +- Compare executions before and after a code or configuration change. + +### Build Summary Chart + +This pie chart summarizes the current execution: +- **Largest segment**: Passed tests (typically shown in green with percentage) +- **Other segments**: Failed, Error, and other statuses with their respective percentages +- **Legend**: Color-coded legend showing what each segment represents + +Use it for a quick “go / no-go” signal on the current run. + +### Smart Tags Summary + +A grid displaying intelligent test categorization: +- **Total Tests Run**: Overall count of test executions +- **Performance Anomaly**: Tests flagged for unusual performance patterns +- **New Failure**: Recently introduced test failures +- **Flaky Test**: Tests with inconsistent pass/fail patterns +- **Always Failing**: Tests that consistently fail + +Each metric points you directly to tests that need attention (for example, focus first on **New Failure** and **Always Failing** before refactoring **Flaky Test**). + +## Tab 2: Tests + +Use the **Tests** tab when you are ready to debug at the individual test level. + +Build Insights - Tests Tab + +### Search Functionality + +Use the search bar to jump straight to a specific test by name (for example, when a developer shares a failing spec file name). 
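The test names, tags, and build names you search and filter on here come from your own test configuration, so keeping them descriptive and consistent is what makes this search (and the columns in the table below) useful. The following is a minimal sketch of how a Playwright run might pass that metadata to LambdaTest as capabilities; the endpoint and capability keys (`LT:Options`, `build`, `name`, `tags`) are assumptions to verify against the LambdaTest Playwright documentation for your framework and account.

```typescript
// Illustrative sketch only: the CDP endpoint and capability keys below are
// assumptions — confirm them against the LambdaTest Playwright documentation.
import { chromium } from "playwright";

const capabilities = {
  browserName: "Chrome",
  browserVersion: "latest",
  "LT:Options": {
    platform: "Windows 11",
    // Reusing one build name across runs keeps Result History and
    // Duration History meaningful (see Build Naming Best Practices below).
    build: "PROD_Analytics_Playwright_Smoke",
    // A descriptive test name makes the Tests tab search useful.
    name: "Verify FTD feature for the build",
    // Tags show up as clickable filters in the Test Results Table.
    tags: ["playwright_test", "atxSmoke_test"],
    user: process.env.LT_USERNAME,
    accessKey: process.env.LT_ACCESS_KEY,
  },
};

(async () => {
  // Connect the Playwright browser to the grid with the capabilities
  // encoded into the WebSocket endpoint.
  const wsEndpoint =
    "wss://cdp.lambdatest.com/playwright?capabilities=" +
    encodeURIComponent(JSON.stringify(capabilities));
  const browser = await chromium.connect(wsEndpoint);
  const page = await browser.newPage();
  await page.goto("https://www.lambdatest.com");
  await browser.close();
})();
```

Whatever framework you use, the values you set here are what populate the Test Name, Project/Tag, and Test Tags columns described next.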
+ +### Test Results Table + +The main table displays individual test executions with three key columns: + +#### Test Name Column + +For each test, you'll see: +- **Test Name**: The full name of the test (e.g., `PROD_Verify FTD feature for the build atxRD_flakyBuild - flaky_test_detection.spec.ts`) +- **Duration**: How long the test took to execute (e.g., "82s", "84s") +- **Execution Timestamp**: When the test was executed (e.g., "02/12/2025, 13:25:23") +- **Project/Tag**: Associated project or tag (e.g., "atxSmoke") +- **Device Icon**: Visual indicator showing the device/platform used +- **Test Tags**: Clickable tags associated with the test (e.g., "playwright_test", "atxSmoke_test") + +#### History Column + +A visual representation of the test's recent execution history: +- **10 colored circles**: Each circle represents one of the last 10 test executions + - **Green circles**: Successful test runs + - **Red circles**: Failed test runs +- This visual history helps you quickly identify: + - Test stability patterns + - Flaky tests (alternating green/red patterns) + - Consistently failing tests (mostly red) + - Stable tests (mostly green) + +#### Failure/Blocked Reason Column + +For each test, this column displays: +- **Error Message**: If the test failed or was blocked, the reason is shown in a colored box +- **No Error**: If the test passed, this is indicated in a light yellow box +- This information helps you quickly understand why tests failed without opening individual test details + +### Filtering and Analysis + +Use the filters at the top to: +- Focus on specific browsers or operating systems (for example, only Safari failures on macOS). +- Filter by test status (Passed, Failed, Error) so you can work through failed tests first. +- Narrow down by project or tags to isolate a particular suite or component. +- Apply custom tag filters to align analysis with your internal categorization. + +### Pagination + +Navigate through multiple pages of test results using the "Previous" and "Next >" controls at the bottom of the table. + +## How Teams Typically Use Build Insights + +- **Release readiness checks**: Use the Build Insights page, Key Metrics Summary, and Build Summary chart to decide if a build is safe to ship. +- **Regression and incident analysis**: Use the Build History chart, Duration History, and test History column to find when a regression started and which tests were affected. +- **Stability improvement work**: Use Smart Tags and the Tests tab filters to prioritize fixing always-failing and flaky tests. + +## Build Naming Best Practices + +### Maintain Common Build Names + +To get the most value from Build Insights, we recommend maintaining common build names instead of adding unique identifiers (UIDs) daily or weekly. 
Here's why: + +**Benefits of Common Build Names:** + +- **Historical Tracking**: When you use consistent build names (e.g., `PROD_Analytics_Playwright_Smoke`), Build Insights can aggregate all executions of that build over time, giving you: + - Accurate Result History charts showing trends across multiple runs + - Meaningful Duration History graphs that track performance over time + - Better visibility into build health patterns + +- **Easier Analysis**: Common build names make it easier to: + - Compare performance across different time periods + - Identify trends and patterns in test stability + - Track improvements or regressions in your test suite + +- **Better Organization**: Instead of creating new build names with dates or UIDs (e.g., `Build_2025-12-02`, `Build_UID_12345`), reuse the same build name for similar test suites. The system automatically tracks each execution with its timestamp, so you don't need unique names to distinguish runs. + +**Recommended Approach:** + +- Use descriptive, consistent names like: `PROD_Smoke_Tests`, `Regression_Chrome`, `Nightly_Build` +- Avoid adding dates or UIDs to build names unless necessary for specific use cases +- Let the execution timestamps handle the temporal distinction between runs +- Use tags and filters to further categorize and organize your builds + +This approach ensures that Build Insights can provide you with meaningful historical analysis and trend identification for your test suites. + +## Best Practices + +1. **Check builds early and often**: Start your day on the Build Insights page to spot risky builds before they block releases. +2. **Filter with intent**: Use filters to answer specific questions (for example, “Are failures only on Windows?”) instead of browsing everything at once. +3. **Trust history, not one run**: Use Result History, Duration History, and the test History column to judge stability over time, not just a single execution. +4. **Share context, not just failures**: When sharing a build, also mention which metrics you looked at (for example, “pass rate dropped from 98% to 90% in the last 3 runs”). +5. **Standardize build names**: Maintain common build names so histories stay meaningful and easy to compare across days and weeks. + diff --git a/sidebars.js b/sidebars.js index e94ed54a7..277295dba 100644 --- a/sidebars.js +++ b/sidebars.js @@ -3947,7 +3947,7 @@ module.exports = { items: [ "analytics-test-insights", "analytics-modules-test-intelligence-flaky-test-analytics", - "analytics-build-comparison", + "analytics-build-insights", "analytics-smart-tags-test-intelligence", "analytics-test-failure-classification", "analytics-ai-root-cause-analysis",