Accurate and continuous data capture is extremely important for the entire shift-deep approach. Fortunately, in today's agile and tech-enabled workplace, this is largely achievable.
We now have efficient and mature tools to instrument, log, store, and process large amounts of data. Gone are the days when every new software project would start with creating (from scratch) a logging engine.
Today, frameworks like ELK (Elasticsearch, Logstash, Kibana) or EFK (Elasticsearch, Fluentd, Kibana) do the job. With the advent of the public cloud, access to cheap and fast storage is also a huge enabler for this stage of shift-deep testing.
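As a minimal sketch of what this looks like in practice (the service name and field set are illustrative), an application can emit one JSON object per log line, which Logstash or Fluentd can ship into Elasticsearch without any custom parsing:

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Format each record as a single JSON line (ELK/EFK-friendly)."""
    def format(self, record):
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout-service")  # hypothetical service name
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order placed")  # emits one structured JSON line to stdout
```

Because every line is self-describing JSON, the same records can later be filtered, aggregated, and dashboarded in Kibana without re-instrumenting the product.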
The following table provides a curated list of common data capture scenarios and sample tools for each:
| Data Capture Scenario | Available Tools (sample) |
| --- | --- |
| Log and trace data from the product under test | ELK, EFK |
| Network (wire) data while the product is being tested | Wireshark, Netsh, tcpdump |
| Test artifact data (test results, screenshots, videos, etc.) | Selenoid/Selenium Hub, BrowserStack, Cypress |
| Monitoring data when the product is being used over periods of time (stress, endurance, load) | ELK, EFK, Prometheus, Dynatrace |
| Test metadata (tags, system and environment info, etc.) | Tricentis Tosca, SmartBear, ReportPortal |
Once data is captured, the next stage is to organize and structure it. In certain cases, the tool stack already supports a good level of analysis and customization. For example, metrics collected in Prometheus can be effectively bubbled up into a trends-based dashboard in Grafana with a high level of granularity, drill-down capability, and customization.
In other cases, custom solutions need to be built. This stage of data organization is extremely important because the better organized and structured your test ecosystem data is, the easier and more effective the analysis and interpretation become for gauging product quality and finding defects.
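A toy illustration of this organization step (the raw log format and component names are invented for the example): parse unstructured log lines into structured records, then roll them up into an aggregate a dashboard could chart.

```python
import re
from collections import Counter

# Hypothetical raw log format: "2024-05-01T10:00:00 ERROR payments gateway timeout"
LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<level>\w+)\s+(?P<component>\w+)\s+(?P<msg>.*)$")

def organize(lines):
    """Parse raw lines into structured dicts, dropping unparseable ones."""
    records = []
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            records.append(m.groupdict())
    return records

def errors_by_component(records):
    """A simple roll-up that a dashboard could render as a bar chart."""
    return Counter(r["component"] for r in records if r["level"] == "ERROR")

logs = [
    "2024-05-01T10:00:00 ERROR payments gateway timeout",
    "2024-05-01T10:00:01 INFO auth login ok",
    "2024-05-01T10:00:02 ERROR payments card declined",
]
print(errors_by_component(organize(logs)))  # → Counter({'payments': 2})
```

The point is not the parsing itself, but that once records are structured, every downstream question ("which component fails most?") becomes a one-line aggregation instead of a manual log hunt.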
The following are key use cases we have seen in test lifecycle journeys where Stage 2 makes a key difference; for each, we note how shift-deep data organization helps.
Sprint-wise software quality tracking
An ability to identify certain indicators/metrics which can be collected as part of sprint testing, then view their progress over time. Examples of indicators could vary from simple test metrics like regression %, P1 defects, etc. to complex quality metrics like performance envelopes and code quality variance/deviation.
In this use case, using tools like ReportPortal and Grafana together with time-series databases like InfluxDB, a test-tracking infrastructure can be created in which testing outputs, as well as quality-related outputs, are aggregated and visualized through charts and dashboards.
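To make the indicators concrete, here is a minimal sketch (the sprint data and metric names are invented) of computing per-sprint indicators that a time-series store could hold and a dashboard could trend:

```python
from dataclasses import dataclass

@dataclass
class SprintResult:
    sprint: str
    tests_run: int
    tests_passed: int
    p1_defects: int

def regression_pct(r):
    """Pass rate of the sprint's regression run, as a percentage."""
    return round(100.0 * r.tests_passed / r.tests_run, 1)

def trend(results):
    """Per-sprint indicator points, in the shape a time-series DB could store."""
    return [
        {"sprint": r.sprint,
         "regression_pct": regression_pct(r),
         "p1_defects": r.p1_defects}
        for r in results
    ]

history = [
    SprintResult("S1", tests_run=200, tests_passed=180, p1_defects=4),
    SprintResult("S2", tests_run=210, tests_passed=200, p1_defects=2),
]
print(trend(history))  # regression % rising, P1 defects falling sprint over sprint
```

The same pattern extends to the more complex indicators mentioned above (performance envelopes, code quality deviation): each becomes one more field per sprint point.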
Optimized test selection for patches and bug fixes
For large, complex systems with a mix of legacy and modern components, it is very difficult to gauge test coverage for patches or bug fixes. Some teams end up running the entire test suite, which is costly and time-consuming. The ability to identify the right level of test case coverage (and selection) based on code churn is a very real and pressing need.
In this use case, there is a need for initial data organization to baseline the product under test. Techniques like DSM (Dependency Structure Matrix) analysis, supported by tools like Lattix, can be used to bucket and classify the product's modules and sub-components. Then, using tags and heuristics, the right set of test cases can be selected for a given churn input (provided by the dev teams). This can reliably optimize test coverage.
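A toy sketch of the selection step (the module buckets, tags, and test names are invented; in practice the buckets would come from a DSM-style analysis of the codebase):

```python
# Hypothetical mapping from product modules to tagged test cases.
TESTS_BY_TAG = {
    "billing": ["test_invoice_totals", "test_tax_rules"],
    "auth": ["test_login", "test_token_refresh"],
    "ui": ["test_smoke_ui"],
}

def select_tests(churned_modules, always_run=("ui",)):
    """Pick tests whose tags match the churn input from the dev team,
    plus a small always-run safety net, deduplicated and in order."""
    tags = list(churned_modules) + [t for t in always_run if t not in churned_modules]
    selected = []
    for tag in tags:
        for test in TESTS_BY_TAG.get(tag, []):
            if test not in selected:
                selected.append(test)
    return selected

# A patch that only touched billing code runs 3 tests, not the full suite.
print(select_tests(["billing"]))
```

The always-run safety net reflects a common design choice: pure churn-based selection is cheap, but a small smoke set guards against gaps in the tagging heuristics.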
Root cause analysis
In industrial and compliance-oriented industries, product quality and defect removal entail RCA (Root Cause Analysis) practices like 5-Whys and FMEA. As these industries become more digital and software-centric, these techniques are moving from manual, user-created analyses toward predictive, prognostic, and prescriptive ones that are automatically generated. The ability to give root-cause analysis this more digital character is an evolving trend in quality engineering.
Closed-loop quality management
In this case, the solutions that come to the fore are QMS and eQMS systems, which are normally domain-specific and typically bespoke. A closed-loop QMS process must have a good level of data integration across all enterprise systems; for example, a customer issue from field service automatically triggers a complaint, which automatically initiates a product investigation, which can in turn trigger an action in the customer record. From a testing perspective, the key aspects here are mechanisms for robust traceability across the product chain (requirements to code to test cases to defects), which can then be correlated with field observations and fixed across the loop.
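The traceability chain described above can be sketched as a small lookup structure (the link store, IDs, and module names are illustrative; a real eQMS would hold these in a database):

```python
# Toy traceability store linking requirements -> code modules -> tests -> defects.
LINKS = {
    "REQ-12": {"modules": ["payments"], "tests": ["test_refund"], "defects": ["DEF-7"]},
    "REQ-13": {"modules": ["auth"], "tests": ["test_login"], "defects": []},
}

def impacted_requirements(field_issue_module):
    """Walk the chain backwards: which requirements does a field issue touch?"""
    return [req for req, link in LINKS.items()
            if field_issue_module in link["modules"]]

def open_loop(req):
    """Everything that must be revisited to close the loop for a requirement."""
    link = LINKS[req]
    return {"tests": link["tests"], "defects": link["defects"]}

# A field complaint about the payments module maps straight back to REQ-12,
# its regression test, and its known defect record.
print(impacted_requirements("payments"))
print(open_loop("REQ-12"))
```

With this kind of linkage in place, a field observation is never a dead end: it resolves to the requirements, tests, and defects that must change to close the loop.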
The acme of Shift-Deep is when the organized data starts yielding consistent and coherent insights. This can transform the whole testing process and give a significant uptick in the value of testing. In Stage 3, attempts are made to extract insights from the organized data first in the form of manual SOPs/steps and then using AI/ML-driven algorithms.
The manual SOPs and steps are, to some extent, covered in the use cases in the previous section (for example, the use of FMEA or the 5-Whys process for root-cause defect analysis).
However, a truly exciting area is the use of AI and ML algorithms and technology for gathering insights.
One example of this within GlobalLogic itself is a technology accelerator called InteliQ.
InteliQ applies a machine learning approach to regression testing to more effectively prioritize test cases and identify critical issues and high-risk areas earlier. The solution also helps test engineers automate manual processes, identify problematic autotests, and detect any outliers that could create a weakness in the development process.
By automating QA and detecting defects earlier, InteliQ can reduce project phase costs by around 11% and accelerate the regression test cycle timeline.
InteliQ is built in Python, using ML toolkits such as scikit-learn, pandas, and NumPy.
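To illustrate the general shape of ML-driven test prioritization (this is not InteliQ's actual model; the features, training data, and threshold are synthetic), one can train a classifier on historical run features and rank candidate tests by predicted failure probability:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Features per historical run: [recent failure rate, code churn in covered
# modules, normalized duration] -- all synthetic for this sketch.
X_history = rng.random((200, 3))
# Synthetic label: tests with high failure rate plus high churn tended to fail.
y_history = (X_history[:, 0] + X_history[:, 1] > 1.2).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_history, y_history)

candidates = {
    "test_checkout": [0.9, 0.8, 0.3],  # failure-prone and heavily churned
    "test_search":   [0.1, 0.2, 0.5],  # stable, little churn
    "test_profile":  [0.4, 0.9, 0.2],
}
probs = model.predict_proba(np.array(list(candidates.values())))[:, 1]
ranked = sorted(zip(candidates, probs), key=lambda kv: kv[1], reverse=True)
print([name for name, _ in ranked])  # run the riskiest tests first
```

Ranking by predicted risk is what lets a regression cycle surface critical issues earlier: the tests most likely to fail run at the front of the queue.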
Several more areas of exploration are underway in this Stage 3 step, some of which are:
- Differential testing: comparing application versions across builds, classifying the differences, and learning from feedback on the classifications.
- Visual testing: leveraging image-based learning and screen comparisons to test the look and feel of an application.
- Declarative testing: specifying the intent of a test in a natural or domain-specific language and having the system figure out how to carry out the test.
- Self-healing automation: auto-correcting element selection in tests when the UI changes.
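As a toy illustration of the last idea, self-healing automation (the page model and selectors are invented; real implementations diff DOM attributes rather than dictionary keys): try the primary locator, fall back to alternates, and report which one worked so the test can be "healed".

```python
# The page is modeled as a dict of selector -> element for this sketch.
def find_with_healing(page, locators):
    """Try locators in priority order; return the element and the locator
    that matched, so a changed primary locator can be updated afterwards."""
    for locator in locators:
        element = page.get(locator)
        if element is not None:
            return element, locator
    raise LookupError(f"no locator matched: {locators}")

old_page = {"#submit-btn": "<button>"}
new_page = {"[data-test=submit]": "<button>"}  # id renamed in a UI change

locators = ["#submit-btn", "[data-test=submit]"]
print(find_with_healing(old_page, locators))  # primary locator still works
print(find_with_healing(new_page, locators))  # heals to the data-test hook
```

The returned locator is the key design point: logging which fallback succeeded is what turns a silent workaround into an auto-correction of the test itself.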
Learn More About InteliQ