The Application of AI and Variant Testing
How did you get involved in your field of research?
I’ve been involved in several European research projects, particularly on the application of model-driven engineering for cyber-physical systems, with a particular focus on verification, validation, and testing. That’s how, over the years, we have built the profile and expertise that we have now at RISE Research Institutes of Sweden, in the Västerås office.
What is the context for variant testing and why is variant testing important to industrial partners?
Variant testing is important. We observed that companies typically do not produce one single product but rather a portfolio of different products, product versions, and variants. Variant testing becomes more and more important in industrial applications because we want to ensure that the different products developed and customized for customers in different regions and markets have a sufficient level of quality and adhere to regional standards.
Is it possible to test all these configuration combinations? And how do you determine when you’ve done enough testing?
When we think about all the possible product variants, one important question is how many combinations of different product features could be tested. This is particularly important given the short time that companies have to deliver a product to their customers; they need to bring their product to market faster than their competitors. This means we need to identify what subset of combinations must be tested in order to assure a certain level of product quality before delivery. Certain domains have strict safety requirements, so we need to ensure that a satisfactory safety level of the product is met, while others focus on other quality characteristics, so the amount of testing can vary from one domain to another.
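The gap between testing every configuration and testing a well-chosen subset can be made concrete with a small sketch. The feature model below is purely hypothetical, and a pairwise (2-way) covering suite built greedily is just one common way to pick such a subset; the interview does not prescribe a specific technique:

```python
from itertools import combinations, product

# Hypothetical feature model: feature names and options are illustrative only.
features = {
    "region":   ["EU", "NA", "ASIA"],
    "braking":  ["standard", "regenerative"],
    "signal":   ["ETCS", "legacy"],
    "language": ["en", "de", "sv"],
}

names = list(features)
all_configs = list(product(*features.values()))
print(f"exhaustive configurations: {len(all_configs)}")  # 3*2*2*3 = 36

def pairs_of(cfg):
    """All (feature, value) pairs exercised by one configuration."""
    return {((names[i], cfg[i]), (names[j], cfg[j]))
            for i, j in combinations(range(len(names)), 2)}

# Every feature-value pair that a pairwise test suite must cover at least once.
required = set().union(*(pairs_of(cfg) for cfg in all_configs))

# Greedy selection: repeatedly pick the configuration that covers
# the most still-uncovered pairs, until every pair is covered.
suite = []
while required:
    best = max(all_configs, key=lambda cfg: len(pairs_of(cfg) & required))
    suite.append(best)
    required -= pairs_of(best)

print(f"pairwise suite size: {len(suite)}")
```

Even for this toy model, the pairwise suite is a fraction of the 36 exhaustive configurations; for realistic feature models the gap grows dramatically, which is why selecting a subset to test matters.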
Does manual testing have a role in variant testing?
Manual testing is still very important, especially if we look at the quality of tests that are manually written versus the ones that are automatically generated. Manual test cases can have higher quality than automatically generated ones: we may be able to generate a huge number of test cases automatically, but they might not find any issues in the system. The main question here is whether manual testing is scalable with respect to the complexity and size of industrial products. That’s why we are moving towards automated solutions for the generation and execution of test cases, because we observe that manual testing is no longer scalable.
I notice that you and your team have been using machine learning and natural language processing to enable variant testing. Can you expand on why AI techniques are important to variant testing?
What we observed was that when companies receive an order for a new product, they go back and look into their existing products and previous projects to try to understand what features and components can be reused for building the new system. We can automate this very time-consuming process, which heavily relies on previous experience and knowledge. We use natural language processing to automatically process the new set of requirements that we receive from the customer, and then, by looking into the existing software and performing similarity analysis, we can automatically recommend to developers and testers which components can be reused across different product versions.
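As a rough illustration of the similarity-analysis idea, the sketch below compares a new requirement against existing requirements using simple bag-of-words cosine similarity and recommends the component traced to the closest match. The requirement texts, IDs, and component names are invented for illustration, and a real pipeline would use much richer NLP than word counts:

```python
import math
from collections import Counter

# Hypothetical existing requirements, each traced to a software component.
existing = {
    "REQ-101": ("The brake system shall engage within 200 ms of command.", "BrakeController"),
    "REQ-102": ("The door shall not open while the train is moving.", "DoorInterlock"),
    "REQ-103": ("Cabin displays shall support English and German text.", "HMILocalization"),
}

def bow(text):
    """Bag-of-words representation: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(new_req):
    """Return (score, requirement id, component) of the most similar existing requirement."""
    scored = [(cosine(bow(new_req), bow(text)), req_id, component)
              for req_id, (text, component) in existing.items()]
    return max(scored)

new_requirement = "The brake shall engage within 150 ms of the stop command."
score, req_id, component = recommend(new_requirement)
print(req_id, component)
```

Here the new braking requirement scores highest against REQ-101, so its traced component is suggested for reuse; with traceability links in place, this kind of lookup replaces a manual search through previous projects.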
You’ve had great success with your approach with Bombardier Transportation, but is your approach transferable to other domains and industry use cases?
I can say that there is nothing specific that limits our solution to the Bombardier context, but it is important to have natural language requirements written in a particular format that can be automatically processed by the solution. Having traceability between requirements and software components is also very important so that we can automate the whole process.
Can you talk a little bit about one of the tools you’ve built to support this, the VARA tool?
We built a tool called VARA – Variability-Aware Reuse Analysis – that encapsulates the techniques we have talked about, mainly machine learning and natural language processing. The tool enables us to perform similarity analysis across projects: when a company receives an order for a new product from a customer, it can automatically perform this analysis and identify key components and features from previous projects to reuse in building the new product versions and variants.
In your research, where’s the future of variant testing?
I believe that as the size and complexity of software-intensive systems grow, and as more industrial companies employ software-based solutions as part of their products, variant testing will gain further importance in industry. We can optimize the whole testing process: determining what needs to be tested and when, as well as identifying patterns of issues across different products, versions, and variants.
Dr. Mehrdad Saadatmand is a senior researcher and leader of the software testing group at RISE Research Institutes of Sweden. Mehrdad holds a Ph.D. in software engineering from Mälardalen University, with a focus on model-based engineering of real-time embedded systems. He has led and played a key role in multiple major international software systems testing research projects that aim to exploit the synergies between model-based analysis and testing in the verification of embedded systems.
Peter Watkins is the Chief Operating Officer at QA Consultants and is the current President of the World Network of Productivity Organizations. Peter previously worked as Executive Vice President and Chief Technology Officer for The McGraw-Hill Companies, Executive Vice President and Chief Information Officer for the Canadian Imperial Bank of Commerce, Global Leader for Financial Services for EDS, and National Practice Director for Information Technology for E&Y.