
QA Consultants Hosts ITEA 2020 Plenary Meeting (Day Two)

QAC EmTech
QAC Emerging Technologies Quality Assurance

This monthly newsletter focuses on QAC’s activities regarding R&D, Connected Vehicles, Cognitive Autonomous Systems, Artificial Intelligence, Internet of Things, and Blockchain Quality Assurance Services.

FOCUSED ON THE FUTURE

Leading the way
 
 
Welcome to the fourteenth edition of the EmTech newsletter. In this edition, we continue our focus on the recent XIVT Project virtual Plenary meeting hosted by QAC and review presentations from that meeting on Knowledge-Based Test Optimization and Testing of Configurable Products.

Keeping you informed

 

Our emerging technologies quality assurance workstreams

R&D and Grant Projects
Research and development of new technologies that position QAC to become a world leader in quality assurance services.
Connected Vehicles
Testing and quality assurance services developed exclusively to provide integration testing for highly connected vehicles.
Cognitive Autonomous Systems
Full automation of testing and quality assurance services developed exclusively for cognitive autonomous systems.
Cybersecurity, IoT, AI, and Blockchain
Focus on developing new technologies that utilize AI to address QA challenges in the Cybersecurity, IoT, and Blockchain domains.

The XIVT Project is a global collaborative effort motivated by the fundamental idea that knowledge-based testing and model-based testing are complementary concepts. Combined, they can solve problems that arise from the complexity of variant and configurable systems. During the Plenary meeting, two central technological work packages (WP2 and WP3), covering knowledge-based test optimization and the testing of configurable products respectively, were discussed. The following is an abstract overview of the topics presented in the Plenary meeting.

Work Package Two: knowledge-based test optimization

 

This working session started with a presentation from the Institut für Automation und Kommunikation (IFAK). The presentation focused on the development of a rule-based, Natural Language Processing (NLP) approach to requirements formalization and automated test case generation called “NLP-ReForm”.

The approach handles an extended range of requirement domains and formats and provides enhanced but easily interpretable intermediate results. It offers (semi-)automated generation of formal models of functional requirements: the tool-aided formalization of simple functional requirements is based on NLP analysis, while for more complex requirements it provides suggestions for possible formal models. In addition, it links requirements to features, variants, and test cases.
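To illustrate the flavour of such a rule-based formalization step, the minimal Python sketch below maps a simple “When <condition>, the <actor> shall <action>” requirement onto a small formal model. The pattern, the example sentence, and the formalize helper are illustrative assumptions made for this sketch; they are not part of the actual NLP-ReForm tool.

```python
import re

# Minimal sketch of a rule-based requirement formalization step, loosely in the
# spirit of the NLP-ReForm idea described above. Pattern and example are assumptions.
REQ_PATTERN = re.compile(
    r"^(?:When|If)\s+(?P<condition>.+?),\s*the\s+(?P<actor>[\w\s]+?)\s+shall\s+(?P<action>.+?)\.?$",
    re.IGNORECASE,
)

def formalize(requirement: str) -> dict:
    """Turn a simple natural-language requirement into a small formal model."""
    match = REQ_PATTERN.match(requirement.strip())
    if not match:
        # More complex requirements would only get suggestions for possible models.
        return {"status": "needs manual review", "text": requirement}
    return {
        "status": "formalized",
        "condition": match.group("condition"),
        "actor": match.group("actor"),
        "action": match.group("action"),
    }

print(formalize("When the door is open, the welding controller shall disable the tool."))
```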


The Faculty of Sciences of the University of Lisbon (FCUL) discussed extracting system requirements through noun-phrase chunking and association rule mining. The presentation defined the relationships within the universe of components in order to determine the relevant aspects of the rule extraction process, and proposed a pipeline that extracts components based on noun-phrase (NP) chunking. NP chunking has proven to be an efficient technique for extracting the components that can be recognized from design specifications.
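As a rough illustration of the technique, the short Python sketch below applies NLTK’s regular-expression chunker to a sample requirement sentence and returns its noun phrases as candidate components. The grammar, the example sentence, and the extract_noun_phrases helper are assumptions made for this sketch; FCUL’s actual pipeline also applies association rule mining over many requirements.

```python
import nltk

# Minimal noun-phrase (NP) chunking over a requirement sentence, in the spirit of
# the component extraction described above. Grammar and example are assumptions.
# Note: newer NLTK releases may name these resources 'punkt_tab' and
# 'averaged_perceptron_tagger_eng' instead.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

GRAMMAR = "NP: {<DT>?<JJ>*<NN.*>+}"  # optional determiner, adjectives, then nouns

def extract_noun_phrases(sentence: str) -> list[str]:
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    tree = nltk.RegexpParser(GRAMMAR).parse(tagged)
    return [
        " ".join(word for word, _ in subtree.leaves())
        for subtree in tree.subtrees()
        if subtree.label() == "NP"
    ]

print(extract_noun_phrases("The braking controller shall notify the driver display."))
# -> candidate components such as 'The braking controller' and 'the driver display'
```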

The third presentation was delivered by Percepio AB. This discussion focused on their current test optimization developments for variant-intensive software libraries. This moderate-sized project contains a couple of libraries and system definitions. To find non-redundant test cases, they are developing a tool to extract symbols and their dependencies. As a result, the tool can avoid testing all possible permutations and save testing time. Additionally, they are seeking a method to automatically display the symbols that are actually used.
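The sketch below gives a rough idea of how extracted symbol dependencies could be used to skip redundant tests: only test cases whose target symbols are reachable from the symbols a configuration enables are selected. The dependency map, symbol names, and helper function are hypothetical; the newsletter does not describe the internals of Percepio’s actual tool.

```python
# Illustrative sketch only: select test cases whose target symbols are reachable
# from a configuration's entry points, instead of testing every permutation.
# All names below are assumptions made for this example.

def reachable(entry_symbols, dependencies):
    """Transitively collect every symbol reachable from the enabled entry points."""
    seen, stack = set(), list(entry_symbols)
    while stack:
        symbol = stack.pop()
        if symbol not in seen:
            seen.add(symbol)
            stack.extend(dependencies.get(symbol, ()))
    return seen

dependencies = {
    "trace_start": ["ring_buffer_init", "timestamp_init"],
    "trace_stop": ["ring_buffer_flush"],
    "ring_buffer_init": ["alloc"],
    "ring_buffer_flush": ["io_write"],
}
config_entry_points = ["trace_start"]   # symbols this library variant actually uses
test_cases = {"test_alloc": "alloc", "test_flush": "io_write", "test_ts": "timestamp_init"}

used = reachable(config_entry_points, dependencies)
selected = [name for name, target in test_cases.items() if target in used]
print(selected)   # only the test cases this configuration can actually exercise
```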


The last presentation was delivered by the Research Institutes of Sweden (RISE). The motivation behind product line engineering (PLE) and test reuse was discussed, followed by feature models and Case-Based Reasoning (CBR). The session then focused on hypothesis testing and showed that requirements similarity is positively correlated with software similarity, but the association is not strong.
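As a toy illustration of that kind of analysis, the sketch below computes pairwise similarity for a handful of requirement pairs and for the corresponding code snippets, then reports how the two similarity series correlate. The Jaccard measure and the tiny example pairs are assumptions made for illustration only, not RISE’s data or metrics.

```python
import re
from statistics import correlation  # Pearson correlation, Python 3.10+

# Toy comparison of requirement-text similarity vs. code similarity; all pairs
# below are invented for illustration.

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

requirement_pairs = [
    ("the brake shall engage within 100 ms", "the brake shall engage within 50 ms"),
    ("the door shall lock when moving", "the display shall dim at night"),
    ("log every sensor fault", "log every actuator fault"),
]
code_pairs = [
    ("apply_brake(timeout_ms=100)", "apply_brake(timeout_ms=50)"),
    ("lock_door(speed)", "set_brightness(lux)"),
    ("log(sensor.fault)", "log(actuator.fault)"),
]

req_sim = [jaccard(a, b) for a, b in requirement_pairs]
code_sim = [jaccard(a, b) for a, b in code_pairs]
print(f"correlation between requirement and code similarity: {correlation(req_sim, code_sim):.2f}")
```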

On the second day, five presentations were delivered in the scope of Testing of Configurable Products within WP3. This work package is dedicated to the development of the main technological advances with respect to test generation and test execution for variant and configurable systems.
 

Work Package Three: testing of configurable products

 
The Faculty of Sciences of the University of Lisbon (FCUL) kicked off the presentations. The first presentation introduced a tool called DeltaFuzzer, which focuses on security testing through targeted fuzzing. DeltaFuzzer is a grey-box fuzzer that detects several classes of vulnerabilities present in software written in C/C++. It is the first fuzzer to implement a targeted approach that makes the fuzzer focus on the parts that need to be tested and reuses knowledge acquired in previous testing campaigns. DeltaFuzzer generates a test case (randomly or through a mutation strategy applied to existing test cases), runs it against the software under test (SUT), and collects various metrics. Next, it determines whether the program failed, in which case the test case is saved, and whether the test case is “interesting”, i.e., capable of uncovering new execution paths or causing a SUT failure; interesting test cases are saved and reused to generate further test cases.
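The Python sketch below captures that generate-run-evaluate loop in miniature: mutate a test case, run it against the SUT, and keep it when it crashes or covers new execution paths. The run_sut callable and the byte-level mutation are placeholder assumptions; DeltaFuzzer itself targets C/C++ binaries and uses richer metrics.

```python
import random

# Minimal grey-box fuzzing loop in the spirit of the workflow described above.
# run_sut is a placeholder: it must return (set of covered paths, crash flag).

def mutate(data: bytes) -> bytes:
    if not data:
        return bytes([random.randrange(256)])
    out = bytearray(data)
    out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

def fuzz(run_sut, seeds, iterations=1000):
    corpus = list(seeds)                 # "interesting" test cases to mutate further
    crashes, seen_paths = [], set()
    for _ in range(iterations):
        parent = random.choice(corpus) if corpus else b""
        case = mutate(parent)
        paths, crashed = run_sut(case)
        if crashed:
            crashes.append(case)         # failures are always saved
        if not paths <= seen_paths:      # new execution paths -> reuse for mutation
            seen_paths |= paths
            corpus.append(case)
    return crashes, corpus
```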

The next topic was a XIVT Robotics Demonstrator presented by Fraunhofer FOKUS. It is an open-source codebase for handling variability in model-based testing of cyber-physical systems in a Robot Operating System (ROS) environment. It models a flexible production cell that is capable of gluing and welding sample workpieces. Currently, it can handle several kinds of variation: structural (number of robots involved, positions and orientations), module (the type of robot, end effector, and controller), and process (fixed vs. mobile gluing tool, docking station, visual inspection).
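A minimal way to picture those variation points is a small configuration model that enumerates valid cell variants, as in the sketch below. The specific value sets and the constraint shown are assumptions chosen for illustration, not the actual Fraunhofer FOKUS demonstrator model.

```python
from dataclasses import dataclass
from itertools import product

# Illustrative enumeration of production-cell variants across structural, module,
# and process variation points; values and the constraint are assumptions.

@dataclass(frozen=True)
class CellVariant:
    robots: int            # structural: number of robots in the cell
    end_effector: str      # module: gluing or welding tool
    gluing_tool: str       # process: fixed vs. mobile gluing tool
    visual_inspection: bool

variants = [
    CellVariant(robots, effector, tool, inspect)
    for robots, effector, tool, inspect in product(
        (1, 2), ("gluing", "welding"), ("fixed", "mobile"), (True, False)
    )
    # Assumed constraint: a mobile gluing tool only makes sense when the cell glues.
    if not (effector == "welding" and tool == "mobile")
]
print(f"{len(variants)} candidate cell variants to cover in testing")
```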

The last presentation, delivered by QAC, covered the final version of a requirement-based tool for variability modelling named BeVAR. The purpose of this tool is to enable a human operator to convert information from project documents into a Product Line Model. This model describes a family of systems in terms of shared features and development history. It includes a Core model representing the minimum common set of features and Δ-models describing delta (Δ) artifacts added to the core model to generate a variant.

The Product Line Model enables automatic generation of abstract test cases and efficient variant selection for test optimization. Tool input will be manually provided by the user through a Graphical User Interface (GUI). As the result of the user’s work, a Product Line Model will be generated, in the form of a collection of UML diagrams encoded as XML documents.
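The sketch below illustrates the core-plus-delta idea in a few lines of Python: a core feature set and Δ-models that add or remove features to derive a variant. The class and feature names are illustrative assumptions, not BeVAR’s actual UML/XML representation.

```python
from dataclasses import dataclass, field

# Minimal core-plus-delta model: apply each Δ-model to the core feature set to
# derive a product variant. Names are illustrative assumptions.

@dataclass
class DeltaModel:
    name: str
    added: set[str] = field(default_factory=set)
    removed: set[str] = field(default_factory=set)

def derive_variant(core: set[str], deltas: list[DeltaModel]) -> set[str]:
    features = set(core)
    for delta in deltas:
        features -= delta.removed
        features |= delta.added
    return features

core = {"conveyor", "robot_arm", "safety_stop"}
welding = DeltaModel("welding", added={"welding_tool", "fume_extraction"})
gluing = DeltaModel("gluing", added={"gluing_tool"}, removed={"fume_extraction"})
print(derive_variant(core, [welding, gluing]))
```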


STAY TUNED

Coming next month
This issue concludes the series of newsletters on the XIVT Plenary meeting that we have distributed over the past two months. More topics are coming soon! Stay tuned for our next newsletter.

Our partners:

ACE
Ontario Tech University
Fraunhofer FOKUS
Ontario Centres of Excellence
National Research Council of Canada

 
