
Explicit State Model Checking Effects on Learning-Based Testing


Article Information

Title: Explicit State Model Checking Effects on Learning-Based Testing

Authors: Iram Jaffar, Muddassar Azam Sindhu, Shafiq Ur Rehman

Journal: International Journal of Innovations in Science & Technology

HEC Recognition History

Category | From       | To
Y        | 2024-10-01 | 2025-12-31
Y        | 2023-07-01 | 2024-09-30
Y        | 2021-07-01 | 2022-06-30

Publisher: 50SEA JOURNALS (SMC-PRIVATE) LIMITED

Country: Pakistan

Year: 2024

Volume: 6

Issue: 7

Language: English

Keywords: Software Testing, Explicit State Model Checking, Learning-Based Testing, Counterexamples


Abstract

Exploring the impact of integrating an explicit state model checker into the learning-based testing (LBT) framework presents an intriguing challenge. Traditionally, LBT has leveraged symbolic model checkers such as NuSMV and SAL, which use Binary Decision Diagrams (BDDs) to analyze multiple states concurrently. In contrast, explicit state model checkers evaluate one state at a time, a key distinction that suggests potential advantages for explicit state checking in the context of LBT. Thus, it is valuable to investigate how integrating an explicit state model checking algorithm might influence the performance of LBT. Model checkers explore the state space to verify conformance with user-defined correctness requirements, typically represented as Linear Temporal Logic (LTL) formulas. If a property violation is detected, it is presented as a counterexample. NuSMV and SAL employ different algorithms for generating and displaying counterexamples. This paper specifically examines the effect of SPIN-generated counterexamples on the LBT process. Evaluation metrics include Total LBT Iterations, First Bug Reporting Time (in milliseconds), Counterexample Length, Precision, and Efficiency, among others. Total Model Checking Time (in milliseconds) captures the cumulative time spent verifying the model over all iterations. SPIN consistently requires the least time for all specifications compared to other model checkers. As a result, experiments demonstrate that SPIN is more efficient when integrated with LBT, leading to faster convergence of the LBT hypothesis to the target System Under Test (SUT) in comparison to NuSMV and SAL.
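The abstract describes checking user-defined correctness requirements, expressed as LTL formulas, against a model and reporting violations as counterexamples. As a minimal illustration of this idea (the states, the property, and the trace below are hypothetical, not from the paper), a safety property of the shape G(p -> X q) can be checked over a finite execution trace, with a violation reported as the step where it occurs:

```python
# Illustrative sketch: checking a property of the form G(p -> X q)
# over a finite trace. The cruise-controller states and the property
# itself are hypothetical examples, not taken from the paper.

def holds_globally_next(trace, p, q):
    """Return the index of the first violation of G(p -> X q), or None."""
    for i in range(len(trace) - 1):
        if p(trace[i]) and not q(trace[i + 1]):
            return i  # counterexample: the violation starts at step i
    return None

# Hypothetical requirement: after 'brake' the controller must be
# 'disengaged' in the next step.
trace = ["cruising", "brake", "disengaged", "cruising", "brake", "cruising"]
violation = holds_globally_next(trace,
                                p=lambda s: s == "brake",
                                q=lambda s: s == "disengaged")
print(violation)  # 4: 'brake' at step 4 is followed by 'cruising'
```

A full model checker explores all reachable states of a model rather than a single trace, but the counterexample it returns plays the same role: a concrete execution witnessing the violated formula.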


Research Objective

To investigate the impact of integrating an explicit state model checker (SPIN) into the Learning-Based Testing (LBT) framework and compare its performance against symbolic model checkers (NuSMV and SAL).


Methodology

The study integrates the SPIN model checker with an incremental automaton learning algorithm within the LBT framework. Two Systems Under Test (SUTs) were used: a vehicle cruise controller and a three-floor elevator system. Performance was evaluated using metrics such as Total LBT Iterations, First Bug Reporting Time, Counterexample Length, Precision, Efficiency, Total Model Checking Time, and Hypothesis Size. Experiments were conducted using the IKL model inference algorithm.
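The learn/check/refine cycle described above can be sketched as a loop. This is a simplified skeleton with hypothetical interfaces: the paper uses the IKL inference algorithm and SPIN, both of which are stubbed out here with toy stand-ins.

```python
def lbt_loop(sut_run, learn, model_check, spec, max_iters=100):
    """Skeletal learning-based testing loop (interfaces are illustrative)."""
    observations = []
    for iteration in range(1, max_iters + 1):
        hypothesis = learn(observations)        # e.g. IKL inference in the paper
        cex = model_check(hypothesis, spec)     # e.g. SPIN; None if spec holds
        if cex is None:
            return ("converged", iteration)     # hypothesis satisfies the spec
        sut_trace = sut_run(cex)                # replay counterexample on the SUT
        if not spec(sut_trace):
            return ("bug", iteration)           # real fault found in the SUT
        observations.append((cex, sut_trace))   # spurious: refine the hypothesis
    return ("budget exhausted", max_iters)

# Toy stand-ins: the "learner" just counts refinements and the fake
# checker reports the spec as satisfied after two of them.
def learn(obs):
    return len(obs)

def model_check(hyp, spec):
    return None if hyp >= 2 else [0]

def sut_run(inputs):
    return [0 for _ in inputs]

def spec(trace):
    return all(o == 0 for o in trace)

print(lbt_loop(sut_run, learn, model_check, spec))  # ('converged', 3)
```

The metrics listed above map directly onto this loop: Total LBT Iterations counts the passes, First Bug Reporting Time measures how quickly the "bug" branch is reached, and Total Model Checking Time accumulates the cost of the `model_check` calls.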

Methodology Flowchart
graph TD;
    A[Integrate SPIN with LBT Framework] --> B[Select Systems Under Test - SUTs];
    B --> C[Define Evaluation Metrics];
    C --> D[Conduct Experiments with SUTs and SPIN];
    D --> E[Compare SPIN Performance with NuSMV and SAL];
    E --> F[Analyze Results and Draw Conclusions];

Discussion

The paper discusses how the explicit state model checking approach of SPIN, particularly its nested depth-first search algorithm, contributes to its efficiency compared to the BDD-based approach of NuSMV and SAL. The uniqueness of SPIN-generated counterexamples allows LBT to build more refined hypotheses, accelerating the learning process. The paper also details the experimental setup, the SUTs used, and the various performance metrics analyzed.
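The nested depth-first search mentioned above is SPIN's standard algorithm for detecting accepting cycles (i.e. property violations) in the product automaton. A compact sketch of the idea, on a hypothetical graph rather than any model from the paper:

```python
# Sketch of nested depth-first search for accepting-cycle detection,
# the algorithm underlying SPIN's LTL checking. The graph and the
# acceptance set below are illustrative examples.

def nested_dfs(succ, init, accepting):
    """Return True iff an accepting cycle is reachable from `init`."""
    visited_outer, visited_inner = set(), set()

    def inner(s, seed):
        # Second DFS: look for a path that closes a cycle at `seed`.
        for t in succ(s):
            if t == seed:
                return True
            if t not in visited_inner:
                visited_inner.add(t)
                if inner(t, seed):
                    return True
        return False

    def outer(s):
        visited_outer.add(s)
        for t in succ(s):
            if t not in visited_outer and outer(t):
                return True
        # Post-order: launch the inner search from accepting states.
        return s in accepting and inner(s, s)

    return outer(init)

graph = {0: [1], 1: [2], 2: [1]}  # cycle 1 -> 2 -> 1 through accepting state 1
print(nested_dfs(lambda s: graph[s], 0, accepting={1}))  # True
```

Because states are generated and visited one at a time, the first accepting cycle found yields a single concrete lasso-shaped counterexample, which is the kind of unique trace the discussion credits with refining the LBT hypothesis quickly.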


Key Findings

SPIN consistently outperforms NuSMV and SAL in terms of efficiency when integrated with LBT. This leads to faster convergence of the LBT hypothesis to the target SUT. SPIN demonstrates faster state space exploration, generates more unique counterexamples, and requires less total model checking time.


Conclusion

Integrating the SPIN model checker into the Learning-Based Testing framework significantly enhances testing efficiency. SPIN's explicit state verification capabilities and effective counterexample generation lead to faster convergence, more refined hypotheses, and reduced execution times compared to NuSMV and SAL.


Fact Check

1. SPIN's Total Model Checking Time: For the cruise controller SUT, SPIN's total model checking times across four specifications were 30.7, 35.54, 42.3, and 23.4 milliseconds.
2. NuSMV's Total Model Checking Time: For the cruise controller SUT, NuSMV's total model checking times across four specifications were 34.12, 39.5, 43.5, and 29.8 milliseconds.
3. SAL's Total Model Checking Time: For the cruise controller SUT, SAL's total model checking times across four specifications were 195.73, 209.69, 236.57, and 184.14 milliseconds.
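The timings above can be sanity-checked directly: averaging each tool's four cruise-controller measurements confirms the reported ordering, with SPIN fastest and SAL slowest.

```python
# Mean total model checking time per tool, using the four
# cruise-controller figures quoted in the fact check above.
spin = [30.7, 35.54, 42.3, 23.4]
nusmv = [34.12, 39.5, 43.5, 29.8]
sal = [195.73, 209.69, 236.57, 184.14]

for name, times in [("SPIN", spin), ("NuSMV", nusmv), ("SAL", sal)]:
    print(f"{name}: mean {sum(times) / len(times):.2f} ms")
```

SPIN's mean sits just under 33 ms, NuSMV's near 37 ms, and SAL's above 200 ms, consistent with the paper's claim that SPIN requires the least time on every specification.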

