by Mauro Milella, Giovanni Pagliarini, Andrea Paradiso, Ionel Eduard Stan
Applied Computational Logic and Artificial Intelligence (ACLAI) Laboratory,
Department of Mathematics and Computer Science,
University of Ferrara,
Italy
Modal symbolic learning is the subfield of artificial intelligence that brings together machine learning and modal logic to design algorithms that extract modal logic theories from data. The generalization of model checking to multi-models and multi-formulas is key for the entire inductive process (with modal logics). We investigate such generalization by, first, pointing out the need for finite model checking in automatic inductive reasoning, and, then, showing how to efficiently solve it. We release an open-source implementation of our simulations.
Given a specific parametrization, the purpose of this module is to perform model checking over multiple models and multiple formulas. Results are plotted, comparing the elapsed time under different memoization intensities.
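For intuition, here is a minimal, self-contained Julia sketch of multi-model, multi-formula model checking with a per-model memoization table. All types and function names (`Formula`, `KripkeModel`, `check`, `check_all`) are illustrative placeholders and do not reflect the actual implementation in `src/experiments.jl`:

```julia
# Toy modal grammar: letters, negation, conjunction, and the diamond operator.
abstract type Formula end
struct Letter  <: Formula; name::Symbol end
struct Neg     <: Formula; φ::Formula end
struct And     <: Formula; φ::Formula; ψ::Formula end
struct Diamond <: Formula; φ::Formula end

# A Kripke model: a set of worlds, an accessibility relation, and a valuation.
struct KripkeModel
    worlds::Vector{Int}
    relation::Dict{Int,Vector{Int}}      # w ↦ worlds accessible from w
    valuation::Dict{Int,Set{Symbol}}     # w ↦ letters true at w
end

# Model checking with memoization: memo maps each (sub)formula to the set of
# worlds where it holds, so shared (sub)formulas are evaluated only once.
function check(m::KripkeModel, φ::Formula, memo::Dict{Formula,Set{Int}})
    haskey(memo, φ) && return memo[φ]
    result = if φ isa Letter
        Set{Int}(w for w in m.worlds if φ.name in m.valuation[w])
    elseif φ isa Neg
        setdiff(Set(m.worlds), check(m, φ.φ, memo))
    elseif φ isa And
        intersect(check(m, φ.φ, memo), check(m, φ.ψ, memo))
    else  # Diamond: ◊φ holds at w iff φ holds at some world accessible from w
        sat = check(m, φ.φ, memo)
        Set{Int}(w for w in m.worlds if any(v -> v in sat, m.relation[w]))
    end
    memo[φ] = result
    return result
end

# Multi-model, multi-formula checking: with shared memoization the memo
# survives across all formulas checked on the same model; otherwise it is
# reset before each formula.
function check_all(models, formulas; shared::Bool = true)
    answers = falses(length(models), length(formulas))
    for (i, m) in enumerate(models)
        memo = Dict{Formula,Set{Int}}()
        for (j, φ) in enumerate(formulas)
            shared || empty!(memo)
            answers[i, j] = first(m.worlds) in check(m, φ, memo)  # truth at the first world
        end
    end
    return answers
end

# Tiny usage example: world 1 sees world 2; p holds at 1, q holds at 2.
m = KripkeModel([1, 2], Dict(1 => [2], 2 => Int[]),
                Dict(1 => Set([:p]), 2 => Set([:q])))
check_all([m], [And(Letter(:p), Diamond(Letter(:q)))])   # p ∧ ◊q holds at world 1
```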
Please install the required dependencies by entering the Julia package manager (press `]` in the REPL) and launching `instantiate`:

```
(OVERLAY2022) pkg> instantiate
```
Then, the code can be executed by launching the command:

```
julia --project=. src/experiments.jl --nmodels N_MODELS --nworlds N_WORLDS --nletters N_LETTERS --fmaxheight F_MAXHEIGHT --fmemo F_MEMO --nformulas N_FORMULAS --prfactor PR_FACTOR --nreps N_REPS --rng RNG_SEED
```
where the flags have the following meaning:

- `--nmodels`: (int, $\gt 0$) number of Kripke models.
- `--nworlds`: (int, $\gt 0$) number of worlds in each Kripke model.
- `--nletters`: (int, $\gt 0$) alphabet cardinality.
- `--fmaxheight`: (int, $\gt 0$) maximum height of the generated formulas.
- `--fmemo`: string representing the maximum height of shared (sub)formulas. An example of valid usage is `--fmemo='no,0,1,2'`, where `'no'` means that memoization is not shared between the model checking of different (sub)formulas on the same model.
- `--nformulas`: (int, $\gt 0$) number of formulas to be model-checked on each Kripke model.
- `--prfactor`: (float, ranging in $[0,1]$) pruning factor, i.e., the probability of interrupting the formula-generation process. The closer this parameter is to $1.0$, the smaller the generated formulas tend to be with respect to the chosen `--fmaxheight` (see the sketch after this list).
- `--nreps`: (int, $\gt 0$) number of experiment repetitions.
- `--rng`: (int) seed to reproduce the experiment.
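To illustrate how `--prfactor` interacts with `--fmaxheight` and `--nletters`, here is a hedged sketch of a random formula generator; the grammar, the string output, and the `randformula` name are made up for illustration and are not the generator used by `src/experiments.jl`:

```julia
using Random

# Hypothetical generator illustrating --fmaxheight, --nletters and --prfactor:
# at every step, generation stops at a leaf (a propositional letter) with
# probability prfactor, or when the height budget is exhausted, so larger
# pruning factors yield smaller formulas on average.
function randformula(rng::AbstractRNG, maxheight::Int, nletters::Int, prfactor::Float64)
    if maxheight == 0 || rand(rng) < prfactor
        return "p$(rand(rng, 1:nletters))"                     # leaf
    end
    op = rand(rng, [:¬, :◊, :□, :∧, :∨])
    if op in (:¬, :◊, :□)                                      # unary operators
        return "$(op)($(randformula(rng, maxheight - 1, nletters, prfactor)))"
    else                                                       # binary operators
        l = randformula(rng, maxheight - 1, nletters, prfactor)
        r = randformula(rng, maxheight - 1, nletters, prfactor)
        return "($l $op $r)"
    end
end

rng = Random.MersenneTwister(1)
println(randformula(rng, 8, 2, 0.2))   # height at most 8, over letters p1, p2
```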
For more information, please launch:

```
julia --project=. src/experiments.jl --help
```
A utility bash script is provided (see `src/launch_experiments.sh`) to enqueue the execution of multiple parametrizations, instead of manually launching the command above for each one.
The generated plot is saved in `outcomes/plots`. Each execution is also associated with a CSV file (created in `outcomes/csv`), useful for adjusting and customizing plots with ad-hoc scripts; in this regard, an example Python script is provided (placed in the latter directory) to obtain results graphically similar to those presented in the paper.
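As an alternative to the provided Python script, a similar post-processing step could be sketched in Julia as follows; the CSV file name and the column names (`fmemo`, `elapsed`) are hypothetical placeholders to adapt to the actual schema of the files in `outcomes/csv`, and the snippet assumes the CSV, DataFrames, Statistics and Plots packages are installed:

```julia
using CSV, DataFrames, Statistics, Plots

# Hypothetical post-processing of an experiment CSV: the file name and the
# column names (:fmemo, :elapsed) are placeholders for the actual schema.
df = CSV.read("outcomes/csv/experiment.csv", DataFrame)

# Average elapsed time per memoization level.
agg = combine(groupby(df, :fmemo), :elapsed => mean => :mean_elapsed)

# Bar plot of the averages, saved next to the automatically generated plots.
bar(string.(agg.fmemo), agg.mean_elapsed;
    xlabel = "memoization level", ylabel = "mean elapsed time (s)", legend = false)
savefig("outcomes/plots/custom_plot.png")
```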
Figure 1: Results of the multi-model and multi-formula model checking with the single and shared memoization approaches, with varying values of the parameters.
The blue group in the latter plot (fourth quadrant) is generated by the following command:

```
julia --project=. src/experiments.jl --nmodels 50 --nworlds 20 --nletters 2 --fmaxheight 8 --fmemo "no,0,1,2,4,8" --nformulas 8000 --prfactor 0.2 --nreps 10000
```
Figure 2:
Below, the maximum possible speedup for each parametrization is reported.
Max Speedup Table