
Sp3llsWizard: From Sound Workflow Nets to LTLf Declarative Specifications by Casting Three Spells

This repository contains the implementation and experimental workbench presented in the paper “From Sound Workflow Nets to LTLf Declarative Specifications by Casting Three Spells” 🧙. The paper presents a systematic approach to synthesizing declarative process specifications from safe and sound Workflow Nets (WF-nets) with full behavioral preservation. Here you’ll find the complete toolchain and experimental setup, tested on both synthetic and real-world datasets, used to analyze the correctness and performance of the implemented algorithm.

Overview

Sp3llsWizard is a framework to formally synthesize DECLARE specifications from safe and sound Workflow Nets. The proof-of-concept implementation automatically generates LTLf constraints from an input WF-net provided as a .pnml file.

Quickstart

git clone https://github.com/l2brb/Sp3llsWizard.git
cd Sp3llsWizard
conda create -n sp3lls-env python=3.13 -y
conda activate sp3lls-env
pip install -r requirements.txt
python3 main.py run-algorithm --pnml-file ${INPUT_WN} --output-format json --output-path ${OUTPUT_PATH}

Repository

The main content of the repository is structured as follows:

Setup & Execution

  • Requires Python 3.13.0 or newer.
  • Tested on:
    • Ubuntu Linux 24.04.1
    • macOS
    • Windows 11 (via WSL or Unix-like shell)
  • No installation is required — just clone the repository, create the Python environment with the dependencies, and run.

Run the algorithm:

python3 main.py run-algorithm --pnml-file ${INPUT_WN} --output-format json --output-path ${OUTPUT_PATH}


Evaluation

We evaluated our algorithm on a range of both synthetic and real-world data. For the real-world testbed, we take as input processes discovered by a well-known imperative process mining algorithm from a collection of openly available event logs. We conducted the performance tests on an AMD Ryzen 9 8945HS CPU at 4.00 GHz with 32 GB RAM running Ubuntu 24.04.1.

Bisimulation

To experimentally validate the correctness of our approach in the transmission and computation phases, we run a bisimulation test. To this end, we run the stand-alone MINERful... The convergence results are available in /output/ in the form of a Workflow net.
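The idea behind a bisimulation test can be sketched as follows. This is a naive greatest-fixpoint check for strong bisimilarity on small labelled transition systems, written for illustration only; the transition-system encoding and function names are ours, not the repository's:

```python
from itertools import product

def bisimilar(trans1, trans2, init1, init2):
    """Naive greatest-fixpoint check for strong bisimilarity between two
    labelled transition systems; trans maps state -> set of (label, successor)."""
    rel = set(product(trans1, trans2))  # start from all candidate pairs

    def matched(p, q, t_p, t_q, swap):
        # every move of p must be answered by an equally labelled move of q
        # leading to a pair still in the candidate relation
        for lab, p2 in t_p.get(p, ()):
            if not any(lab == lab2 and ((p2, q2) if not swap else (q2, p2)) in rel
                       for lab2, q2 in t_q.get(q, ())):
                return False
        return True

    changed = True
    while changed:  # refine until a fixpoint is reached
        changed = False
        for p, q in list(rel):
            if not (matched(p, q, trans1, trans2, False)
                    and matched(q, p, trans2, trans1, True)):
                rel.discard((p, q))
                changed = True
    return (init1, init2) in rel

lts_a = {"s0": {("a", "s1")}, "s1": {("b", "s2")}, "s2": set()}
lts_b = {"t0": {("a", "t1")}, "t1": {("b", "t2")}, "t2": set()}
print(bisimilar(lts_a, lts_b, "s0", "t0"))  # -> True
```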

Performance analysis

To evaluate the runtime memory utilization of our Sp3llsWizard implementation, we run a performance test, split into two different configurations.

Increasing constraint-set cardinality.

We rely on an iterative expansion mechanism, implemented in /expansion_rules/, that applies four soundness-preserving transformation rules to progressively grow a Workflow net. Starting from a base net, we fix a central transition (t1) as a pivot and preserve the initial and final places (p1, p2). At each iteration:

  1. A transition is split into two, extending the activity sequence.
  2. Parallel paths are introduced to add concurrency.
  3. Conditional branches are added to create alternative paths.
  4. Loops are added for iteration.

This process is repeated for 1000 iterations, each time expanding the net and applying our algorithm.
Results are available in /cardinality_test_results/.
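The first of the four rules can be sketched as follows. This is a hypothetical illustration on a minimal Petri-net encoding of our own; the names p1, p2, t1 follow the text, but the data structure and function are not the repository's /expansion_rules/ code:

```python
# Hypothetical sketch of the sequential-split rule (rule 1 above) on a
# minimal Petri-net encoding; not the repository's implementation.
from dataclasses import dataclass, field

@dataclass
class Net:
    places: set = field(default_factory=set)
    transitions: set = field(default_factory=set)
    arcs: set = field(default_factory=set)   # (source, target) pairs

def split_transition(net: Net, t: str, i: int) -> None:
    """Replace t's outgoing arcs with t -> p_new -> t_new -> (old targets),
    extending the activity sequence while preserving soundness."""
    p_new, t_new = f"p_split{i}", f"t_split{i}"
    outgoing = {(s, d) for (s, d) in net.arcs if s == t}
    net.arcs -= outgoing
    net.places.add(p_new)
    net.transitions.add(t_new)
    net.arcs |= {(t, p_new), (p_new, t_new)}
    net.arcs |= {(t_new, d) for (_, d) in outgoing}

# Base net: p1 -> t1 -> p2; after one split: p1 -> t1 -> p_split1 -> t_split1 -> p2
net = Net({"p1", "p2"}, {"t1"}, {("p1", "t1"), ("t1", "p2")})
split_transition(net, "t1", 1)
```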


Increasing constraint formula size

Here, we configure the test on memory usage and execution time to investigate the algorithm’s performance while handling an expanding constraint formula size (i.e., an increasing number of disjuncts). To this end, we progressively broaden the Workflow net by applying the soundness-preserving conditional expansion rule.
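To see why conditional branches inflate formula size, note that each branch contributes one more disjunct to the constraint describing what may follow the branching transition. A small sketch in our own notation (not the tool's output format):

```python
# Illustration only: each conditional branch adds one disjunct to the
# constraint on what may follow the branching transition.
def branched_response(source: str, branches: list[str]) -> str:
    disjunction = " | ".join(f"F({b})" for b in branches)
    return f"G({source} -> ({disjunction}))"

print(branched_response("t1", ["b1", "b2", "b3"]))
# -> G(t1 -> (F(b1) | F(b2) | F(b3)))
```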


Real-world process model testing

To evaluate the performance of our algorithm on real process models, we run memory usage and execution time tests on workflow nets derived from real-world event logs. First, we generate workflow nets by applying the Inductive Miner algorithm, available in pm4py.

python3 miner.py

We run our algorithm on the generated workflow nets to derive the corresponding Declare specification in JSON format. The scripts record memory usage (in MB) and execution time (in ms) during processing.
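A measurement harness of the kind described here can be built with the standard library alone; the sketch below (using `tracemalloc` and `time.perf_counter`) is our own illustration and may differ from the repository's actual scripts:

```python
# Minimal sketch of a memory/time measurement harness (stdlib only);
# the repository's actual scripts may differ.
import time
import tracemalloc

def measure(fn, *args):
    """Run fn(*args), returning (result, peak memory in MB, elapsed time in ms)."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    peak_mb = tracemalloc.get_traced_memory()[1] / (1024 * 1024)
    tracemalloc.stop()
    return result, peak_mb, elapsed_ms

result, mem_mb, time_ms = measure(sum, range(1_000_000))
```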

| Event log | Trans. | Places | Nodes | Mem. usage [MB] | Exec. time [ms] |
|-----------|--------|--------|-------|-----------------|-----------------|
| BPIC 12   | 78     | 54     | 174   | 19.97           | 5.11            |
| BPIC 13cp | 19     | 54     | 44    | 19.76           | 1.70            |
| BPIC 13inc| 23     | 17     | 50    | 19.89           | 2.03            |
| BPIC 14f  | 46     | 35     | 102   | 19.90           | 3.31            |
| BPIC 151f | 135    | 89     | 286   | 20.44           | 8.39            |
| BPIC 152f | 200    | 123    | 422   | 20.91           | 12.30           |
| BPIC 153f | 178    | 122    | 396   | 20.77           | 11.49           |
| BPIC 154f | 168    | 115    | 368   | 20.55           | 11.38           |
| BPIC 155f | 150    | 99     | 320   | 20.43           | 9.16            |
| BPIC 17   | 87     | 55     | 184   | 19.91           | 5.67            |
| RTFMP     | 34     | 29     | 82    | 19.81           | 3.47            |
| Sepsis    | 50     | 39     | 116   | 19.75           | 3.65            |

Process Diagnostics

The goal of this module is to use the synthesized constraints as determinants for process diagnostics. We aim to show how the violated rules constituting the process model’s behavior can be singled out, thereby spotlighting points of non-compliance in process executions.

To this end, we developed a dedicated diagnostic module that extends a declarative specification miner for constraint checking via the replay of runs on semi-symbolic automata.

./run-MINERfulFitnessChecker-unstable.sh -iLF ${INPUT_LOG} -iLE xes -oCSV ${OUTPUT_PATH}

As a test case, we use real-world data from BPIC 15_5f. After preprocessing (975 valid traces), the diagnostic pipeline works as follows:

  • Discover a process model from the event log using the α-algorithm.
  • Translate the resulting Workflow Net into a DECLARE specification via our synthesis algorithm.
  • Check the original event log against the declarative model to observe which constraints are satisfied or violated.
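The final checking step can be sketched for a single constraint. The actual module replays runs on semi-symbolic automata; the direct trace scan below, for a hypothetical Response(a, b) constraint with made-up activity names, is for illustration only:

```python
# Hypothetical sketch of constraint checking: a direct trace scan for one
# DECLARE Response(a, b) constraint. The actual module replays runs on
# semi-symbolic automata; activity names here are made up.
def violates_response(trace: list[str], a: str, b: str) -> bool:
    """Response(a, b) is violated iff some occurrence of a is never
    followed by a later occurrence of b."""
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False  # this b answers all pending a's
    return pending

log = [["submit", "review", "approve"], ["submit", "review"]]
violations = [t for t in log if violates_response(t, "submit", "approve")]
# -> [["submit", "review"]]
```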
