This repository contains a CLI dashboard for formal verification. It compares and filters proof obligations generated by formal methods tools such as MOPSA and Goblint.
A proof obligation is a JSON file with the following structure:

```json
{
  "time": "The time it took to generate the proof obligation, in seconds",
  "checks": [
    {
      "kind": "safe" | "warning" | "error",
      "title": "A category for the check, e.g. 'Invalid memory access'",
      "message": "A message describing the check",
      "range": {
        "start": {
          "line": "The line number where the check starts",
          "column": "The column number where the check starts",
          "file": "The file where the check starts"
        },
        "end": {
          "line": "The line number where the check ends",
          "column": "The column number where the check ends",
          "file": "The file where the check ends"
        }
      },
      "callstack": "An optional string identifying the current context; it can be any string that helps identify the check context"
    }
  ]
}
```

The `title` field in the `checks` array can be one of the following categories:
- Assertion failure
- Invalid memory access
- Division by zero
- Signed integer overflow in arithmetic operator
- Unsigned integer overflow in arithmetic operator
- Signed integer overflow in explicit cast
- Unsigned integer overflow in explicit cast
- Signed integer overflow in implicit cast
- Unsigned integer overflow in implicit cast
- Invalid pointer comparison
- Invalid pointer subtraction
- Double free
- Negative array size
- Invalid floating point operation
- Stub condition
- Insufficient variadic arguments
- Insufficient format arguments
- Invalid type of format argument
- Floating-point division by zero
- Floating-point overflow
- Incorrect number of arguments
- Invalid shift
Any other value is treated as an invalid check.
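For concreteness, here is a small proof obligation following this schema; the file name, positions, and messages are purely illustrative:

```json
{
  "time": 1.42,
  "checks": [
    {
      "kind": "warning",
      "title": "Division by zero",
      "message": "The divisor may be zero on this path",
      "range": {
        "start": { "line": 12, "column": 4, "file": "overflow.c" },
        "end": { "line": 12, "column": 17, "file": "overflow.c" }
      },
      "callstack": "main -> compute"
    }
  ]
}
```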
The dashboard takes two required positional arguments: the proof obligations to compare. If one of them is `stdin`, that proof obligation is read as a single-line JSON value from standard input.
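For example, to pipe one proof obligation in over standard input (a sketch; `jq -c` is used here only to produce the single-line JSON the dashboard expects):

```sh
jq -c . mopsa.json | dune exec dashboard -- stdin goblint.json
```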
| Argument | Description |
|---|---|
| `--project <path>` | Path to the project directory. Defaults to the current working directory. |
| `--output <file>` | File to write the JSON output to. Defaults to stdout, formatted for readability. |
| `--analyze <file>` | Filter checks to show only those related to the given file. Defaults to all files. |
| `--exclude-not-found` | Exclude files that are not found in the project directory from the output. Defaults to false. |
| `--filter-kind <kind,...>` | Filter conflicts by kind, as a comma-separated list. Defaults to all kinds. See Kinds. |
| `--filter-error-category <category,...>` | Filter conflicts by error category, as a comma-separated list. Defaults to all categories. See Check categories. |
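For instance, to write only the safety conflicts to a file, combining the options above with the kinds listed below:

```sh
dune exec dashboard -- --filter-kind safety_w1,safety_w2 --output safety.json mopsa.json goblint.json
```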
The JSON output format is still being finalized. See `report.ml` and `conflict.ml` for the current format.
The `kind` field of a check object in the output can be one of the following:
- `no_conflict_safe`: Both analyzers agree that the code is safe.
- `no_conflict_warning`: Both analyzers agree that the code is a warning.
- `no_conflict_error`: Both analyzers agree that the code is an error.
- `unchecked`: This should not appear in the output; it means the check has not yet been processed by the dashboard.
- `only_one_proof_obligation`: Only one of the analyzers generated a proof obligation.
- `safety_w1`: The first analyzer considers the code safe, but the second considers it a warning or an error.
- `safety_w2`: The second analyzer considers the code safe, but the first considers it a warning or an error.
- `precision_w1`: Both analyzers agree that the code is a warning, but the first analyzer says that a sub-range is safe.
- `precision_w2`: Both analyzers agree that the code is a warning, but the second analyzer says that a sub-range is safe.
- `error_level`: The error level differs between the two analyzers.
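For reference, these kinds map naturally onto a plain OCaml variant; this is a hypothetical sketch, not the actual definition in `conflict.ml`:

```ocaml
(* Hypothetical modelling of the conflict kinds; the real type lives in
   conflict.ml and may differ. *)
type kind =
  | No_conflict_safe
  | No_conflict_warning
  | No_conflict_error
  | Unchecked  (* should never appear in the output *)
  | Only_one_proof_obligation
  | Safety_w1
  | Safety_w2
  | Precision_w1
  | Precision_w2
  | Error_level
```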
Error ranges are computed dynamically from the file contents; the output reports the ranges of the less precise tool.
The dashboard exits with one of the following codes:

- 0 - no conflicts found
- 1 - error in input files or arguments
- 2 - precision conflicts found
- 3 - only one tool emits a proof obligation for a given range
- 4 - safety conflicts found
- 5 - internal error (unexpected state)
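In scripts, these exit codes can drive follow-up actions; a minimal sketch:

```sh
dune exec dashboard -- mopsa.json goblint.json
case $? in
  0) echo "No conflicts." ;;
  2|3|4) echo "Conflicts found." ;;
  *) echo "Input or internal error." >&2 ;;
esac
```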
To set up the dashboard, run `opam install . --deps-only` in the repository root, then build the project with `dune build`.
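That is, from the repository root:

```sh
opam install . --deps-only
dune build
```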
To install MOPSA, follow the instructions on its installation page.
Dashboard output is not yet incorporated into the main Goblint repository. To use Goblint with the dashboard, clone this fork:
```sh
git clone -b checks https://github.com/Robotechnic/analyzer.git
```

Then choose one of the following options:
- Follow the installation instructions in the "Installing" section of the README. In this case, Goblint will be installed in a separate switch, which may be less convenient for development.
- Install Goblint in the current opam switch by running the following commands in the root of the cloned repository:
  ```sh
  make deps
  make install
  ```

Next, generate a proof obligation with each analyzer:

```sh
mopsa-c -output mopsa.json -format=json -show-safe-checks examples/simples/overflow.c
goblint --outfile goblint.json --result dashboard examples/simples/overflow.c
```

Finally, run the dashboard to compare the two proof obligations:

```sh
dune exec dashboard -- --exclude-not-found true mopsa.json goblint.json
```
To get the example used in the paper, run `screenshot_script/allexamples.sh`. This creates a `results` directory containing the `mopsa.json` and `goblint.json` files needed for the GUI. To use the example in the GUI, compress `examples/allErrorTest` and upload it together with the two JSON files (`mopsa.json` and `goblint.json`).
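For example, to create the archive (a sketch using `zip`; the exact archive format the GUI expects is an assumption here):

```sh
# Assumption: the GUI accepts a zip archive of the example directory.
zip -r allErrorTest.zip examples/allErrorTest
```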
If you only want to see the console output, run the dashboard with the following command:

```sh
dashboard -- --exclude-not-found true ./results/mopsa.json ./results/goblint.json
```

To build the Docker image, run the following command in the repository root:

```sh
docker build -t dashboard .
```

You can then start an environment with the dashboard CLI using:

```sh
docker run -it --rm -v $(pwd):/dashboard/app dashboard bash
```

This mounts the current directory so it is available inside the Docker container.

You can also run the dashboard CLI directly with:

```sh
docker run --rm -v $(pwd):/dashboard/app dashboard dune exec dashboard -- [args]
```

Replace `[args]` with the appropriate arguments.
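For example, reusing the comparison from earlier (this assumes the container's working directory is the mounted `/dashboard/app`; adjust the paths otherwise):

```sh
docker run --rm -v $(pwd):/dashboard/app dashboard \
  dune exec dashboard -- --exclude-not-found true mopsa.json goblint.json
```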