Although Deep Neural Networks (DNNs) are used extensively in many fields, including safety-critical systems such as autonomous driving and medical diagnostics, they can still exhibit erroneous behaviour.
Testing a DNN is not as straightforward as testing classical software. In classical software, the internal state is observable at every point in time: when a fault occurs, we can locate it and fix it, so white-box testing of the software internals is effective. In an AI system, inspecting the internals is of little use; once a model is trained, the behaviour encoded in its weights is opaque. The practical option is therefore black-box testing, in which the functionality of the software is tested from the outside.
The goal of this RnD project is to test deep neural networks. The focus is not on testing the performance of a DNN but on testing its capabilities. To support this, we use behaviour-driven development (BDD) methods that take both the requirements from test engineers and the capabilities of Blender, and generate a synthetic test dataset in Blender for testing the learned DNN models.
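As an illustration of how a requirement from a test engineer could drive scene generation and checking, the sketch below assumes a behave-style step definition; the `generate_scene` and `classify` helpers, the step wording, and the module paths are hypothetical and may differ from the actual code in this repository.

```python
# Hypothetical behave step definitions (e.g. features/steps/dataset_steps.py).
# generate_scene and classify are assumed helpers, not confirmed project code.
from behave import given, when, then

from src.bpy_functions import generate_scene  # hypothetical scene/render helper
from src.model_utils import classify          # hypothetical wrapper around model.pth


@given('a scene containing a "{object_class}" rendered at {distance:d} metres')
def step_render_scene(context, object_class, distance):
    # Render a synthetic image in Blender that matches the requirement.
    context.image_path = generate_scene(object_class, distance=distance)


@when('the trained model classifies the image')
def step_classify(context):
    context.prediction = classify(context.image_path)


@then('the predicted class should be "{expected_class}"')
def step_check_prediction(context, expected_class):
    assert context.prediction == expected_class
```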
- The most established way of testing deep neural networks is to evaluate them on test datasets; this approach is more practical and trusted than other forms of testing.
- Many DNN image classifiers confuse one class with another, or even show one-sided biases, so the properties of a class are violated and misinterpreted (a confusion-matrix check, sketched after this list, is one way to surface this).
- Some DNNs perform well on simple tasks, but as soon as the use case is extended, the user may run into capability issues with the model.
- We can never claim that a DNN model is perfectly trained and will perform well in every situation, because the space of possible real-world inputs is infinite.
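A minimal sketch of such a confusion-matrix check, assuming a PyTorch classifier and an integer-labelled test loader (both placeholders here, not the repository's actual objects):

```python
# Accumulate a confusion matrix over a test dataset (PyTorch assumed).
# model and test_loader stand in for the trained classifier and test data.
import torch


def confusion_matrix(model, test_loader, num_classes, device="cpu"):
    """Count how often each true class is predicted as each other class."""
    counts = torch.zeros(num_classes, num_classes, dtype=torch.long)
    model.eval()
    with torch.no_grad():
        for images, labels in test_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            for true, pred in zip(labels, preds):
                counts[true, pred] += 1
    # Large off-diagonal entries indicate one class being mistaken for another.
    return counts
```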
- Clone the repository to your local machine.
- Make sure the latest version of Blender is installed on the machine.
- Download the input data from here, extract it, and place the extracted input folder in the root of the repository.
Control starts in the src/main.cpp file, which launches the Blender application with a given bpy script. Inside the bpy script, the entire model creation and alteration process is carried out; this script uses the functions from src/bpy_functions.py. The models to be imported into Blender are located in the input folder.
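Headless Blender runs are typically driven with `blender --background --python <script>.py`. The snippet below is a minimal bpy sketch of the kind of work such a script performs: import a model from the input folder, vary it, and render a still image. The paths, object names, and variation are illustrative assumptions, not the repository's exact code.

```python
# Minimal bpy sketch: import a model, apply a variation, render an image.
import bpy

# Import an OBJ model shipped in the input folder (the operator name depends
# on the Blender version; recent releases use bpy.ops.wm.obj_import).
bpy.ops.wm.obj_import(filepath="input/models/example.obj")

# Apply a simple variation, e.g. rotate the imported object around the Z axis.
obj = bpy.context.selected_objects[0]
obj.rotation_euler[2] = 0.5  # radians

# Render the scene to a still image for the synthetic dataset.
bpy.context.scene.render.filepath = "output/dataset/example_0001.png"
bpy.ops.render.render(write_still=True)
```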
The input folder contains the dataset with images and the ground-truth CSV. Training uses this data to train the model and writes model.pth to the output folder.
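A hedged sketch of this training step is shown below: it reads images plus a ground-truth CSV from the input folder and writes model.pth to the output folder. The CSV column names, file paths, network definition, and hyperparameters are assumptions for illustration, not the repository's actual code.

```python
# Sketch of training from a CSV-labelled image folder (paths and columns assumed).
import csv

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image


class CsvImageDataset(Dataset):
    def __init__(self, csv_path, image_dir):
        with open(csv_path, newline="") as f:
            # Each row is assumed to hold an image filename and an integer label.
            self.rows = [(r["filename"], int(r["label"])) for r in csv.DictReader(f)]
        self.image_dir = image_dir
        self.transform = transforms.Compose(
            [transforms.Resize((64, 64)), transforms.ToTensor()]
        )

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        name, label = self.rows[idx]
        image = Image.open(f"{self.image_dir}/{name}").convert("RGB")
        return self.transform(image), label


def train(num_classes=4, epochs=5):
    dataset = CsvImageDataset("input/ground_truth.csv", "input/images")
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    # Placeholder classifier; the real project model will differ.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, num_classes))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()
    torch.save(model.state_dict(), "output/model.pth")


if __name__ == "__main__":
    train()
```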
The model.pth is then validated against a new set of data, and the results are stored in the output folder.
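A corresponding validation sketch, assuming the same placeholder model definition and dataset class as in the training sketch above; the output file name and format are likewise assumptions.

```python
# Sketch of validating output/model.pth on a held-out dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def validate(dataset, num_classes=4):
    # Must match the architecture used during training to load the weights.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, num_classes))
    model.load_state_dict(torch.load("output/model.pth"))
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in DataLoader(dataset, batch_size=32):
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    # Store the validation result in the output folder.
    with open("output/validation_results.txt", "w") as f:
        f.write(f"accuracy: {correct / total:.3f}\n")
```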
To generate the dataset, use the command
make dataset
To train the model, use the command
make train
To validate the model, use the command
make validate
To run the BDD tests, use the command
make test