Rstevanak/models tester #38
Conversation
scripts/test_model.py (Outdated)
@click.option('--loader', '-l', type=click.Choice(['PKLot', 'PKSpace']),
              default='PKSpace', help='Loader used to load dataset')
@click.argument('dataset_dir')
@click.argument('model_file')
@rstevanak Here we could add a line similar to the one in the trainer file, output = os.path.join(dataset_dir, 'out.pkl'), so that click would take care of handling the existence of these files for us.
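A minimal sketch of what that could look like, assuming the decorator layout from the surrounding diff (the test_model function name and the exact click.Path settings here are assumptions, mirroring the later revision of model_file):

```python
import os
import click

@click.command()
@click.option('--loader', '-l', type=click.Choice(['PKLot', 'PKSpace']),
              default='PKSpace', help='Loader used to load dataset')
@click.argument('dataset_dir',
                type=click.Path(exists=True, file_okay=False,
                                resolve_path=True))
@click.argument('model_file',
                type=click.Path(exists=True, dir_okay=False,
                                resolve_path=True))
def test_model(loader, dataset_dir, model_file):
    # click.Path(exists=True) makes click verify both paths before the
    # command body runs, so no manual existence checks are needed here.
    output = os.path.join(dataset_dir, 'out.pkl')  # as in the trainer file
    click.echo(output)

if __name__ == '__main__':
    test_model()
```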
scripts/test_model.py (Outdated)
model = pickle.load(mp)
spaces, ground_answers = loader.load(dataset_dir)
model_answers = model.predict(spaces)
print(classification_report(ground_answers, model_answers))
@rstevanak Maybe add a way to print these in some different, more machine-friendly format, so that other applications can run this and perhaps display the results somewhere.
Looks good from my point of view. It would indeed be interesting if we could also output the results in some other way, so that they are machine readable. I am not sure whether you can get that from classification_report, but if you can, it should be pretty easy to do.
@mrshu @vidriduch what do you think?
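One possible way to get a machine-readable report, as a sketch assuming scikit-learn >= 0.20, where classification_report accepts output_dict=True (the placeholder arrays stand in for the loader output and model predictions):

```python
import json
from sklearn.metrics import classification_report

ground_answers = [0, 1, 1, 0, 1]  # placeholder for the labels from loader.load()
model_answers = [0, 1, 0, 0, 1]   # placeholder for model.predict(spaces)

# output_dict=True returns a nested dict (per-class and average metrics)
# instead of the usual human-readable string.
report = classification_report(ground_answers, model_answers, output_dict=True)
print(json.dumps(report, indent=2))  # machine-readable JSON on stdout
```

On older scikit-learn versions the report would have to be assembled by hand, which is what the later commits in this PR attempt.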
@click.argument('model_file',
                type=click.Path(exists=True, file_okay=True, dir_okay=False,
                                resolve_path=True))
@click.option("--machine_friendly", '-f', is_flag=True,
@rstevanak I don't really think --machine_friendly is the right name to use here. I think it would be much better as --output or --format, so the user could choose from multiple formats, like yaml or json for instance.
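A hypothetical sketch of that naming suggestion, keeping text as the default so current behaviour is preserved (the output_format parameter name and the list of formats are assumptions):

```python
import click

@click.command()
@click.option('--format', '-f', 'output_format',
              type=click.Choice(['text', 'json', 'yaml']), default='text',
              help='Format in which the classification report is printed')
def test_model(output_format):
    # Dispatch on the chosen format here; 'text' would keep the current
    # human-readable classification_report output.
    click.echo('selected format: {}'.format(output_format))

if __name__ == '__main__':
    test_model()
```

The explicit 'output_format' parameter name avoids shadowing the built-in format inside the function body.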
for ans_class in [0, 1]:
    row = answer[ans_class]
    col_sum += row[column] * row['support']
answer['avg'][column] = col_sum / answer['avg']['support']
@rstevanak How about using http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html instead? I think it would be nicer; it should return the things we need, as was also mentioned here: https://stackoverflow.com/a/42467096
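A sketch of what that replacement could look like; with average='weighted' the function returns the same support-weighted averages that the manual loop above computes (the placeholder arrays are assumptions):

```python
from sklearn.metrics import precision_recall_fscore_support

ground_answers = [0, 1, 1, 0, 1]  # placeholder labels
model_answers = [0, 1, 0, 0, 1]   # placeholder predictions

# average=None: one precision/recall/f-score/support entry per class.
precision, recall, fscore, support = precision_recall_fscore_support(
    ground_answers, model_answers, average=None)

# average='weighted': support-weighted averages, equivalent to summing
# row[column] * row['support'] over the classes and dividing by the
# total support, as done manually in the snippet above.
averages = precision_recall_fscore_support(ground_answers, model_answers,
                                           average='weighted')
```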
@rstevanak any updates on this?
Can be tested from the main project folder by running:

python -m scripts.test_model "path_to_testing_dataset" "path_to_model" --loader "Optional_loader_specification"

It adds the ability to print a scikit-learn classification report for any trained model on any testing dataset.