Feature/compress models uploaded to mlflow #140
Conversation
quadra/utils/utils.py (outdated diff)
        quadra_export.generate_torch_inputs(input_size, device=device, half_precision=half_precision),
    )
    types_to_upload = config.core.get("upload_models")
    mlflow_zip_models = config.core.get("mlflow_zip_models")
Since it's a boolean, let's set False as the default value instead of leaving it as None.
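A minimal sketch of that suggestion, using a plain dict as a stand-in for `config.core` (the stand-in and its keys are illustrative assumptions): pass an explicit `False` default so the flag is always a boolean rather than `None`.

```python
# Hypothetical stand-in for config.core, just to show the .get() behaviour.
core_config = {"upload_models": ["torchscript", "onnx"]}  # "mlflow_zip_models" not set

types_to_upload = core_config.get("upload_models")
# Explicit False default: the flag is always a bool, never None.
mlflow_zip_models = core_config.get("mlflow_zip_models", False)

assert mlflow_zip_models is False
```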
quadra/utils/utils.py (outdated diff)
        base_dir=model_name,
    )
    shutil.move("assets.zip", temp_dir)
    with mlflow.start_run(run_id=mlflow_logger.run_id) as _:
Why do you start the run every time instead of doing it just once outside?
Because it was done that way before: the run was started separately for onnx and for torch. I'll change it and use a single run opened outside.
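A hedged sketch of that refactor (not the PR's actual code): open the MLflow run once, outside the loop over exported model types, and log every zipped model inside it. The function name `upload_zipped_models`, the `exported_zips` mapping, and the `models/<type>` artifact layout are assumptions for illustration; `mlflow_logger.run_id` comes from the snippet above.

```python
import os

import mlflow


def upload_zipped_models(mlflow_logger, exported_zips):
    """Log every zipped exported model inside a single MLflow run.

    exported_zips maps a model type (e.g. "onnx", "torchscript") to the
    path of its zip archive (a hypothetical layout for illustration).
    """
    # One run opened once, instead of one mlflow.start_run per model type.
    with mlflow.start_run(run_id=mlflow_logger.run_id):
        for model_type, zip_path in exported_zips.items():
            if os.path.exists(zip_path):
                mlflow.log_artifact(zip_path, artifact_path=f"models/{model_type}")
```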
Summary
Describe the purpose of the pull request, including:
Type of Change
Please select the one relevant option below:
Checklist
Please confirm that the following tasks have been completed: