Increase training speed by testing different techniques #17
Open
tfha
opened on Aug 6, 2022
- Profile the code to find the parts worth optimizing (cProfile etc.): https://machinelearningmastery.com/profiling-python-code/
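A minimal cProfile sketch for the profiling idea above; `train_step` is a hypothetical stand-in for the real training loop:

```python
import cProfile
import io
import pstats

def train_step(n=100_000):
    # hypothetical stand-in for one real training step
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(10):
    train_step()
profiler.disable()

# report the most time-consuming functions, sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The same can be done without code changes via `python -m cProfile -s cumulative train.py`.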
- MKL:
- https://pypi.org/project/mkl/
- https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl.html#gs.7t60es
- https://stackoverflow.com/questions/69986869/how-to-enable-and-disable-intel-mkl-in-numpy-python
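One way to check whether an MKL-backed numpy is actually in use, as discussed in the Stack Overflow link above (a sketch; the matrix sizes are arbitrary):

```python
import numpy as np

# Show which BLAS/LAPACK backend this numpy build links against;
# an "mkl" entry here indicates Intel MKL is being used.
np.show_config()

# Large matrix products are where an MKL-backed numpy pays off.
a = np.random.rand(500, 500)
b = np.random.rand(500, 500)
c = a @ b
print(c.shape)

# Thread counts can be capped via environment variables, set before
# numpy is imported, e.g.:
#   MKL_NUM_THREADS=4 OMP_NUM_THREADS=4 python train.py
```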
- Running on NGI Odin Machine
- Running on Azure (if it's available for use in our NGI Azure cloud) or AWS/Google Cloud etc.
- Sharing a database: run Optuna optimization from many machines against the same database file(s)
- Make it possible to automatically launch as many processes as there are CPUs on a machine, i.e. parallelization:
- https://wiki.python.org/moin/ParallelProcessing
- https://mpi4py.readthedocs.io/en/stable/