From the TensorFlow docs: "...to configure a virtual GPU device with `tf.config.set_logical_device_configuration` and set a hard limit on the total memory to allocate on the GPU."
Found in eynollah.py: `#gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=7.7, allow_growth=True)`
Could the GPU be utilized better with a GPU memory limit?
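For reference, a minimal sketch of what the TensorFlow docs describe: capping GPU memory with `tf.config.set_logical_device_configuration` instead of the commented-out `tf.compat.v1.GPUOptions` line. The 1024 MB limit is an illustrative value, not a recommendation for eynollah.

```python
# Sketch: set a hard per-process GPU memory limit (TF 2.x API).
# Must run before the GPU is first initialized by the process.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Create one logical GPU capped at 1024 MB (illustrative value).
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024)],
    )
    logical = tf.config.list_logical_devices("GPU")
    print(f"{len(gpus)} physical GPU(s), {len(logical)} logical GPU(s)")
else:
    print("No GPU found; nothing to configure")
```

Alternatively, `tf.config.experimental.set_memory_growth(gpus[0], True)` lets TensorFlow allocate on demand rather than grabbing (nearly) all GPU memory up front, which may matter when sbb-binarize and eynollah share the device.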
[Figure: GPU memory usage in MB over time, logarithmic scale (min = 10 MB), sampled at 3 s intervals with nvidia-smi. Red = sbb-binarize, blue = eynollah-segment, green = calamari-recognize (no GPU).]