Description
When calling the tf2onnx.convert.from_graph_def API in a Python process that uses tensorflow_gpu, from_graph_def allocates all of the GPU memory, which makes it difficult for other people who share the GPU card with me.
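For reference, this is roughly how I call the API (a minimal sketch; the model path, tensor names, and opset are placeholders, not my real values):

```python
import tensorflow as tf
import tf2onnx

# Load a frozen GraphDef (placeholder path and tensor names).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# This single call is enough to grab all GPU memory on the card.
model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["input:0"],
    output_names=["output:0"],
    opset=13,
    output_path="model.onnx",
)
```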
I have noticed that the following code tries to place the tf session on the CPU device, but it does not seem to work as expected.
```python
with tf.device("/cpu:0"):
    with tf.Graph().as_default() as tf_graph:
        with tf_loader.tf_session(graph=tf_graph) as sess:
            tf.import_graph_def(graph_def, name='')
            frozen_graph = tf_loader.freeze_session(sess, input_names=input_names, output_names=output_names)
            input_names = tf_loader.inputs_without_resource(sess, input_names)
            frozen_graph = tf_loader.tf_optimize(input_names, output_names, graph_def)
```
I tried adding a tf.compat.v1.ConfigProto() with the allow_growth=True setting, and it seems to work.
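Concretely, this is the kind of change I mean (a minimal sketch; whether tf_loader.tf_session can be given a config is an assumption on my part, so here I create the session directly):

```python
import tensorflow as tf

# Ask TensorFlow to grow GPU memory on demand instead of reserving it all up front.
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True

with tf.device("/cpu:0"):
    with tf.Graph().as_default() as tf_graph:
        # graph_def loaded as in the snippet above.
        with tf.compat.v1.Session(graph=tf_graph, config=config) as sess:
            tf.import_graph_def(graph_def, name='')
            # ... freeze / optimize as in the code quoted above ...
```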
I wonder whether you have plans to expose a session config setting where tf.Session is created, or did I use the API wrong?