Can you please update requirements.txt with the specific versions of the packages that work with this repository?
Also, I would appreciate it if you could mention a working Python 3 version. Is it Python 3.7? Python 3.12?
Which version of tensorflow-gpu?
In Chapter 8, trying to train the cvae-cnn code throws the following error, which is obviously version-related:
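For reference, a hypothetical starting point for such a requirements.txt, based only on the traceback below: the failing call goes through `keras/src/legacy/backend.py`, which is Keras 3, while the book's code was written against tf.keras (Keras 2). Pinning to TF 2.15, the last release line that bundles Keras 2 by default, might be a reasonable first guess; these versions are an assumption on my part, not tested:

```text
# hypothetical requirements.txt sketch; versions are an assumption, not verified
tensorflow==2.15.1   # last TF line bundling Keras 2 by default; Keras 3 breaks this code
numpy<2              # TF 2.15 wheels predate NumPy 2
```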
Model: "encoder"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃ Connected to ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ class_labels (InputLayer) │ (None, 10) │ 0 │ - │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ dense (Dense) │ (None, 784) │ 8,624 │ class_labels[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ encoder_input (InputLayer) │ (None, 28, 28, 1) │ 0 │ - │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ reshape (Reshape) │ (None, 28, 28, 1) │ 0 │ dense[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ concatenate (Concatenate) │ (None, 28, 28, 2) │ 0 │ encoder_input[0][0], │
│ │ │ │ reshape[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ conv2d (Conv2D) │ (None, 14, 14, 32) │ 608 │ concatenate[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ conv2d_1 (Conv2D) │ (None, 7, 7, 64) │ 18,496 │ conv2d[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ flatten (Flatten) │ (None, 3136) │ 0 │ conv2d_1[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ dense_1 (Dense) │ (None, 16) │ 50,192 │ flatten[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ z_mean (Dense) │ (None, 2) │ 34 │ dense_1[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ z_log_var (Dense) │ (None, 2) │ 34 │ dense_1[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ z (Lambda) │ (None, 2) │ 0 │ z_mean[0][0], │
│ │ │ │ z_log_var[0][0] │
└──────────────────────────────┴───────────────────────────┴─────────────────┴───────────────────────────┘
Total params: 77,988 (304.64 KB)
Trainable params: 77,988 (304.64 KB)
Non-trainable params: 0 (0.00 B)
Model: "decoder"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃ Connected to ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ z_sampling (InputLayer) │ (None, 2) │ 0 │ - │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ class_labels (InputLayer) │ (None, 10) │ 0 │ - │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ concatenate_1 (Concatenate) │ (None, 12) │ 0 │ z_sampling[0][0], │
│ │ │ │ class_labels[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ dense_2 (Dense) │ (None, 3136) │ 40,768 │ concatenate_1[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ reshape_1 (Reshape) │ (None, 7, 7, 64) │ 0 │ dense_2[0][0] │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ conv2d_transpose │ (None, 14, 14, 64) │ 36,928 │ reshape_1[0][0] │
│ (Conv2DTranspose) │ │ │ │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ conv2d_transpose_1 │ (None, 28, 28, 32) │ 18,464 │ conv2d_transpose[0][0] │
│ (Conv2DTranspose) │ │ │ │
├──────────────────────────────┼───────────────────────────┼─────────────────┼───────────────────────────┤
│ decoder_output │ (None, 28, 28, 1) │ 289 │ conv2d_transpose_1[0][0] │
│ (Conv2DTranspose) │ │ │ │
└──────────────────────────────┴───────────────────────────┴─────────────────┴───────────────────────────┘
Total params: 96,449 (376.75 KB)
Trainable params: 96,449 (376.75 KB)
Non-trainable params: 0 (0.00 B)
CVAE
Traceback (most recent call last):
  File "/scratch/htc/nhajarol/Advanced-Deep-Learning-with-Keras-master/chapter8-vae/cvae-cnn-mnist-8.2.1.py", line 260, in <module>
    reconstruction_loss = mse(K.flatten(inputs), K.flatten(outputs))
                              ^^^^^^^^^^^^^^^^^
  File "/scratch/htc/nhajarol/miniconda3/envs/packt/lib/python3.11/site-packages/keras/src/legacy/backend.py", line 869, in flatten
    return tf.reshape(x, [-1])
           ^^^^^^^^^^^^^^^^^^^
  File "/scratch/htc/nhajarol/miniconda3/envs/packt/lib/python3.11/site-packages/tensorflow/python/ops/weak_tensor_ops.py", line 88, in wrapper
    return op(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/scratch/htc/nhajarol/miniconda3/envs/packt/lib/python3.11/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/scratch/htc/nhajarol/miniconda3/envs/packt/lib/python3.11/site-packages/keras/src/backend/common/keras_tensor.py", line 194, in __tf_tensor__
    raise ValueError(
ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional models or Keras Functions. You can only use it as input to a Keras layer or a Keras operation (from the namespaces `keras.layers` and `keras.ops`). You are likely doing something like:

    x = Input(...)
    ...
    tf_fn(x)  # Invalid.

What you should do instead is wrap `tf_fn` in a layer:

    class MyLayer(Layer):
        def call(self, x):
            return tf_fn(x)

    x = MyLayer()(x)