
Fine-tuning EfficientNetB0 with pretrained 'imagenet' weights is not reproducible (i.e., saving and loading the model) #797

Closed
@Qasim-Latrobe

Description


- TensorFlow 2.16.1 (similar behavior observed in TensorFlow 2.17.0)

I am encountering an issue with fine-tuning an EfficientNetB0 model that was originally pretrained on ImageNet.

1. Model training and fine-tuning: I start with an EfficientNetB0 model pretrained on ImageNet and fine-tune it on my dataset.

2. Saving the model: After fine-tuning, I save the model using model.save() in the .keras format.

3. Loading the model: When I later load the model using load_model(), its performance does not match the performance achieved during fine-tuning. The results appear inconsistent or random.

I seed the random number generators for reproducibility. Minimal example:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D

# Set seeds for reproducibility
tf.random.set_seed(42)
np.random.seed(42)

# Define and compile the model
base_model = EfficientNetB0(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)  # adjust number of classes
model = Model(inputs=base_model.input, outputs=predictions)

# categorical_crossentropy matches the 10-way softmax output
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train and fine-tune the model
model.fit(x_train, y_train, epochs=10)

# Save the fine-tuned model
model.save('fine_tuned_model.keras')

# Load the model later (e.g., in a new session)
loaded_model = tf.keras.models.load_model('fine_tuned_model.keras')

# Evaluate performance
results = loaded_model.evaluate(x_test, y_test)
```

I have also observed that when I save the model weights in HDF5 format (.h5) and load them within the same session, the validation performance is reproduced consistently. However, when the same .h5 weights are loaded in a different session, the validation performance does not match the original results and yields essentially random accuracies.
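One way to narrow down where the divergence happens is to compare the raw weight arrays before saving and after loading. Below is a minimal numpy-only sketch; the helper name `max_weight_diff` is my own, not part of the report:

```python
import numpy as np

def max_weight_diff(weights_a, weights_b):
    """Largest absolute elementwise difference between two lists of
    weight arrays, as returned by model.get_weights()."""
    assert len(weights_a) == len(weights_b), "weight lists differ in length"
    return max(
        float(np.max(np.abs(a - b))) if a.size else 0.0
        for a, b in zip(weights_a, weights_b)
    )

# Hypothetical usage: a result of 0.0 means the weights round-trip exactly,
# so a performance mismatch would point at something other than the weights.
# before = model.get_weights()
# after = tf.keras.models.load_model('fine_tuned_model.keras').get_weights()
# print(max_weight_diff(before, after))
```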

Additionally, when EfficientNetB0 is instantiated with weights=None, the model's performance remains consistent across sessions and is fully reproducible.

Other models such as ResNet50 and VGG16 run as expected; only the EfficientNetBx models exhibit this issue.
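To make the cross-session comparison concrete, one option is to save reference predictions on a fixed batch in the first session and compare against them after reloading in the second. A minimal sketch follows; the file name `ref_preds.npy` and the helpers are my own, not from the report:

```python
import numpy as np

def save_reference(preds, path='ref_preds.npy'):
    """Session 1: store predictions on a fixed batch for later comparison."""
    np.save(path, np.asarray(preds))

def check_against_reference(preds, path='ref_preds.npy', atol=1e-5):
    """Session 2: True if the reloaded model reproduces the reference."""
    ref = np.load(path)
    return bool(np.allclose(np.asarray(preds), ref, atol=atol))

# Hypothetical usage:
# save_reference(model.predict(x_fixed))                   # before ending session 1
# check_against_reference(loaded_model.predict(x_fixed))   # in session 2
```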
