I am trying to fine-tune a VGG16 model for an ECG image classification task where the images have a resolution of 1700 × 2200.

This is what an ECG image looks like:

I haven't reshaped or reduced the size of the images because I thought some information about the signals might get lost. I built a basic VGG16 fine-tuning architecture on top of this. However, when I try to train the model in Google Colab, it consistently crashes due to excessive RAM usage before training even starts. I'm loading the image paths and corresponding classes from a CSV file (formatted as [path | class]) and feeding them into the model pipeline using the two functions below.

def preprocess_image(image_path, label):
  """Loads an image, resizes, and applies preprocessing."""
  img = tf.io.read_file(image_path)
  img = tf.image.decode_jpeg(img, channels=3)
  img = tf.cast(img, tf.float32) / 255.0  # Normalize pixel values
  return img, label
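
For reference, this is the kind of resize step I have left out on purpose; 224 × 224 is only VGG16's default input size, not a value I've tested for these ECG images:

def preprocess_image_resized(image_path, label):
  """Same as preprocess_image, but downsamples to VGG16's default 224 x 224 input."""
  img = tf.io.read_file(image_path)
  img = tf.image.decode_jpeg(img, channels=3)
  img = tf.image.resize(img, [224, 224])  # downsample from 1700 x 2200
  img = img / 255.0  # resize already returns float32, so just normalize
  return img, label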

def load_data_from_csv(csv_file, batch_size):
  df = pd.read_csv(csv_file)
  image_paths = df['img_path'].tolist()
  labels = df['classification_labels'].tolist()

  # Convert string labels to lists
  labels = [eval(label) for label in labels]

  # One-hot encode labels using the previously fitted MultiLabelBinarizer
  one_hot_encoded_labels = model.encoder.transform(labels)

  dataset = tf.data.Dataset.from_tensor_slices((image_paths, one_hot_encoded_labels))
  dataset = dataset.map(preprocess_image, num_parallel_calls=tf.data.AUTOTUNE)
  dataset = dataset.batch(batch_size)
  dataset = dataset.prefetch(tf.data.AUTOTUNE)
  return dataset

dataset = load_data_from_csv('data.csv', BATCH_SIZE)
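
For context, the fine-tuning part of my setup is roughly the following. This is a simplified sketch: the head layers, optimizer, epochs, and NUM_CLASSES are placeholders rather than my exact code, and my real model object also carries the fitted MultiLabelBinarizer as model.encoder (used above), which I've left out here.

NUM_CLASSES = 5  # placeholder; number of ECG classes from the MultiLabelBinarizer

base_model = tf.keras.applications.VGG16(
    include_top=False, weights='imagenet', input_shape=(1700, 2200, 3))
base_model.trainable = False  # freeze the convolutional base for fine-tuning

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(NUM_CLASSES, activation='sigmoid'),  # multi-label output
])
model.compile(optimizer='adam', loss='binary_crossentropy')

model.fit(dataset, epochs=10)  # Colab crashes with excessive RAM usage before training starts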

So, is the crash caused by my code? If so, can you suggest a way to work around this issue? (I'm using the free Colab plan.)
