Hello, I also have four different classes that I want to train. In my case, `num_class_embeds` is set to 4 and `class_embed_type` is set to `None`. However, I'm having trouble writing the `class_labels`, which causes an error in the line `hidden_states = hidden_states + temb`. Can you please tell me how to create the `class_labels`?
This is my `class_labels` code:

```python
import random

import numpy as np
import torch


def class_label_tensor(examples, is_train=True):
    def class_tokenizer(text):
        class_names = [['C0201'], ['R0201'], ['L2016'], ['F1210']]
        class_label = text
        num_classes = len(class_names)
        # build a one-hot vector for the matching class
        class_vector = torch.zeros(num_classes, dtype=torch.int)
        class_index = class_names.index(class_label)
        class_vector[class_index] = 1
        class_tensor = class_vector.view(1, num_classes)
        return class_tensor

    captions = []
    for caption in examples[caption_column]:
        if isinstance(caption, str):
            captions.append(caption)
        elif isinstance(caption, (list, np.ndarray)):
            # take a random caption if there are multiple
            captions.append(random.choice(caption) if is_train else caption[0])
        else:
            raise ValueError(
                f"Caption column `{caption_column}` should contain either strings or lists of strings."
            )
    label_tensor = class_tokenizer(captions)
    return label_tensor
```
With this, I always get `RuntimeError: The size of tensor a (64) must match the size of tensor b (320) at non-singleton dimension 4` in my case.
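For context, here is a minimal sketch of the alternative I was considering: mapping each caption to an integer class index instead of a one-hot vector (this assumes the class embedding is an `nn.Embedding`-style lookup that takes a `(batch,)` tensor of indices — the `captions_to_class_indices` helper and the flat `class_names` list here are my own guesses, not something from the training script):

```python
# assumed flat list of class names, in index order
class_names = ['C0201', 'R0201', 'L2016', 'F1210']
name_to_index = {name: i for i, name in enumerate(class_names)}

def captions_to_class_indices(captions):
    # map each caption string to its integer class index
    return [name_to_index[caption] for caption in captions]

print(captions_to_class_indices(['R0201', 'C0201']))  # [1, 0]
```

I assume these indices would then be wrapped as `torch.tensor(indices, dtype=torch.long)` before being passed as `class_labels`, but I'm not sure that is the shape the model expects.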
Thx!