Hi,
I am trying to cache some outputs of a teacher model (for knowledge distillation) by using the map function of the Datasets library, but every time I run my code all the sequences are recomputed instead of being loaded from the cache. I tested the BERT model with the snippet below and got a different hash on every single run, so does anyone have an idea how to deal with this?
from transformers import BertModel, AutoTokenizer
import torch
from datasets.fingerprint import Hasher

token = ['hello']
model = BertModel.from_pretrained("bert-base-uncased").eval()
tok = AutoTokenizer.from_pretrained("bert-base-uncased")

def abcd():
    # Run the teacher model on a toy input and return the last hidden state
    with torch.no_grad():
        out = model(**tok(token, return_tensors='pt'))[0]
        # out = tok(token)
    return out

# This hash is different on every run, so map() never reuses its cache
print(Hasher.hash(abcd))
print(abcd())
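
For context, the caching step I have in mind looks roughly like this. This is a simplified sketch, not my exact code: the GLUE/SST-2 dataset, the "sentence" column, and the "teacher_hidden" column name are just placeholders, and it reuses model and tok from the snippet above.

from datasets import load_dataset

dataset = load_dataset("glue", "sst2", split="train")

def add_teacher_outputs(batch):
    # Tokenize the batch and run the teacher without tracking gradients
    inputs = tok(batch["sentence"], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs)[0]
    # Store the teacher outputs as plain lists so Datasets can serialize them
    batch["teacher_hidden"] = hidden.tolist()
    return batch

# Because the fingerprint of add_teacher_outputs changes between runs,
# this call never finds a cached result and recomputes every sequence.
dataset = dataset.map(add_teacher_outputs, batched=True, batch_size=8)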