Hey friends! Quick question:
Given a pretrained model, can I change the batch size?
e.g.:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
Can't find any documentation on how to do this, thanks!
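For context, here's roughly what I'd like to do, sketched under my assumption that batch size at inference is just the number of inputs I tokenize together (the sentences are placeholders I made up):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# GPT-2 ships without a pad token, so reuse EOS for padding
tokenizer.pad_token = tokenizer.eos_token

# two example inputs -> batch size 2
sentences = ["Hello world", "Batching is just stacking inputs"]
batch = tokenizer(sentences, return_tensors="pt", padding=True)

outputs = model(**batch)
# logits have shape (batch_size, seq_len, vocab_size)
print(outputs.logits.shape[0])
```

Is this all there is to it, or is there a setting on the model itself?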