No option for embedding size in transformers.RobertaConfig

Which parameter of transformers.RobertaConfig controls the embedding size of the input tokens to RoBERTa? (Similar to the n_embd (int, optional, defaults to 768) parameter in transformers.GPT2Config.)

Hi @iamneerav
Thanks for the issue! I think in this case you want to use hidden_size; see: transformers/configuration_roberta.py at e2bd7f80d023b27bf3b93fa3d3f5ca941ff66572 · huggingface/transformers · GitHub
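
A minimal sketch of what that looks like in practice, assuming a recent transformers version with PyTorch installed: RobertaConfig has no separate embedding-size parameter, so hidden_size sets both the token embedding dimension and the transformer hidden dimension (the values below are illustrative, not the pretrained defaults).

```python
from transformers import RobertaConfig, RobertaModel

# hidden_size controls the token embedding dimension as well as the
# hidden dimension of the transformer layers. It must be divisible by
# num_attention_heads, so that is set explicitly here.
config = RobertaConfig(hidden_size=512, num_attention_heads=8)
model = RobertaModel(config)

# The word embedding matrix has shape (vocab_size, hidden_size).
print(model.embeddings.word_embeddings.weight.shape)
# -> torch.Size([50265, 512])
```

Note that this builds a randomly initialized model; changing hidden_size is not compatible with loading the pretrained roberta-base weights, which were trained with hidden_size=768.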