Hey there, I am training RoBERTa from scratch on protein sequences. To this end, I built a tokenizer for protein sequences, which is essentially a character-level tokenizer. After that, I saved the tokenizer and used it with the RoBERTa model, following this tutorial: How to train a new language model from scratch using Transformers and Tokenizers.
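To give an idea of what the tokenizer does, here is a minimal sketch of the character-level mapping, assuming the 20 standard amino acids plus RoBERTa-style special tokens (the actual tokenizer is built with the Hugging Face `tokenizers` library; the vocabulary ordering below is illustrative):

```python
# Illustrative character-level protein tokenizer (not my actual code).
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")            # 20 standard residues
SPECIAL = ["<s>", "<pad>", "</s>", "<unk>", "<mask>"]  # RoBERTa-style specials

# One id per token; the model's config vocab_size must be >= len(vocab),
# otherwise out-of-range embedding lookups can surface as CUDA errors.
vocab = {tok: i for i, tok in enumerate(SPECIAL + AMINO_ACIDS)}

def encode(seq: str) -> list:
    """Map each residue to its id, wrapping the sequence with <s> ... </s>."""
    unk = vocab["<unk>"]
    return [vocab["<s>"]] + [vocab.get(ch, unk) for ch in seq] + [vocab["</s>"]]

print(encode("MKV"))  # -> [0, 15, 13, 22, 2]
```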
The code runs fine on CPU but fails on GPU with the following error message:
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling
Thanks in advance for any thoughts on this issue!