Special tokens with inputs_embeds input

I want to train a model on my own embeddings, passed in through `inputs_embeds`. I am doing a two-sequence classification task.
How do I deal with the [CLS] and [SEP] tokens in this case? Should I create them like this:
```python
cls_embedding = torch.randn(1, embedding_dim)  # [CLS] token embedding
sep_embedding = torch.randn(1, embedding_dim)  # [SEP] token embedding
```
and then concatenate them with my embeddings and pad? Or is there some other way to do this?
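For context, here is a minimal sketch of the approach I have in mind: instead of random vectors, look up the model's own learned embeddings for the special tokens and concatenate them around the two sequences. The `word_embeddings` table below is a stand-in for `model.get_input_embeddings()`, and the token ids 101/102 are BERT's [CLS]/[SEP] ids (`tokenizer.cls_token_id` / `tokenizer.sep_token_id`); all names and sizes here are assumptions for illustration.

```python
import torch
import torch.nn as nn

embedding_dim = 768   # assumed hidden size (BERT-base)
vocab_size = 30522    # assumed vocab size (BERT-base)

# Stand-in for model.get_input_embeddings(); in practice use the real table
# so that [CLS]/[SEP] carry their learned representations.
word_embeddings = nn.Embedding(vocab_size, embedding_dim)

cls_id, sep_id = 101, 102  # BERT's [CLS] / [SEP] token ids (assumption)
cls_embedding = word_embeddings(torch.tensor([cls_id]))  # (1, embedding_dim)
sep_embedding = word_embeddings(torch.tensor([sep_id]))  # (1, embedding_dim)

# Two custom sequences of precomputed embeddings (dummy data here)
seq_a = torch.randn(5, embedding_dim)
seq_b = torch.randn(7, embedding_dim)

# Build [CLS] seq_a [SEP] seq_b [SEP], then add the batch dimension
inputs_embeds = torch.cat(
    [cls_embedding, seq_a, sep_embedding, seq_b, sep_embedding], dim=0
).unsqueeze(0)                                       # (1, 15, embedding_dim)

# Matching attention mask and segment ids (0 for seq_a's half, 1 for seq_b's)
attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
token_type_ids = torch.cat(
    [torch.zeros(1 + 5 + 1, dtype=torch.long),
     torch.ones(7 + 1, dtype=torch.long)]
).unsqueeze(0)

# The result would then be fed to the model as, e.g.:
# outputs = model(inputs_embeds=inputs_embeds,
#                 attention_mask=attention_mask,
#                 token_type_ids=token_type_ids)
```

Padding would then only be needed to batch sequences of different lengths, with zeros in `attention_mask` over the padded positions.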