How to get an embedding for each n-gram of a sentence using BERT?

Given a set of labels with different numbers of words, such as:

labels = ["computer accessories", "baby", "beauty and personal care"]

Is there a way to compute embeddings for all labels in a single BERT forward pass (e.g., by treating the list of labels as one sentence)? Or does it have the same computational cost as running a separate forward pass for each label?
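For context, here is a minimal sketch of the batched alternative I'm comparing against: pad all labels to the same length and send them through BERT as one batch, so there is a single forward call over a `(num_labels, max_len)` tensor rather than one call per label. This assumes the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint; the mean-pooling step is just one common way to get a single vector per label.

```python
import torch
from transformers import AutoModel, AutoTokenizer

labels = ["computer accessories", "baby", "beauty and personal care"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# padding=True pads every label to the length of the longest one,
# so the whole list fits in a single (3, max_len) batch.
batch = tokenizer(labels, padding=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)  # ONE forward call for all labels

# Mean-pool over the real (non-padding) tokens to get one vector per label.
mask = batch["attention_mask"].unsqueeze(-1)                # (3, max_len, 1)
emb = (out.last_hidden_state * mask).sum(1) / mask.sum(1)   # (3, 768)
print(emb.shape)
```

This batches the labels rather than literally concatenating them into one sentence, which avoids tokens of different labels attending to each other.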