Weights of pre-trained BERT model not initialized

I am using the Language Interpretability Toolkit (LIT) to load and analyze the ‘bert-base-german-cased’ model that I pre-trained on an NER task with HuggingFace.

However, when I start the LIT script with the path to my pre-trained model passed to it, it fails to initialize the weights and logs:

    modeling_utils.py:648] loading weights file bert_remote/examples/token-classification/Data/Models/results_21_03_04_cleaned_annotations/04.03._8_16_5e-5_cleaned_annotations/04-03-2021 (15.22.23)/pytorch_model.bin
    modeling_utils.py:739] Weights of BertForTokenClassification not initialized from pretrained model: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']
    modeling_utils.py:745] Weights from pretrained model not used in BertForTokenClassification: ['bert.embeddings.position_ids']

It then simply uses the bert-base-german-cased version of BERT, which of course doesn’t have my custom labels and thus fails to predict anything. I think it might have to do with PyTorch or HuggingFace, but I can’t find the error.
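To rule out LIT itself, it can help to inspect the saved checkpoint directly and confirm that the fine-tuned classification head actually made it into `pytorch_model.bin`. A minimal sketch (the `head_label_count` helper is hypothetical, and the fabricated state dict stands in for your real checkpoint):

```python
import torch

def head_label_count(state_dict):
    # Hypothetical helper: BertForTokenClassification stores its head under
    # "classifier.weight" with shape (num_labels, hidden_size).
    return state_dict["classifier.weight"].shape[0]

# In practice, load your own checkpoint:
#   state = torch.load("path/to/pytorch_model.bin", map_location="cpu")
# Here a tiny fabricated state dict stands in to show the check.
state = {"classifier.weight": torch.zeros(15, 768)}
assert head_label_count(state) == 15  # should match num_labels in your config
```

If the head shape does not match the `num_labels` you pass at load time, `from_pretrained` will silently re-initialize the classifier and predictions will be random.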

If relevant, here is how I load my dataset in CoNLL 2003 format (a modification of the dataloader scripts):

    def __init__(self):

        # Read CoNLL test files
        self._examples = []

        data_path = "lit_remote/lit_nlp/examples/datasets/NER_Data"
        with open(os.path.join(data_path, "test.txt"), "r", encoding="utf-8") as f:
            lines = f.readlines()

        for line in lines[:2000]:
            if line != "\n":
                token, label = line.split(" ")
                self._examples.append({
                    'token': token,
                    'label': label.strip(),
                })
            else:
                self._examples.append({
                    'token': "\n",
                    'label': "O",
                })

    def spec(self):
        return {
            'token': lit_types.Tokens(),
            'label': lit_types.SequenceTags(align="token"),
        }
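One thing worth noting: the loader above emits one example per input line, while `lit_types.Tokens()` paired with `SequenceTags(align="token")` typically expects a list of tokens (and aligned tags) per example, i.e. one example per sentence. A self-contained sketch of sentence-level grouping, assuming the same two-column space-separated file (the `read_conll` name and the 2000-line cap are illustrative):

```python
def read_conll(path, limit=2000):
    """Parse a two-column CoNLL file into per-sentence token/label lists.

    Blank lines separate sentences. Adjust the split() call if your
    file uses tabs instead of spaces.
    """
    sentences, tokens, labels = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in list(f)[:limit]:
            line = line.strip()
            if not line:
                # Sentence boundary: flush the tokens collected so far.
                if tokens:
                    sentences.append({"token": tokens, "label": labels})
                    tokens, labels = [], []
                continue
            token, label = line.split()
            tokens.append(token)
            labels.append(label)
    if tokens:  # flush a trailing sentence with no final blank line
        sentences.append({"token": tokens, "label": labels})
    return sentences
```

Each returned dict then matches the `spec()` above, with `token` as a list of strings and `label` as an aligned list of tags.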

And this is how I initialize the model and start the LIT server (modification of the simple_pytorch_demo.py script):

    def __init__(self, model_name_or_path):
        self.tokenizer = transformers.AutoTokenizer.from_pretrained(
            model_name_or_path)
        model_config = transformers.AutoConfig.from_pretrained(
            model_name_or_path,
            num_labels=15,  # FIXME CHANGE
        )
        # This is just a regular PyTorch model.
        self.model = _from_pretrained(
            transformers.AutoModelForTokenClassification,
            model_name_or_path,
            config=model_config)

## Some omitted snippets here

    def input_spec(self) -> lit_types.Spec:
        return {
            "token": lit_types.Tokens(),
            "label": lit_types.SequenceTags(align="token"),
        }

    def output_spec(self) -> lit_types.Spec:
        return {
            "tokens": lit_types.Tokens(),
            "probas": lit_types.MulticlassPreds(parent="label", vocab=self.LABELS),
            "cls_emb": lit_types.Embeddings(),
        }

Does anyone have an idea what the issue could be?

It’s expected not to have the BERT pooler weights in token classification (we are not using the pooler in this model). The warning suggests you are not using the latest version of :hugs: Transformers, but in any case you can safely ignore it.
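The same goes for the `bert.embeddings.position_ids` entry in the log: in current Transformers versions it is a registered buffer (a fixed `arange` of positions), not a learned parameter, so whether a checkpoint contains it or not does not affect the model. A small sketch of the distinction, using a toy module rather than the real `BertEmbeddings`:

```python
import torch

class ToyEmbeddings(torch.nn.Module):
    def __init__(self, max_len=512):
        super().__init__()
        # Registered as a buffer: it is written into the state dict of some
        # checkpoints, but it is a constant index tensor, not a trained weight.
        self.register_buffer("position_ids", torch.arange(max_len).unsqueeze(0))

emb = ToyEmbeddings()
assert "position_ids" in emb.state_dict()  # present in the checkpoint...
assert list(emb.parameters()) == []        # ...but nothing trainable
```

So a "weights from pretrained model not used" message about `position_ids` is harmless; the buffer is simply recreated at load time.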

Okay, I see - but I still don’t get any predictions from the model. So the fact that these weights are not initialized is not the issue? Does that also apply to the embedding position ids?