OK to add arbitrary entries to model's config?

I was wondering, is it OK to add custom entries to a given model’s config attribute and, consequently, to its config.json file? Are there any caveats to consider here?

What I want to do is store certain training parameters there, for example, whether the model was trained on cased or uncased data. The model's config seems like a really natural place for this. I know that bert-base-uncased stores this sort of thing in the tokenizer's config, but in my use case the data is preprocessed independently of the tokenizer, so it doesn't make sense to put it there. I also want to store a number of other parameters that I need in order to evaluate the model correctly.
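To make it concrete, here's a minimal sketch of what I have in mind: merging extra keys into config.json by treating it as plain JSON (the file path and key names below are just examples, not anything from the transformers API):

```python
import json
from pathlib import Path

def add_custom_config_entries(config_path, extra):
    """Merge extra key/value pairs into an existing config.json file."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config.update(extra)  # custom entries sit alongside the standard ones
    path.write_text(json.dumps(config, indent=2))
    return config

# demo: start from a minimal config and record training-time parameters
# (key names here are made up for illustration)
Path("config.json").write_text(json.dumps({"model_type": "bert"}))
cfg = add_custom_config_entries(
    "config.json",
    {"trained_on_cased_data": False, "max_train_seq_length": 128},
)
```

My question is whether keys added like this would survive round-trips through `from_pretrained` / `save_pretrained`, or whether they'd get dropped or clash with something internal.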