Using the same dataset for fine-tuning and training

Hi, I’m working on a project that uses a BERT model.

Initially, I split the dataset 80/20 into training/testing.
Then I used the training split to fine-tune the BERT model with Trainer, passing the testing split for validation.
After that, I extracted the BERT embeddings, stacked a TF model on top of them, compiled it, and trained it using the same portion of data that was used for fine-tuning.
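
In case it helps, here's a simplified sketch of the pipeline (the placeholder data, model names, and hyperparameters are just for illustration, not my actual setup):

```python
# Simplified sketch of the pipeline described above (placeholder data).
import numpy as np
import torch
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

texts, labels = ["example text"] * 100, [0, 1] * 50           # placeholder data
train_x, test_x, train_y, test_y = train_test_split(
    texts, labels, test_size=0.2, random_state=42)            # the 80/20 split

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

class TextDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Step 1: fine-tune BERT on the 80% split, validating on the 20% split.
clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
trainer = Trainer(
    model=clf,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=TextDataset(train_x, train_y),
    eval_dataset=TextDataset(test_x, test_y),
)
trainer.train()

# Step 2: extract the [CLS] embeddings from the fine-tuned encoder.
encoder = clf.bert                                # fine-tuned BERT body
def embed(texts):
    enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**enc)
    return out.last_hidden_state[:, 0].numpy()    # [CLS] token embedding

# Step 3: train a small TF head on the SAME 80% split.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
head.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
head.fit(embed(train_x), np.array(train_y), epochs=3,
         validation_data=(embed(test_x), np.array(test_y)))
```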

My question is:
Is it OK to use the same data portion for both fine-tuning AND training?
Would that cause overfitting?

I searched for similar posts but couldn't find an answer.
Thanks in advance.

As long as you have one held-out set that is never used for evaluation during training (only as a final test set), that's okay. It's no different from training for multiple epochs.
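
Concretely, one common pattern is a three-way split, so the final test set is only touched once at the very end (a sketch; the split fractions and placeholder data are just an example):

```python
# Sketch: three-way split so the test set stays untouched until the end.
from sklearn.model_selection import train_test_split

texts, labels = ["example text"] * 100, [0, 1] * 50  # placeholder data

# First carve off the final held-out test set (20% of the total, say).
train_val_x, test_x, train_val_y, test_y = train_test_split(
    texts, labels, test_size=0.2, random_state=42)

# Then split the remainder into train/validation for during-training evaluation.
train_x, val_x, train_y, val_y = train_test_split(
    train_val_x, train_val_y, test_size=0.125, random_state=42)  # 10% of total

# Use (train_x, train_y) for fine-tuning AND for the stacked TF head,
# (val_x, val_y) for evaluation while training, and touch
# (test_x, test_y) only once, for the final report.
```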

Perfect, thanks a lot!