Hi,
I’m trying to do some emotion regression with BERT and similar models. I basically adapted the scripts from the HF tutorials, and they seem to work fine, BUT:
To give some robustness to the metrics I get, I train the same models several times and report the mean of the resulting figures. I know I should probably do cross-validation here, but to keep things simple I just added an outer loop that iterates, as many times as I want, over the whole process of defining the datasets, tokenizing, and training.
For some reason, though, from the second iteration onward I keep getting exactly the same figures, i.e.:
pearson’s r          MAE                  MSE
0.5113754316712945   0.19221895595766464  0.05939128027188472
0.47599068764848923  0.17558474052982273  0.05101348304725949
0.47599068764848923  0.17558474052982273  0.05101348304725949
0.47599068764848923  0.17558474052982273  0.05101348304725949
I don’t understand where the problem is… At the end of each cycle I `del` the model, the trainer, and the predictions, and I remove the directory with the saved best model. And anyway, if the problem were that something persists in memory, shouldn’t I always get the same figures, rather than only from the second iteration onward?
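In case it helps, the loop is structured roughly like this. This is only a minimal pure-Python skeleton of the shape of my code: the real dataset definition, tokenization, Trainer, and metric code are replaced with hypothetical stand-ins, since the full code is too long to paste.

```python
import random
import statistics

def build_and_train(run_idx):
    """Stand-in for one full cycle: define datasets -> tokenize -> train -> predict.

    In the real code this creates fresh Dataset objects, tokenizes them,
    builds a new model and Trainer, trains, and returns predictions/metrics.
    """
    model = [random.gauss(0.0, 1.0) for _ in range(10)]  # fresh random "init"
    mae = abs(sum(model)) / len(model)                   # stand-in metric
    return model, mae

n_runs = 3
scores = []
for i in range(n_runs):
    model, mae = build_and_train(i)
    scores.append(mae)
    del model  # mirrors the cleanup I do at the end of each cycle
    # (in the real code I also del the trainer and predictions,
    #  and remove the directory with the saved best model)

mean_mae = statistics.mean(scores)
print(mean_mae)
```

With the real Trainer inside `build_and_train`, it is iterations 2, 3, … of this loop that all return identical metrics.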
BTW, I’m working on Google Colab.
I know it’s difficult to give advice without more information, but given the situation I would have to attach the whole code inside the loop… And that code works if I just execute it once at a time!
Thanks in advance,
Giovanni