load_cache fails when interrupted training resumes

I'm training a model on a shared online cluster that uses job scheduling, so training can be interrupted at any time. When the job restarts, the cache can be a problem: `f.read()` returns nothing from the cache file, and the load then raises an error.
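For context, here is a minimal sketch of the failure mode and one possible workaround. The file name, JSON format, and `load_cache` signature are my assumptions about the setup, not the actual code: if the scheduler kills a job mid-write, the cache file can be left empty, and parsing the empty string crashes. Treating an empty read as "no cache" avoids the crash:

```python
import json
import tempfile

def load_cache(path):
    """Load the training cache, tolerating a file left empty by a killed job."""
    with open(path) as f:
        data = f.read()
    if not data:
        # Previous job was interrupted mid-write: the file exists but is empty.
        # Return None ("no cache, rebuild it") instead of letting the parser crash.
        return None
    return json.loads(data)  # assumed cache format: JSON

# Simulate the interrupted run: the file is created but nothing was written.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    empty_path = tmp.name

print(load_cache(empty_path))  # None instead of a parse error
```

Is something like this the right approach, or should the cache be written atomically (e.g. write to a temp file, then rename) so a killed job can never leave a half-written file behind?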


What should I do?
Thanks.