Training loss goes to zero after a few epochs

I am using the Decision Transformer from the HF Transformers library; I modified the tutorial notebook to adapt it to my problem. After training for some epochs, the training loss goes to zero, but the agent still does not perform well. I have no idea why this happens. One guess is that the logged loss is being truncated to zero because its actual values are very small. I have attached an image of the loss-vs-epochs graph.
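One quick way to test the truncation guess (a minimal sketch, independent of the model itself; the loss value here is hypothetical) is to log the loss in scientific notation instead of fixed-point formatting, which progress bars often round to two decimals:

```python
# Illustration: a small but nonzero loss can display as 0 when the
# logger rounds to a fixed number of decimal places.
loss = 3.2e-5  # hypothetical per-batch loss value

rounded = f"{loss:.2f}"   # typical progress-bar style formatting
precise = f"{loss:.3e}"   # scientific notation keeps the true magnitude

print(rounded)  # prints "0.00" even though the loss is nonzero
print(precise)  # prints "3.200e-05"
```

If the precisely formatted loss is small but nonzero, the display is the issue; if it is exactly 0.0, the model may be memorizing the training trajectories, which would also explain the poor agent performance.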