AWS Deep Learning Containers

Hi,

I’m a complete novice in this space. I want to reproduce the paper ‘Large Language Models are Few-Shot Health Learners’, which means doing zero/few-shot classification and soft prompt tuning. I plan to use Llama-2 70B.
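To make the soft prompt tuning part concrete, here is roughly what I have in mind, sketched with Hugging Face PEFT. The model (gpt2 as a cheap stand-in for Llama-2 70B), the init text, and the number of virtual tokens are all just placeholders, not anything from the paper:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "gpt2"  # stand-in; swap for the Llama-2 70B checkpoint on AWS
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Soft prompt tuning: the base model stays frozen and only a small set
# of learned "virtual token" embeddings is trained.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Classify the health measurement:",  # placeholder
    num_virtual_tokens=20,  # placeholder
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # confirms only the soft prompt is trainable
```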

Therefore I plan to run all of this on AWS. It’s quite expensive though, so I don’t want to waste any computing time. Thus I’m wondering: what is the intended workflow? Are we supposed to run Deep Learning Containers (DLCs) locally, debug everything with a smaller substitute model, and only once everything works run it on AWS with the big model and GPUs? (A rough sketch of what I mean is below.)
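Here is the substitute-model pattern I’m imagining: the same script runs inside a DLC both locally and on AWS, with the model selected via an environment variable. The `MODEL_ID` variable and the model names are my own illustrative choices, not anything the DLCs prescribe:

```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Locally: leave MODEL_ID unset and debug against a tiny model.
# On AWS:  export MODEL_ID=meta-llama/Llama-2-70b-hf
model_id = os.environ.get("MODEL_ID", "gpt2")

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto" if torch.cuda.is_available() else None,
)

# Quick smoke test so a broken pipeline fails cheaply on the small model,
# before any expensive GPU hours are spent on the 70B one.
inputs = tokenizer("Heart rate: 72 bpm. Is this normal?", return_tensors="pt")
inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Is this roughly the intended way to use the DLC images, or is there a better-supported path?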

Thanks for the advice!