FAQ about the course projects

What kind of compute resources will be supplied by AWS? Which instance types, how much GPU memory, and for how long can we run them?

Background: I have some ideas for projects, but they require training a large model that cannot be trained on a single standard Google Colab Pro T4/P100 GPU (for training a DeBERTa-large model, for example, even the ‘high-RAM’ GPUs from Colab are not enough).
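
As a rough illustration of the memory issue, below is a minimal sketch (assuming the Hugging Face transformers library and the microsoft/deberta-v3-large checkpoint; model name, task, and hyperparameters are illustrative, not part of the course setup) of the memory-saving settings one would typically need just to attempt fine-tuning DeBERTa-large on a single 16 GB GPU such as a T4/P100:

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

# Illustrative checkpoint: DeBERTa-v3-large has a few hundred million parameters.
model_name = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Trade compute for memory: recompute activations during the backward pass.
model.gradient_checkpointing_enable()

training_args = TrainingArguments(
    output_dir="deberta-large-run",
    per_device_train_batch_size=4,   # small per-step batch to stay within GPU memory
    gradient_accumulation_steps=8,   # effective batch size of 32
    fp16=True,                       # mixed precision roughly halves activation memory
    num_train_epochs=3,
    learning_rate=1e-5,
)
# A Trainer would then be built from training_args, the model, and a tokenized dataset.

Even with gradient checkpointing, mixed precision, and a small per-device batch size, optimizer states and activations for a model of this size can exceed the memory of Colab's GPUs, which is why the AWS resource details matter for project planning.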