Few-shot learning

How can I do few-shot learning using the transformers library and gpt-j-6B?
I have found the contents of the following link helpful, but it only describes few-shot learning using the API.
I do have a notebook on that here: Transformers-Tutorials/Inference_with_GPT_J_6B.ipynb at master · NielsRogge/Transformers-Tutorials · GitHub
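For a rough idea of what the notebook does: few-shot prompting with GPT-J in transformers is ordinary text generation on a prompt that concatenates a handful of labelled examples followed by an unanswered query. A minimal sketch, assuming a sentiment-classification prompt format (the `Review:`/`Sentiment:` labels and the `run` helper are illustrative, not taken from the notebook):

```python
def build_few_shot_prompt(examples, query):
    """Concatenate labelled examples, then the query with its label left blank."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)


def run(prompt, max_new_tokens=8):
    # Heavy: downloads ~12 GB of float16 weights and needs a GPU,
    # so this is defined but not called here.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained(
        "EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16
    ).to("cuda")
    inputs = tok(prompt, return_tensors="pt").to("cuda")  # includes attention_mask
    out = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        pad_token_id=tok.eos_token_id,  # silences the pad_token_id warning
    )
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)


examples = [
    ("I loved this movie.", "positive"),
    ("Terrible acting and a dull plot.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A wonderful surprise from start to finish.")
```

Calling `run(prompt)` would then typically complete the final `Sentiment:` line with a label such as `positive`.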


Thanks for providing the code. When I run it, I get the following errors:

```
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
```

**RuntimeError**: "LayerNormKernelImpl" not implemented for 'Half'
Can you please help me with these?

If you are using a CPU instead of a GPU, change `revision="float16"` to `revision="float32"`.
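The underlying cause can be seen with plain PyTorch: half-precision (float16) LayerNorm kernels are not available on CPU in the PyTorch builds of that era, so CPU inference has to stay in float32. A minimal sketch; the commented-out `from_pretrained` call is an assumed illustration of the float32 fix, not run here:

```python
import torch

# float32 LayerNorm on CPU works fine:
ln = torch.nn.LayerNorm(8)
x = torch.randn(2, 8)
y = ln(x)

# By contrast, ln.half()(x.half()) on CPU is what raises
#   RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
# on older PyTorch builds. The corresponding fix for GPT-J on CPU is to
# load the float32 weights instead, e.g. (assumed call, not executed here):
#
# model = AutoModelForCausalLM.from_pretrained(
#     "EleutherAI/gpt-j-6B", revision="float32", torch_dtype=torch.float32
# )
```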
