How do I send system prompts using the serverless Inference API with the Llama 3 8B Instruct model?

Hey, I found this post that helped me:
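In case it helps others landing here, one common way to do this is to pass the system prompt as a `system`-role chat message via `huggingface_hub`'s `InferenceClient.chat_completion`, which applies the model's chat template server-side. A minimal sketch, assuming `huggingface_hub` >= 0.22 is installed and your API token is in the `HF_TOKEN` environment variable:

```python
# Sketch: system prompt for Llama 3 8B Instruct via the serverless
# Inference API. Assumes huggingface_hub >= 0.22 and an HF_TOKEN env var.
import os

from huggingface_hub import InferenceClient

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"


def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Chat-style messages: the system prompt goes in a 'system' role."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__" and "HF_TOKEN" in os.environ:
    client = InferenceClient(model=MODEL_ID, token=os.environ["HF_TOKEN"])
    messages = build_messages(
        "You are a terse assistant. Answer in one sentence.",
        "What is the capital of France?",
    )
    # chat_completion applies the Llama 3 chat template server-side,
    # so no manual <|start_header_id|> formatting is needed.
    response = client.chat_completion(messages, max_tokens=64)
    print(response.choices[0].message.content)
```

If you use the raw `text_generation` endpoint instead, you would have to format the Llama 3 special tokens (`<|begin_of_text|>`, `<|start_header_id|>`, etc.) yourself, so `chat_completion` is usually the simpler route.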