Language model adds hashtags at the end of responses

Hello,

I’m using the Zephyr-7b-beta model to generate responses. I’ve noticed that the model adds hashtags at the end of its generated responses, which is neither my intention nor anything I ask for in my instructions. This is my instruction: “You are an LLM (Large Language Model) using GPT technology. Follow these rules for your responses: 1. Provide clear, concise, and complete answers. 2. Keep your responses summarized and short. Your response can only extend if requested by the user. For instance: ‘I need more details on this,’ or ‘Your response should be longer.’ 3. Maintain a friendly tone, always polite and without informality. 4. If the user speaks English or Spanish, there’s no need to add a translation after your response. Simply converse in the user’s language. 5. You can use emojis. Just incorporate emojis naturally into the response. 6. Please refrain from using ‘#’ symbols. 7. Follow these instructions for all your responses.”

I’ve encountered an issue while using the model where the generated responses occasionally include hashtags at the end, such as: “… if you have any other questions, feel free to ask. #gpt3 #llm #ai.” Similarly, when inquiring about history, I receive responses like: “… if you have further questions, you can ask me. #lovehistory #aihistory.”
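As a temporary workaround, I’m stripping any trailing run of hashtags from the generated text in post-processing. This obviously doesn’t fix the underlying behavior, but it keeps the output clean. A minimal sketch (the function name and regex are my own, not part of any library):

```python
import re

def strip_trailing_hashtags(text: str) -> str:
    """Remove a trailing run of #hashtags (plus surrounding
    whitespace/punctuation) from the end of a response."""
    return re.sub(r"(?:\s*#\w+)+[.\s]*$", "", text).rstrip()

print(strip_trailing_hashtags(
    "... if you have any other questions, feel free to ask. #gpt3 #llm #ai."
))
# prints "... if you have any other questions, feel free to ask."
```

Hashtags that appear mid-response would need a different pattern, but so far I’ve only seen them at the very end.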

My instructions to the model are specific about how I want the responses to be generated, and they say nothing about adding hashtags; in fact, rule 6 explicitly forbids the ‘#’ symbol. This behavior affects the coherence and quality of the generated responses and is not what I expect.

Is there any additional setting or guideline I should follow to prevent the model from adding these unnecessary hashtags at the end of responses? Is this an issue with the instruction or the training data?
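One thing I’m double-checking on my side is the prompt format: Zephyr-7b-beta was fine-tuned with a specific chat template, and I suspect that if the system instruction isn’t wrapped in that template, the model may fall back on habits from its training data (which presumably includes hashtag-heavy social media text). This is how I understand the expected format — normally `tokenizer.apply_chat_template` from Hugging Face transformers would build it, and the tags below are my reading of Zephyr’s published format, so please correct me if it’s wrong:

```python
# Build a Zephyr-style prompt by hand. In practice,
# tokenizer.apply_chat_template(messages, tokenize=False,
# add_generation_prompt=True) should produce the authoritative version;
# the tags below reflect Zephyr-7b-beta's chat format as I understand it.
def build_zephyr_prompt(system: str, user: str) -> str:
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

# Abbreviated version of my actual instruction, for illustration only.
SYSTEM = "Follow these rules: ... 6. Please refrain from using '#' symbols."
prompt = build_zephyr_prompt(SYSTEM, "Tell me about the French Revolution.")
```

If my instruction was being sent as plain text without these role tags, that alone might explain why rule 6 is ignored.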

Any guidance or advice to resolve this issue would be greatly appreciated.

Thank you for your help!