My chatbot is built on GPT-4, and it sometimes “hallucinates” facts in its answers. Are there prompt design techniques or system instructions that help reduce this? Should I use few-shot examples, or constrain the output format?
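To make the question concrete, here is a minimal sketch of the kind of approach I have in mind, assuming the OpenAI Python SDK (v1.x). The system text, few-shot pairs, model name, and JSON schema are placeholders, not my real prompt:

```python
# Sketch only: placeholder prompt text and model name, assuming the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# System instruction: ground answers in supplied context and allow "I don't know".
system_msg = (
    "Answer using only the facts in the provided context. "
    "If the context does not contain the answer, reply exactly: I don't know. "
    'Respond as JSON: {"answer": "...", "confidence": "high|low"}.'
)

# Few-shot examples demonstrating both a grounded answer and a refusal.
few_shot = [
    {"role": "user", "content": "Context: The Eiffel Tower is 330 m tall.\nQuestion: How tall is the Eiffel Tower?"},
    {"role": "assistant", "content": '{"answer": "330 metres", "confidence": "high"}'},
    {"role": "user", "content": "Context: The Eiffel Tower is 330 m tall.\nQuestion: Who designed the Louvre Pyramid?"},
    {"role": "assistant", "content": '{"answer": "I don\'t know", "confidence": "low"}'},
]

def ask(context: str, question: str) -> str:
    """Send the system instruction, few-shot examples, and the real question."""
    messages = (
        [{"role": "system", "content": system_msg}]
        + few_shot
        + [{"role": "user", "content": f"Context: {context}\nQuestion: {question}"}]
    )
    response = client.chat.completions.create(
        model="gpt-4",   # placeholder; whichever GPT-4 variant the bot uses
        temperature=0,   # lower temperature to reduce free-form embellishment
        messages=messages,
    )
    return response.choices[0].message.content

print(ask("Our store opens at 9 am on weekdays.", "When does the store open on Monday?"))
```

Is this the right direction, or is there a better-established pattern for keeping the answers grounded?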