Overcoming Generic AI Output: A Constraint-Based Prompting Strategy
Analysis
Key Takeaways
- ChatGPT outputs can sound generic because the model gravitates toward the statistical average of its training data.
- Specifying concrete words and phrases to avoid is more effective than general instructions like "be more human".
- Detailed negative constraints help steer the model away from bland, corporate-sounding content.
“The actual problem is that when you don't give ChatGPT enough constraints, it gravitates toward the statistical center of its training data.”
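The negative-constraint approach described above can be sketched as a small prompt-building helper. This is an illustrative assumption, not code from the source: the banned-phrase list, function names, and example task are all hypothetical choices showing how explicit "avoid these" constraints might be assembled and checked.

```python
# Hypothetical sketch of constraint-based prompting: the banned-phrase
# list and helper names are illustrative, not from the original article.
BANNED_PHRASES = [
    "delve",
    "in today's fast-paced world",
    "game-changer",
    "leverage",
    "it's important to note",
]

def build_constrained_prompt(task: str, banned: list[str]) -> str:
    """Combine a task with explicit negative constraints, rather than
    a vague instruction like 'be more human'."""
    constraints = "\n".join(f'- Do not use the phrase: "{p}"' for p in banned)
    return (
        f"{task}\n\n"
        "Constraints (follow strictly):\n"
        f"{constraints}\n"
        "- Avoid generic corporate tone; write in plain, direct language."
    )

def violated_constraints(text: str, banned: list[str]) -> list[str]:
    """Return any banned phrases that slipped into a model's output,
    so the prompt can be tightened or the output regenerated."""
    lower = text.lower()
    return [p for p in banned if p.lower() in lower]

prompt = build_constrained_prompt("Write a product update email.", BANNED_PHRASES)
```

The checker makes the constraint loop concrete: if a draft still contains a banned phrase, that phrase can be fed back into the next prompt as an even more specific constraint.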