User Warns Against 'gpt-5.2 auto/instant' in ChatGPT Due to Hallucinations
Analysis
Key Takeaways
- Specific versions of language models can exhibit inconsistent performance.
- Hallucination remains a significant problem in some AI configurations.
- User feedback is crucial for identifying and addressing model flaws.
“It hallucinates, doubles down and gives plain wrong answers that sound credible, and gives gpt 5.2 thinking (extended) a bad name which is the goat in my opinion and my personal assistant for non-coding tasks.”