Suppressing Chat AI Hallucinations by Decomposing Questions into Four Categories and Tensorizing
Published: Dec 24, 2025 20:30 • 1 min read • Zenn LLM
Analysis
This article proposes a method to reduce hallucinations in chat AI by enriching the informational ("truth") content of queries. It suggests a two-pass approach: first decomposing the original question using the four-category distinction (四句分別, the tetralemma), and then tensorizing the result. The rationale is that this process expands the information content of the original single-pass question from a "point" into a "complex multidimensional manifold." The article outlines a simple recipe: replace the content of a given 'question' with arbitrary content, then apply the decomposition and tensorization. While the concept is interesting, the article lacks concrete details on how the four-category distinction is applied or how the tensorization is performed in practice, so the method's effectiveness would depend on the specific implementation and on the nature of the questions being asked.
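Since the article does not specify how either step is carried out, the following is a minimal sketch of one plausible reading: the four tetralemma stances (affirmation, negation, both, neither) form one axis, additional analysis axes are chosen freely, and "tensorization" is interpreted as the Cartesian product of all axes, which is then folded into a single enriched prompt. The extra axes, the `ask` callable, and the prompt wording are all assumptions, not part of the original article.

```python
from itertools import product
from typing import Callable, List

# The four-category distinction (catuṣkoṭi / 四句分別):
# affirmation, negation, both, and neither.
TETRALEMMA = [
    "it is the case that",
    "it is not the case that",
    "it both is and is not the case that",
    "it neither is nor is not the case that",
]


def decompose_question(question: str) -> List[str]:
    """Pass 1: rewrite the original question into its four tetralemma variants."""
    return [f"Consider whether {stance} {question}" for stance in TETRALEMMA]


def tensorize(variants: List[str], axes: List[List[str]]) -> List[str]:
    """Pass 2: take the Cartesian ("tensor") product of the four variants with
    additional analysis axes, turning a single 'point' question into a grid of
    sub-questions that jointly constrain the final answer."""
    prompts = []
    for variant, *dims in product(variants, *axes):
        qualifiers = ", ".join(dims)
        prompts.append(f"{variant} (analyzed with respect to: {qualifiers})")
    return prompts


def enriched_query(question: str, ask: Callable[[str], str]) -> str:
    """Run the two-pass pipeline and ask the model for an answer that is
    consistent across every cell of the tensorized question."""
    variants = decompose_question(question)
    # Hypothetical extra axes; the article does not say which dimensions to use.
    axes = [
        ["empirical evidence", "definitional or logical grounds"],
        ["current consensus", "known open disputes"],
    ]
    grid = tensorize(variants, axes)
    synthesis_prompt = (
        "Answer the original question only after checking it against each "
        "sub-question below, and flag anything you cannot verify:\n"
        + "\n".join(f"- {p}" for p in grid)
        + f"\n\nOriginal question: {question}"
    )
    return ask(synthesis_prompt)


if __name__ == "__main__":
    # Stand-in for a real chat-model call; swap in an actual API client here.
    echo = lambda prompt: prompt
    print(enriched_query("the library supports streaming responses", echo))
```

With four tetralemma stances and two binary axes, a single question expands into 16 sub-questions, which illustrates the "point to manifold" amplification the article describes, though whether this measurably reduces hallucinations is not demonstrated.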
Key Takeaways
- The article proposes a method to reduce AI hallucinations by enriching query information.
- The method involves decomposing questions using the four-category distinction (四句分別) and tensorizing them.
- The article lacks concrete details on the implementation of the proposed method.
Reference
"The information content of the original single-pass question was a 'point,' but it is amplified into a 'complex multidimensional manifold.'"