MedKGI: Improving LLMs for Clinical Diagnosis
Analysis
This paper proposes MedKGI to address limitations of Large Language Models (LLMs) in clinical diagnosis: hallucination, inefficient questioning, and loss of coherence across multi-turn dialogues. Its key innovations are the integration of a medical knowledge graph, information-gain-based question selection, and a structured state for evidence tracking. The work's significance lies in its potential to improve the accuracy and efficiency of AI-driven diagnostic tools and to bring them closer to real-world clinical practice.
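The summary above does not include implementation details, so the following is only a minimal sketch of how information-gain-based question selection could work: a Bayesian update over a candidate-diagnosis distribution, with the symptom-given-disease probabilities standing in for knowledge-graph edges. All names, numbers, and structures here (`SYMPTOM_GIVEN_DISEASE`, `expected_info_gain`, the toy priors) are hypothetical assumptions, not MedKGI's actual method.

```python
import math
from typing import Dict

# Hypothetical P(symptom present | disease) values, standing in for
# knowledge-graph edges; not taken from the paper.
SYMPTOM_GIVEN_DISEASE: Dict[str, Dict[str, float]] = {
    "fever":  {"flu": 0.9, "migraine": 0.1, "appendicitis": 0.4},
    "nausea": {"flu": 0.3, "migraine": 0.6, "appendicitis": 0.8},
}

def entropy(dist: Dict[str, float]) -> float:
    """Shannon entropy (bits) of a candidate-diagnosis distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(symptom: str, prior: Dict[str, float]) -> float:
    """Expected entropy reduction from asking a yes/no question about `symptom`."""
    p_yes = sum(prior[d] * SYMPTOM_GIVEN_DISEASE[symptom][d] for d in prior)
    gain = entropy(prior)
    for answer_prob, present in ((p_yes, True), (1.0 - p_yes, False)):
        if answer_prob == 0:
            continue
        # Bayes update of the diagnosis distribution for this hypothetical answer.
        posterior = {
            d: p * (SYMPTOM_GIVEN_DISEASE[symptom][d] if present
                    else 1.0 - SYMPTOM_GIVEN_DISEASE[symptom][d])
            for d, p in prior.items()
        }
        total = sum(posterior.values())
        posterior = {d: p / total for d, p in posterior.items()}
        gain -= answer_prob * entropy(posterior)
    return gain

prior = {"flu": 0.5, "migraine": 0.3, "appendicitis": 0.2}
best = max(SYMPTOM_GIVEN_DISEASE, key=lambda s: expected_info_gain(s, prior))
print(f"Next question: do you have {best}?")
```

In this toy setup, the question that most reduces the expected uncertainty over the differential is asked next, which is the general idea behind maximizing diagnostic efficiency per turn.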
Key Takeaways
- MedKGI integrates a medical knowledge graph to ground reasoning in validated medical ontologies.
- The framework selects questions based on information gain to maximize diagnostic efficiency.
- An OSCE-format structured state is used to maintain consistent evidence tracking across turns (one possible representation is sketched after this list).
- MedKGI outperforms strong LLM baselines in both diagnostic accuracy and inquiry efficiency.
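As a rough illustration of what an OSCE-style structured state might look like, the sketch below tracks confirmed and ruled-out findings alongside a differential across turns. The class name `OSCEState`, its fields, and the example values are assumptions for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OSCEState:
    """Illustrative structured dialogue state for evidence tracking across turns."""
    chief_complaint: str = ""
    history_of_present_illness: List[str] = field(default_factory=list)
    positive_findings: List[str] = field(default_factory=list)    # symptoms confirmed present
    negative_findings: List[str] = field(default_factory=list)    # symptoms ruled out
    differential: Dict[str, float] = field(default_factory=dict)  # diagnosis -> belief

    def record_answer(self, symptom: str, present: bool) -> None:
        """Append each answer to the matching evidence list so later turns stay consistent."""
        (self.positive_findings if present else self.negative_findings).append(symptom)

state = OSCEState(chief_complaint="abdominal pain",
                  differential={"appendicitis": 0.4, "gastritis": 0.6})
state.record_answer("fever", present=True)
state.record_answer("nausea", present=False)
print(state.positive_findings, state.negative_findings)
```

Keeping evidence in an explicit structure like this, rather than only in the raw dialogue history, is one way to avoid the model contradicting earlier answers in later turns.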
“MedKGI improves dialogue efficiency by 30% on average while maintaining state-of-the-art accuracy.”