Explainable Statute Prediction with LLMs
Analysis
This paper addresses explainable statute prediction, a problem central to building trustworthy legal AI systems. It proposes two approaches, an attention-based model (AoS) and LLM prompting (LLMPrompt), both of which predict the statutes relevant to a case and provide human-understandable explanations. The combination of supervised and zero-shot learning methods, evaluation on multiple datasets, and assessment of explanation quality makes the treatment comprehensive.
Key Takeaways
- Proposes two methods: AoS (attention-based) and LLMPrompt (LLM prompting) for explainable statute prediction.
- AoS uses supervised learning with sentence transformers.
- LLMPrompt uses zero-shot learning with LLMs, exploring standard and Chain-of-Thought prompting.
- Evaluates prediction performance and explanation quality.
- Addresses the need for explainability in legal AI systems.
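The attention mechanism behind AoS can be sketched as follows. This is a minimal illustration, not the paper's architecture: it assumes sentence embeddings (e.g. from a sentence transformer) are already available as vectors, scores each sentence against a statute embedding, and uses the resulting attention weights both for prediction and as a sentence-level explanation. All names and dimensions here are hypothetical.

```python
import numpy as np

def aos_predict(sentence_embs: np.ndarray, statute_emb: np.ndarray):
    """Hypothetical Attention-over-Sentences sketch.

    sentence_embs : (n_sentences, dim) embeddings of the case's sentences
    statute_emb   : (dim,) embedding of a candidate statute
    Returns a relevance score in (0, 1) and per-sentence attention
    weights; the highest-weighted sentences serve as the explanation.
    """
    scores = sentence_embs @ statute_emb           # raw alignment scores
    weights = np.exp(scores - scores.max())        # stable softmax
    weights /= weights.sum()                       # attention distribution
    pooled = weights @ sentence_embs               # attended case vector
    relevance = 1.0 / (1.0 + np.exp(-pooled @ statute_emb))  # sigmoid
    return relevance, weights

# toy example: 3 sentences, 4-dimensional embeddings
rng = np.random.default_rng(0)
sents = rng.normal(size=(3, 4))
statute = rng.normal(size=4)
rel, attn = aos_predict(sents, statute)
```

In a trained model the embeddings and scoring would be learned under supervision; the point of the sketch is only that the same attention weights that drive the prediction double as a human-readable pointer to the supporting sentences.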
“The paper proposes two techniques for addressing this problem of statute prediction with explanations -- (i) AoS (Attention-over-Sentences) which uses attention over sentences in a case description to predict statutes relevant for it and (ii) LLMPrompt which prompts an LLM to predict as well as explain relevance of a certain statute.”
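For the LLMPrompt side, a zero-shot Chain-of-Thought prompt in the spirit described above might be assembled like this. The wording and function name are illustrative assumptions, not the paper's actual template.

```python
def build_cot_prompt(case_text: str, statute_title: str, statute_text: str) -> str:
    """Hypothetical zero-shot CoT prompt for statute relevance.

    The LLM is asked to reason step by step and then emit both a
    relevance verdict and an explanation, mirroring the predict-and-
    explain setup of LLMPrompt.
    """
    return (
        "You are a legal assistant.\n\n"
        f"Case description:\n{case_text}\n\n"
        f"Statute: {statute_title}\n{statute_text}\n\n"
        "Think step by step about which facts of the case match the "
        "elements of this statute. Then answer 'Relevant' or "
        "'Not relevant', followed by a short explanation."
    )

prompt = build_cot_prompt(
    "The accused is alleged to have forged a signature on a deed.",
    "Section 465",
    "Punishment for forgery.",
)
```

Standard prompting would omit the "think step by step" instruction; comparing the two variants is the kind of ablation the summary attributes to the paper.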