This paper proposes a clinical chatbot that grounds answers in official guidelines using prioritized evidence retrieval and verifiable citations. It is relevant for builders working on high-stakes RAG systems where source quality, traceability, and hallucination control matter.
arXiv:2605.00846v1 Announce Type: new Abstract: Clinical diagnosis requires answers that are accurate, verifiable, and explicitly grounded in official guidelines. While large language models excel at natural language processing, their tendency to hallucinate undermines their utility in high-stakes medical contexts where precision is essential. Existing retrieval-augmented generation (RAG) systems treat all evidence equally, producing noisy context and generic answers misaligned with clinical…
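The abstract contrasts plain RAG, which treats all evidence equally, with prioritized retrieval that favors official guidelines. A minimal sketch of that idea, assuming hypothetical source tiers (`guideline`, `review`, `forum`) and a toy lexical-overlap similarity in place of a real embedding model — this is an illustration of tier-weighted ranking, not the paper's actual method:

```python
# Hypothetical sketch: rank candidate documents by similarity scaled
# by a source-tier weight, so guideline evidence outranks noisier tiers.
from dataclasses import dataclass

# Assumed tier weights; the paper's actual prioritization scheme may differ.
TIER_WEIGHT = {"guideline": 1.0, "review": 0.6, "forum": 0.3}

@dataclass
class Doc:
    text: str
    tier: str

def similarity(query: str, text: str) -> float:
    """Toy token-overlap score standing in for embedding similarity."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def retrieve(query: str, docs: list[Doc], k: int = 2) -> list[Doc]:
    """Return the top-k documents by tier-weighted similarity."""
    ranked = sorted(
        docs,
        key=lambda d: similarity(query, d.text) * TIER_WEIGHT[d.tier],
        reverse=True,
    )
    return ranked[:k]

docs = [
    Doc("sepsis management per official guideline", "guideline"),
    Doc("sepsis management anecdote from a forum", "forum"),
    Doc("narrative review of sepsis management", "review"),
]
top = retrieve("sepsis management guideline", docs)
print([d.tier for d in top])  # guideline evidence ranks first
```

Even with identical lexical overlap, the forum document is demoted below the review by its tier weight, which is the core of the prioritization idea the abstract describes.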