Energy-Based Transformers are Scalable Learners and Thinkers • Paper • 2507.02092 • Published Jul 2
When Models Lie, We Learn: Multilingual Span-Level Hallucination Detection with PsiloQA • Paper • 2510.04849 • Published Oct 6
The Curious Case of Factual (Mis)Alignment between LLMs' Short- and Long-Form Answers • Paper • 2510.11218 • Published Oct 13
How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild • Paper • 2502.12769 • Published Feb 18