MIT highlights a training method that makes reasoning models better at expressing uncertainty without losing accuracy. For builders, that matters because calibrated confidence is a practical lever for reducing hallucinations and improving agent reliability.
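The article does not describe the training method itself, but "calibrated confidence" has a concrete, measurable meaning. As a minimal sketch, one common way to quantify it is expected calibration error (ECE): bin predictions by stated confidence and measure the gap between average confidence and actual accuracy in each bin. The function below is a generic illustration, not the method from the MIT work.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted gap between mean confidence and accuracy, per bin.

    confidences: list of floats in [0, 1] (model's stated confidence)
    correct:     list of 0/1 flags (was the answer right?)
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Equal-width bins, half-open (lo, hi]; bin 0 also takes c == 0.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece

# Perfectly calibrated toy case: 80% confidence, 4 of 5 correct.
print(expected_calibration_error([0.8] * 5, [1, 1, 1, 1, 0]))  # → 0.0
```

A model that says "90% sure" and is right 90% of the time scores an ECE near zero; a model that says "100% sure" while hallucinating scores near one, which is the failure mode the article says this training method targets.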

### Teaching AI models to say "I'm not sure"

A new training method improves the reliability of AI confidence estimates without sacrificing performance, addressing a root cause of hallucination in reasoning models. April 22, 2026

### Jacob Andreas and Brett McGuire named Edgerton Award winners

The associate professors of EECS and chemistry, respectively, are honored for exceptional contributions to teaching, research, and service at…