
Posts

Featured

Why Language Models Hallucinate

Imagine an AI legal assistant, tasked with drafting a memo, confidently citing a series of compelling but entirely non-existent legal precedents. This scenario is not science fiction; it is a manifestation of "LLM hallucination," a persistent challenge in artificial intelligence that poses significant risks to the legal profession. For lawyers, whose credibility rests on precision and verifiable facts, understanding the roots of this phenomenon is no longer an academic exercise but a professional necessity. OpenAI recently released a research paper, "Why Language Models Hallucinate," which argues that language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty. An LLM hallucination is a plausible but false statement generated by an AI model (p. 1). These are not random glitches but confident, well-articulated falsehoods, such as providing an incorrect dissertation title or inventing a birth date...
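The paper's central claim, that standard evaluation rewards guessing over admitting uncertainty, comes down to a simple expected-value comparison. Here is a minimal illustrative sketch, assuming a 0/1 accuracy grader that gives no credit for abstaining; the function name and the probabilities are invented for illustration, not taken from the paper.

```python
def expected_score(p_correct: float, guess: bool) -> float:
    """Expected score under binary (0/1) grading.

    A guess earns 1 with probability p_correct and 0 otherwise;
    abstaining ("I don't know") always earns 0.
    """
    return p_correct if guess else 0.0

# Even a near-hopeless guess (say, a 2% chance of naming the right
# birth date) out-scores an honest abstention in expectation:
print(expected_score(0.02, guess=True))   # 0.02
print(expected_score(0.02, guess=False))  # 0.0
```

Under this kind of scoring, a model optimized to maximize its grade will always guess rather than acknowledge uncertainty, which is the incentive problem the paper identifies.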

Latest posts

Swimming up Niagara Falls! : the battle to get disability rights added to the Canadian Charter of Rights and Freedoms

How to think about AI : a guide for the perplexed

May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases – And What We Can Do about It

Contracting and contract law in the age of artificial intelligence

Smart, not loud : how to get noticed at work for all the right reasons

AI - Limits and Prospects of Artificial Intelligence

Legal Guide to Emerging Technologies

Virtual Advocacy: Litigating from a Distance