ChatGPT can discuss chess brilliantly, yet it routinely makes illegal moves during actual gameplay. AI researcher Gary Marcus argues this is not a quirk but a symptom: large language models lack “world models,” the internal representations of reality that humans use to understand how things work. This limitation may be fundamental to current AI architectures, with significant implications for how we should think about and use these powerful but imperfect tools.
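To make the contrast concrete, here is a minimal sketch of the kind of “world model” a chess engine maintains and a language model does not: an explicit board state plus legality rules. It uses the real python-chess library (`pip install chess`); the move sequence fed to it is a hypothetical example of fluent-looking model output, not an actual ChatGPT transcript.

```python
import chess

def validate_line(moves_san):
    """Play a sequence of SAN moves, reporting the first illegal one."""
    board = chess.Board()  # explicit state: piece placement, turn, castling rights
    for san in moves_san:
        try:
            move = board.parse_san(san)  # raises ValueError if illegal here
        except ValueError:
            return f"illegal move {san!r} at position {board.fen()}"
        board.push(move)  # update the state; every later check depends on it
    return "all moves legal"

# Hypothetical transcript: plausible-sounding notation in which the fourth
# move, Bxe5, is illegal in the resulting position despite reading fluently.
print(validate_line(["e4", "e5", "Nf3", "Bxe5"]))
```

The difference in kind is visible in the loop: the engine consults a tracked state before accepting each move, whereas a language model predicts text that merely resembles legal play.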