Taken together, our experiments suggest that [Large Language Models] possess some genuine capacity to monitor and control their own internal states. This does not mean they can do so reliably, or all the time. In fact, most of the time models fail to demonstrate introspection—they are either unaware of their internal states or unable to report […]