The Apple paper has been widely cited as proof that today's LLMs fundamentally lack scalable reasoning ability, which, as I argued here, may not have been the fairest way to frame the study.

Large Reasoning Models (LRMs)

Several companies are currently working on Large Reasoning Models (LRMs), also known as Reasoning Language Models (RLMs). These are LLMs that have been trained to break multi-step reasoning prompts into a series of steps. For example, a question might begin: "There are 100 objects in a box..." (a minimal sketch of this prompting style appears at the end of this section).

However, this is not "reasoning." Ultimately, today's AI, including LLMs with chain-of-thought (CoT) prompting, derives its capabilities from correlating variables across datasets, showing us how much one variable changes when others change. Such systems can deduce specific observations from generalized information, or induce general conclusions from specific data.

Instead, the authors contend, these reasoning LLMs are actually performing a kind of "pattern matching," and their apparent reasoning ability seems to fall apart once a task becomes too complex.

Today's LLMs, such as GPT-4 and Claude, are impressive pattern-recognition tools, but they are nowhere near true intelligence. Despite the hype, they lack core AGI traits like reasoning, autonomy, and real-world understanding. This article cuts through the noise, explaining why fears of imminent AGI are wildly premature.
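To make the "series of steps" idea concrete, here is a minimal sketch of chain-of-thought prompting. It assumes the official `openai` Python package (v1+) with an `OPENAI_API_KEY` in the environment; the model name and the completed version of the box question are placeholders invented for illustration, not details taken from any paper discussed here.

```python
# A minimal sketch of chain-of-thought (CoT) prompting: the same question
# is asked once directly and once with an instruction to reason in steps.
# Assumes the official `openai` package (>=1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

# Hypothetical completion of the article's truncated example
# ("There are 100 objects in a box...").
question = (
    "There are 100 objects in a box. 40 are red, half of the remainder "
    "are blue, and the rest are green. How many objects are green?"
)

# Direct prompt: the model answers in one shot.
direct = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

# CoT prompt: the model is asked to lay out intermediate steps
# before committing to a final answer.
cot = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": question + " Think step by step, then give the final answer.",
    }],
)

print("Direct:", direct.choices[0].message.content)
print("CoT:   ", cot.choices[0].message.content)
```

An LRM is, roughly, a model trained so that the second behavior happens without being asked for explicitly.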
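The "how much one changes when others change" claim is, statistically, a description of correlation. As a reminder of what that measures, Pearson's r can be computed directly from Python's standard library (the toy data below is invented for illustration):

```python
from statistics import correlation  # available in Python 3.10+

# Invented toy data: hours studied vs. test score.
hours = [1, 2, 3, 4, 5, 6]
scores = [52, 55, 61, 70, 74, 81]

# Pearson's r measures how much one variable changes when the other does,
# the kind of statistical relationship the article argues LLM
# capabilities ultimately reduce to.
r = correlation(hours, scores)
print(f"Pearson r = {r:.3f}")  # near 1.0 for this roughly linear data
```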