Additionally, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks