Language Models

PhD Knowledge Not Required: A Reasoning Challenge for Large Language Models

A new benchmark built from the NPR Sunday Puzzle Challenge evaluates AI models' reasoning using questions that require only general knowledge, yet remain challenging to solve. OpenAI o1 achieves the strongest results, while the analysis surfaces novel failure modes, such as DeepSeek R1 explicitly giving up on hard problems or producing answers it knows are wrong, and identifies the reasoning length beyond which longer outputs stop improving accuracy for each system.
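The paper's exact harness isn't reproduced here, but the evaluation pattern is a plain question-answer loop over short-answer puzzles. Below is a minimal sketch assuming a hypothetical `sunday_puzzle.jsonl` file of question/answer pairs and the OpenAI chat API; the model name, prompt, and lenient exact-match scoring are illustrative assumptions, not the paper's protocol.

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def normalize(text: str) -> str:
    """Lowercase and drop non-alphanumerics for lenient exact-match scoring."""
    return "".join(ch for ch in text.lower() if ch.isalnum())


def evaluate(path: str, model: str = "o1") -> float:
    """Score a model on a JSONL file of {"question": ..., "answer": ...} pairs."""
    correct = total = 0
    with open(path) as f:
        for line in f:
            item = json.loads(line)
            resp = client.chat.completions.create(
                model=model,
                messages=[{
                    "role": "user",
                    "content": item["question"] + "\nReply with only the final answer.",
                }],
            )
            prediction = resp.choices[0].message.content or ""
            correct += normalize(prediction) == normalize(item["answer"])
            total += 1
    return correct / total if total else 0.0


if __name__ == "__main__":
    # Hypothetical dataset file; the released benchmark has its own format.
    print(f"accuracy: {evaluate('sunday_puzzle.jsonl'):.1%}")
```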

LIMO: Less is More for Reasoning

LIMO challenges conventional wisdom by reaching state-of-the-art mathematical reasoning with only 817 curated training samples, outperforming models trained on roughly 100x more data. The work proposes the Less-Is-More Reasoning Hypothesis: when domain knowledge is already well encoded during pre-training, complex reasoning can emerge from a minimal set of precisely constructed demonstrations that show the model how to use that knowledge.
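Mechanically, the recipe is ordinary supervised fine-tuning; the novelty lies in curating the 817 samples. Here is a minimal sketch of that fine-tuning step, assuming Hugging Face transformers, a hypothetical `limo_samples.jsonl` of problem/solution pairs, and illustrative hyperparameters; the base model, prompt format, and schedule are assumptions, not the paper's exact configuration.

```python
import json

import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative base model; LIMO's point is that it is already strong from pre-training.
model_name = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.train()

# ~817 curated {"problem": ..., "solution": ...} pairs (hypothetical file name).
with open("limo_samples.jsonl") as f:
    samples = [json.loads(line) for line in f]


def collate(batch):
    texts = [f"Problem: {s['problem']}\nSolution: {s['solution']}" for s in batch]
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=2048, return_tensors="pt")
    labels = enc["input_ids"].clone()
    labels[enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc


loader = DataLoader(samples, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for epoch in range(3):  # a few epochs suffice with so little data
    for batch in loader:
        loss = model(**batch).loss  # standard next-token prediction loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```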