Machine Learning

Deep dive into LLMs like ChatGPT by Andrej Karpathy (TL;DR)

Andrej Karpathy's deep dive into LLMs covers the complete lifecycle from pretraining to post-training, explaining tokenization, neural network architectures, and fine-tuning processes. The comprehensive guide explores how LLMs process information, why they hallucinate, and how reinforcement learning improves their performance and reasoning capabilities.

LIMO: Less is More for Reasoning

LIMO challenges conventional wisdom by achieving superior mathematical reasoning capabilities using only 817 training samples, outperforming models trained on 100x more data. The research introduces the Less-Is-More Reasoning Hypothesis, suggesting that complex reasoning can emerge through minimal but precise demonstrations when domain knowledge is well-encoded during pre-training.
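The training recipe itself is ordinary supervised fine-tuning; what LIMO varies is data curation. A minimal sketch of the selection idea, where the `Sample` fields and scoring thresholds are illustrative stand-ins rather than the paper's actual pipeline:

```python
from dataclasses import dataclass


@dataclass
class Sample:
    problem: str
    solution: str          # chain-of-thought demonstration
    difficulty: float      # 0..1, harder is better (illustrative score)
    reasoning_steps: int   # longer, well-structured chains preferred


def select_limo_subset(pool: list[Sample], budget: int = 817) -> list[Sample]:
    """Keep only hard problems with rich reasoning chains, up to `budget`.

    Stand-in for LIMO-style curation: once the base model already encodes
    the domain, quality of demonstrations matters more than quantity.
    """
    candidates = [s for s in pool
                  if s.difficulty >= 0.8 and s.reasoning_steps >= 5]
    candidates.sort(key=lambda s: (s.difficulty, s.reasoning_steps),
                    reverse=True)
    return candidates[:budget]
```

The resulting few-hundred-sample subset is then used as a standard fine-tuning dataset; the hypothesis is that these minimal, precise demonstrations merely elicit reasoning the pre-trained model already contains.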

http://www.jacksonpollock.org/ by Miltos Manetas!

OpenAI Sora

OpenAI's Sora, a text-to-video AI model, can create highly realistic videos up to 60 seconds long from text descriptions, showcasing remarkable consistency and the potential to revolutionize video content creation. Sora's ability to model physical motion and time, along with its grasp of the real world, represents a significant advance in AI-generated media.

Understanding Reasoning LLMs

A comprehensive exploration of reasoning LLMs focuses on four main approaches: inference-time scaling, pure reinforcement learning, supervised fine-tuning followed by RL, and pure supervised fine-tuning with distillation. The article analyzes DeepSeek R1's development pipeline and compares it with OpenAI's o1, highlighting how reasoning capabilities can emerge through different training methodologies. Practical insights are provided for developing reasoning models on limited budgets, including alternative approaches such as journey learning and small-scale implementations.
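The distillation route in particular is simple to state: sample reasoning traces from a strong teacher, keep only those whose final answer is correct, and fine-tune a smaller student on the surviving (prompt, trace) pairs. A hedged sketch with a stubbed teacher, where `teacher_generate` and the answer-extraction logic are placeholders rather than any specific model's API:

```python
from typing import Callable


def build_distillation_set(
    prompts: list[str],
    answers: list[str],
    teacher_generate: Callable[[str], str],
    extract_answer: Callable[[str], str],
) -> list[tuple[str, str]]:
    """Collect (prompt, reasoning-trace) pairs where the teacher's final
    answer matches the reference; these become supervised fine-tuning
    targets for a smaller student model."""
    dataset = []
    for prompt, gold in zip(prompts, answers):
        trace = teacher_generate(prompt)      # full chain of thought
        if extract_answer(trace) == gold:     # rejection-sample on correctness
            dataset.append((prompt, trace))
    return dataset
```

Rejection sampling on answer correctness is what keeps the student from imitating the teacher's mistakes; the student never sees traces that ended in a wrong answer.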

Gemini 2.0 is now available to everyone

Google announces general availability of Gemini 2.0 Flash across its AI products, introduces new experimental 2.0 Pro model with enhanced coding capabilities, and launches cost-efficient 2.0 Flash-Lite model. The updates include improved performance benchmarks, expanded context windows up to 2 million tokens, and multimodal capabilities, with more features planned for release in coming months.

S1: The $6 R1 Competitor?

The post examines s1, a model fine-tuned for roughly $6 in compute that can run on a laptop, yet approaches state-of-the-art reasoning performance using only 1,000 curated training examples and a simple inference-time scaling technique. It shows how easily a model's "thinking time" can be controlled at decode time and highlights how cheap, fast experimentation is accelerating the pace of AI development.

OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models

OmniHuman is an advanced AI system capable of generating realistic human videos in diverse visual and audio styles, supporting various aspect ratios and body proportions. The system excels at producing high-quality animations driven by music, speech, or video inputs, handling complex gestures and accommodating a wide range of body poses and singing styles.