AI Research

Introducing DeepSearcher: A Local Open Source Deep Research

DeepSearcher is an open-source research agent that builds on earlier deep-research work, adding conditional execution flow, query routing, and improved interfaces. It uses SambaNova's custom hardware for fast inference with the DeepSeek-R1 model, and automates research through a four-step process: defining the question, researching, analyzing, and synthesizing a report.
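
To make the flow concrete, here is a minimal, runnable sketch of such a four-step loop with conditional query routing. Every helper below is a toy stand-in (an LLM call, a vector-store search), not DeepSearcher's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    sufficient: bool
    new_sub_queries: list = field(default_factory=list)

def decompose_question(question):
    # Stand-in for an LLM call that splits the question into sub-queries.
    return [question]

def route_query(query, collections):
    # Conditional routing: search only the collections that look relevant.
    return [c for c in collections if any(w in c for w in query.lower().split())]

def search_collection(query, collection):
    # Stand-in for a vector-store search; returns retrieved snippets.
    return [f"snippet about '{query}' from {collection}"]

def reflect(question, findings):
    # Stand-in for a reasoning-model call judging whether evidence suffices.
    return Verdict(sufficient=bool(findings))

def synthesize(question, findings):
    # Stand-in for the final LLM synthesis step.
    return f"Report on {question!r} built from {len(findings)} snippets."

def deep_search(question, collections, max_rounds=3):
    sub_queries = decompose_question(question)    # 1. define the question
    findings = []
    for _ in range(max_rounds):
        for q in sub_queries:                     # 2. research
            for c in route_query(q, collections):
                findings.extend(search_collection(q, c))
        verdict = reflect(question, findings)     # 3. analyze
        if verdict.sufficient:
            break
        sub_queries = verdict.new_sub_queries     # follow up on the gaps
    return synthesize(question, findings)         # 4. synthesize

print(deep_search("hardware for fast inference", ["hardware papers", "ml blogs"]))
```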

Google Co-Scientist AI cracks superbug problem in two days! — because it had been fed the team’s previous paper with the answer in it

Google's Co-Scientist AI tool, powered by the Gemini LLM, made headlines for supposedly solving a superbug problem in 48 hours, but the proposed solution had already appeared in the team's own earlier paper, which the model had been given as input. Similar patterns of overstated achievement turn up in Google's other AI-for-science claims, including drug discovery and materials synthesis.

Please Commit More Blatant Academic Fraud

A critical analysis of academic fraud in AI research argues that explicit fraud could paradoxically improve scientific standards by forcing greater scrutiny and skepticism. The author suggests that prevalent subtle fraud has become normalized in academia, leading to widespread publication of papers without scientific merit. The piece advocates for intentional academic misconduct as a way to expose and ultimately reform the field's compromised research practices.

Accelerating scientific breakthroughs with an AI co-scientist

Google introduces an AI co-scientist system built with Gemini 2.0, designed to generate novel research hypotheses and accelerate scientific discovery through multi-agent collaboration. Its predictions were experimentally validated in biomedical applications, including drug repurposing and antimicrobial-resistance research. Access will be offered through a Trusted Tester Program for research organizations.

Benchmarking Vision-Language Models on Optical Character Recognition in Dynamic Video Environments

A new benchmark evaluates Vision-Language Models against traditional OCR systems for text recognition in video environments, using a dataset of 1,477 annotated frames from diverse sources. Advanced models like Claude-3, Gemini-1.5, and GPT-4o demonstrate superior performance in many scenarios, though challenges with hallucinations and occluded text persist.
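
For a sense of the scoring involved, here is a small, self-contained sketch of one common OCR metric, character error rate (CER), applied to made-up frame transcriptions; a real harness would first run each VLM over the 1,477 annotated frames:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    # Character error rate: edits per reference character (lower is better).
    return levenshtein(prediction, reference) / max(len(reference), 1)

# Toy frame-level pairs of (annotated ground truth, model transcription).
frames = [
    ("SPEED LIMIT 50", "SPEED LIMIT 50"),  # exact match
    ("OPEN 24 HOURS", "OPEN 24 HQURS"),    # one-character OCR slip
]
scores = [cer(pred, ref) for ref, pred in frames]
print(f"mean CER: {sum(scores) / len(scores):.3f}")
```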

LIMO: Less is More for Reasoning

LIMO challenges conventional wisdom by achieving superior mathematical reasoning capabilities using only 817 training samples, outperforming models trained on 100x more data. The research introduces the Less-Is-More Reasoning Hypothesis, suggesting that complex reasoning can emerge through minimal but precise demonstrations when domain knowledge is well-encoded during pre-training.
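
The recipe LIMO implies is ordinary supervised fine-tuning, just on a small, carefully curated set. A hedged sketch using Hugging Face trl follows; the dataset field names and trainer arguments are assumptions to verify against current library versions:

```python
# Sketch only: plain supervised fine-tuning on LIMO's 817 curated samples
# with Hugging Face trl. The dataset field names ("question", "solution")
# and trainer arguments are assumptions; check them against your versions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("GAIR/LIMO", split="train")

# Convert each sample to a chat-style record so the trainer can apply the
# base model's chat template.
def to_messages(example):
    return {"messages": [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["solution"]},
    ]}

dataset = dataset.map(to_messages, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",  # the paper's reported base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="limo-sft"),
)
trainer.train()
```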

Understanding Reasoning LLMs

A comprehensive exploration of reasoning LLMs focuses on four main approaches: inference-time scaling, pure reinforcement learning, supervised finetuning with RL, and pure supervised finetuning with distillation. The article analyzes DeepSeek R1's development pipeline and compares it with OpenAI's o1, highlighting how reasoning capabilities can emerge through different training methodologies. Practical insights are provided for developing reasoning models on limited budgets, including alternative approaches like journey learning and small-scale implementations.
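
The first of those four approaches, inference-time scaling, is easy to sketch via self-consistency: sample several reasoning chains at nonzero temperature and majority-vote the final answers. The generate function below is a toy stand-in for a real model call:

```python
import random
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    # Hypothetical model call; here a noisy toy that usually answers "42".
    answer = random.choices(["42", "41"], weights=[0.8, 0.2])[0]
    return f"...reasoning chain...\nFinal answer: {answer}"

def extract_answer(completion: str) -> str:
    return completion.rsplit("Final answer:", 1)[-1].strip()

def self_consistency(prompt: str, k: int = 9) -> str:
    # More samples -> more inference-time compute -> higher accuracy,
    # with no change to the model's weights.
    answers = [extract_answer(generate(prompt)) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7? Reason step by step."))
```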

S1: The $6 R1 Competitor?

The s1 paper demonstrates how a model fine-tuned for roughly $6 in compute, and small enough to run on a laptop, approaches state-of-the-art reasoning performance using only 1,000 curated training examples and inference-time scaling. The research reveals simple yet effective methods for controlling how long a model thinks, and highlights how cheap experimentation is accelerating the pace of AI development.
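
The headline technique is "budget forcing": if the model tries to close its reasoning too soon, the end-of-thinking delimiter is suppressed and "Wait" is appended so it keeps thinking; past an upper budget, thinking is simply cut off. A toy, runnable sketch follows (the canned model function stands in for a real decoding backend):

```python
import itertools

END = "</think>"

# Canned "model" that tries to stop thinking after each short burst; a real
# backend would decode tokens until it emits the end-of-thinking marker.
_bursts = itertools.cycle([
    f"The answer seems to be 41. {END}",
    f" rechecking: 6 * 7 = 42, so the answer is 42. {END}",
])

def model(context: str) -> str:
    return next(_bursts)

def think_with_budget(prompt: str, min_words: int = 12, max_words: int = 200) -> str:
    trace = ""
    while True:
        burst = model(prompt + trace)
        stopped_early = END in burst
        trace += burst.split(END)[0]
        words = len(trace.split())
        if words >= max_words:
            break                   # upper budget hit: cut thinking off
        if stopped_early and words >= min_words:
            break                   # thought long enough: allow the stop
        if stopped_early:
            trace += " Wait,"       # suppress </think>, force more thought
    return trace + f" {END}"

print(think_with_budget("What is 6 * 7? <think>"))
```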