AGI

ARC-AGI Without Pretraining

A novel approach demonstrates that lossless information compression at inference time can produce intelligent behavior, achieving 34.75% accuracy on the ARC-AGI training set without pretraining or large datasets. The method, CompressARC, solves each puzzle in about 20 minutes using only a compression objective and inference-time computation, challenging the conventional reliance on extensive pretraining and data.
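CompressARC itself trains a small network per puzzle, but the underlying principle — that better compression implies better modeling — can be illustrated with a minimal, self-contained sketch. The example below is not the paper's method; it uses gzip and the well-known normalized compression distance (NCD) to show how raw compression alone can drive a simple classification decision:

```python
import gzip

def compressed_size(data: bytes) -> int:
    return len(gzip.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance: smaller means x and y share
    # more structure, because compressing them together saves more bytes.
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(sample: bytes, labeled: dict[str, list[bytes]]) -> str:
    # Pick the label whose examples compress best together with the sample.
    return min(labeled, key=lambda lbl: min(ncd(sample, ex) for ex in labeled[lbl]))
```

For instance, `classify(b"a" * 300, {"runs": [b"a" * 200], "pairs": [b"xy" * 100]})` returns `"runs"`, since a run of identical bytes shares far more compressible structure with another run than with alternating pairs.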

OpenAI, in deep trouble

OpenAI's GPT-4.5 release has received harsh criticism from industry experts, signaling a potential decline in the company's market leadership. The company faces significant challenges including high operational costs, diminishing competitive advantage, and the departure of key personnel. Despite previous ambitious claims about AGI development, OpenAI appears to be struggling with technical advancement and financial sustainability.


The progression of AI capabilities should be measured by the ratio of useful output per unit of human input, rather than by AGI timelines. Drawing a parallel between self-driving cars and language models, the piece argues that the focus should shift to how long AI systems can operate effectively without human intervention. While AI systems are becoming increasingly productive, they may never achieve complete autonomy.