2025-02-13

GitHub - deepseek-ai/DeepGEMM: DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling

DeepGEMM is a CUDA library offering efficient FP8 matrix multiplications with fine-grained scaling, supporting both normal and Mixture-of-Experts GEMMs. The lightweight library matches or exceeds the performance of expert-tuned libraries, featuring runtime compilation and Hopper tensor core optimization, while maintaining a simple ~300-line core kernel.
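The idea behind fine-grained scaling can be illustrated numerically: each operand tile gets its own scale factor so FP8's narrow dynamic range is spent locally rather than across the whole matrix. Below is a minimal NumPy sketch of block-wise quantization followed by a scaled GEMM. It emulates FP8 storage with a coarse rounding step; the tile shapes, function names, and rounding are illustrative assumptions, not DeepGEMM's actual layout or API (the real kernels use e.g. 1x128 activation tiles and run on Hopper tensor cores).

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 e4m3

def quantize_blockwise(x, block=128):
    """Emulate FP8 storage of a 2-D array with one scale per (block x block) tile.

    Illustrative sketch only: real FP8 rounding is floating-point, not the
    fixed-step rounding used here.
    """
    m, n = x.shape
    q = np.empty_like(x, dtype=np.float64)
    scales = np.empty((m // block, n // block))
    for i in range(0, m, block):
        for j in range(0, n, block):
            tile = x[i:i + block, j:j + block]
            s = max(np.abs(tile).max() / FP8_E4M3_MAX, 1e-12)
            scales[i // block, j // block] = s
            # coarse stand-in for FP8 rounding: fixed steps after scaling
            q[i:i + block, j:j + block] = np.round(tile / s * 8.0) / 8.0
    return q, scales

def scaled_gemm(qa, sa, qb, sb, block=128):
    """Multiply block-quantized operands, applying each tile's scale per K-block
    and accumulating in higher precision (the role of the CUDA cores on Hopper)."""
    m, k = qa.shape
    n = qb.shape[1]
    out = np.zeros((m, n))
    for kb in range(0, k, block):
        row_scale = np.repeat(sa[:, kb // block], block)[:, None]
        col_scale = np.repeat(sb[kb // block, :], block)[None, :]
        out += (qa[:, kb:kb + block] * row_scale) @ (qb[kb:kb + block, :] * col_scale)
    return out
```

With uniformly distributed inputs, the per-tile scales keep the quantized product within a fraction of a percent of the float64 reference, which is the point of scoping scales to small tiles.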

Related articles

GitHub - deepseek-ai/profile-data: Analyze computation-communication overlap in V3/R1.

Detailed profiling data from a training and inference framework is shared, highlighting communication-computation overlap strategies with PyTorch Profiler visualizations. The framework implements DualPipe with MoE layers across different configurations, including EP64/TP1 for training and EP32/TP1 for prefilling, demonstrating balanced routing and micro-batch optimization techniques.

RoboPianist: Dexterous Piano Playing with Deep Reinforcement Learning

Researchers developed a deep reinforcement learning system that trains anthropomorphic robot hands to play piano, using MuJoCo physics engine and MIDI files for simulation. The system achieves high performance by incorporating human fingering patterns and energy optimization, demonstrating significant improvements over baseline methods with an average F1 score of 0.79 across test pieces.

The FFT Strikes Back: An Efficient Alternative to Self-Attention

FFTNet introduces a novel approach to sequence processing using Fast Fourier Transform, achieving O(n log n) complexity compared to traditional self-attention's quadratic complexity. The framework employs spectral filtering and modReLU activation to efficiently capture long-range dependencies, demonstrating superior performance on Long Range Arena and ImageNet benchmarks.
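The mechanism in that summary can be sketched in a few lines: transform the sequence to the frequency domain, apply a learned complex filter and a modReLU nonlinearity (which thresholds the magnitude while preserving phase), and transform back. The NumPy sketch below assumes a single real-valued channel-wise filter step; the filter shape `w`, the bias `b`, and the function names are assumptions for illustration, not FFTNet's published parameterization.

```python
import numpy as np

def modrelu(z, b):
    """modReLU: keep the phase of complex z, threshold its magnitude by bias b."""
    mag = np.abs(z)
    return np.where(mag + b > 0, (mag + b) * z / np.maximum(mag, 1e-9), 0.0)

def fftnet_mix(x, w, b):
    """One spectral mixing step, assuming:
         x : (seq_len, d_model) real input
         w : (seq_len // 2 + 1, d_model) complex spectral filter (learned)
         b : modReLU bias (learned scalar or broadcastable array)
       Cost is O(n log n) in seq_len, versus O(n^2) for self-attention."""
    X = np.fft.rfft(x, axis=0)                     # to frequency domain
    X = modrelu(X * w, b)                          # filter + nonlinearity in spectrum
    return np.fft.irfft(X, n=x.shape[0], axis=0)   # back to the sequence domain
```

With an identity filter (`w = 1`) and zero bias, the step reduces to an exact round trip through the FFT, which is a convenient sanity check when implementing variants.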

GitHub - deepseek-ai/DeepEP: DeepEP: an efficient expert-parallel communication library

DeepEP is a communication library optimized for Mixture-of-Experts (MoE) and expert parallelism, providing high-throughput GPU kernels and low-latency operations. The library supports both intranode and internode communication, offering specialized kernels for asymmetric-domain bandwidth forwarding and low-latency inference decoding, with comprehensive support for FP8 and RDMA networks.
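What DeepEP's dispatch/combine kernels exchange across ranks is, conceptually, a per-expert bucketing of tokens with their gate weights. The single-process NumPy sketch below shows that routing step only; the function name and return layout are assumptions for illustration, and it performs no actual GPU or RDMA communication.

```python
import numpy as np

def dispatch_topk(logits, k=2):
    """Route each token to its top-k experts.

    logits : (num_tokens, num_experts) raw router scores.
    Returns a dict mapping expert id -> list of (token_id, gate_weight),
    with gates softmax-normalized over each token's chosen k experts --
    roughly the payload an expert-parallel all-to-all would ship per rank.
    """
    num_tokens, num_experts = logits.shape
    topk = np.argsort(-logits, axis=1)[:, :k]              # expert ids per token
    gates = np.take_along_axis(logits, topk, axis=1)
    gates = np.exp(gates - gates.max(axis=1, keepdims=True))
    gates /= gates.sum(axis=1, keepdims=True)              # softmax over the k picks
    buckets = {e: [] for e in range(num_experts)}
    for t in range(num_tokens):
        for j in range(k):
            buckets[int(topk[t, j])].append((t, float(gates[t, j])))
    return buckets
```

In a real expert-parallel setup each bucket would be sent to the rank hosting that expert, and the combine phase would gather the expert outputs back and sum them weighted by the gates.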

GitHub - deepseek-ai/FlashMLA

FlashMLA is a high-performance MLA decoding kernel optimized for Hopper GPUs, achieving up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in computation-bound scenarios. The implementation supports BF16 and paged kvcache, requiring CUDA 12.3+ and PyTorch 2.0+.

Grok 3: Another Win For The Bitter Lesson

xAI's Grok 3 demonstrates unprecedented performance, matching or exceeding models from established labs like OpenAI and Google DeepMind. The success reinforces the 'Bitter Lesson' principle that scaling compute power consistently outperforms algorithmic optimization in AI development. The paradigm shift from pre-training to post-training has leveled the playing field for newcomers while highlighting the critical importance of GPU access.

Ask HN: Is anybody building an alternative transformer?

Various alternative architectures to Transformers are being explored, with Mamba showing promise through faster inference and lower compute costs, performing on par with transformers at up to 7B parameters. Researchers are investigating recurrent architectures, state-space models, and efficient attention mechanisms, while debating the future direction of foundation models.

We Were Wrong About GPUs

A detailed account of Fly.io's venture into GPU infrastructure reveals challenges in meeting market demands, as developers primarily seek LLM APIs rather than raw GPU access. Despite significant investment in GPU machines and security measures, the project faced technical hurdles with Nvidia drivers and virtualization, while market trends shifted towards API-based AI solutions.

Zed now predicts your next edit with Zeta, our new open model - Zed Blog

Zed introduces an AI-powered edit prediction feature using Zeta, their new open-source model derived from Qwen2.5-Coder-7B. The editor now anticipates and suggests edits that can be applied by pressing tab, incorporating sophisticated latency optimization and thoughtful integration with existing features.