2025-02-26

GitHub - deepseek-ai/profile-data: Analyze computation-communication overlap in V3/R1.

DeepSeek shares detailed profiling data from its training and inference framework, captured with the PyTorch Profiler, to highlight its communication-computation overlap strategies. The traces cover DualPipe with MoE layers across several configurations, including EP64/TP1 for training and EP32/TP1 for prefilling, demonstrating balanced routing and micro-batch optimization techniques.
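
A minimal sketch of how such timelines are captured with the PyTorch Profiler; the model, sizes, and iteration count below are placeholders, not DeepSeek's actual setup.

```python
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

# Placeholder model standing in for a real pipeline stage.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).cuda()
data = torch.randn(8, 1024, device="cuda")

# Record both CPU-side launches and CUDA kernels so compute and
# (in a distributed run) NCCL communication land on the same timeline.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    for _ in range(5):
        loss = model(data).sum()
        loss.backward()

# Open the exported trace in chrome://tracing or Perfetto, as with the
# visualizations shared in the repository.
prof.export_chrome_trace("trace.json")
```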


Related articles

RoboPianist: Dexterous Piano Playing with Deep Reinforcement Learning

Researchers developed a deep reinforcement learning system that trains anthropomorphic robot hands to play the piano in simulation, using the MuJoCo physics engine and MIDI files to specify the target pieces. The system achieves high performance by incorporating human fingering patterns and energy optimization, showing significant improvements over baseline methods with an average F1 score of 0.79 across test pieces.

GitHub - deepseek-ai/DualPipe: A bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training.

DualPipe is a bidirectional pipeline parallelism algorithm that optimizes computation-communication overlap in neural network training by fully overlapping the forward and backward phases. The algorithm, presented in the DeepSeek-V3 Technical Report, reduces pipeline bubbles and requires users to implement custom overlapped forward-backward methods for their own modules.
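
As a toy illustration of the call pattern such overlapped methods rely on, the sketch below interleaves one micro-batch's forward pass with the previous micro-batch's backward pass; the model, names, and loop are illustrative only and do not reflect DualPipe's actual interface or overlap mechanism.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
micro_batches = [torch.randn(4, 512) for _ in range(4)]

pending_loss = None
for x in micro_batches:
    if pending_loss is not None:
        pending_loss.backward()       # backward of the previous micro-batch
    pending_loss = model(x).sum()     # forward of the current micro-batch

if pending_loss is not None:
    pending_loss.backward()           # drain the last pending backward
```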

GitHub - deepseek-ai/DeepEP: DeepEP: an efficient expert-parallel communication library

DeepEP is a communication library optimized for Mixture-of-Experts (MoE) and expert parallelism, providing high-throughput GPU kernels and low-latency operations. The library supports both intranode and internode communication, offering specialized kernels for asymmetric-domain bandwidth forwarding and low-latency inference decoding, with comprehensive support for FP8 and RDMA networks.
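
To illustrate the communication pattern these kernels accelerate, the sketch below implements a naive MoE dispatch/combine with plain torch.distributed collectives; it uses none of DeepEP's API, and the even per-rank token split is a simplifying assumption (real routing produces uneven splits).

```python
import torch
import torch.distributed as dist

def moe_dispatch_combine(tokens: torch.Tensor, expert) -> torch.Tensor:
    # Assumes tokens are already grouped by destination rank in equal chunks.
    recv = torch.empty_like(tokens)
    dist.all_to_all_single(recv, tokens)     # dispatch: send tokens to remote experts
    processed = expert(recv)                 # local expert computation
    out = torch.empty_like(processed)
    dist.all_to_all_single(out, processed)   # combine: return results to owning ranks
    return out

if __name__ == "__main__":
    # Launch with: torchrun --nproc_per_node=<N> this_file.py
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)
    torch.manual_seed(rank)                  # each rank holds a different expert
    tokens = torch.randn(8 * dist.get_world_size(), 16, device="cuda")
    expert = torch.nn.Linear(16, 16).cuda()
    out = moe_dispatch_combine(tokens, expert)
    print(f"rank {rank} output shape {tuple(out.shape)}")
    dist.destroy_process_group()
```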

GitHub - deepseek-ai/FlashMLA

FlashMLA is a high-performance MLA decoding kernel optimized for Hopper GPUs, achieving up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in computation-bound scenarios. The implementation supports BF16 and paged kvcache, requiring CUDA 12.3+ and PyTorch 2.0+.
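
The sketch below shows, in plain PyTorch, the paged KV-cache layout that such a decoding kernel consumes: fixed-size blocks plus a per-sequence block table mapping logical positions to physical blocks. The block size and shapes are arbitrary, and this is not FlashMLA's API.

```python
import torch

block_size = 64                              # tokens per cache block
num_blocks, num_heads, head_dim = 32, 8, 128

# Physical cache storage: [num_blocks, block_size, num_heads, head_dim]
kv_cache = torch.randn(num_blocks, block_size, num_heads, head_dim)

# One sequence of 150 tokens occupies 3 blocks (the last one partially full).
block_table = torch.tensor([5, 17, 2])
seq_len = 150

def gather_kv(kv_cache, block_table, seq_len):
    """Gather the contiguous K/V entries for one sequence from paged storage."""
    blocks = kv_cache[block_table]                   # [n_blocks, block_size, H, D]
    flat = blocks.reshape(-1, num_heads, head_dim)   # flatten the block dimension
    return flat[:seq_len]                            # trim the partial tail block

kv = gather_kv(kv_cache, block_table, seq_len)
print(kv.shape)  # torch.Size([150, 8, 128])
```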

Introduction to CUDA Programming for Python Developers

GPU architecture enables massive parallel processing through thousands of CUDA cores, in contrast with the CPU's smaller number of cores optimized for sequential, latency-sensitive work. CUDA programming gives developers a platform to harness the GPU's parallelism through kernel functions and thread management. The document explores memory management, shared memory optimization, and practical applications in LLM workloads such as FlashAttention.
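
A minimal example of the kernel-function and thread-indexing model from Python, using Numba's CUDA support (one common route for Python developers; the article's own examples may use a different toolkit).

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index across the whole grid
    if i < out.size:          # guard: the grid may overshoot the array length
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks_per_grid, threads_per_block](a, b, out)  # implicit host<->device copies

np.testing.assert_allclose(out, a + b, rtol=1e-6)
```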