AI Hardware

ewintr.nl

A detailed walkthrough of building a budget-friendly AI workstation for running local LLMs: roughly 1700 euros buys 48GB of VRAM using second-hand Nvidia Tesla P40 GPUs. The setup runs a variety of AI models locally at 5-15 tokens per second depending on model size, while keeping the author independent of cloud-based AI services.
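The 48GB VRAM figure is what determines which models the workstation can hold. A rough back-of-the-envelope check, sketched below in Python, shows why that budget comfortably fits a 4-bit-quantized 70B-parameter model but not the same model at full fp16. The overhead figure for KV cache and activations is an assumption for illustration, not a number from the article.

```python
# Rough sketch: does a quantized model fit in a given VRAM budget?
# overhead_gb (KV cache, activations, runtime buffers) is an assumed
# placeholder value, not taken from the article.

def fits_in_vram(params_billion: float, bits_per_weight: float,
                 vram_gb: float = 48.0, overhead_gb: float = 4.0) -> bool:
    """True if quantized weights plus a rough overhead fit in vram_gb."""
    weights_gb = params_billion * bits_per_weight / 8  # GB of weights
    return weights_gb + overhead_gb <= vram_gb

# 70B parameters at 4 bits/weight: 70 * 4 / 8 = 35 GB of weights.
print(fits_in_vram(70, 4))   # fits in 48GB with room for context
print(fits_in_vram(70, 16))  # full fp16 (140 GB) does not fit
```

The same arithmetic explains the 5-15 tokens-per-second range: larger models leave less headroom and move more bytes per generated token, so throughput drops as model size grows.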

DeepSeek research suggests Huawei's Ascend 910C delivers 60% of Nvidia H100 inference performance

DeepSeek researchers report that Huawei's Ascend 910C processor achieves 60% of the Nvidia H100's inference performance, which could reduce China's dependence on Nvidia GPUs despite export sanctions. The chip shows promise for inference workloads and responds well to manual optimization, but it still lags Nvidia's established ecosystem in long-term training reliability and stability.