Breakthrough in Domestic AI Computing Power Platform! Huawei NPU Trains Nearly Trillion Parameter Large Models

Huawei’s Ascend NPU cluster has broken through the training barrier for large models with nearly a trillion parameters, achieving stable training of a 718B-parameter MoE model on more than 6,000 chips and raising computing power utilization by 58.7%. Purely domestic hardware has smoothly overcome four major technical challenges … Read more

Goodbye, NVIDIA! Huawei’s NPU Achieves Near-Trillion Parameter Large Model

Jin Lei from Aofeisi | Quantum Bit, WeChat Official Account QbitAI. Training a near-trillion-parameter large model can now be done entirely without NVIDIA, and the one pulling it off is Huawei. Technical report: arxiv.org/abs/2505.04519. Before this, training a model at that scale faced many “roadblocks”, such as load balancing difficulties, … Read more

Huawei Ascend NPU Achieves Near-Trillion Parameter Large Model, Showcasing Domestic Computing Power Strength

Huawei has made a significant breakthrough in training large AI models: its Ascend NPU has successfully run a near-trillion-parameter large model, marking a leap for domestic computing platforms into the world-leading ranks of AI large model training. Previously, training models at this scale faced numerous challenges, such as difficulties in load balancing, high communication … Read more