Industry Research: Advantages of FPGA Over GPU in Terms of Latency and Flexibility

① FPGA Has a Strong Latency Advantage in AI Inference: The "batch-less" architecture of FPGA provides a significant latency advantage in AI inference. Due to network conditions and … Read more

SambaNova Lays Off Staff, Abandons Training Chips

Source: translated from Zach. In late April, SambaNova Systems, one of the most well-funded AI chip startups, significantly deviated from its original goals. Like many other AI chip startups, SambaNova initially aimed to provide a unified … Read more

Xiangteng NPU Chip (2) – AI Chip for Inference Applications

The Xiangteng NPU chip is an AI chip designed for inference. In applying artificial intelligence, we encounter two concepts, training and inference, which are the two stages of putting AI into practice. Let's start with two questions: What is the difference between training and inference? What key points should we focus on to distinguish between AI … Read more

Arm Launches New Cortex and Ethos Processor Cores with Up to 50x AI Inference Performance Improvement and Custom Instruction Set Support

EETOP focuses on chips and microelectronics. Yesterday, Arm launched two new IPs (Cortex-M55 and Ethos-U55) to expand its AI-related product offerings. The Cortex-M55 CPU brings many of the new features Arm has announced over the past year. The first is support for custom instructions. Arm first … Read more

NTT Launches AI Inference Chip for Low-Power Video Processing at the Edge

NTT has launched an AI inference chip designed for video processing on edge devices and power-constrained terminals. According to the company, the large-scale integration (LSI) chip provides real-time AI processing for video at up to 4K resolution and 30 frames per second, enabling low-power inference at the edge. NTT states that compared to GPUs deployed in … Read more

Arm CEO’s Latest Interview: The Rise of Competitors to Nvidia is Just a Matter of Time

For the past two years, many executives in the cloud computing and chip industries have predicted that Nvidia’s dominance in the AI server chip market would decline, but this situation has yet to occur. On February 27, Beijing time, Nvidia announced its fourth-quarter financial report for fiscal year 2025, with results once again exceeding Wall … Read more

Edge AI Development: How to Accelerate Your Journey?

After cloud computing, edge computing will become a new growth point in the IoT market over the next decade. According to market research firm Gartner, by 2025, 75% of data will be generated at the network edge, meaning that the distribution of computing resources across the entire smart world is … Read more

Building OpenVINO Road Segmentation Inference Task with WasmEdge and WASI-NN

On July 28, WasmEdge 0.10.1 was officially released. Today, we take a detailed look at the wasi-nn proposal in version 0.10.1. This article is the first in the wasi-nn series; the next will introduce WasmEdge's optimizations for the wasi-nn proposal. These days, the term AI inference is no longer unfamiliar. … Read more