Adaptive Computing Accelerates Software-Defined Hardware Era

In the past, designing a product meant first carefully planning the hardware architecture; software development began only after the hardware design was complete, and the finished product shipped last. Now, with the growth of cloud computing and the internet, along with the rise of AI, 5G, and autonomous driving, hardware and product development is undergoing unprecedented change: higher hardware performance targets; stricter security and confidentiality requirements; a growing variety of sensor types and interfaces; continually evolving AI algorithms and models; and the need for software development to proceed in step with hardware development, among others.
Driven by these new demands, the concept of “software-defined hardware” has been frequently mentioned, as people hope that all operations within the chip can be controlled and scheduled by software, thereby reducing the corresponding hardware overhead and reallocating the savings for computation and on-chip storage. While this vision seems appealing, there are significant challenges in implementation. For example, while FPGAs can achieve some software-defined hardware functionalities, they are less efficient than ASICs and consume more power. So, is there a better solution?

Advantages of Adaptive Platforms

Xilinx’s adaptive computing platform was born for this purpose. According to Xilinx’s Adaptive Computing White Paper and Adaptive Computing Zone, adaptive computing is based on FPGA technology and supports the dynamic construction of domain-specific architectures (DSAs) on the chip. This means that adaptive computing allows DSAs to be dynamically updated as demands change, avoiding the constraints of lengthy ASIC design cycles and high NRE costs. As processing becomes increasingly distributed, adaptive computing can support over-the-air (OTA) updates not only for software but also for hardware, and such updates can be performed a practically unlimited number of times.
The term “adaptive platform” refers to any type of product or solution centered around adaptive hardware. The adaptive platform is entirely based on the same adaptive hardware foundation, but it encompasses much more than just chip hardware or devices; it includes all hardware and a complete set of design and operating software.
With the adaptive platform, hardware engineers can be freed from repetitive, low-level design tasks to focus on developing specialized functions, while software engineers can begin designing concurrently with hardware engineers rather than waiting until the hardware design is complete.


Figure: Schematic of unconfigured and configured adaptive hardware (Source: Xilinx)
Of course, in addition to this benefit, the adaptive platform also has several other advantages:
First, it accelerates time to market. For instance, with the Alveo data center accelerator card, one of Xilinx’s adaptive computing platform products, applications can be accelerated without any custom hardware work: simply add the PCIe card to the server and invoke the acceleration libraries directly from existing software applications.
Second, it can reduce operational costs. Compared to CPU-based solutions, optimized applications based on the adaptive platform can provide significantly improved efficiency per node due to increased computational density.
Third, it allows flexible and dynamic workload configuration. The adaptive platform can be reconfigured according to current demands. Developers can easily switch between deployed applications within the adaptive platform, using the same device to meet constantly changing workload requirements.
Fourth, it is future-compatible. The adaptive platform can continuously adjust itself. If existing applications require new functionalities, the hardware can be reprogrammed to achieve these functions optimally, reducing the need for hardware upgrades, thereby extending the lifespan of the system.
Fifth, it can accelerate overall applications. AI inference rarely exists in isolation; it is generally part of a larger data analysis and processing chain, often surrounded by upstream and downstream stages implemented with traditional (non-AI) techniques. The embedded AI components of these systems benefit from AI acceleration, and the non-AI components can benefit from acceleration as well. The inherent flexibility of adaptive computing makes it suitable for accelerating both AI and non-AI processing tasks, which is referred to as “overall application acceleration.” As compute-intensive AI inference permeates more applications, its importance is increasingly recognized.
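The idea of overall application acceleration can be sketched in a few lines of Python. This is purely illustrative (none of the names below are a real Xilinx API): a processing chain mixes non-AI pre-processing, AI inference, and non-AI post-processing, and on adaptive hardware each stage, not just the inference step, is a candidate for offload.

```python
# Illustrative sketch, not a real Xilinx API: a processing chain in which
# every stage (AI and non-AI alike) could be mapped onto an accelerator.

def decode(frame):           # non-AI pre-processing stage
    return [x / 255.0 for x in frame]

def infer(pixels):           # AI inference stage (stand-in for a model)
    return max(pixels)       # pretend this is the model's top score

def annotate(score):         # non-AI post-processing stage
    return f"score={score:.2f}"

PIPELINE = [decode, infer, annotate]

def run_pipeline(frame, stages=PIPELINE):
    data = frame
    for stage in stages:
        data = stage(data)   # on adaptive hardware, each stage could be a kernel
    return data

print(run_pipeline([0, 128, 255]))  # prints score=1.00
```

Accelerating only `infer` leaves the other stages as a bottleneck; overall application acceleration means the offload boundary can cover the whole chain.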

Successful Use Cases of Adaptive Computing

In the past, if engineers wanted to use FPGAs, they needed to build their own hardware boards and configure the FPGAs using hardware description language (HDL). Today, developers of adaptive platforms only need to use familiar software frameworks and languages (such as C++, Python, TensorFlow, etc.) to directly leverage the power of adaptive computing. This means that software and AI developers do not need to build circuit boards or become hardware experts to effectively use adaptive computing.
Even more conveniently, engineers can not only directly call their existing software code through APIs but also utilize the ecosystem of independent software vendors (ISVs) and open-source libraries provided by vendors, which contain a wealth of acceleration APIs available for use.
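The calling pattern this enables can be shown with a small hedged sketch. Nothing here is a real Xilinx library; `compress_accel` is a hypothetical stand-in for a vendor-supplied acceleration routine that exposes the same signature as the existing CPU code, so adopting it is a one-line swap rather than a hardware redesign.

```python
# Illustrative pattern only, not a real Xilinx API: an acceleration library
# exposes the same signature as the existing CPU routine.
import zlib

def compress_cpu(data: bytes) -> bytes:
    """Existing software path."""
    return zlib.compress(data)

def compress_accel(data: bytes) -> bytes:
    """Hypothetical drop-in from a vendor acceleration library; on a system
    with an accelerator card this call would be offloaded to hardware.
    Simulated in software here so the sketch runs anywhere."""
    return zlib.compress(data)

# Select the implementation once, e.g. when an accelerator card is detected.
ACCELERATOR_PRESENT = False  # assumption for this sketch
compress = compress_accel if ACCELERATOR_PRESENT else compress_cpu

payload = b"adaptive computing " * 8
assert zlib.decompress(compress(payload)) == payload  # behavior unchanged
```

The design point is that the application code above the API line never changes; only the binding of `compress` does.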
Consider Xilinx’s two adaptive computing platform products already in volume production: the Kria SOM and the Alveo accelerator card. The Kria SOM is built on the Zynq UltraScale+ MPSoC architecture, enabling developers to create edge applications on a turnkey adaptive platform. By standardizing the core components of the system, it leaves developers more time to focus on building differentiated features.


The first Kria SOM product in volume production is the K26 SOM. Based on the Zynq UltraScale+ MPSoC architecture and measuring 77×60×11 mm, it features a quad-core Arm Cortex-A53 processor, 4GB of 64-bit DDR4 memory, 256K system logic cells, and 1.4 TOPS of AI processing performance, with support for 4K60 H.264/H.265 video encoding and decoding.
The Kria SOM is designed, manufactured, and tested as a production-ready product able to withstand harsh application environments. It is currently available in both commercial and industrial grades; the industrial grade tolerates stronger vibration and more extreme temperatures and comes with a longer lifecycle and maintenance commitment.


Kria SOM is primarily targeted at intelligent vision applications, making it suitable for high-speed target detection in smart city applications, such as license plate recognition; it can also be used for machine vision applications on industrial production lines.
As for the Alveo accelerator card, it uses the industry-standard PCIe interface to provide hardware offload for any data center application. It is also available as the SmartSSD computational storage drive, accelerating data at the point of storage access, and as a SmartNIC, accelerating network traffic directly.
For instance, the Alveo SN1000 SmartNIC expands the SmartNIC performance envelope for data center and edge computing platforms, combining high-performance networking, a CPU cluster, and a large FPGA on a single board to form a high-performance computing (HPC) platform with substantial network acceleration capability.
Additionally, the Alveo SN1000 SmartNIC adopts standardized software frameworks, so engineers need not program the FPGA directly, making it more convenient to use. They can draw on FPGA firmware supplied by Xilinx or third parties, along with software running on the CPU cluster, which supports standard Linux distributions such as Ubuntu and Yocto Linux. SmartNIC drivers are available for host platforms including Red Hat Enterprise Linux (RHEL), CentOS, and Ubuntu.
In terms of applications, Alveo is well suited to genomics analysis, graph databases, medical image processing and analysis, and video surveillance, and it has already been deployed in data centers and genomic sequencing workloads.


Conclusion

Software is changing hardware, and the integration of software and hardware development will alter product forms and further change our lives. Although software-defined hardware has not yet been widely adopted, we can clearly see the increasing role of software in products, from commonly used mobile applications to industrial applications and the role of software in adaptive computing platforms. In the future, adaptive computing platforms are bound to accelerate the arrival of the software-defined hardware era.
