Translated from: semiengineering. Author: Ann Steffora Mutschler
Today’s FPGAs are no longer just collections of look-up tables (LUTs) and registers. They have moved well beyond that, serving as platforms for exploring current architectures and as design frameworks for future ASICs.
This family of devices now spans everything from basic programmable logic to complex SoCs. Across application areas including automotive, AI, enterprise networking, aerospace, defense, and industrial automation, FPGAs allow chipmakers to field systems that can be updated as needed. That flexibility is crucial in markets where protocols, standards, and best practices are still evolving, and where ECOs (engineering change orders) are needed to remain competitive in new markets.
Aldec’s marketing director Louie de Luna said this is why Xilinx decided to add Arm cores to its Zynq FPGAs to create FPGA SoCs. “Most importantly, vendors have improved their tool flows. This has generated significant interest in Zynq. Their SDSoC development environment looks like C, which is great for developers, since applications are usually written in C. So they leverage software capabilities and let users assign those functions to hardware.”
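De Luna’s point about writing in C and then moving functions into hardware is easier to see with a concrete kernel. The sketch below is illustrative only: a plain C++ function of the kind a flow such as SDSoC lets a developer assign to programmable logic. The function name and data are invented, and the actual mechanism for marking a function for hardware (tool settings or pragmas) is tool-specific and not shown here.

```cpp
// Illustrative only: a compute kernel written in plain C/C++ that an FPGA SoC
// tool flow could map to programmable logic. The function and test data are
// invented; the selection mechanism for hardware offload is tool-specific.
#include <cstdint>
#include <cstddef>

// A typical candidate for hardware offload: a multiply-accumulate loop with
// regular data access, the kind of code that maps well onto FPGA DSP blocks.
int32_t fir_filter(const int16_t *samples, const int16_t *coeffs, size_t taps) {
    int32_t acc = 0;
    for (size_t i = 0; i < taps; ++i) {
        acc += static_cast<int32_t>(samples[i]) * coeffs[i];
    }
    return acc;
}

int main() {
    int16_t samples[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int16_t coeffs[8]  = {1, 1, 1, 1, 1, 1, 1, 1};
    // On the CPU this runs as ordinary software; in an FPGA SoC flow the same
    // source could instead be assigned to the programmable fabric.
    return fir_filter(samples, coeffs, 8) == 36 ? 0 : 1;
}
```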
Xilinx’s Zynq-7000 SoC. Source: Xilinx
Some of these FPGAs are not just similar to SoCs; they are SoCs themselves.
“They may contain multiple embedded processors, dedicated computing engines, complex interfaces, large memory, and more,” said Muhammad Khan, a product specialist at OneSpin Solutions. “System architects plan and use the available resources of FPGAs just as they do for ASICs. Design teams use synthesis tools to map their SystemVerilog, VHDL, or SystemC RTL code to the underlying logic elements. The difference between effectively targeting FPGAs and targeting ASICs or fully custom chips is narrowing for most design processes.”
ArterisIP’s CTO Ty Garibay is very familiar with this evolution. “Historically, Xilinx started down the Zynq path in 2010, defining a product that incorporated an Arm SoC hard macro into an existing FPGA,” he said. “Then Intel (Altera) hired me to do essentially the same thing. The value proposition is that the SoC subsystem is what many customers want, but because of the nature of an SoC, especially the processors, it is not well suited to being synthesized into FPGA fabric. Implementing that level of functionality in actual programmable logic is daunting because it would consume almost the entire FPGA. Implemented as a hard function, however, it takes up only a small part of the FPGA die. You give up the ability to make that SoC portion truly reconfigurable logic, but it can still be reprogrammed in software to change its functionality.”
This means that software programmability, hard macros, and hardware-programmable functions can work together, he said. “There are some pretty good markets, especially in low-cost automotive control, where traditionally a medium-performance microcontroller-type device sits next to an FPGA. Customers will simply say, ‘I’ll just put the whole function onto the hard macro of the FPGA chip to save board space, reduce BOM cost, and lower power consumption.’”
This aligns with the evolution of FPGAs over the past 30 years. The original FPGAs were just programmable fabric and a bunch of I/Os. Over time, memory controllers were hardened, along with SerDes, RAM, DSP, and HBM controllers.
Garibay said, “FPGA vendors have continued to increase chip area, but they have also continued to add more and more hard logic that is widely used by a significant portion of the customer base. What is happening today extends that to the software-programmable side. Most of what was added before the Arm SoC was hardware in different forms, mainly related to I/O but also including DSP, where it made sense to harden the function to save programmable logic gates because there was enough expected utilization.”
A matter of perspective
This has essentially turned FPGAs into Swiss Army knives.
“If you go back in time, it was just a bunch of LUTs and registers, not even gates,” said Anush Mohandass, VP of marketing and business development at NetSpeed Systems. “They have a classic problem. If you compare any general-purpose approach with its application-specific version, general-purpose computing gives you more flexibility, while application-specific computing gives you performance or efficiency advantages. Xilinx and Intel (Altera) are increasingly trying to address this, noting that almost every FPGA customer needs DSP and some form of computation. So they added Arm cores, they added DSP cores, they added all the different PHYs and common functions. They hardened these, made them more efficient, and the performance curve improved.”
These new capabilities open doors for FPGAs to play significant roles in various emerging and existing markets.
“From a market perspective, you can see that FPGAs are definitely moving into the SoC market,” said Piyush Sancheti, senior marketing director at Synopsys. “Whether you do an FPGA or a full-fledged ASIC comes down to economics. Those lines are starting to blur, and we certainly see more and more companies, particularly in certain markets, for which the production economics favor FPGAs.”
Historically, FPGAs have been used for prototyping, but production use has been limited to markets like aerospace, defense, and communications infrastructure, Sancheti said. “Now the market is expanding to automotive, industrial automation, and medical devices.”
AI: A booming FPGA market
Some companies adopting FPGAs are system vendors/OEMs looking to optimize their IP or AI/ML algorithm performance.
“They want to develop their own chips, and many of them are starting to do ASICs, which can be a bit daunting,” said NetSpeed’s Mohandass. “They may not want to spend $30 million in wafer costs to get a chip. For them, FPGAs are an effective entry point. They have unique algorithms and their own neural networks, and they can see whether it delivers the performance they expect.”
Stuart Clubb, senior product marketing manager for Mentor’s Catapult HLS synthesis and verification, noted that the current challenge for AI applications is quantization. “What kind of network is needed? How do I build this network? What is the memory architecture? Even if you start from a network with only a few layers, the data and coefficients quickly add up to millions of coefficients, and memory bandwidth becomes very scary. No one really knows what the right architecture is, and if the answer is unknown, you don’t jump into building an ASIC.”
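To make Clubb’s point concrete, here is a back-of-the-envelope sizing exercise. The layer shapes, 16-bit coefficient width, and 30 fps target are hypothetical numbers chosen for illustration, not figures from the article; the point is only how quickly a handful of convolution layers turns into millions of coefficients and hundreds of megabytes per second of weight traffic if nothing is cached on chip.

```cpp
// Back-of-the-envelope sizing for a small, hypothetical CNN, illustrating how
// "a few layers" become millions of coefficients and large memory traffic.
// All layer shapes and rates are invented for illustration.
#include <cstdio>

int main() {
    // Each entry: {input channels, output channels, kernel height, kernel width}
    const int layers[][4] = {
        {3,   64,  3, 3},
        {64,  128, 3, 3},
        {128, 256, 3, 3},
        {256, 256, 3, 3},
        {256, 512, 3, 3},
        {512, 512, 3, 3},
    };

    long long weights = 0;
    for (const auto &l : layers) {
        weights += static_cast<long long>(l[0]) * l[1] * l[2] * l[3];
    }

    const double bytes_per_weight = 2.0;   // e.g., 16-bit quantized coefficients
    const double frames_per_sec   = 30.0;  // hypothetical target frame rate

    // Worst case: weights streamed from external memory for every frame.
    double bandwidth_mb_s = weights * bytes_per_weight * frames_per_sec / 1e6;

    std::printf("Total coefficients: %lld (~%.1f million)\n", weights, weights / 1e6);
    std::printf("Naive weight bandwidth at 30 fps: %.0f MB/s\n", bandwidth_mb_s);
    return 0;
}
```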
In the enterprise networking space, the most common issue is that cryptographic standards seem to be constantly changing. Mohandass said, “Instead of trying to build an ASIC, why not put it in an FPGA and make the cryptographic engine better? Or, if you’re doing any kind of packet processing on the network, FPGAs still offer you more flexibility and programmability. This is where that flexibility comes into play, and they have already used it. You can still call it heterogeneous computing; it still looks like an SoC.”
New rules
With the new generation of FPGA SoCs, the old rules no longer apply. “Specifically, if you’re debugging on the circuit board, you’re doing it wrong,” Clubb pointed out. “Debugging on development boards is seen as the lower-cost approach, a holdover from the early days when you could say, ‘It’s programmable, you can put an oscilloscope on it, and you can see what’s happening.’ But to say now, ‘If I find a bug, I can fix it, write a new bitstream in a day, put it back on the board, and then find the next bug,’ that’s crazy. This is an area where you still see a lot of the mindset that employee time is not counted as a cost. Management won’t buy simulators or system-level tools or debuggers, because ‘I’ll just pay this person to get it done, and I’ll scream at him until he works harder.’”
This behavior is still common, because there are enough companies with a 10%-cost-reduction-per-year mentality to keep it entrenched.
However, FPGA SoCs are real SoCs that require stringent design and verification methods. “The fact that it’s programmable doesn’t really affect design and verification,” Clubb said. “If you’re making an SoC, yes, you can follow what I’ve heard some customers say about ‘Lego’ engineering. This is a block diagram approach. I need a processor, a memory, a GPU, some other parts, a DMA memory controller, WiFi, USB, and PCI. These are the ‘Lego’ blocks that you assemble. The trouble is, you have to verify that they work and that they work together.”
Nevertheless, FPGA SoC developers are quickly catching up with the verification methodologies that SoC teams rely on.
“They are not as advanced as traditional SoC developers, whose attitude is, ‘This is going to cost me $2 million, so I’d better be prepared,’ because [using FPGAs] is cheaper,” Clubb said. “But if you spend $2 million developing an FPGA and you get it wrong, you’re going to spend three months fixing bugs and still have issues to resolve. How big is the team? How much does it cost? What is the penalty for being late to market? These are all very difficult costs to quantify. If you’re in the consumer space, missing the holiday season is a real concern, but you’re unlikely to be using an FPGA there, so the priorities are different. The overall cost and risk of committing an SoC to a custom chip make it hard to pull the trigger. And companies that can say, ‘This is my system, I’m done,’ are something you don’t see much anymore. It’s well known that this industry is consolidating, and there are fewer big players doing big chips. Everyone else has to figure out a way to make it happen, and these FPGAs are making it happen.”
New trade-offs
Sancheti said it’s not uncommon for engineering teams to capture their design intent in a way that keeps the target device open. “We see many companies creating RTL and verifying it without knowing whether they’re going to do an FPGA or an ASIC, since that decision can change. You can start with an FPGA, and if you reach a certain volume, the economics may favor moving to an ASIC.”
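The FPGA-versus-ASIC volume decision Sancheti describes comes down to a simple break-even calculation. The sketch below uses invented NRE and unit-cost numbers purely for illustration; the crossover point shifts with real prices, but the structure of the trade-off stays the same.

```cpp
// Hypothetical break-even calculation behind "start with an FPGA, move to an
// ASIC at volume." All numbers are invented for illustration.
#include <cstdio>

int main() {
    const double asic_nre  = 5.0e6;  // one-time ASIC development cost (hypothetical)
    const double asic_unit = 15.0;   // per-unit ASIC cost (hypothetical)
    const double fpga_unit = 120.0;  // per-unit FPGA cost (hypothetical)

    // Volume at which total ASIC cost (NRE + units) drops below total FPGA cost.
    const double breakeven = asic_nre / (fpga_unit - asic_unit);
    std::printf("Break-even volume: ~%.0f units\n", breakeven);

    // Below this volume the FPGA is cheaper overall; above it the ASIC wins,
    // which is why the decision can change as expected volume changes.
    return 0;
}
```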
This is especially true in today’s AI application space.
Mike Gianfagna, VP of marketing at eSilicon, noted that the technology for accelerating AI algorithms is evolving. “Clearly, AI algorithms have been around for a long time, but suddenly we’ve become much more sophisticated in how we use them, and our ability to run them at near real-time speeds is quite magical. It started with CPUs, then moved to GPUs. But a GPU is still a programmable device, so it retains some generality. The architecture happens to excel at parallel processing, because that’s what graphics acceleration is all about, and conveniently that’s also what AI is all about. To a large extent that’s good, but it’s still a general-purpose approach, so you can only reach a certain level of performance and power. Some will next turn to FPGAs, because you can target the circuits better than with a GPU, and both performance and efficiency improve. ASICs are the extreme in power and performance, because you have a fully custom architecture that meets your needs, no more and no less. That’s clearly the best.”
AI algorithms are difficult to map to chips because they are in a state of almost constant flux. So, at this point, making a fully custom ASIC is not an option because it’s already outdated by the time the chip is manufactured. “FPGAs are very good at this because you can reprogram them, so even though they are expensive, they don’t become obsolete, and your investment doesn’t go to waste,” Gianfagna said.
There are some custom memory configurations and certain subsystem functions, like convolution and transposed memory, that can be reused, so while the algorithms may change, certain blocks won’t change and will be used over and over again. With this in mind, eSilicon is developing capabilities for software analysis to view AI algorithms. The goal is to be able to select the best architecture for specific applications more quickly.
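The kind of stable, reusable block described above can be as simple as a fixed convolution kernel. The sketch below is a plain C++ model, not eSilicon’s implementation: a 3x3 single-channel convolution whose structure stays the same even as the networks built on top of it change.

```cpp
// Minimal sketch of a reusable block: a fixed 3x3 convolution over a
// single-channel feature map (no padding). Illustrative only.
#include <cstdint>
#include <cstdio>
#include <vector>

// Integer accumulation of the kind that maps naturally onto FPGA DSP blocks.
std::vector<int32_t> conv3x3(const std::vector<int16_t> &in, int width, int height,
                             const int16_t (&k)[3][3]) {
    std::vector<int32_t> out((width - 2) * (height - 2), 0);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int32_t acc = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    acc += in[(y + ky) * width + (x + kx)] * k[ky + 1][kx + 1];
            out[(y - 1) * (width - 2) + (x - 1)] = acc;
        }
    }
    return out;
}

int main() {
    // 4x4 input of all ones with an all-ones kernel: every output is 9.
    std::vector<int16_t> img(16, 1);
    const int16_t kernel[3][3] = {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}};
    for (int32_t v : conv3x3(img, 4, 4, kernel)) std::printf("%d ", static_cast<int>(v));
    std::printf("\n");
    return 0;
}
```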
“FPGAs give you the flexibility to change the machine or the engine, because you may encounter a new network, and committing to an ASIC is a significant risk since you may not have the best support for it. So you want that flexibility,” said Deepak Sabharwal, VP of IP engineering at eSilicon. “However, FPGAs always have limitations in capacity and performance, so you can’t really reach product-level specs with an FPGA. Ultimately, you’ll have to go to an ASIC.”
Embedded LUTs
Another option that has gained ground in recent years is the embedded FPGA, which integrates programmability into an ASIC, combining FPGA flexibility with the performance and power advantages of an ASIC.
“FPGA SoCs are still primarily FPGA chips with a relatively small area devoted to processing,” said Geoff Tate, CEO of Flex Logix. “In the block diagrams the proportions look different, but in the actual die photos it’s mostly FPGA. However, there’s a class of applications and customers where the right ratio between FPGA logic and the rest of the SoC is a much smaller FPGA, giving them RTL programmability in a more cost-effective chip size.”
This approach is gaining traction in areas such as aerospace, wireless base stations, telecommunications, networking, automotive, and visual processing, particularly in AI. “The algorithms change so quickly that the chips are almost outdated by the time they come back,” Tate said. “With some embedded FPGAs, it allows them to iterate on their algorithms faster.”
Nijssen noted that in this case, programmability is crucial to avoid redoing the entire chip or module.
Debugging the design
Like all SoCs, understanding how to debug these systems and build instrumentation can help you catch issues before they arise.
“As FPGAs become more like SoCs, they need the development and debug methods expected in SoCs,” said Rupert Baines, CEO of UltraSoC. “There’s a (perhaps naïve) belief that because you can see anything in an FPGA, it’s easy to debug. That’s true at the bit level in a waveform viewer, but it doesn’t hold at the system level. The latest large FPGAs are clearly systems. At that point, the waveform-level view you get from bit-probing arrangements isn’t very useful. You need a logic analyzer, a protocol analyzer, and good debug and trace capabilities for the processor cores themselves.”
The size and complexity of today’s FPGAs demand a verification process similar to that for ASICs. Advanced UVM-based testbenches are used in simulation, often supplemented by emulation. Formal tools play a key role here, from automated design checks to assertion-based verification. While it is indeed faster and cheaper to change an FPGA than an ASIC, the difficulty of detecting and diagnosing bugs in a large SoC means that thorough verification must be done before entering the lab, said Khan of OneSpin.
In fact, there is one area where the verification requirements for FPGA SoCs may exceed those for ASICs: equivalence checking between the input RTL and the post-synthesis netlist. Compared to a traditional ASIC logic synthesis flow, the refinement, synthesis, and optimization stages for an FPGA often make more modifications to the design. These changes may include moving logic across cycle boundaries and implementing registers in memory structures. Thorough sequential equivalence checking is critical to ensure that the final FPGA design still meets the designer’s original RTL intent, Khan added.
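A small model helps show why sequential, rather than purely combinational, equivalence checking is needed once logic moves across cycle boundaries. The example below is hypothetical: a one-cycle reference computing a*b + c versus a “retimed” two-cycle version with an extra register stage. Compared cycle by cycle the outputs differ; they match only once the one-cycle latency offset is taken into account, which is exactly the kind of relationship a sequential equivalence checker has to establish.

```cpp
// Hypothetical illustration of sequential equivalence after retiming.
// Reference: out = a*b + c in one cycle. Implementation: same function, but a
// register stage has been inserted, so results appear one cycle later.
#include <cassert>
#include <cstdint>
#include <vector>

struct Retimed {
    int64_t mul_reg = 0;  // pipeline register inserted by "synthesis"
    int64_t c_reg = 0;
    // One clock cycle: emit result from last cycle's inputs, capture new ones.
    int64_t step(int32_t a, int32_t b, int32_t c) {
        int64_t out = mul_reg + c_reg;          // uses values registered last cycle
        mul_reg = static_cast<int64_t>(a) * b;  // register stage added by retiming
        c_reg = c;
        return out;
    }
};

int main() {
    std::vector<int32_t> a = {1, 2, 3, 4}, b = {5, 6, 7, 8}, c = {9, 1, 2, 3};
    Retimed impl;
    std::vector<int64_t> ref, got;
    for (size_t i = 0; i < a.size(); ++i) {
        ref.push_back(static_cast<int64_t>(a[i]) * b[i] + c[i]);  // 1-cycle reference
        got.push_back(impl.step(a[i], b[i], c[i]));               // 2-cycle implementation
    }
    // Cycle-by-cycle comparison would fail; equivalence holds only with a
    // latency offset of one cycle.
    for (size_t i = 0; i + 1 < a.size(); ++i) {
        assert(got[i + 1] == ref[i]);
    }
    return 0;
}
```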
On the tools side, there is still room to optimize performance. “With embedded vision applications, many of which are written for Zynq, you might get five frames per second. But if you accelerate in hardware, you might get 25 to 30 frames per second. That paves the way for new types of devices. The problem is that simulating and verifying these devices isn’t simple. You need integration between software and hardware, which is difficult. If you simulate everything in the SoC, it’s too slow; each simulation may take five to seven hours. Co-simulation can save time,” said Aldec’s de Luna.
In short, the same types of methods used in complex ASICs are now being applied to complex FPGAs. This is becoming increasingly important as these devices are used for functionally safe applications.
“That’s the purpose of formal analysis, to identify the error propagation paths and then verify them,” said Adam Sherer, marketing director at Cadence. “These tasks are very well suited to formal analysis, and traditional FPGA verification methods make them nearly impossible. In FPGA design it is still very popular to assume that hardware testing at system speed is fast and easy, and that only simple simulation-level sanity checks are needed. Then you program the device, go into the lab, and start running. That’s a relatively fast path, but observability and controllability in the lab are extremely limited, because you can only probe by routing signals from inside the FPGA out to pins where the test equipment can see them.”
Dave Kelf, CMO of Breker Verification Systems, agrees. “This has led to an interesting shift in how these devices get verified. In the past, smaller devices could be verified by loading the design into the FPGA itself and running it in real time on a test card as much as possible. With the emergence of SoC and software-driven designs, this ‘self-prototyping’ style of verification can be expected to carry over to software-driven techniques and to certain stages of the process. However, identifying issues and debugging them in the prototype is very complex, so the earlier verification phases require simulation, and SoC-type FPGAs look increasingly like ASICs. Given this two-stage process, commonality between the stages makes it more efficient, including common debug and test platforms. New advances like portable stimulus will provide that commonality, effectively making SoC FPGAs easier to manage.”
Conclusion
Looking forward, Sherer noted that users are seeking to apply the stricter processes now used in ASIC domains to FPGA processes.
“There’s a lot of training and analysis; they want more debug techniques in FPGAs that support this level,” he said. “The FPGA community tends to lag existing technologies and to use very traditional methods, so they need training and understanding in areas such as verification planning, management, and requirements traceability. Those elements of the SoC process are absolutely necessary for FPGAs, and it’s not so much the FPGA itself driving it as the industry standards in the end applications. For engineers who have always worked in an FPGA environment, this is a reorientation and re-education.”
The lines between ASICs and FPGAs are blurring, driven by applications that require flexibility, a system architecture that combines programmability with hardwired logic, and tools that are now being applied to both. And this trend is unlikely to change anytime soon, as many new application areas that require these combinations are still in their infancy.