As software architects and developers around the world debate how to ensure the safety of edge artificial intelligence, a €41 million pan-European project aims to position Europe as a leader in the field. The EdgeAI-Trust project, funded by the EU Chips Joint Undertaking, was launched last year with partners including NVIDIA, Infineon, ZF, and TTTech Auto (now owned by NXP Semiconductors). All participants are committed to developing robust, secure, and energy-efficient architectures for AI processing at the network edge and on devices, applied in three key areas: autonomous vehicles, manufacturing and industrial processes, and agriculture. The significance of the project, however, extends beyond technical development.
It is no secret that Europe lags behind the United States in cloud-based AI development. Data released by McKinsey at the end of last year showed that U.S. hyperscalers hold about 85% of the market, while European cloud computing companies account for less than 5%. The partners of the EdgeAI-Trust project hope their initiative will help Europe make real strides in edge AI.
EdgeAI-Trust coordinator Mohammad Abuteir, Senior Innovation and Funding Manager at TTTech, said: “As the industry moves towards edge AI, we in Europe do not want to repeat the mistakes made in the cloud AI space. In the U.S., we have already seen strong AI innovation. For these reasons, we are seeking to innovate in edge computing, and especially in AI, where security is a huge challenge.”

Europe is striving to catch up in cloud AI innovation, but can it lead in edge AI? (Source: Free Malaysia Times)
Edge Security
Traditionally, artificial intelligence has relied on large data centers for model training and deployment, but this approach brings challenges such as high energy consumption and latency that limit real-time processing.
Bringing AI to devices can alleviate the load on central processors, reduce network congestion and latency, and eliminate the need to transmit sensitive data over the network. Furthermore, since edge AI devices do not rely on cloud connectivity, they can continue to operate even if the connection is interrupted. Decentralizing AI, however, introduces new security challenges.
As edge AI develops, more data will be processed on more devices, meaning each device must be secured. At the same time, because edge AI devices have limited computing power and storage, data must be held locally, which raises the risk of privacy breaches.
To address this, EdgeAI-Trust is designing architectures that support interoperability across devices and platforms and trusted data exchange all the way from sensors to cloud systems. Next-generation hardware and software are being developed to enable devices to learn and adapt at the edge.
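The project has not yet published the details of that architecture, but one common building block of trusted sensor-to-cloud exchange is signing each payload on the device so that any downstream consumer can verify its origin and integrity. The Python sketch below is a minimal illustration only; the envelope format, key handling, and field names are assumptions for this example, not EdgeAI-Trust’s design. It uses the third-party cryptography package.

```python
# Illustrative sketch only: EdgeAI-Trust has not published its exchange
# format. Each device signs its readings with a per-device Ed25519 key so
# downstream consumers can verify origin and integrity.
import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # provisioned per device
public_key = device_key.public_key()        # distributed to consumers

def publish_reading(sensor_id: str, value: float) -> dict:
    """Wrap a reading in an envelope signed over its canonical JSON form."""
    payload = {"sensor": sensor_id, "value": value, "ts": time.time()}
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": device_key.sign(message).hex()}

def verify_reading(envelope: dict) -> bool:
    """Accept a reading only if its signature matches the payload."""
    message = json.dumps(envelope["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(envelope["signature"]), message)
        return True
    except InvalidSignature:
        return False

envelope = publish_reading("vibration-07", 0.42)
assert verify_reading(envelope)
envelope["payload"]["value"] = 9.99          # tampering in transit...
assert not verify_reading(envelope)          # ...is detected
```

Ed25519 is a common choice for constrained edge devices because its signatures are small and verification is cheap, though a production design would also address key provisioning, rotation, and replay protection.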

Mohammad Abuteir of EdgeAI-Trust
Abuteir pointed out: “As decentralized AI evolves, multiple systems will collaborate, meaning data will be shared between systems. End users need to ensure that their personal data is not exposed to the world. We must focus on the security of edge AI.”
But this is not just about data; securing the AI models that actually run on the devices is equally critical. An attacker who gains access to an edge AI device can reverse-engineer its models and algorithms, then manipulate the model’s output or modify the algorithm to spread false data or degrade performance.
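EdgeAI-Trust’s concrete countermeasures are not public, but a standard first line of defense against on-device model tampering is refusing to load any model file whose digest no longer matches the one recorded at provisioning time. A minimal sketch follows; the stand-in model file and the way the trusted digest is obtained are hypothetical.

```python
# Illustrative sketch only: a device refuses to load a model whose on-disk
# bytes no longer match the digest recorded when the model was provisioned
# (in practice held in a secure element or a signed manifest).
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the model file in chunks so large models don't exhaust RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_if_intact(path: Path, expected_sha256: str) -> bytes:
    """Return the model bytes only if they match the trusted digest."""
    if sha256_of(path) != expected_sha256:
        raise RuntimeError(f"model file fails integrity check: {path}")
    return path.read_bytes()

# Demo with a stand-in "model" file; in a real deployment the expected
# digest comes from a trusted store, not from hashing the file at load time.
with tempfile.NamedTemporaryFile(suffix=".onnx", delete=False) as f:
    f.write(b"dummy model weights")
trusted_digest = sha256_of(Path(f.name))     # recorded at provisioning
model = load_model_if_intact(Path(f.name), trusted_digest)
```

An integrity check of this kind only detects modification of the stored artifact; it does not by itself stop adversarial inputs or runtime manipulation, which is where the monitoring work described next comes in.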
One of the EdgeAI-Trust partners, Asvin, a cybersecurity software developer based in Stuttgart, Germany, has outlined how so-called adversarial machine learning can manipulate AI into producing incorrect results, rendering a model unreliable. Errors made by compromised AI models in autonomous vehicles, drones, or industrial robots could have catastrophic consequences. As part of the project, the company is looking for ways to monitor decision-making in edge AI applications and detect anomalies.
Asvin CEO Mirko Ross stated: “Through EdgeAI-Trust, we are committed to laying a solid foundation for the development and deployment of trustworthy AI solutions that not only ensure security and reliability but also meet edge systems’ requirements for low latency and low energy consumption. Our focus is on helping operational technology operators strengthen their AI-augmented edge systems’ resilience against erroneous decisions and manipulation.”
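Asvin has not disclosed how its monitoring works. One simple, generic form of such decision monitoring is tracking a rolling baseline of a model’s output confidence and flagging decisions that deviate sharply from it; the sketch below illustrates only that idea, and the window size and z-score threshold are assumptions chosen for the example.

```python
# Illustrative sketch only: track a rolling baseline of the model's
# top-class confidence and flag decisions that deviate sharply -- a crude
# signal that inputs may be adversarial or the model may be manipulated.
from collections import deque
from statistics import mean, stdev

class DecisionMonitor:
    def __init__(self, window: int = 500, threshold: float = 4.0):
        self.history = deque(maxlen=window)  # recent confidence scores
        self.threshold = threshold           # z-score that raises an alert

    def observe(self, confidence: float) -> bool:
        """Record one decision; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 30:  # wait for a usable baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(confidence - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(confidence)
        return anomalous

# Synthetic stream: stable confidences, then one abrupt outlier.
monitor = DecisionMonitor()
stream = [0.90 + 0.02 * ((i % 5) - 2) for i in range(100)] + [0.31]
for c in stream:
    if monitor.observe(c):
        print(f"anomalous decision flagged: confidence={c:.2f}")
```

Production-grade monitors would look at richer signals than a single confidence score, but the principle is the same: establish what normal decision-making looks like and alert when behavior drifts away from it.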
Researchers in the Computer Architecture – Operating Systems (CAOS) group at the Barcelona Supercomputing Center (BSC) are collaborating with EdgeAI-Trust partners to define and generate secure AI models and map them onto hardware platforms. BSC hosts MareNostrum 5, one of Europe’s most powerful supercomputers, which is equipped with NVIDIA Arm-based superchips. As part of the project, CAOS researchers will work with NVIDIA on its AI accelerators.
“EdgeAI-Trust provides BSC with the opportunity to test its AI-related technologies in complex industrial use cases,” said Jaume Abella, chief researcher for EdgeAI-Trust and co-leader of the CAOS research group. “This allows us to reach a higher level of technological readiness and paves the way for fully leveraging our AI-related assets.”

EdgeAI-Trust was officially established in May 2024, bringing together some 53 member organizations from 13 European countries, all committed to making decentralized edge intelligence sustainable, secure, and trustworthy. (Source: EdgeAI-Trust)
Global Regulation
In August 2024, shortly after the launch of the EdgeAI-Trust project, the EU’s Artificial Intelligence Act officially came into effect, providing the first regulatory framework of international scope for the safe and transparent use of AI. It is not surprising that the EU has set this global goal: while Europe lags behind the U.S. and China in AI innovation, it has consistently been at the forefront of responsible AI use. In 2018, the EU’s General Data Protection Regulation (GDPR) established protections for all personal data, including data processed by AI, helping to shape global AI ethics. The 2024 AI Act now adopts a risk-based approach, specifically regulating the development and deployment of AI and prohibiting applications deemed to pose “unacceptable risks” to citizens.
For the partners of the EdgeAI-Trust project, the EU’s Artificial Intelligence Act underscores the timeliness of their initiative. As Abuteir stated: “The EU’s Artificial Intelligence Act was approved shortly after our project was launched, and we are very attentive to this. These AI rules apply to all industries, and we will align with them.” As the members of EdgeAI-Trust advance the safe development of edge AI, their systems are therefore being built within the framework of the latest legislation. “All partners, including myself, are pleased with the Artificial Intelligence Act, and now we can truly develop compliant products according to this standard,” Abuteir said. “AI is evolving rapidly, and our project is not just about advancing technology; it is a strategic initiative aimed at ensuring Europe remains at the forefront of the global technology industry.”
(Editor: Franklin)