Information Security in Embedded Systems

In a controlled environment, preventing accidental errors and hardware failures is sufficient to achieve safe behavior. If an unrecoverable situation is detected, the system can switch to a limited or non-functional state, but it remains safe.

In uncontrolled environments, various forms of attack can jeopardize the security of the system. Only by considering security at every step of the product lifecycle can such compromises be prevented. These steps include:

1. Threat modeling – Security requirements must be identified during the design phase.

2. Secure components – Software developers must correctly implement the security requirements. Additionally, they must implement all other requirements in a way that does not introduce vulnerabilities into the system.

3. Secure deployment – When delivering software (e.g., from supplier to manufacturer), the supplier must provide mechanisms to check the integrity and authenticity of the software.

4. Proactive maintenance – Throughout the product’s lifecycle, manufacturers must fix vulnerabilities discovered in the components they use.

5. Secure updates and boot – When updating the system, the product manufacturer must ensure the integrity and authenticity of the updated software.

1. Threat Modeling

The first step in creating a secure system is to identify the system’s assets. Product manufacturers use threat modeling to analyze how attackers threaten these assets.

Threat modeling must be integrated into the design process and carried out at every level of abstraction. When software architects define high-level requirements, they also identify threats at that level of abstraction; these threats lead to additional security requirements that mitigate them. The analysis is then repeated in subsequent development steps.

Embedded systems must protect:

• System safety: Most types of attack can compromise the safe behavior of the system.

• System availability: If an attacker can shut down the system, customers cannot use the device.

• Trade secrets: If an attacker can access the firmware, they can obtain the trade secrets it contains.

• Legal compliance: If an attacker can cause the system to violate laws (send spam, monitor others), this will have legal implications for the product manufacturer.

• Company reputation: System failures can negatively impact the manufacturer.

Once the system assets are identified, product manufacturers must estimate the costs and resources attackers are willing to invest to attack these assets. Based on this information, the development team can define the security level the product must achieve. Based on the defined security level, developers can derive security requirements to minimize risk.

IEC 62443 Security Levels

IEC 62443 defines five security levels (SL) that classify the security level achieved by components:

• Security Level 0: No special requirements or protections are needed.

• Security Level 1: Prevents unintentional or accidental misuse.

• Security Level 2: Prevents intentional misuse with minimal resources, general skills, and simple activities.

• Security Level 3: Prevents intentional misuse using moderate resources, system-specific knowledge, and moderate activities.

• Security Level 4: Prevents intentional misuse using substantial resources, system-specific knowledge, and high-level activities.

Systems developed according to functional safety standards already achieve Security Level SL-1 without additional activities. The terms “minimal,” “moderate,” and “substantial” leave the higher security levels only loosely defined. A common interpretation is:

• SL-2 protects against amateurs or disgruntled former employees who rely on publicly available information about the system they want to attack.

• SL-3 protects against professional hackers who aim to profit through extortion, selling vulnerabilities, or extracting information.

• SL-4 protects against professional hacking organizations with substantial funding from corporations or governments.

IEC 62443 Security Requirements

IEC 62443 lists functional requirements that components must implement to meet security levels. For example, “Individuals interacting with the system…”

• SL-1:…must be identified and authenticated.

• SL-2:…must be uniquely identified and authenticated (no shared administrative accounts).

• SL-3:…if accessed from an untrusted network, must be uniquely identified and authenticated through multi-factor authentication.

• SL-4:…must be uniquely identified and authenticated through multi-factor authentication across all networks.

Security Zones

Security-enhancing barriers (walls, doors, guards, firewalls, virtualization technologies, etc.) divide the system into zones; each zone must achieve the minimum security level required by the components it contains. While IEC 62443 primarily addresses operational technology security in automation and control systems, its security levels are valuable for any security-related discussion.

2. Secure Components

The second step in implementing a secure system is ensuring that the design and implementation of each component are secure and provide secure interfaces for other components. Component developers must adhere to the principle of “designing securely,” consider applicable security standards, and incorporate the results of threat modeling into every step of the design process.

Interface Conventions

In addition to correctly implementing the functional requirements of a component, security also depends on strict adherence to the conventions of the components it uses. In this context, conventions refer to the requirements that components impose on their callers.

For example:

• Every memory block taken from a memory pool must be returned to the same memory pool.

• Certain function parameters must not be zero or NULL.

• Certain functions must only be called from within a critical section.

There are many reasons to impose such restrictions on callers (performance, flexibility, portability, etc.), but the security of the system then depends on every caller honoring these conventions.
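Such conventions can be made explicit in the interface itself. The following sketch of a fixed-block memory pool (a hypothetical example; all names are illustrative) documents in comments the conventions its callers must honor:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fixed-block memory pool illustrating caller conventions.
 * Convention 1: every block obtained from pool_alloc() must be returned
 *               to the SAME pool with pool_free().
 * Convention 2: the pool pointer must not be NULL. */
#define BLOCK_SIZE  32
#define BLOCK_COUNT 8

typedef struct {
    uint8_t storage[BLOCK_COUNT][BLOCK_SIZE];
    void   *free_list[BLOCK_COUNT];
    size_t  free_top;            /* number of blocks currently free */
} pool_t;

void pool_init(pool_t *p)
{
    p->free_top = 0;
    for (size_t i = 0; i < BLOCK_COUNT; i++)
        p->free_list[p->free_top++] = p->storage[i];
}

void *pool_alloc(pool_t *p)
{
    if (p->free_top == 0)
        return NULL;             /* pool exhausted */
    return p->free_list[--p->free_top];
}

/* The caller must pass a block previously returned by pool_alloc() on
 * the same pool; violating this convention corrupts the free list. */
void pool_free(pool_t *p, void *block)
{
    p->free_list[p->free_top++] = block;
}
```

Note that `pool_free()` cannot cheaply detect a block from a foreign pool; the convention must therefore be stated in the documentation and enforced by the callers.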

Execution Expectations

When implementing new components, developers should enhance security by validating compliance with these conventions wherever possible. Since software cannot verify every convention at runtime, component developers must clearly describe the remaining unchecked expectations in the documentation. Component users can then follow these conventions and use software development techniques (reviews, static analysis, testing, etc.) to demonstrate compliance.
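As a minimal sketch of this split between checked and documented expectations (the function and its limits are hypothetical), a driver entry point might look like this:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: validate what the software CAN check at runtime,
 * and document what it cannot.
 *
 * Checked here:  buf != NULL, len within the documented range.
 * NOT checkable: "must be called with the transmitter idle" -- this
 *                expectation goes into the security manual instead. */
#define MAX_LEN 128

bool uart_send(const unsigned char *buf, size_t len)
{
    /* Reject every convention violation we can detect. */
    if (buf == NULL || len == 0 || len > MAX_LEN)
        return false;
    /* ... transmit bytes (hardware access omitted) ... */
    return true;
}
```

The return value lets callers distinguish a detected convention violation from a successful call, instead of silently continuing with corrupted state.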

Communication Protocol Validation

Components implementing communication endpoints must ensure that all communication messages conform to the agreed-upon and documented protocol.

Specifically, this means validating the type, value range, size, and encoding of each message field, as well as metadata such as the number of messages per transmission, the sender address, and the expected message order. Software must check enumerations against an allow list rather than excluding items via a deny list. Responses to protocol violations must be designed so that they cannot themselves be abused.
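A header validator for a hypothetical message format (all field names and ranges are illustrative) shows the allow-list approach: only explicitly enumerated values pass, everything else is rejected.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical message header: every field is checked against an allow
 * list or an explicit range; anything unexpected is rejected. */
typedef enum { CMD_READ = 1, CMD_WRITE = 2, CMD_RESET = 3 } cmd_t;

typedef struct {
    uint8_t  cmd;         /* must be one of cmd_t */
    uint8_t  channel;     /* documented range: 0..15 */
    uint16_t payload_len; /* must match the bytes actually received */
} msg_header_t;

static bool cmd_allowed(uint8_t cmd)
{
    /* Allow list: enumerate what IS valid; never "everything except X". */
    switch (cmd) {
    case CMD_READ: case CMD_WRITE: case CMD_RESET:
        return true;
    default:
        return false;
    }
}

bool validate_header(const msg_header_t *h, size_t bytes_received)
{
    if (h == NULL)                        return false;
    if (!cmd_allowed(h->cmd))             return false;
    if (h->channel > 15)                  return false;
    if (h->payload_len != bytes_received) return false;
    return true;
}
```

Checking the declared payload length against the bytes actually received guards against the classic mismatch exploited by parsers that trust length fields.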

Isolating Components

Sometimes, development teams may want to use components that do not match the target security level of the product. In such cases, the component can be run in a restricted isolation environment. One approach is to use memory protection units to limit memory access and a preemptive scheduler to guarantee execution time for the rest of the system. Damage caused by vulnerabilities in the component is then confined to the isolated environment and does not affect the rest of the system.

Security Checklists

The first step in making a component secure is to give it a reasonably simple API that guides users toward correct use. Such components also require detailed, up-to-date documentation, including a security manual: a checklist of all steps users must take to use the component securely (e.g., “run this validation,” “compile with these options,” or “insert the public key into this constant”).

Encryption Algorithms

The security of the system is likely based on certain cryptographic operations for encryption, decryption, or the generation and verification of signatures and checksums. These cryptographic operations only provide security under the following conditions: state-of-the-art algorithms are used, they are implemented proficiently, and they can be replaced by something more secure in the future.

Few people possess the knowledge and mindset to design good encryption algorithms. New algorithms are published and considered secure (at least provisionally) only after cryptanalysis experts have spent years attempting and failing to break them.

Implementing algorithms securely is nearly as tricky. There are countless pitfalls, from choosing the appropriate padding scheme to preventing timing attacks and using cryptographic primitives in the correct mode.

Relying on reputable third-party solutions is a widely accepted best practice.

System Configuration

If embedded devices have configuration parameters that affect their security, they should ship with secure default values. For example, password-protected interfaces should have unique, strong passwords, or users should be forced to change the password before the device starts operating.

Shipping every device with the same default password and merely asking users to change it is far from sufficient. Similarly, the system design must enable encryption by default rather than suggesting that users enable it later.
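Secure defaults can be centralized in one factory-settings function so that no code path forgets them. The configuration fields below are hypothetical examples, not a prescribed set:

```c
#include <stdbool.h>

/* Hypothetical device configuration with secure-by-default values:
 * encryption on, remote debug off, and normal operation blocked until
 * the (unique per-device) password has been changed. */
typedef struct {
    bool encryption_enabled;
    bool remote_debug_enabled;
    bool must_change_password;
} device_config_t;

device_config_t config_factory_defaults(void)
{
    device_config_t c = {
        .encryption_enabled   = true,   /* default ON, not opt-in      */
        .remote_debug_enabled = false,  /* default OFF, must be opt-in */
        .must_change_password = true,   /* forced before operation     */
    };
    return c;
}
```

A factory reset should call the same function, so a reset device is never left in a weaker state than a freshly shipped one.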

System Logging

Logging security-related events is critical for auditing and analyzing security vulnerabilities, but it is challenging to implement. The integrity and confidentiality of logs are security assets that product manufacturers must analyze in threat modeling.

For analyzing security vulnerabilities, it is desirable to include as much information and detail in the logs as possible. However, the limited resources of embedded devices constrain this decision, and if the confidentiality of the log cannot be guaranteed, valuable information may leak. Additionally, some laws restrict the recording of personal data or require its deletion within a short time.

Another aspect the development team must consider is log flooding. If an attacker performs actions that generate a large volume of logs, irrelevant information may overwrite critical information.
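One common defense against log flooding is to rate-limit events per window and count, rather than store, the excess. The following sketch (a simplified, hypothetical limiter; constants are illustrative) shows the idea:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical rate limiter against log flooding: at most LOG_BURST
 * events are recorded per time window; excess events are only counted,
 * so an attacker cannot push earlier entries out of the log by
 * generating noise. */
#define LOG_BURST 5

typedef struct {
    uint32_t window_start; /* timestamp of the current window (seconds) */
    uint32_t count;        /* events recorded in this window */
    uint32_t suppressed;   /* events dropped in this window */
} log_limiter_t;

/* Returns true if the event should be written to the log. */
bool log_limiter_allow(log_limiter_t *l, uint32_t now, uint32_t window_len)
{
    if (now - l->window_start >= window_len) {
        l->window_start = now; /* new window: reset counters */
        l->count = 0;
        l->suppressed = 0;
    }
    if (l->count < LOG_BURST) {
        l->count++;
        return true;           /* record the event */
    }
    l->suppressed++;           /* only count it */
    return false;
}
```

When a new window starts, the suppressed count from the previous window can itself be logged as a single summary entry, preserving the fact that flooding occurred.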

When designing secure logging schemes, it should be considered that logs are often evidence in liability cases between device manufacturers and device owners, and device owners may be interested in manipulating logs.

3. Secure Deployment

The third step in implementing a secure system is to ensure that compiled code, source code, and all documentation are stored on secure media and transmitted through secure channels. Even before the files reach the embedded system, attackers can compromise it by modifying parts of the design, code, or binaries.

IT Environment

Attackers can mount such attacks by accessing or manipulating developers’ machines or the servers hosting the source code management system, or by manipulating the software in transit. Measures to prevent manipulation of developer machines and servers include:

• Appropriate IT access management

• Regular updates of all used software

• Restricting physical access

• Security policies (e.g., computers must be locked when employees are away from their desks)

Software Transmission

Measures to prevent software from being manipulated during transmission include:

• Establishing a secure communication channel using PGP or X.509 certificates, signing (and encrypting) all communications.

• When sending software via email, transmitting cryptographic fingerprints (hashes) of the files through a separate channel (phone, postal mail, encrypted chat, etc.).

• Using data transfer portals protected by TLS, X.509 certificates, authentication, and authorization.

4. Proactive Maintenance

The fourth step in implementing a secure system is to ensure that all components remain secure during operation.

Significant vulnerabilities in protocols, cryptographic algorithms, and libraries may be discovered at any time, and end users of the product must then update their systems. Software component suppliers must therefore establish a process to notify all customers of discovered vulnerabilities, and library users must be able to receive this information so they can create and deploy software updates for their systems.

5. Secure Updates and Secure Boot

The fifth step in implementing a secure system is to ensure the integrity and authenticity of all software executed on the system, which can generally be achieved by the following means: only the manufacturer can install software, the update process is secure, and the boot process is secure.

If end users cannot update the software, systems with known vulnerabilities must be disabled and replaced with improved devices. In some cases, this may be a viable solution, but typical embedded systems require a method to install new firmware.

If an attacker can gain physical access to the embedded system, they may manipulate its firmware; in this case, the boot process itself must be secure. Otherwise, a secure update process, which does not add to the boot time, is sufficient for most embedded systems. Both mechanisms must guarantee:

• Firmware authenticity: The firmware was created by a trusted party.

• Firmware integrity: No one else has modified the firmware.

• Firmware version: Updates must provide a version that is newer than the installed firmware (preventing rollback to vulnerable older versions).

Product manufacturers should use digital signature schemes to verify the authenticity of firmware. Message authentication codes (MACs) provide a similar level of security but require a shared key between the software provider and the embedded device, and multiple devices should not use the same shared key.

The product requires a trust root for the authentication scheme. This trust root is a certificate or public key that the embedded system can use to verify the signature of the firmware.

Authentication schemes typically also verify the integrity of the firmware.

Finally, the boot code must check the software version. It stores the firmware version number in a tamper-proof, secure location and will not install or boot firmware with a lower version number.
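The three checks above can be combined into a single acceptance decision. In the sketch below, a toy checksum stands in for the digital-signature verification a production system must use (the names and the checksum are purely illustrative):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical acceptance check for a firmware update. The toy checksum
 * is a placeholder: a real system must verify a digital signature from
 * a vetted cryptographic library instead. */
typedef struct {
    uint32_t version;  /* must be newer than the installed firmware */
    uint32_t checksum; /* placeholder for a signature over the image */
} fw_header_t;

uint32_t toy_checksum(const uint8_t *img, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31 + img[i];   /* NOT cryptographically secure */
    return sum;
}

bool accept_update(const fw_header_t *h, const uint8_t *img, size_t len,
                   uint32_t installed_version)
{
    if (toy_checksum(img, len) != h->checksum)
        return false;  /* authenticity/integrity check failed */
    if (h->version <= installed_version)
        return false;  /* rollback to an older (vulnerable) version */
    return true;
}
```

Rejecting equal versions as well as lower ones prevents an attacker from endlessly re-installing the current image to mask a failed update.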

Secure Updates

In a secure update scheme, the firmware verifies new software upon receipt and stores it in a trusted storage area (e.g., internal flash). On subsequent boots, the device uses the updated, verified software. Challenges in the update process include that the complete firmware image often does not fit into the temporary buffer used for communication, and that attackers may cut power to prevent security checks from completing.
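A common answer to the power-cut problem is a dual-slot (A/B) layout: the new image goes into the inactive slot and is marked valid only after verification succeeds, so an interrupted update never leaves the device unbootable. This is a simplified, hypothetical sketch, not a complete flash driver:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical A/B update sketch: a power cut at any point leaves a
 * verified, bootable image in place, because the "valid" flag is the
 * single atomic commit point and is set last. */
typedef struct {
    uint32_t version;
    bool     valid;    /* set only after the image has been verified */
} slot_t;

typedef struct {
    slot_t slot[2];
    int    active;     /* slot currently booted */
} flash_t;

/* Returns true when the update is committed. */
bool commit_update(flash_t *f, uint32_t new_version, bool verified)
{
    int spare = 1 - f->active;
    f->slot[spare].valid = false;  /* invalidate before writing */
    /* ... image written to the spare slot here (power may fail) ... */
    if (!verified)
        return false;              /* spare slot stays invalid */
    f->slot[spare].version = new_version;
    f->slot[spare].valid   = true; /* atomic commit point */
    f->active = spare;
    return true;
}

/* Boot selection: prefer the active slot, fall back to the other one. */
int select_boot_slot(const flash_t *f)
{
    if (f->slot[f->active].valid)
        return f->active;
    if (f->slot[1 - f->active].valid)
        return 1 - f->active;
    return -1;                     /* no bootable image */
}
```

The cost of this robustness is doubled image storage, which is why resource-constrained designs sometimes accept a riskier in-place update instead.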

Secure Boot

In a secure boot scheme, the boot code loads and verifies the software at each reboot. This allows the software to be stored on insecure media (external flash, SD cards, network servers, etc.), simplifying the update process and reducing hardware manufacturing costs. However, it requires more memory and increases the duration of every reboot.

The boot code still needs trusted memory to store the firmware loader and the trust root used to verify the firmware’s signature.

After a reset, the boot code loads the firmware into RAM, verifies its authenticity, integrity, and version, decrypts it, and executes it in RAM.

If the application contains security vulnerabilities that allow attackers to modify the boot code, then attackers can gain complete control of the device. There are several hardware methods to prevent such attacks:

• One-time programmable (OTP) memory locks the checksum of the boot code or the trust anchor.

• Hardware Security Modules (HSMs) are dedicated coprocessors with higher privileges and tamper-proof storage.

• Encryption coprocessors with key storage allow the use of keys (for decryption or verification) while preventing any part of the system from reading or modifying the keys.

To utilize this hardware support, systems typically use a three-stage boot process:

• Hardware verifies the boot manager and trust root.

• The boot manager verifies and executes the bootloader.

• The bootloader loads, verifies, decrypts, and executes the application.

The bootloader and boot manager are kept separate. The boot manager is as simple as possible, so it is unlikely to contain errors and rarely needs updating. The bootloader contains all the complex device drivers and network protocols, making it more likely to contain errors; however, since it is not anchored in hardware, it is easier to update.
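The three stages form a chain of trust in which each stage is verified before control is handed to it. The sketch below models that chain generically; the `verify` callbacks are stand-ins for the signature or hash checks anchored in hardware (OTP memory, HSM), and all names are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical chain-of-trust walker: each stage is verified before it
 * runs; the boot halts at the first stage that fails verification. */
typedef struct {
    const char *name;
    bool (*verify)(void); /* stand-in for cryptographic verification */
} stage_t;

/* Returns the index of the first failing stage, or -1 if every stage
 * in the chain verifies successfully. */
int boot_chain_run(const stage_t *stages, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!stages[i].verify())
            return (int)i;  /* halt the boot at the failing stage */
    return -1;
}

/* Example verifiers for demonstration only. */
bool verify_ok(void)   { return true;  }
bool verify_fail(void) { return false; }
```

Halting at the failing stage, rather than skipping it, is what keeps a compromised bootloader from ever receiving control.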

Conclusion

In a constantly changing threat environment, considering the security of products during the design phase is crucial. This article has provided an overview of the challenges of secure embedded software development; this knowledge will help you ensure the security and integrity of your products from the planning stage onward.

Flexible Safety RTOS is a functional safety pre-certified operating system based on μC/OS-II. BMR Tech is the agent for Flexible Safety RTOS in China, with over 20 years of market, service, and training experience in embedded real-time operating systems and functional safety software. Contact [email protected].
