This article discusses the design philosophy of the Wind kernel. As mentioned earlier, VxWorks' Wind kernel adopts a customizable microkernel design, featuring concurrent multitasking, preemptive priority scheduling, optional time-slice scheduling, inter-task communication and synchronization mechanisms, fast context switching, low interrupt latency, quick interrupt response, interrupt nesting, 256 priority levels, priority inheritance, and a task deletion protection mechanism. The Wind kernel operates in privileged mode, uses neither trap instructions nor jump tables, and implements all system calls as function calls.
The Wind kernel is a strongly real-time operating system kernel, but like other mature operating systems it is not a "hard real-time" kernel. "Hard real-time" means that when a certain event occurs, the system must respond within a predetermined time, before the deadline, or catastrophic consequences follow. An operating system with this property must either commit to every submitted job and its timing requirements, or refuse the job immediately (so that the submitter can consider other actions). Jobs submitted to a hard real-time system may also carry no timing requirements; the commitments for such jobs naturally include no timing factors, and they are treated as background jobs.
VxWorks does not use deadline-based scheduling algorithms; that is, it does not accept timing requirements attached to submitted jobs. To improve real-time performance it instead uses priority preemptive scheduling, ensuring that high-priority tasks run first. Thus, given sufficient computational resources, appropriate task partitioning and priority assignment can satisfy real-time requirements.
Note: Some people online claim that the VxWorks kernel is a hard real-time kernel; this is inaccurate. Strictly speaking, VxWorks is a real-time system whose real-time behavior is achieved, under given computational resources, through appropriate task partitioning and priority assignment. The same holds for systems such as uC/OS and FreeRTOS.
2.1 Wind Kernel Structure
To improve system real-time performance, most operating systems provide various mechanisms, such as preemptible kernels and interrupt handling. VxWorks has constructed a layered structure with a microkernel to enhance real-time performance.
In the early days of computing, most operating systems were monolithic. In such systems, functional modules such as processor management, memory management, and file management were designed independently, with little regard for the relationships between them. A monolithic structure is clear and simple to construct, but because of the operating system's inherent complexity it is difficult, at such coarse granularity, to delineate the boundary between the preemptible and non-preemptible portions. Redundant operations inevitably end up in the non-preemptible part, resulting in poor real-time performance and making such systems unsuitable for real-time application environments.
In contrast, operating systems with a microkernel layered structure better address this issue. In such operating systems, the kernel serves as the starting point of the hierarchy, with each layer encapsulating the functions of the lower layers. The kernel only needs to include the most critical operating instructions, providing an abstraction layer between higher-level software and lower-level hardware, forming the minimal operation set required for the other parts of the operating system. This makes it easier to accurately define the boundaries between preemptible and non-preemptible parts, reducing the operations that need to be executed in the non-preemptible portion of the kernel, facilitating faster kernel preemption and improving system real-time performance.
The best internal structural model for an operating system is hierarchical, with the kernel at the bottom. The layers can be viewed as an inverted pyramid, each built upon the functions of the layer beneath it. The kernel contains only the most important low-level functions the operating system executes. As in a monolithic operating system, the kernel provides an abstraction layer between high-level software and low-level hardware; unlike one, however, it provides only the minimal set of operations needed to construct the rest of the operating system.
The Wind kernel of VxWorks is such a microkernel, designed to support a layered structure. It consists of the kernelLib, taskLib, semLib, tickLib, wdLib, schedLib, workQLib, windLib, windALib, semALib, and workQALib libraries. Of these, kernelLib, taskLib, semLib, tickLib, and wdLib form the basic functionality of the VxWorks kernel and are the most fundamental, core functions of the Wind kernel. In such a kernel it is easy to achieve strong real-time performance, and the higher-level encapsulation means that VxWorks application developers need only call top-level functions, without concern for the underlying implementation, which makes program design very convenient. The VxWorks kernel structure can be logically divided into three layers, as shown in Figure 2.1.
Figure 2.1 VxWorks Kernel Hierarchical Structure
The routines guarded by the global variable kernelState constitute the kernel state of the Wind kernel. When kernelState is TRUE, code is currently running in kernel mode. The essence of VxWorks' kernel mode is to protect kernel data structures by preventing multiple threads of control from accessing them simultaneously; this differs from the concept of kernel mode in general-purpose operating systems.
Entering kernel mode requires only setting the global variable kernelState to TRUE; from that point, all kernel data are protected from contention. When the kernel operation ends, VxWorks resets kernelState to FALSE via the windExit() routine. Consider who might compete for kernel data structures while in kernel mode: interrupt service routines are the only possible initiators that can request additional kernel work while the Wind kernel is already in kernel mode. In other words, once the system is in kernel mode, the only path by which new kernel service requests can arrive is through interrupt service routines.
Before kernelState is set, it is always checked first; this check-and-set is the essence of the mutual exclusion mechanism. The Wind kernel uses deferred work as its means of mutual exclusion: when kernelState is TRUE, work to be done is not executed immediately but is placed on a deferred work queue. This kernel work queue is drained only by windExit() before it restores the context of the next task to run (or by intExit() when an interrupt ISR is the first to enter kernel mode). At the moment kernel mode is about to end, interrupts are disabled while windExit() checks whether the work queue is empty and then enters the selected task context.
As mentioned earlier, the Wind kernel uses the global variable kernelState to simulate a privileged state in software, prohibiting task preemption while in that state. The privileged-state routines are in the windLib library; executing a windLib routine enters kernel mode (kernelState = TRUE) and gains mutually exclusive access to all kernel queues. Routines in windLib are free to manipulate kernel data structures. Kernel mode is a powerful mutual exclusion tool, and preemption is prohibited for as long as it lasts. Because high preemption latency undermines the responsiveness of a real-time system, this mechanism must be used very conservatively (currently, the only open-source RTOS using this design mechanism is RTEMS). In fact, the design philosophy of a microkernel is to keep the kernel small enough while retaining the capability to support higher-level applications. Do you remember what I said in the previous chapter? “A beautiful kernel is not about what functions can be added, but what functions can be reduced.”
When in the privileged state, the Wind kernel keeps interrupts enabled, which means the kernel can still respond to external interrupts. The innovation of the Wind kernel lies in its work queues. Since kernel mode is mutually exclusive, when an interrupt occurs and no code currently holds kernel mode, the interrupt ISR first enters kernel mode itself (setting kernelState to TRUE) and then performs its kernel work; if kernel mode is already occupied (kernelState = TRUE), the work requested by the interrupt ISR is placed on the kernel work queue and the ISR returns immediately. When the code occupying kernel mode exits (by calling the windExit() routine), the jobs on the kernel work queue are executed and the ISR's request is processed (I will analyze this in detail, with code, in subsequent blog posts).
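This deferred-work pattern can be sketched in C roughly as follows. It is a simplified illustration rather than actual Wind kernel source: only kernelState and windExit() are names taken from the text above; the queue layout and the routines workQAdd(), workQDoWork(), and kernelRequestFromIsr() are assumptions made for the sketch.

```c
#include <stdbool.h>
#include <stddef.h>

/* --- hypothetical deferred-work queue -------------------------------- */
typedef void (*KERNEL_JOB)(void *arg);

#define WORK_Q_SIZE 16
static struct { KERNEL_JOB job; void *arg; } workQ[WORK_Q_SIZE];
static size_t workQHead, workQTail;

/* TRUE while any code is executing inside the Wind kernel. */
static volatile bool kernelState = false;

static void workQAdd(KERNEL_JOB job, void *arg)    /* enqueue deferred job */
{
    workQ[workQTail].job = job;
    workQ[workQTail].arg = arg;
    workQTail = (workQTail + 1) % WORK_Q_SIZE;     /* overflow unchecked here */
}

static bool workQDoWork(void)                      /* run one job if queued */
{
    if (workQHead == workQTail)
        return false;
    workQ[workQHead].job(workQ[workQHead].arg);
    workQHead = (workQHead + 1) % WORK_Q_SIZE;
    return true;
}

/* Exit kernel mode: drain deferred jobs, then release the kernel.
 * In the real kernel, interrupts are locked around the final
 * empty-check and the clearing of kernelState. */
static void windExit(void)
{
    while (workQDoWork())
        ;
    kernelState = false;
}

/* Called from an ISR that needs kernel services. */
static void kernelRequestFromIsr(KERNEL_JOB job, void *arg)
{
    if (kernelState) {         /* kernel busy: defer the work, return at once */
        workQAdd(job, arg);
        return;
    }
    kernelState = true;        /* kernel idle: the ISR enters kernel mode */
    job(arg);
    windExit();
}
```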
The kernel state routines of the Wind kernel, which run under the protection of kernelState, are in the windLib library; their names begin with wind*, as shown in Figure 2.2.
Figure 2.2 Schematic Diagram of VxWorks Kernel State Routines
Among them:
- * indicates that the routine cannot be called from an interrupt ISR and may be called only at task level;
- @* indicates that the routine can be called from an interrupt ISR;
- # indicates that the routine is used internally by the Wind kernel;
- @ indicates that the routine can run in kernel mode.
The components of the VxWorks system are shown in Figure 2.3.
Figure 2.3 Composition of the VxWorks System
2.2 Classes and Objects of the Wind Kernel
The VxWorks Wind kernel organizes its five components (the task management, memory management, message queue management, semaphore management, and watchdog management modules) using the concepts of classes and objects.
In the Wind kernel, all objects are members of classes, with classes defining the methods (Method) for operating on their objects and keeping records of those operations. The Wind kernel adopts the semantics of C++ but is implemented in C. The entire kernel is implemented through explicit coding, and its compilation does not depend on a specific compiler: it can be built not only with Wind River's bundled Diab compiler but also with the open-source GNU GCC compiler. VxWorks designed a meta-class (Meta-class) for the Wind kernel, and all object classes (Obj-class) are based on this meta-class. Each object class maintains its own objects' (Object) operation methods (such as creating, initializing, and unregistering objects) and management statistics (such as counts of created and destroyed objects). The class management model is not a feature of the VxWorks kernel per se; it is a component of the operating system, but all kernel objects depend on it. The relationships between the object classes, objects, and meta-class of the Wind kernel's components are shown in Figure 2.4.
Figure 2.4 Schematic Diagram of the Relationship Between Object Classes, Objects, and Meta-Classes
Note: By adopting the design philosophy of objects and classes, the components of the VxWorks Wind kernel can be organically organized, making it easy to verify the correctness of instance types when creating instances of the same component, and all component object classes originate from the base class, maintaining records of operations on all objects.
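Figure 2.4's relationships can be approximated in C along the following lines. This is a hypothetical, simplified sketch: the struct names, fields, and objVerify() are illustrative assumptions, not the actual Wind kernel definitions.

```c
/* Every kernel object embeds an object core that points to its class. */
struct obj_class;                          /* forward declaration */

struct obj_core {
    struct obj_class *pObjClass;           /* class this object belongs to */
};

/* An object class: a method table plus statistics. A class is itself an
 * object whose core points to the meta-class. */
struct obj_class {
    struct obj_core objCore;               /* points to the meta-class      */
    int    objSize;                        /* size of one object instance   */
    int    objAllocCnt;                    /* statistics: objects created   */
    int    objFreeCnt;                     /* statistics: objects destroyed */
    void *(*createRtn)(void);              /* method: create an object      */
    int   (*initRtn)(void *obj);           /* method: initialize an object  */
    int   (*destroyRtn)(void *obj);        /* method: destroy an object     */
};

/* The meta-class is the class of all object classes; its core points
 * back to itself, terminating the hierarchy of Figure 2.4. */
static struct obj_class metaClass = { .objCore = { &metaClass } };

/* Instance-type verification at creation time: check that an object
 * really belongs to the class a routine expects to operate on. */
static int objVerify(struct obj_core *pCore, struct obj_class *pClass)
{
    return (pCore->pObjClass == pClass) ? 0 : -1;   /* OK or ERROR */
}
```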
2.3 Features of the Wind Kernel
Multitasking: The basic function of the kernel is to provide a multitasking environment. Multitasking allows many programs to appear to execute concurrently, while in fact the kernel interleaves their execution according to a basic scheduling algorithm. Each apparently independent program is called a task. Each task has its own context: the CPU environment and system resources it sees whenever the kernel schedules it to run.
Task State: The kernel maintains the current state of each task in the system. State transitions occur when applications invoke kernel service functions. The states of the Wind kernel are defined as follows:
- Ready State: the task is not waiting for any resource other than the CPU.
- Blocked State: the task is blocked because some resource is unavailable.
- Delayed State: the task is asleep for a period of time.
- Suspended State: an auxiliary state used mainly for debugging; suspension prohibits task execution.
Once created, a task enters the suspended state and must be explicitly activated into the ready state. Activation is very fast, allowing applications to create tasks in advance and activate them promptly.
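In the VxWorks task API, this corresponds to creating a task and then activating it; taskSpawn() performs both steps at once, while the lower-level taskInit()/taskActivate() pair separates them. A minimal sketch using documented taskLib calls (the task name, priority, and stack size are arbitrary example values):

```c
#include <vxWorks.h>
#include <taskLib.h>

static int workerEntry(void)      /* example task entry point */
{
    for (;;)
        taskDelay(60);            /* delayed state: sleep 60 ticks */
}

void demoTaskStates(void)
{
    /* taskSpawn() creates AND activates in one call: name, priority
     * (0-255, 0 highest), options, stack size, entry point, 10 args. */
    int tid = taskSpawn("tWorker", 100, 0, 4096,
                        (FUNCPTR)workerEntry, 0,0,0,0,0,0,0,0,0,0);

    taskSuspend(tid);             /* suspended state (e.g., for debugging) */
    taskResume(tid);              /* back to the ready state */
}
```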
Scheduling Control: Multitasking requires a scheduling algorithm to allocate CPU to ready tasks. In VxWorks, the default scheduling algorithm is priority-based preemptive scheduling, but applications can also choose to use time-slice round-robin scheduling.
Priority-Based Preemptive Scheduling: In priority-based preemptive scheduling, each task is assigned a priority, and the kernel allocates CPU to the task with the highest priority that is in the ready state. The scheduling is preemptive because when a task with a higher priority becomes ready, the kernel immediately saves the current task’s context and switches to the higher-priority task’s context. VxWorks has a total of 256 priority levels, ranging from 0 to 255. When created, a task is assigned a priority, and during the execution of the task, its priority can be dynamically modified to track real-world event priorities. External interrupts are assigned a higher priority than any task, allowing them to preempt a task at any time.
Time-Slice Round-Robin: Priority-based preemptive scheduling can be augmented with time-slice round-robin scheduling, which lets ready tasks of the same priority share the CPU fairly. Without it, when multiple tasks of one priority share the processor, a single task may monopolize the CPU until preempted by a higher-priority task, never giving its peers a chance to run. With time slicing enabled, the running task's time-slice counter is incremented on each clock tick; when the allotted slice is exhausted, the counter is reset and the task is placed at the end of the queue of tasks of its priority. New tasks joining a priority group are placed at the end of that group with their counter initialized to zero.
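Both policies are selected at run time through documented kernel calls; a brief sketch (the 50 ms slice and the priority value 90 are arbitrary examples):

```c
#include <vxWorks.h>
#include <kernelLib.h>
#include <sysLib.h>
#include <taskLib.h>

void demoScheduling(int tid)
{
    /* Enable round-robin among equal-priority ready tasks with a
     * 50 ms time slice; kernelTimeSlice(0) disables it again. */
    kernelTimeSlice(sysClkRateGet() / 20);

    /* Priorities may be changed dynamically to track real-world events. */
    taskPrioritySet(tid, 90);
}
```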
Basic Task Functions: The basic task functions for state control include creating, deleting, suspending, and waking up a task. A task can also put itself to sleep for a specific interval. Many other task routines return state information drawn from the task context, including access to a task's current processor registers.
Task Deletion Issues: The Wind kernel provides mechanisms to prevent tasks from being deleted accidentally. Typically, a task executing in a critical section or holding a critical resource needs special protection. Imagine the following scenario: a task has gained mutually exclusive access to some data structure and is deleted by another task while still inside the critical section. Because the task cannot finish its operations, the data structure may be left corrupted or inconsistent. Moreover, if the deleted task never had the chance to release the resource, no other task can ever obtain it: the resource is frozen.
Any task attempting to delete or terminate a task that has deletion protection will be blocked. Once the protected task completes its critical section operation, it will cancel its deletion protection to allow itself to be deleted, thereby unblocking the deletion task.
As shown above, task deletion protection is typically accompanied by mutual exclusion operations. Thus, for convenience and efficiency, mutual exclusion semaphores include a deletion protection option (I will elaborate on this in subsequent blog posts).
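At the task level this protection is the taskSafe()/taskUnsafe() pair; a minimal sketch of the protect-then-allow-deletion pattern (criticalWork() is a placeholder for the critical-section code):

```c
#include <vxWorks.h>
#include <taskLib.h>

extern void criticalWork(void);   /* placeholder for critical-section code */

void demoDeleteSafety(void)
{
    taskSafe();                   /* would-be deleters of this task now block */
    criticalWork();               /* resource cannot be left inconsistent */
    taskUnsafe();                 /* allow deletion again; a blocked
                                   * deleter may proceed at this point */
}
```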
Inter-Task Communication: To provide complete multitasking system functionality, the Wind kernel offers a rich set of inter-task communication and synchronization mechanisms. These communication functions allow independent tasks within an application to coordinate their activities.
Shared Address Space: The foundation of inter-task communication in the Wind kernel is the shared address space of all tasks. With a shared address space, tasks communicate freely through pointers to shared data structures; there is no need to first map a memory region into the address spaces of the two communicating tasks.
Note: Unfortunately, while a shared address space has the advantages above, it also brings the danger of unprotected concurrent access to shared memory. UNIX and Linux provide such protection through process isolation, but at a performance cost that is significant for a real-time operating system.
Mutual Exclusion Operations: While a shared address space simplifies data exchange, it makes mutually exclusive access necessary to avoid resource contention. The many mechanisms for obtaining mutually exclusive access to a resource differ only in the scope of the exclusion. Methods include disabling interrupts, prohibiting task preemption, and locking the resource with semaphores; the three are contrasted in the code sketch after this list.
- Disabling Interrupts: The strongest mutual exclusion method is to mask interrupts. Such a lock guarantees exclusive access to the CPU. It certainly solves the mutual exclusion problem, but it is inappropriate for a real-time system because it prevents the system from responding to external events for the duration of the lock. Long interrupt latencies are unacceptable for applications that require deterministic response times.
- Prohibiting Preemption: Prohibiting preemption provides a weaker form of mutual exclusion. While the current task executes, no other task may preempt it, but interrupt service routines can still run. Like disabling interrupts, this can lead to poor real-time response: blocked tasks may experience significant preemption latency, and ready high-priority tasks may be forced to wait an unacceptably long time before executing. To avoid this, semaphores should be used for mutual exclusion whenever possible.
- Mutual Exclusion Semaphores: Semaphores are the basic means of locking access to shared resources. Unlike disabling interrupts or prohibiting preemption, a semaphore limits the scope of the mutual exclusion to the resource concerned. A semaphore is created to protect each resource. VxWorks' semaphores follow Dijkstra's P() and V() operation model.
When a task requests a semaphore, the P() operation will result in two scenarios based on the semaphore’s set or cleared state at the time of the call. If the semaphore is set, it will be cleared, and the task will continue executing immediately. If the semaphore is cleared, the task will be blocked waiting for the semaphore.
When a task releases a semaphore, the V() operation can result in several scenarios. If the semaphore is already set, releasing it has no effect. If the semaphore is cleared and no tasks are waiting for that semaphore, it is simply set. If the semaphore is cleared and one or more tasks are waiting for that semaphore, the highest priority task is unblocked, and the semaphore remains cleared.
By associating some resources with semaphores, mutual exclusion operations can be implemented. When a task wants to operate on a resource, it must first obtain the semaphore. As long as the task holds the semaphore, all other tasks are blocked from accessing that resource due to requesting that semaphore. When a task is done using that resource, it releases the semaphore, allowing another task waiting for that semaphore to access the resource.
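The three mechanisms above map onto VxWorks calls roughly as in the sketch below; the shared counter stands in for any shared resource, and error checking is omitted:

```c
#include <vxWorks.h>
#include <intLib.h>
#include <taskLib.h>
#include <semLib.h>

static int sharedCount;                 /* stand-in shared resource */
static SEM_ID mutex;                    /* protects sharedCount */

void demoMutualExclusion(void)
{
    /* 1. Disabling interrupts: strongest, use only for very short spans. */
    int key = intLock();
    sharedCount++;
    intUnlock(key);

    /* 2. Prohibiting preemption: ISRs still run, other tasks do not. */
    taskLock();
    sharedCount++;
    taskUnlock();

    /* 3. Semaphore: exclusion scoped to this one resource only. */
    mutex = semBCreate(SEM_Q_PRIORITY, SEM_FULL);  /* full = available */
    semTake(mutex, WAIT_FOREVER);                  /* the P() operation */
    sharedCount++;
    semGive(mutex);                                /* the V() operation */
}
```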
The Wind kernel's mutual exclusion semaphores (a specialized form of binary semaphore) resolve the problems that arise in mutual exclusion operations. These problems include deletion protection for resource owners and priority inversion caused by resource contention:
- Deletion Protection: One problem related to mutual exclusion involves task deletion. In a critical section protected by a semaphore, the executing task must be protected from accidental deletion. Deleting a task executing in a critical section is catastrophic: the resource may be left corrupted, and the semaphore protecting it becomes unavailable, leaving the resource inaccessible. Deletion protection is therefore typically provided together with mutual exclusion. For this reason, mutual exclusion semaphores offer an option that implicitly provides the deletion protection mechanism described earlier.
- Priority Inversion / Priority Inheritance: Priority inversion occurs when a high-priority task is forced to wait an indeterminate time for a lower-priority task to complete. Consider the following scenario (introduced earlier and re-explained here).
T1, T2, and T3 are high-, medium-, and low-priority tasks respectively. T3 holds a semaphore and thereby owns the associated resource. When T1 preempts T3 and requests the same semaphore to compete for that resource, it blocks. If T1 were blocked only until T3 finished with the resource, the situation would not be too bad; after all, the resource cannot be preempted. However, the low-priority T3 cannot avoid preemption by the medium-priority T2, and such a preemption prevents T3 from completing its use of the resource, potentially blocking T1 indefinitely. This is called priority inversion because, although the system schedules by priority, a high-priority task ends up waiting for a lower-priority one to finish.

Mutual exclusion semaphores offer an option that implements the priority inheritance algorithm. Priority inheritance solves the inversion problem by raising T3's priority, while T1 is blocked, to that of the highest-priority task waiting for the resource; this prevents T3, and indirectly T1, from being preempted by T2. In short, the priority inheritance protocol lets a task holding a resource execute at the priority of the highest-priority task waiting for that resource; when it finishes and releases the resource, it returns to its normal priority. A task inheriting priority thus avoids preemption by any intermediate-priority task (see the sketch below).
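In VxWorks, both deletion safety and priority inheritance are creation options of the mutual exclusion semaphore; a minimal sketch (error handling omitted):

```c
#include <vxWorks.h>
#include <semLib.h>

void demoMutexOptions(void)
{
    /* Mutex whose owner inherits the priority of the highest-priority
     * waiter (SEM_INVERSION_SAFE) and cannot be deleted while holding
     * it (SEM_DELETE_SAFE). Both options require SEM_Q_PRIORITY. */
    SEM_ID mutex = semMCreate(SEM_Q_PRIORITY
                              | SEM_INVERSION_SAFE
                              | SEM_DELETE_SAFE);

    semTake(mutex, WAIT_FOREVER);   /* own the resource; protected from
                                     * deletion and priority inversion */
    /* ... critical section ... */
    semGive(mutex);
}
```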
Synchronization: Another common use of semaphores is as a synchronization mechanism between tasks. In this case, the semaphore represents a condition or event that a task is waiting for. Initially, the semaphore is in a cleared state. A task or interrupt indicates the occurrence of an event by setting that semaphore. The task waiting for that semaphore will be blocked until the event occurs and the semaphore is set. Once unblocked, the task executes the appropriate event handling routine. The application of semaphores in task synchronization is useful for freeing interrupt service routines from lengthy event processing to shorten interrupt response times.
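A typical sketch of this synchronization pattern, with the binary semaphore created empty so the waiting task blocks until the ISR signals it (handleEvent() is a placeholder for the lengthy processing):

```c
#include <vxWorks.h>
#include <semLib.h>

static SEM_ID evtSem;

extern void handleEvent(void);        /* placeholder event processing */

void evtInit(void)
{
    /* Created empty: the first semTake() blocks until the event occurs. */
    evtSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
}

void evtIsr(void)                     /* interrupt service routine */
{
    semGive(evtSem);                  /* signal the event; the ISR stays short */
}

void evtTask(void)                    /* task-level event handler */
{
    for (;;) {
        semTake(evtSem, WAIT_FOREVER);   /* block until the ISR signals */
        handleEvent();                   /* lengthy processing at task level */
    }
}
```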
Message Queues: Message queues provide a lower-level mechanism for exchanging variable-length messages among tasks, and between interrupt service routines and tasks. This mechanism is functionally similar to pipes but carries less overhead.
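A minimal sketch using msgQLib (the queue depth, message size, and message text are arbitrary example values):

```c
#include <vxWorks.h>
#include <msgQLib.h>

void demoMsgQ(void)
{
    /* Queue of up to 10 messages, each at most 64 bytes. */
    MSG_Q_ID q = msgQCreate(10, 64, MSG_Q_FIFO);

    char out[] = "sensor ready";
    msgQSend(q, out, sizeof(out), WAIT_FOREVER, MSG_PRI_NORMAL);

    char in[64];
    int n = msgQReceive(q, in, sizeof(in), WAIT_FOREVER);
    (void)n;                        /* n = number of bytes received */
}
```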
Pipes, TCP/IP sockets, remote procedure calls, and many other VxWorks mechanisms provide higher-level abstractions for inter-task communication. In keeping with the design goal of trimming the kernel to a minimal set of functions sufficient to support higher levels, these features are all built on the kernel synchronization methods described above.
2.4 Advantages of Wind Kernel Design
One important design feature of the Wind kernel is minimal preemption latency. Other major design advantages include unprecedented configurability, scalability for unforeseen application demands, and portability across various microprocessor application developments.
Minimal Preemption Latency: As discussed earlier, prohibiting preemption is the usual means of achieving mutual exclusion operations for critical resources. The unintended negative impact of this technique is high preemption latency, which can be minimized by using semaphores for mutual exclusion and keeping critical sections as tight as possible. However, even widespread use of semaphores cannot address all potential causes of preemption latency. The kernel itself is a source of preemption latency. To understand why, we must better comprehend the mutual exclusion operations required by the kernel.
Kernel-Level and Task-Level: In any multitasking system, a significant amount of application activity occurs in the context of one or more tasks. However, some CPU time is spent outside any task context: the intervals in which the kernel manipulates internal queues or decides task scheduling. During these intervals, the CPU executes at kernel level rather than task level.
To safely operate its internal data structures, the kernel must have mutual exclusion operations. At the kernel level, there is no relevant task context, and the kernel cannot use semaphores to protect internal linked lists. The kernel uses delayed work as a means of achieving mutual exclusion. When kernel involvement occurs, functions called by interrupt service routines are not activated directly but are placed in the kernel’s work queue. The kernel completes the execution of these requests by clearing the kernel work queue.
While the kernel is executing requested services, new requests arriving at the kernel are not serviced immediately. One can simply think of kernel state as akin to prohibiting preemption. As discussed previously, preemption latency is undesirable in a real-time system because it increases the response time to events that should cause application tasks to be rescheduled. Although it is impossible to avoid spending some time at kernel level (where preemption is prohibited), it is very important to minimize that time. This is the main reason for reducing the number of functions the kernel executes, and also the reason for not adopting a monolithic system design.
VxWorks demonstrates that a minimal kernel designed for task-level operating system services can meet demands. VxWorks is now an independent, fully functional hierarchical real-time operating system with a relatively small kernel that can run on any processor.
The VxWorks system provides a wealth of functionality on top of the Wind kernel. It includes memory management, a complete BSD 4.3 network stack with TCP/IP, network file systems (NFS), remote procedure calls (RPC), UNIX-compatible module linking and loading, a C-language interpreter interface, various timers, performance monitoring components, debugging tools, additional communication facilities such as pipes, signals, and sockets, I/O and file systems, and many utility routines. None of these run at kernel level, so none of them disable interrupts or prohibit task preemption.
Configurability: Real-time applications place varied requirements on a kernel, and no single design is the optimal compromise for all of them. A kernel can, however, be tuned through configuration to optimize particular performance characteristics and trim the real-time system to best fit the application. This configurability is offered to applications in the form of user-selectable kernel queuing algorithms.
Queuing Strategies: The queuing library in VxWorks is implemented independently of the kernel functions that use it, providing the flexibility to add new queuing methods in the future.
In VxWorks, there are various kernel queues. The ready queue is a priority-indexed queue of all tasks waiting to be scheduled. The tick queue is used for timing functions. The semaphore queue is a linked list of blocked tasks waiting for semaphores. The active queue is a first-in-first-out (FIFO) list of all tasks in the system. Each of these queues requires a different queuing algorithm. These algorithms are not embedded in the kernel but are extracted into an autonomous, convertible queuing library. This flexible organizational form is the basis for meeting specific configuration demands.
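The idea can be sketched as a queue "class" of function pointers that each kernel queue instantiates with a different algorithm. This is an illustrative sketch only: the type and member names are assumptions, not the actual VxWorks qLib definitions.

```c
/* A queue class: one set of operations implementing a queuing algorithm. */
typedef struct q_class {
    void  (*putRtn)(void *pQ, void *pNode, int key);  /* insert a node */
    void *(*getRtn)(void *pQ);                        /* remove a node */
    void  (*removeRtn)(void *pQ, void *pNode);        /* unlink a node */
} Q_CLASS;

/* A queue head binds a class (algorithm) to per-queue storage. */
typedef struct q_head {
    Q_CLASS *pQClass;     /* e.g., priority-indexed, FIFO, tick-ordered */
    void    *pQPriv;      /* algorithm-specific data */
} Q_HEAD;

/* Kernel code manipulates queues only through the class, so the ready
 * queue, tick queue, and semaphore queues can each pick an algorithm. */
static inline void qPut(Q_HEAD *pQ, void *pNode, int key)
{
    pQ->pQClass->putRtn(pQ->pQPriv, pNode, key);
}
```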
Scalability: The ability to support unforeseen kernel expansion is as important as having functional configurability. Simple kernel interfaces and mutual exclusion methods make it relatively easy to extend kernel-level functionality; in some cases, applications can leverage kernel hooks to achieve specific extensions.
Internal Hook Functions: To add task-related functionality to the system without modifying the kernel, VxWorks provides hook functions for task creation, switching, and deletion. These allow additional routines to be executed whenever a task is created, context-switched, or deleted. Hooks can use spare fields in the task context to add new task-level features to the Wind kernel.
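The public API for this is taskHookLib; a minimal sketch that counts context switches (the hook and counter names are example choices):

```c
#include <vxWorks.h>
#include <taskLib.h>
#include <taskHookLib.h>

static unsigned long switchCount;     /* example: count context switches */

/* A switch hook receives the outgoing and incoming task control blocks. */
static void mySwitchHook(WIND_TCB *pOldTcb, WIND_TCB *pNewTcb)
{
    (void)pOldTcb;
    (void)pNewTcb;
    switchCount++;                    /* runs on every context switch */
}

void installHooks(void)
{
    taskSwitchHookAdd((FUNCPTR)mySwitchHook);
    /* taskCreateHookAdd() and taskDeleteHookAdd() register hooks for
     * task creation and deletion in the same way. */
}
```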
Future Considerations: Many system functions are becoming increasingly important and will affect preemption latency in kernel design. A complete discussion of these issues is beyond the scope of this blog post, but they are worth mentioning briefly.
Designing a CPU-independent operating system has always been a challenge. As new RISC (Reduced Instruction Set) processors become popular, these difficulties increase. To execute effectively in a RISC environment, the kernel and operating system need flexibility in executing different strategies.
For example, consider the routines the kernel executes during a task switch. On CISC (complex instruction set, e.g., 680x0 or 80x86) CPUs, the kernel stores a complete register set for each task and swaps those registers in and out as tasks run. On a RISC machine this is unreasonable because of the number of registers involved, so the kernel needs a more sophisticated strategy, such as caching task registers or allowing applications to dedicate certain registers to particular tasks.
Portability: For the Wind kernel to run on new architectures, a portable version of the kernel is required. This makes porting feasible, though not optimal.
Multiprocessing: Supporting tightly coupled multiprocessing requires that the real-time kernel's internal functions support, ideally, remote kernel calls from one processor to another. These include semaphore calls (for inter-processor synchronization) and task calls (to control tasks on another CPU). This complexity inevitably increases the overhead of kernel-level function calls, though many services, such as object identification, can execute at task level. The advantage of keeping a minimal kernel in a multiprocessor system is that inter-processor interlocks can have finer time granularity; a large kernel consumes extra time at kernel level and can achieve only coarse-grained interlock timing.
Important Metrics of Real-Time Kernels: Many performance characteristics are used to compare existing real-time kernels, including:
- Fast Task Context Switching: due to the multitasking nature of real-time systems, the ability to switch quickly from one task to another is crucial. In time-sharing systems such as UNIX, a context switch takes on the order of milliseconds; the Wind kernel's raw context switch time is measured in microseconds.
- Minimal Synchronization Overhead: since synchronization is the fundamental means of obtaining mutually exclusive access to resources, the overhead these operations introduce must be minimized. In VxWorks, taking and giving a binary semaphore is likewise measured in microseconds.
- Minimal Interrupt Latency: since events from the outside world usually arrive as interrupts, handling them quickly is important. The kernel must disable interrupts while operating on critical data structures; to reduce interrupt latency, this time must be minimized. The interrupt latency of the Wind kernel is also at the microsecond level.
Note: Specific numerical performance metrics can only be obtained after direct measurement on specific target boards.
The impact of preemption latency on performance metrics: As more real-time products are offered to application engineers, performance metrics become increasingly important in evaluating vendors. Unlike context switch time and interrupt latency, preemption latency is difficult to measure, and it is therefore rarely quoted in specifications. Yet when a kernel may prohibit context switching for hundreds of microseconds, claiming a fixed 50-microsecond context switch time (regardless of the number of tasks) is meaningless. Besides being hard to measure, preemption latency can undermine the validity of many published metrics. The Wind kernel minimizes preemption latency by keeping the kernel small; a kernel with numerous functions inevitably incurs longer preemption latency.
In conclusion, I have provided a coarse-grained introduction to the Wind kernel. In the next article, I will elaborate on specific aspects of the Wind kernel, combined with code.