Introduction to VxWorks
Before learning VxWorks, readers are expected to have a foundation in computer networks, operating systems, C/C++, and network communication; reading CSAPP is a good way to build that foundation.

VxWorks is a high-performance real-time operating system from Wind River Systems in the USA and is now widely used in large-scale projects; TP-Link, for example, ships VxWorks on some of its routers.

Real-time systems are divided into hard real-time and soft real-time: VxWorks supports hard real time, while Linux generally provides only soft real time. In a real-time operating system, the primary concern is whether each task completes within a defined time frame. In short, the biggest difference between a real-time and a time-sharing operating system is the concept of a deadline. A hard real-time system does not allow any deadline to be missed: a timeout error can cause damage, lead to system failure, or prevent the system from achieving its intended goal. The deadlines in a soft real-time system are flexible and can tolerate occasional timeout errors, whose consequences are not severe; in a network, for example, a miss may only slightly reduce the system's throughput.
1. Multitasking
Modern real-time systems are built on the foundation of multitasking and inter-task communication. A multitasking environment allows a real-time application to be constructed as a set of independent tasks, each with its own thread of execution and its own set of system resources. To coordinate their behavior, inter-task communication mechanisms allow tasks to synchronize and communicate; in VxWorks these mechanisms include semaphores, message queues, pipes, and network sockets. Handling interrupts is another major function of a real-time system, since interrupts are an important way to notify the system of external events. To achieve faster interrupt response, interrupt service routines (ISRs) in VxWorks execute in a special context of their own, outside of any task context. The VxWorks real-time kernel, Wind, provides the basic multitasking environment and the corresponding scheduling algorithms. Each task has its own context, which is stored in the task control block (TCB). The TCB includes content such as:

- the program counter (PC);
- CPU registers and, optionally, floating-point registers;
- a stack for dynamic variables and function calls;
- I/O assignments for standard input, output, and error;
- a delay timer and a time-slice timer;
- kernel control structures and signal handlers;
- debugging and performance-monitoring values.
Note: The VxWorks system supports virtual memory.
1.1 Task States
The VxWorks task state table is as follows (the four basic states; combined states such as PEND+S and DELAY+S also exist):

- READY: the task is not waiting for any resource other than the CPU;
- PEND: the task is blocked, waiting for a resource (for example, a semaphore or a message queue);
- DELAY: the task is asleep for a timed interval (for example, via taskDelay());
- SUSPEND: the task is unavailable for execution (primarily used for debugging).
1.2 Wind Task Scheduling
The default algorithm of the Wind kernel is priority-based preemptive scheduling; it also supports round-robin (RR) scheduling, and both algorithms rely on task priority. The Wind kernel has 256 priority levels, ranging from 0 to 255, with 0 being the highest priority. A task's priority can be read or changed at run time with taskPriorityGet() and taskPrioritySet().
1.2.1 Priority-Based Preemptive Task Scheduling
As the name implies, a higher-priority task seizes the CPU as soon as it becomes ready, and the lower-priority task that was running is preempted.
Advantages: Urgent tasks can be executed immediately.
Disadvantages: When multiple tasks with the same priority share a CPU, a task that never blocks will monopolize the CPU and prevent its peers from ever executing. In addition, task scheduling causes context switches, and frequent preemption wastes CPU time on context switching.
This leads to the round-robin scheduling algorithm.
1.2.2 Round-Robin Scheduling Algorithm (RR)
This evens out CPU usage: tasks of the same priority receive equal shares of CPU processing time. When a task's time slice is exhausted, it gives up the CPU and a context switch to the next task of the same priority occurs.
Note: In round-robin scheduling, if a task is preempted by a higher priority task, the remaining time slice for that task is preserved, and after the higher priority task completes, the lower priority task resumes execution and uses up its time slice.
1.2.3 Preemption Locking
taskLock() and taskUnlock() disable and re-enable kernel preemption for the calling task. If a task that has disabled scheduling becomes blocked or suspended, the kernel is again allowed to schedule and picks the highest-priority ready task to run. When the original task resumes execution, preemption locking takes effect again.
Note: The application program’s priority should be set between 100-250, while driver program priorities are between 51-99.
1.3 Task Control
1.3.1 Task Creation Function:
id = taskSpawn(name, priority, options, stacksize, main, arg1, ..., arg10);
taskInit();      /* low-level creation without activation */
taskActivate();  /* activate a task created with taskInit() */

/* id: task ID, a 4-byte int
   name: task name
   priority: task priority
   options: task options
   stacksize: task stack size in bytes
   main: entry-function address
   arg1..arg10: up to ten arguments passed to the entry function */

Example:

tid = taskSpawn("tMyTask", 90, VX_FP_TASK, 20000, myFunc, 2387,
                0, 0, 0, 0, 0, 0, 0, 0, 0);
1.3.2 Task Deletion
exit();          /* terminate the calling task, release its memory */
taskDelete();    /* terminate a specified task, release its memory */
taskSafe();      /* protect the calling task from deletion */
taskUnsafe();    /* remove the deletion protection */
Note that before deleting a task, the shared resources it holds must be released. In particular, if a task is deleted while it holds the semaphore guarding a critical section, that semaphore is never given back and other tasks can never enter the critical section. A task should therefore call taskSafe() to protect itself from deletion while it is inside a critical section, and taskUnsafe() when it leaves.
1.3.3 Task Control
1.3.4 Task Extension Functions
1.3.5 Task Error errno
VxWorks defines the conventional global variable errno at the operating-system level, and application code linked with the operating system can use it directly. errno is saved and restored as part of the task context during a context switch, so each task effectively has its own errno value.
1.3.6 Task Exception Handling
As CSAPP notes, exceptions are divided into interrupts, traps, faults, and aborts. Interrupts can further be divided into hardware and software interrupts: hardware interrupts are asynchronous (unpredictable), software interrupts are synchronous (predictable), and debugging is implemented through traps.
1.3.7 Shared Code and Reentrancy
- Shared code must be reentrant, meaning multiple tasks can call the same function simultaneously without interfering with one another;
- The I/O system and drivers in VxWorks are reentrant.

VxWorks functions use the following reentrancy techniques:

- Dynamic stack variables: when multiple tasks call a function, each task has its own stack, so automatic variables are private to each call;
- Global and static variables protected by semaphores: guarded with the mutual-exclusion semaphores provided by semMLib;
- Task variables: multiple tasks call the same code, but each task uses its own copy of a global or static variable.
2. Inter-Task Communication
- Shared memory: the fastest and simplest communication method;
- Semaphores: for synchronization and mutual exclusion;
- Mutexes and condition variables;
- Message queues and pipes: for message passing between tasks on the same CPU;
- Sockets and remote procedure calls: for network programming;
- Signals: for exception handling.
2.1 Mutual Exclusion Methods
Interrupt locking:
int lock = intLock();
/* critical section: interrupts are locked out */
intUnlock(lock);
Preemption locking:
taskLock();
/* critical section: preemption is disabled (interrupts remain enabled) */
taskUnlock();
2.2 Semaphores
#include "vxWorks.h"
#include "semLib.h"

SEM_ID semMutex;

semMutex = semBCreate(SEM_Q_PRIORITY, SEM_FULL);

semTake(semMutex, WAIT_FOREVER);
/* critical section */
semGive(semMutex);
Semaphores can also be used for synchronization: Task 1 waits on a semaphore, and when Task 2 completes its work it gives that semaphore. At that point Task 1 acquires the semaphore, unblocks, and starts executing.
Consistency
The consistency requirements for semaphores are mainly:

- Deleting a semaphore must not leave the tasks blocked on it waiting forever.
- A task holding a semaphore must not, by terminating unexpectedly, leave other tasks waiting forever.

Consistency problems may leave some tasks unable to run and, in severe cases, the whole system may fail to achieve its expected results or even crash. The first point is guaranteed by the system: when a semaphore is deleted, the kernel automatically unblocks every task in that semaphore's pend queue.
2.2.1 Binary Semaphore
Can be used for synchronization and mutual exclusion.
2.2.2 Mutex Semaphore
2.2.3 Counting Semaphore
2.3 Priority Inversion
Operating systems based on priority scheduling face the priority-inversion problem: a low-priority task holds a resource needed by a high-priority task, and while the high-priority task is blocked, medium-priority tasks can preempt the low-priority holder, so the high-priority task is effectively delayed by unrelated medium-priority work.

Solution: priority inheritance, in which the task holding the resource temporarily inherits the priority of the highest-priority task blocked on it.

This leads to the use of mutex semaphores.
semID = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
The Wind standard semaphore interface includes two options that are not POSIX-compatible:

- Timeout: NO_WAIT, WAIT_FOREVER;
- Queuing order: SEM_Q_FIFO or SEM_Q_PRIORITY.
2.4 Message Queues
The primary communication method between tasks within a single CPU.
2.4.1 Full-Duplex Communication Requires Two Message Queues
2.4.2 Selecting Task-Priority Order or FIFO Order
2.4.3 Receive Timeout and Urgent-Message Options
Recommended Model:
Why Use Message Queues
The “Semaphore + Shared Buffer” method does not require copying data between the user buffer (i.e., shared buffer) and the system kernel buffer, making it highly efficient and suitable for scenarios with large data volumes. In contrast, using message queues requires both parties to exchange data through the system kernel buffer, resulting in lower efficiency compared to the “Semaphore + Shared Buffer” method.
The advantage of message queues is programming simplicity; when the data volume is small, this simplification is significant.
Semaphore priority concerns only the ordering of a single blocked-task queue. Message-queue priority, by contrast, is considered from two aspects:
- The priority of the message itself: determines the order in which the messages delivered to the receiving task are processed;
- The priority ordering of the blocked-task queue: determines which of several sending tasks (or several receiving tasks) runs first.
Message priorities are divided into two categories: normal and urgent, with urgent messages placed at the front of the queue and normal messages at the back.
Semaphore and Message Queue Experiment
/* msgorder.c: test message processing order */
#include "vxWorks.h"
#include "msgQLib.h"
#include "semLib.h"
#include "stdio.h"

#define MAX_MSG      3
#define MAX_MSG_LEN  50
#define TEST_NUM     8

MSG_Q_ID msgQueue   = NULL;
SEM_ID   semMutex   = NULL;
SEM_ID   semCounter = NULL;
char     logBuf[TEST_NUM * 3][MAX_MSG_LEN + 20];
int      nLog = 0;

void sendMsg(int);
void rcvMsg(void);

void msgOrder()
{
    /* Definitions: sending-task priorities, message priorities,
       receiving-task priority. (The msgPri values are reconstructed;
       the source listing is garbled at this point.) */
    const int senderPri[TEST_NUM] = {50, 51, 52, 49, 53, 56, 55, 54};
    const int msgPri[TEST_NUM]    = { 0,  1,  1,  1,  0,  1,  0,  1};
    const int rcvPri = 60;
    int i, j;

    if (msgQueue || semMutex || semCounter) {
        printf("last running not exit properly!\n");
        return;
    }
    /* ... the remainder of the listing is missing from the source ... */
}