1 Compiling a Debuggable VxWorks
First, create a new VSB project with the configuration as shown in Figure 1-1:
The next step is to add DEBUG configuration options to the BSP project, with the following steps:
- Expand the BSP project in the Project Explorer view.
- Right-click Source Build Configuration and select Edit Source Build Configuration.
- Search for the debug keyword, select Global Debug Flag, and change the value to y.
The final effect is shown in Figure 1-2:
Finally, build the VSB. After building the VSB, create a VIP, as shown in Figure 1-3:
In the VIP, configure INCLUDE_DEBUG_AGENT and INCLUDE_DEBUG_AGENT_START. You can search for DEBUG_AGENT to configure them, as shown in Figure 1-4:
You also need to add INCLUDE_SHELL, INCLUDE_USB_INIT, INCLUDE_USB_XHCI_HCD_INIT, and INCLUDE_USB_GEN2_STORAGE_INIT, with some of the configuration items shown in Figure 1-5:
Make sure the above components appear in bold (bold indicates that a component is included). After the configuration is complete, build the VxWorks image.
2 Starting VxWorks with QEMU
This time, we use QEMU 6.0.1, compiled and installed from source, with the following steps:
wget https://download.qemu.org/qemu-6.0.1.tar.xz
tar -xvf qemu-6.0.1.tar.xz
cd qemu-6.0.1/
./configure
make
make install
After compilation, check the qemu version, as shown in Figure 2-1:
Enter the VIP/default directory and find the compiled VxWorks, as shown in Figure 2-2:
Next, use qemu-img to create a simulated storage device with the following command:
qemu-img create file.img 512M
Place VxWorks and file.img in the same folder, as shown in Figure 2-3:
Use the following command to start VxWorks:
qemu-system-x86_64 -machine q35 -m 2048 -smp 8 -serial stdio -kernel vxWorks -nographic -monitor none -device nec-usb-xhci,id=usb0,msi=off,msix=off -drive if=none,id=stick,file=file.img -device usb-storage,bus=usb0.0,drive=stick
Startup successful as shown in Figure 2-4:
3 Debugging VxWorks
Next, use qemu to debug VxWorks, with the startup command as follows:
qemu-system-x86_64 -machine q35 -m 2048 -smp 8 -serial stdio -kernel vxWorks -nographic -s -S -monitor none -device nec-usb-xhci,id=usb0,msi=off,msix=off -drive if=none,id=stick,file=file.img -device usb-storage,bus=usb0.0,drive=stick
Use GDB to connect to QEMU, as shown in Figure 3-1:
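In the command above, -s makes QEMU listen for a GDB connection on TCP port 1234, and -S freezes the CPU at startup until the debugger resumes it. A minimal connection sequence looks like the sketch below (it assumes the vxWorks ELF image is in the current directory so GDB can load symbols):

```gdb
gdb vxWorks
(gdb) target remote localhost:1234
(gdb) break sysInit
(gdb) continue
```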
At first, GDB pauses at address 0x000000000000fff0, which corresponds to the source file <vsb_project>/krnl/configlette/dataSegPad.c. The purpose of dataSegPad is to pad the data segment so that it starts on an MMU page-size boundary. When VxWorks is linked, dataSegPad.c is explicitly specified as the first module on the link line, which makes its data structure the first item in the data segment and keeps the data segment from sharing a page with the text segment.
Once the MMU is initialized, VxWorks begins its boot process. The first function executed is sysInit.
The sysInit function is the entry point for VxWorks startup. Its main jobs are to disable interrupts, set up the stack, and call usrInit(). The initial stack is set to grow downward from the address of sysInit(). This stack is used only by usrInit() and is abandoned afterwards. The program then enters the usrInit function, as shown in Figure 3-3:
Further debugging shows that the usrInit function calls the following functions:
sysStart (startType); // Clear BSS and set the base address of the vector table.
cacheLibInit (USER_I_CACHE_MODE, USER_D_CACHE_MODE); // Initialize cache.
gpDtbInit = dt_blob_start; // Initialize DTB
usrFdtInit ((void*)DTB_RELOC_ADDR, (int)DTB_MAX_LEN); // Initialize flat device tree library
usrBoardLibInit(); // Initialize board-level subsystems, providing BSP access API
usrAimCpuInit (); // Initialize CPU
excVecInit (); // Initialize exception vector
vxCpuLibInit (); // Initialize CPU recognition function
usrCacheEnable (); // Enable cache
objOwnershipInit (); // Initialize objOwnerLib library, which contains object ownership functions.
objInfoInit (); // Initialize object lookup functions
objLibInit ((OBJ_ALLOC_FUNC)FUNCPTR_OBJ_MEMALLOC_RTN, (OBJ_FREE_FUNC)FUNCPTR_OBJ_MEMFREE_RTN, OBJ_MEM_POOL_ID, OBJ_LIBRARY_OPTIONS); // Initialize objLib library, which provides interfaces for VxWorks user object management tools.
vxMemProbeInit (); // Initialize vxMemProbe() exception handling
classListLibInit (); // Initialize object list
semLibInit (); // Initialize semaphore
condVarLibInit (); // Initialize condition variables library
classLibInit (); // Initialize class library
kernelBaseInit (); // Initialize kernel objects
taskCreateHookInit (); // Initialize task hook related
sysDebugModeInit (); // Set debug flag to let the system be in debug mode
umaskLibInit(UMASK_DEFAULT); // Provide support for the POSIX file mode creation mask in the kernel environment (supports umask())
usrKernelInit (VX_GLOBAL_NO_STACK_FILL); // Initialize kernel
It is particularly important to note the last function called in usrInit, usrKernelInit, which initializes the kernel and starts the system's first task; that task then enters usrRoot, as shown in Figure 3-4:
The usrRoot function is the entry point of the system's first task and is mainly responsible for post-kernel initialization. It calls a large number of initialization functions, as follows:
usrKernelCoreInit (); // Initialize Event signals, message queues, watchdog, hook dbg
poolLibInit (); // Initialize memory pool, the block size in the pool is specified when the pool is created and is consistent for each block
memInit (pMemPoolStart, memPoolSize, MEM_PART_DEFAULT_OPTIONS); // Initialize memLib library, which mainly provides APIs for allocating memory blocks for RTP heap
memPartLibInit (pMemPoolStart, memPoolSize); // Initialize core memory blocks
kProxHeapInit (pMemPoolStart, memPoolSize); // Initialize kernel proximity heap, mainly for core accessory heap allocation
pgPoolLibInit(); // Initialize Page Pool
pgPoolVirtLibInit(); // Initialize Page Pool virtual space
pgPoolPhysLibInit(); // Initialize Page Pool physical space
usrMmuInit (); // Initialize global MMU mapping based on BSP's sysPhysMemDesc table
pmapInit(); // Provide the function of mapping physical addresses to kernel/RTP
kCommonHeapInit (KERNEL_COMMON_HEAP_INIT_SIZE, KERNEL_COMMON_HEAP_INCR_SIZE); // Initialize kernel heap for dynamic memory allocation for kernel and kernel applications, managed using ANSI standard malloc, free
usrKernelCreateInit (); // Initialize Object creation, for example: message queues, watchdog, signals
usrNetApplUtilInit (); // Initialize application/stack logging
envLibInit (ENV_VAR_USE_HOOKS); // Initialize envLib to be compatible with UNIX environment variables, can use putenv to create and modify environment variables
edrStubInit (); // Record ED&R in the BOOT record
usrSecHashInit (); // Initialize secHash, such as: MD5, SHA1, SHA256
usrDebugAgentBannerInit (); // Debug agent banner
usrShellBannerInit (); // Shell banner
vxbDmaLibInit(); // Initialize VxBus DMA subsystem
vxbIsrHandlerInit (VXB_MAX_INTR_VEC, VXB_MAX_INTR_CHAIN); // Initialize VxBus ISR handler methods
vxbIntLibInit (VXB_MAX_INTR_DEFER); // Initialize VxBus interrupts
vxDyncIntLibInit(); // Initialize VxBus dynamic interrupt controller supporting message interrupts
vxIpiLibInit (); // Initialize symmetric multiprocessing (SMP) and asymmetric multiprocessing (AMP) interrupts.
miiBusFdtLibInit(); // Initialize MII bus FDT subsystem
miiBusLibInit(); // Initialize MII bus system
vxbPciInit (); // Initialize VxBus PCI subsystem library, which provides PCI host controller driver
vxbPciMsiInit (); // Handle MSI and MSI-X interrupts of PCI devices
vxbParamLibInit (); // Initialize driver parameter mechanism, the default values of driver parameters can be overridden by BSP(DST)
usrIaPciUtilsInit(); // Initialize Intel PCI utilities
sysHwInit1 (); // Additional system initialization, such as PIC, IPI vector
boardInit(); // Board-level initialization
kernelIdleTaskActivate(); // Add support for Idle Tasks (SMP Only)
usrIosCoreInit (); // Kernel I/O
usrNetworkInit0 (); // Initialize network
vxbLibInit (); // Initialize VxBus subsystem
intStartupUnlock (); // Unlock interrupts
sysIntEnableFlagSet(); // Mark interrupt enable
usrSerialInit (); // Set standard input and output devices
usrClkInit (); // Initialize clock
cpcInit (); // CPUs Cross-Processor Call (SMP Only)
vxdbgCpuLibInit (); // Initialize VxDBG control for CPU
pgMgrBaseLibInit(); // Initialize Basic Page Manager
usrKernelExtraInit (); // Initialize other mechanisms of the kernel, such as: Signal, POSIX
usrIosExtraInit (); // Initialize other mechanisms of the IO system, such as: system logging, standard IO library
usrHostnameSetup (TARGET_HOSTNAME_DEFAULT); // Set hostname to TARGET_HOSTNAME_DEFAULT, generally for target
sockLibInit (); // Socket interface
selTaskDeleteHookAdd (); // Initialize select mechanism
cpuPwrLightMgrInit (); // CPU power management during idle
cpuPwrMgrEnable (TRUE);
cplusCtorsLink (); // Ensure that compiler-generated initialization functions are called at kernel startup, including C++ static object initialization functions.
usrSmpInit (); // Multiprocessing support
miiBusMonitorTaskInit(); // MII bus monitoring task.
usrNetworkInit (); // Complete network system initialization
usrBanner (); // Display Wind River banner at startup
usrToolsInit (); // Software development tools, such as target loader, symbol table, debug library, kernel shell, etc.
usrAppInit (); // Call the initialization function of the application program in the project file usrAppInit() after the system starts, user program entry
The usrAppInit function is where user-defined programs are started after VxWorks boots; we won't delve into it here. Finally, the entire VxWorks boot process is summarized in Figure 3-5:
4 Kernel Applications
The usrAppInit function starts kernel applications after VxWorks boots. So how do we add a program to this auto-start function? Before that, let's briefly look at kernel applications.
In VxWorks, kernel applications run in kernel space, which is different from Unix/Linux. Kernel applications can be:
- Downloaded and dynamically linked to the operating system by the object module loader.
- Statically linked to the operating system, making it part of the kernel image.
First, find the usrAppInit.c file and go to the usrAppInit function in it; the function's content is shown in Figure 4-1:
Write a function and use taskSpawn to start it, with the following code:
#include <taskLib.h>
#include <stdio.h>
#include <string.h>

void helloWorld (void)
{
    printf("hello vxworks!\n");   /* newline flushes the line to the console */
}

void usrAppInit (void)
{
#ifdef USER_APPL_INIT
    USER_APPL_INIT; /* for backwards compatibility */
#endif

    /* TODO: add application specific code here */

    /* name, priority, options, stack size, entry point, args 1-10 */
    taskSpawn("hello", 100, 0, 8192, (FUNCPTR)helloWorld,
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}
Compile the image and start VxWorks, as shown in Figure 4-2:
Next, modify the code as follows:
void test (void)
{
    char buf[8];   /* 8-byte stack buffer */
    gets(buf);     /* no bounds check: input longer than 7 bytes overflows */
}
This is a classic stack overflow. Let’s take a look at the status of various registers in VxWorks through GDB. Similarly, compile and start VxWorks. Set a breakpoint at the test function and conduct a test, as shown in Figure 4-3:
First, the overflow situation, as shown in Figure 4-4:
Here, we can see that the return address already points to an unknown address. Next, check the stack, as shown in Figure 4-5:
In the case of no overflow, it will jump to the shellInternalFunctionCall function, as shown in Figure 4-6:
Stack data without overflow, as shown in Figure 4-7:
Let’s take a look at VxWorks’ protection mechanism, as shown in Figure 4-8:
VxWorks does not have many protection mechanisms, so vulnerabilities are relatively easy to exploit, allowing direct execution of shellcode. Additionally, because VxWorks restarts a program when it crashes, care must be taken during exploitation to ensure the program does not crash and exit.
5 Comparison with Linux Memory Layout
In Linux, the operating system maps the virtual addresses of different processes to different physical addresses in memory. The virtual addresses held by the process are converted to physical addresses through the mapping relationship in the memory management unit (MMU) in the CPU chip, and then accessed through the physical addresses. As shown in Figure 5-1:
The mapping of virtual addresses to physical addresses can be done through segmentation, paging, or a combination of both. In Linux, memory paging divides the virtual and physical spaces into fixed-size pages.
Virtual memory is divided into kernel space and user space, and the range of address space varies based on bitness, as shown in Figure 5-2:
In VxWorks, there is also virtual memory managed by the MMU, but there are multiple partitions in VxWorks. The current memory usage can be displayed using the adrSpaceShow command, as shown in Figure 5-3:
For 32-bit and 64-bit CPUs, the memory management mechanism provided by VxWorks 7 is the same. Virtual memory is managed in partitions, each with specific purposes and corresponding allocation mechanisms.
The overall structure of VxWorks virtual memory is as follows, as shown in Figure 5-4:
- Shared User Virtual Memory: the shared user virtual memory area is used to allocate virtual memory for shared mappings, such as shared data areas, shared libraries, and memory mapped via mmap() with the MAP_SHARED option.
- RTP Private Virtual Memory: the RTP private virtual memory area is used to create private mappings for RTPs: code and data segments, RTP heap space, and memory mapped via mmap() with the MAP_PRIVATE option. All RTPs in the system can access the entire RTP private memory area, so RTPs use overlapping address space management.
- Kernel System Virtual Memory: the kernel system virtual memory area contains kernel system memory. The kernel image (text, data, bss) and the kernel proximity heap are located here.
- Kernel Virtual Memory Pool: the kernel virtual memory pool is used for dynamic memory management in the kernel. This area serves on-demand allocation of virtual memory, such as creating and expanding kernel applications, memory-mapped devices, DMA memory, user-reserved memory, and coherent memory.
On this basis, there is also a Global RAM Pool used for internal allocation of dynamically allocated RAM space. This memory pool is used to create or expand: kernel common heap, RTP private memory, and shared memory. The global RAM memory pool also provides memory for the following objects: VxWorks kernel image, user-reserved memory, persistent memory, DMA32 heap space, etc.
It is important to note that VxWorks on this x86 target is little-endian, so in network programs the port must be converted to network byte order using htons().
In VxWorks, architecture-independent interfaces can be configured for the processor’s MMU to provide virtual memory support. Search for MMU-related content in the BSP, as shown in Figure 5-5:
The default page size for virtual memory can be configured using VM_PAGE_SIZE, with a default value of 0x1000 (4 KB), as shown in Figure 5-6:
In VxWorks, you can inspect memory with the vmContextShow() and rtpMemShow() functions. The following components need to be added to the configuration:
- vmContextShow requires INCLUDE_VM_SHOW and INCLUDE_VM_SHOW_SHELL_CMD
- rtpMemShow requires INCLUDE_MEM_EDR_RTP_SHOW and INCLUDE_MEM_EDR_RTP_SHOW_SHELL_CMD
6 Conclusion
This time, we mainly familiarized ourselves with the VxWorks boot process through debugging, which helps deepen the impression. The startup code of the VIP project is generated as source; the main files are <VIP_Project>/prjConfig.c, sysLib.c, and sysALib.s.
Debugging does not need to be enabled for compilation; VxWorks's debug mode is mainly intended for Workbench. The version used for this experiment is the 2018 release of VxWorks, and the corresponding Workbench's support for GDB debugging is not very good.
As an industry-leading real-time operating system, VxWorks has many contents worth learning.
Another point to note is that newer versions of Workbench have improved GDB support, so this debugging method is no longer needed there.
7 Reference Links
[1] https://www.vxworks.net/app/907-vxworks-7-programmer-guide-memory-management
[2] https://mp.weixin.qq.com/s/SUhkdP9i7ie-ZESsCVRWmA
