Memory Management Algorithms in Embedded Operating Systems


This article mainly introduces the basic concepts of memory and the memory management algorithms of operating systems.

1 Basic Concepts of Memory
Memory is the most important resource in a computer system apart from the processor, used to store programs and data currently being executed. Memory refers to the storage space that the CPU can directly address, while external storage is accessed through drivers.
2 ROM, RAM, and Flash
Memory generally uses semiconductor storage cells and is divided into Read-Only Memory (ROM) and Random Access Memory (RAM). ROM can normally only be read, not written, and its contents are retained after power-off. RAM can be both read and written, but its contents are lost after power-off. The term memory usually refers to RAM.
In embedded systems, ROM is generally used to store the BootLoader and the operating system or program code, or used directly as mass storage in place of a hard disk. In recent years, Flash has largely replaced ROM in embedded systems because it combines the advantages of ROM and RAM: it is electrically erasable and programmable, retains its data when power is off, and allows fast reads.
3 Two Types of Memory Management
The memory management module manages the system’s memory resources and is one of the core modules of the operating system. It mainly includes memory initialization, allocation, and release.
Based on whether the allocated memory is contiguous or not, memory management can be divided into two major categories.
  • Contiguous Memory Management
The memory space allocated to a process is contiguous. This method is prone to memory fragmentation (free memory that is difficult to reuse, usually small scattered blocks), which reduces memory utilization. Contiguous memory management is mainly divided into single contiguous memory management and partitioned memory management.
  • Non-contiguous Memory Management
A process is scattered across multiple non-contiguous memory regions, which reduces memory fragmentation and improves memory utilization. If the basic allocation unit is a page, it is called paged memory management; if the basic unit is a segment, it is called segmented memory management.
Current general-purpose operating systems mostly adopt non-contiguous memory management. However, because its allocation granularity is larger (at least one page), embedded systems with small memories typically use contiguous memory management. This article mainly introduces partitioned memory management, which is commonly used in embedded systems.
4 Partitioned Memory Management
Partitioned memory management is divided into fixed partitioning and dynamic partitioning.
  • Fixed Partitioning
    The memory is pre-divided into several fixed-size areas. The partition sizes can be equal or unequal. Fixed partitioning is easy to implement but can cause internal fragmentation waste within partitions, and the total number of partitions is fixed, limiting the number of processes that can be executed concurrently.
  • Dynamic Partitioning
Memory is dynamically allocated to processes based on their actual needs.
5 Dynamic Partition Memory Management
Operating Mechanism
Dynamic partition management generally uses the free list method, which maintains the free partitions in a doubly linked list. In the initial state, the entire memory block is added to the free list as one large free partition. When a process requests memory, a free partition that satisfies the requested size is searched for in this list. If the partition is larger than the requested amount, the requested portion is split off and allocated to the process, while the remainder stays in the free list as a smaller free partition.
Data Structure
The free list method can be implemented using various data structures; here we introduce a relatively simple one. Each free partition’s data structure contains the size of the partition and pointers to the previous and next partitions, allowing all free partitions to be linked into a doubly linked list.
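As a minimal sketch in C (field names are illustrative, not taken from any particular embedded OS), such a free-partition header could be declared as follows:

    #include <stddef.h>

    /* One node of the free list: the header stored at the start of each
     * free partition (illustrative layout, not from a specific OS). */
    typedef struct free_block {
        size_t             size;   /* size of this free partition in bytes */
        struct free_block *prev;   /* previous free partition in the list  */
        struct free_block *next;   /* next free partition in the list      */
    } free_block_t;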
Memory Allocation Algorithms
  • First Fit Algorithm
The First Fit algorithm requires the free partition list to be linked in ascending order of address. When allocating memory, it starts searching from the first free partition in the list and assigns the first free partition that meets the requirements to the process.
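A minimal First Fit sketch, reusing the free_block_t node above; free_list_head and MIN_SPLIT are illustrative assumptions, and the bookkeeping needed to free the block again later is omitted:

    extern free_block_t *free_list_head;   /* list kept in ascending address order */

    #define MIN_SPLIT 16u                  /* smallest remainder worth keeping */

    /* Walk from the lowest address and take the first partition large enough. */
    void *first_fit_alloc(size_t need)
    {
        for (free_block_t *b = free_list_head; b != NULL; b = b->next) {
            if (b->size < need)
                continue;

            if (b->size >= need + MIN_SPLIT) {
                /* Split: keep the tail in the free list as a smaller partition. */
                free_block_t *rest = (free_block_t *)((char *)b + need);
                rest->size = b->size - need;
                rest->prev = b->prev;
                rest->next = b->next;
                if (rest->prev) rest->prev->next = rest; else free_list_head = rest;
                if (rest->next) rest->next->prev = rest;
            } else {
                /* Too small to split: unlink the whole partition. */
                if (b->prev) b->prev->next = b->next; else free_list_head = b->next;
                if (b->next) b->next->prev = b->prev;
            }
            return b;                      /* address handed to the process */
        }
        return NULL;                       /* no partition satisfies the request */
    }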
  • Next Fit Algorithm
The Next Fit algorithm evolved from the First Fit algorithm. When allocating memory, it starts searching from the next free partition after the last one that was allocated, continuing until it finds a free partition that meets the requirements. The search is done in a circular manner, meaning if no free partition satisfies the requirements until the end of the list, it returns to the first free partition to start searching.
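A sketch of the circular search only, reusing the declarations from the First Fit sketch; the roving pointer rover is an assumption, and splitting and unlinking would proceed as above:

    static free_block_t *rover;            /* where the last search left off */

    /* Resume scanning after the previous allocation and wrap around once. */
    free_block_t *next_fit_find(size_t need)
    {
        free_block_t *start = rover ? rover : free_list_head;
        free_block_t *b = start;

        do {
            if (b != NULL && b->size >= need) {
                rover = b->next;           /* next search starts after this hit */
                return b;
            }
            b = (b != NULL && b->next != NULL) ? b->next : free_list_head;
        } while (b != start);

        return NULL;                       /* wrapped around without finding a fit */
    }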
  • Best Fit Algorithm
This algorithm finds the smallest free partition that meets the requirements from all free partitions. To speed up the search, the Best Fit algorithm links all free partitions in ascending order of their size, ensuring that the first memory found that meets the size requirement is the smallest free partition.
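Because the list is ordered by size, Best Fit reduces to taking the first adequate partition; a sketch, where size_sorted_head is an assumed size-ordered list head:

    extern free_block_t *size_sorted_head; /* free partitions in ascending size order */

    /* The first adequate block in a size-ordered list is also the smallest adequate one. */
    free_block_t *best_fit_find(size_t need)
    {
        for (free_block_t *b = size_sorted_head; b != NULL; b = b->next)
            if (b->size >= need)
                return b;
        return NULL;
    }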
  • Worst Fit Algorithm
This algorithm finds the largest free partition that meets the requirements from all free partitions. The Worst Fit algorithm links all free partitions in descending order of their size.
  • Two Level Segregated Fit (TLSF)
This method uses two layers of linked lists to manage free memory, categorizing free partitions by size, with each category represented by a free linked list. An index list is also used to manage these free linked lists, where each entry corresponds to a free linked list and records the head pointer of that category’s free linked list.
[Figure: TLSF two-level free lists, with a bitmap per level marking the non-empty lists]
In the figure, the first-level list classifies free memory blocks by powers of 2, and the second-level list subdivides each class linearly into a fixed number of sub-ranges. For example, the 2^5 class is divided into 4 intervals: [2^5, 2^5+8), [2^5+8, 2^5+16), [2^5+16, 2^5+24), [2^5+24, 2^5+32); the 2^16 class is divided into 4 intervals of width 2^14: [2^16, 2^16+2^14), [2^16+2^14, 2^16+2*2^14), [2^16+2*2^14, 2^16+3*2^14), [2^16+3*2^14, 2^16+4*2^14). To retrieve free blocks quickly, each level has a bitmap indicating whether the corresponding list contains free blocks. For example, the last 3 bits of the first-level bitmap are 010, indicating that the 2^5 class has free blocks; the corresponding second-level bitmap is 0100, indicating a free block in the interval [2^5+16, 2^5+24), which corresponds to the 52-byte free block in the figure.
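As a sketch of the index calculation only (assuming 4 second-level sub-ranges as in the figure and a GCC/Clang-style __builtin_clz; a real TLSF allocator also rounds sizes up and maintains the bitmaps):

    #include <stdint.h>

    #define SL_COUNT 4u                        /* sub-ranges per first-level class */

    /* Map a block size to its first-level (power-of-2 class) and second-level
     * (linear sub-range) list indices; valid for size >= 4. */
    static void tlsf_mapping(uint32_t size, uint32_t *fl, uint32_t *sl)
    {
        uint32_t f = 31u - (uint32_t)__builtin_clz(size);  /* highest set bit */
        *fl = f;                                           /* class [2^f, 2^(f+1)) */
        *sl = (size >> (f - 2)) & (SL_COUNT - 1);          /* which quarter of the class */
    }

For the 52-byte block in the example, this gives fl = 5 and sl = 2, i.e. the list for [2^5+16, 2^5+24), matching the bitmaps described above.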
  • Buddy Systems
This is a variant of the Segregated Fit algorithm that splits and merges memory blocks more efficiently. There are many buddy algorithms, such as Binary Buddies and Fibonacci Buddies; Binary Buddies is the simplest and most popular, classifying all free partitions by size, with each size class kept in its own free doubly linked list. In Binary Buddies, all partition sizes are powers of 2.
Since the sizes of both allocated and free partitions are powers of 2, when a process requests less memory than the block allocated to it, the surplus inside the block is not split off for other processes, which easily causes internal fragmentation.
When a process requests a memory of size n, the allocation steps are as follows:
1. Calculate the value i such that 2^(i-1) < n ≤ 2^i
2. Search in the free list of size 2^i
3. If a free block is found, allocate it to the process
4. If the free partition of size 2^i is exhausted, search in the free list of size 2^(i+1)
5. If there is a free partition of size 2^(i+1), split this free block into two equal-sized partitions; one is allocated to the process, and the other is added to the free list of size 2^i
6. If no free partition of size 2^(i+1) is found, continue searching for size 2^(i+2). If found, two splits are performed. The first split results in two blocks of size 2^(i+1), one of which is added to the free list of size 2^(i+1), and the other continues to be split into two blocks of size 2^i, one allocated to the process, and the other added to the free list of size 2^i.
7. If no free partition of size 2^(i+2) is found, continue searching for size 2^(i+3), and so on.
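A minimal C sketch of the steps above; MAX_ORDER, the per-order singly linked free lists, and the omission of block headers and alignment are all simplifying assumptions (the text describes doubly linked lists):

    #include <stddef.h>

    #define MAX_ORDER 20                       /* largest block is 2^MAX_ORDER bytes */

    typedef struct buddy_block {
        struct buddy_block *next;              /* per-order free list (sketch) */
    } buddy_block_t;

    static buddy_block_t *free_lists[MAX_ORDER + 1];   /* free_lists[i]: blocks of 2^i */

    /* Step 1: smallest i with n <= 2^i. */
    static unsigned order_for(size_t n)
    {
        unsigned i = 0;
        while (((size_t)1 << i) < n)
            i++;
        return i;
    }

    /* Steps 2-7: search upward for a non-empty list, then split back down. */
    void *buddy_alloc(size_t n)
    {
        unsigned want = order_for(n), i = want;

        while (i <= MAX_ORDER && free_lists[i] == NULL)
            i++;
        if (i > MAX_ORDER)
            return NULL;                       /* no block large enough is free */

        buddy_block_t *blk = free_lists[i];    /* take a block of size 2^i */
        free_lists[i] = blk->next;

        while (i > want) {                     /* halve until it is exactly 2^want */
            i--;
            buddy_block_t *half = (buddy_block_t *)((char *)blk + ((size_t)1 << i));
            half->next = free_lists[i];        /* upper half becomes a free buddy */
            free_lists[i] = half;
        }
        return blk;                            /* lower half goes to the process */
    }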
During memory recovery, if the memory block to be reclaimed is a buddy of a block in the free list, they are merged into a larger memory block. If the merged memory block still has a buddy in the free list, continue merging until no further merging is possible, and add the merged memory block to the corresponding free list.
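A matching free/merge sketch under the same assumptions; pool_base (the start of the managed region) is assumed, and the caller passes the order the block was allocated with:

    extern char *pool_base;                    /* start of the managed memory pool */

    /* A block's buddy lies at the same offset with bit i flipped (offset XOR 2^i).
     * Merge upward while the buddy is found free, then relink the result. */
    void buddy_free(void *p, unsigned i)
    {
        char *blk = (char *)p;

        while (i < MAX_ORDER) {
            char *buddy = pool_base + ((size_t)(blk - pool_base) ^ ((size_t)1 << i));

            buddy_block_t **pp = &free_lists[i];
            while (*pp != NULL && (char *)*pp != buddy)
                pp = &(*pp)->next;
            if (*pp == NULL)
                break;                         /* buddy is in use: stop merging */

            *pp = (*pp)->next;                 /* unlink the free buddy */
            if (buddy < blk)
                blk = buddy;                   /* merged block starts at the lower half */
            i++;
        }

        ((buddy_block_t *)blk)->next = free_lists[i];
        free_lists[i] = (buddy_block_t *)blk;
    }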
The following table compares the advantages and disadvantages of the above six algorithms:
[Table: comparison of the advantages and disadvantages of the six algorithms]
