Introduction
Main Content
- In a multi-tasking system, some (or all) tasks have real-time requirements;
- For these tasks, if there is even a slight probability that any one of them fails to meet its real-time requirement in some situation, the entire system is deemed unable to meet its real-time requirements;
- Because this condition is so stringent, in engineering practice we generally look instead for situations that definitely cannot meet the real-time requirements. Namely:
- If, even under extremely ideal conditions, it can be mathematically proven that these tasks' real-time requirements cannot be satisfied, then we must adjust the hardware environment, replan the tasks, or relax the real-time requirements;
- If, under extremely ideal conditions, it is proven that the system's real-time performance can be guaranteed, then all we may conclude is: there may exist a way to guarantee the current system's real-time performance. At that point we can move on to the next stage of discussion: how to design the system so that what is theoretically possible becomes reality.
- If mathematics has proven that real-time performance cannot be guaranteed, let's not bother;
- If mathematics proves there is hope, let's continue discussing implementation methods. Whether it can ultimately be achieved is up to us; the outcome is another matter.

First, let’s state the conclusion:
- Calculate the maximum CPU share each real-time task may occupy, expressed as a percentage;
- Sum these percentages to obtain the total CPU share occupied by all real-time tasks;
- If the total exceeds 100%, the system's real-time requirements are guaranteed not to be met;
- If it does not exceed 100%, then under ideal conditions the system's real-time requirements may be satisfiable;
- In practice, the further the total stays below 100%, the better the odds. If it hovers around 99% or 100%, the situation is quite dangerous and can even be safely judged as failing the requirements.
How’s that? Isn’t the reasoning quite simple? Now, how do we calculate it?
- Looking back at the real-time model introduced earlier, we can see that both the "real-time window" and "the time required to process the event" are durations;
- The real-time window is determined by the needs of the specific application and dictated by the timing of the objective physical world. In other words: "If the task is not completed within a certain time, Newton will come after you!"
- The real-time window also implies another important assumption: in the worst case, the event may recur periodically, with the real-time window as its period; just as one wave subsides, the next rises (gentlemen, no illustrations here).
- "The time required for event processing" is the time the CPU needs to execute the event-handling code. This touches on another very critical issue: determinism. In simple terms, at the very least you must be able to guarantee that the execution time of the task has a maximum value (an upper bound), and that this upper bound is stable and reliable; that is merely the minimum standard for determinism. Some applications demand much more: certain vehicle systems, for example, require that the execution time fluctuate only within a very narrow range, and if that cannot be achieved, the task is directly deemed to fail the "determinism" requirement (many automotive ECUs work this way), at which point the whole system's real-time performance becomes an illusion.
- It is worth emphasizing that, for identical event-handling code, the conclusion is easy to accept: as the CPU frequency rises (and the CPU executes more instructions per unit time), the time required for event processing shrinks.
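Since determinism hinges on knowing an upper bound for Th, one common practical step (alongside static analysis) is to instrument the handler and track the worst duration ever observed. The sketch below is illustrative only: the function names are invented, and the tick source (e.g. a free-running cycle counter such as the DWT counter on Cortex-M parts) is a platform assumption kept out of the code.

```c
#include <stdint.h>

/* Maximum handler duration observed so far, in timer ticks.  On a
 * Cortex-M class MCU the tick source would typically be a free-running
 * cycle counter read at handler entry and exit; that detail is a
 * platform assumption and is deliberately left out of this sketch. */
static uint32_t observed_wcet;

/* Record one observation: the counter value at handler entry and at
 * handler exit.  Unsigned subtraction yields the correct duration even
 * if the free-running counter wrapped between the two reads. */
void wcet_record(uint32_t entry_ticks, uint32_t exit_ticks)
{
    uint32_t elapsed = exit_ticks - entry_ticks;
    if (elapsed > observed_wcet)
        observed_wcet = elapsed;
}

/* Current worst case seen so far, in ticks. */
uint32_t wcet_observed(void)
{
    return observed_wcet;
}
```

Note the caveat: a measured maximum is only evidence for an upper bound, never a proof of one; a hard guarantee of determinism requires static worst-case execution time analysis of the code path.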
Based on the above facts, we can envision a strict ideal condition:
- Some event recurs periodically, with the time interval represented by the "real-time window" (Tw) as its period;
- Processing each occurrence of this event takes time Th.

The percentage of CPU resources consumed by the current real-time task is then:

Pn = (Th / Tw) × 100%

which is the CPU resource occupation of "event n". For example, with Tw = 10 ms and Th = 2 ms, the task occupies 20% of the CPU.
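The calculation above is trivial to mechanize. The sketch below is a minimal illustration in C; the structure and function names are invented, and the task numbers in the usage example are made up:

```c
#include <stddef.h>

/* One real-time task: worst-case handling time Th and real-time
 * window Tw, both in the same time unit (e.g. microseconds). */
typedef struct {
    double th;  /* worst-case time needed to process the event    */
    double tw;  /* real-time window, i.e. worst-case event period */
} rt_task_t;

/* CPU share of a single task, as a percentage: Th / Tw * 100. */
double utilization_pct(const rt_task_t *t)
{
    return t->th / t->tw * 100.0;
}

/* Sum the shares of all tasks.  If the result reaches 100%, the task
 * set is provably infeasible; values hovering near 100% should be
 * treated as failing in practice. */
double total_utilization_pct(const rt_task_t *tasks, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += utilization_pct(&tasks[i]);
    return total;
}
```

For a hypothetical task set {Th = 200 µs, Tw = 1000 µs}, {100 µs, 2000 µs}, {500 µs, 5000 µs}, the individual shares are 20%, 5%, and 10%, for a total of 35%: comfortably below 100%, so under ideal conditions this set may be schedulable.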
Cost of Frequent Context Switching
Do you remember the question we tried to discuss at the beginning of this article: does time-slicing have any significance for guaranteeing real-time performance? After the theoretical preparations above, we now have all the conditions needed to clearly and precisely answer this question:
Known facts are as follows:
- With the CPU frequency held constant, the available CPU resources are fixed;
- There are various ways to implement time-slicing: purely cooperative time-slicing (such as state machines on bare metal, or function-pointer-based cooperative schedulers), or, in an operating system, preemptive time-slicing among tasks of the same priority, i.e. round-robin mode (for details, please refer to 《【Resolving Confusion】 Is it "Time-Slice" or "Time-Sharing Polling"?》);
- Regardless of the time-slicing method used, task switching has a cost: on bare metal, the cost of entering and exiting functions and of rebuilding local variables on the stack (for details, refer to 《Talking about C Variables – Summer Insects Cannot Speak of Ice》); in an operating system, the cost of task scheduling, and so on;
- With total resources fixed, the more frequent the task switching, the more CPU time is consumed by switching itself, and the less CPU time remains for actual real-time processing.
Conclusion: frequent task switching is harmful to the system's real-time performance; since fine-grained time-slicing leads to a large number of unnecessary task switches, it is, on balance, detrimental to real-time performance.
Inference: task switching is necessary in a real-time system, but it must be kept to a minimum: reject unnecessary, flashy task switches and perform only those that are truly needed.
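A back-of-the-envelope helper makes the inference concrete (the function name and all numbers below are invented for illustration):

```c
/* CPU share lost purely to task switching, as a percentage of one
 * core: n switches per second, each costing switch_cost_us
 * microseconds of pure overhead (register save/restore, scheduler
 * bookkeeping; cache and pipeline disturbance are ignored here). */
double switch_overhead_pct(double switches_per_sec, double switch_cost_us)
{
    /* seconds of overhead per second of wall time, times 100 */
    return switches_per_sec * (switch_cost_us / 1e6) * 100.0;
}
```

With a 1 ms time slice (about 1000 switches per second) and a 10 µs switch cost, roughly 1% of the CPU evaporates; shrink the slice to 100 µs and the same switch cost burns about 10%, all of it subtracted from the budget available to real-time tasks.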
Conclusion
The conclusion of this article essentially conveys one message: whether on bare metal or under an operating system, multitasking is achievable, and this is determined by the nature of concurrency itself. Time-slicing is merely a common, "mindless" way to implement concurrency in both environments. In other words, the role of time-slicing is simply to achieve concurrency; it has nothing to do with guaranteeing real-time performance, and can even be harmful to it.
So, assuming that it has been mathematically proven that “there may exist a solution to meet the real-time requirements of the system,” what specific methods can be used to achieve it? For more details, please stay tuned for the next installment.