Understanding Real-Time Systems: The Illusion of Time-Slicing

Introduction

In the previous article, we introduced the basic model of real-time systems and analyzed the value of time at different positions within the real-time window, concluding that the earlier a moment falls within the real-time window, the more valuable it is to the other tasks in the system. When a real-time event occurs, a "selfish" task that cares only about finishing its own event handling as quickly as possible can produce devastating results for the system's overall real-time performance. In fact, if every task adopts this strategy, no task's real-time performance can be guaranteed. If you haven't read the previous article or still have doubts about this conclusion, see 《Real-Time Dilemma (1) – Is Fast a Virtue?》.
In the comment section of the last article, many friends not only discussed passionately but also proposed a solution of “splitting tasks into small chunks for time-slicing” to resolve the real-time dilemma presented in the text. Can time-slicing ensure real-time performance? I believe after reading this article, you will be able to make your own judgment. Feel free to leave your thoughts in the comments.

Main Content

When discussing the real-time performance of a system, we often skip a very important step – proving whether the current system theoretically has a solution – and directly jump to discussing “how to ensure the real-time performance of specific tasks.” This is akin to seeing a murky pond and starting to fish without investigating whether there are fish in it. Perhaps there are fish and you’re lucky, and everyone is happy; or perhaps the pollution is severe and the fish have long since died due to lack of oxygen, and you think it’s just bad luck or poor skills, returning empty-handed – that would be quite ridiculous.
So, let’s clarify the problems that need to be faced and solved here:
  • In a multi-tasking system, some (or all) tasks have real-time requirements;

  • For these tasks with real-time requirements, if any task has even a slight probability of missing its deadline under any circumstance, the entire system is deemed unable to meet real-time requirements;

  • Due to the overly stringent conditions mentioned above, in engineering practice, we generally seek to find situations that definitely cannot meet real-time performance, namely:

    • If, even under extremely ideal conditions, it can be mathematically proven that these tasks' real-time requirements cannot be satisfied, then it is necessary to adjust the hardware environment, replan the tasks, or lower the real-time requirements;

    • If under extremely ideal conditions, it is proven that the system’s real-time performance can be guaranteed, then we can only assume: there may exist a way to guarantee the current system’s real-time performance – at this point, we can move on to the next stage of discussion – how to design the system to turn what is theoretically possible into a reality.

If the above description leaves you confused, it can also be said simply:
  • If mathematics has proven that real-time performance cannot be guaranteed, then let’s not bother;

  • If mathematics proves there is hope, let's continue discussing implementation methods; whether it can ultimately be achieved is up to us, and the outcome is another matter.

So what kind of mathematical model is this? Please refer back to your elementary school textbooks, the grade where you learned division and percentages:

First, let’s state the conclusion:

  • We need to calculate the maximum CPU resources each real-time task may occupy, expressed as a percentage;

  • Calculate the total CPU resources occupied by all real-time tasks (accumulating the percentages);

    • If it exceeds 100%, the system's real-time requirements are guaranteed to be unmet;

    • If it does not exceed 100%, then it can be determined that under ideal conditions, the system’s real-time performance may be guaranteed;

  • In practice, the further the total is below 100%, the better the chances. If it hovers around 99% or 100%, the situation is quite dangerous, and the system can even be safely judged as failing to meet the requirements.

How’s that? Isn’t the reasoning quite simple? Now, how do we calculate it?

  • Observing the previously introduced real-time model, we can find that both “real-time window” and “the time required to process events” are measures of duration;

  • Among them, the real-time window is determined by the specific application needs, dictated by the time requirements of the objective physical world, which translates to: “If tasks are not completed within a certain time, Newton will come after you!”

  • The real-time window also implies another important assumption, that in the worst-case scenario, the event may occur periodically within the time interval represented by the real-time window – just as one wave calms, another rises (gentlemen, I won’t provide images here).

  • "The time required for event processing" refers to the time the CPU needs to execute the event-handling program. This touches on another critical issue: determinism. In simple terms, at the very least you must be able to guarantee that the task's execution time has a maximum value (an upper bound), and that this upper bound is stable and reliable; this is merely the minimum standard for determinism. Some applications demand much more: for example, some vehicle systems require that the execution time fluctuate only within a very small range, and if this cannot be achieved, the system is directly deemed to fail the "determinism" requirement (many automotive ECUs work this way), and the whole system's real-time performance becomes an illusion.

Why is determinism so important? Just think about it, if a person who is full of hot air guarantees you: “The stock market will definitely surge tomorrow, you should go all in,” would you really dare to make decisions based on such information?
In real-time systems, the execution time of tasks is a very critical indicator, directly related to the percentage of system resources occupied by the task. If this data is not “deterministic,” how can we confidently say: the system can definitely meet real-time requirements?
There is an important conclusion here that everyone can jot down:
Real-time performance does not necessarily require the system to run as fast as possible, but it does require the system to have a high degree of determinism.
This is why both low-frequency low-performance Cortex-M and high-frequency high-performance Cortex-R can be used in real-time systems; while high-frequency high-performance Cortex-A cannot meet the requirements of “hard real-time” (because Cortex-A uses MMU, theoretically leading to uncertain memory access times due to virtual address space, thus tasks based on MMU cannot meet determinism requirements).
  • It is worth emphasizing that, for the same event-handling code, it is easy to understand: when the CPU frequency increases (the CPU can execute more instructions per unit time), the time required for event processing becomes shorter.

Based on the above facts, we can envision a strict ideal condition:

  • Some event occurs periodically within the time interval represented by the “real-time window” (Tw);

  • During this period, it takes time (Th) to process this event;

Then the percentage of CPU resources consumed by the current real-time task is:

    Pn = (Th / Tw) × 100%

where Pn is the CPU resource occupation of "event n".

Cost of Frequent Context Switching

Do you remember the question we tried to discuss at the beginning of this article: does time-slicing have any significance for guaranteeing real-time performance? After the theoretical preparations above, we now have all the conditions needed to clearly and precisely answer this question:

Known facts are as follows:

  • Under constant CPU frequency, the available CPU resources are fixed;

  • There are various ways to implement time-slicing: for example, pure cooperative time-slicing (such as state machines in bare metal, or function pointer-based cooperative schedulers); or in operating systems, using preemptive time-slicing among tasks of the same priority, i.e., Round-robin mode (for details, please refer to 《【Resolving Confusion】 Is it “Time-Slice” or “Time-Sharing Polling”?》).

  • Regardless of the time-slicing method used, task switching has its costs. For example, in bare metal, the cost of entering and exiting functions, the cost of reconstructing local variables in the stack (for details, refer to 《Talking about C Variables – Summer Insects Cannot Speak of Ice》); in operating systems, the cost of task scheduling, etc.

  • Under the premise of fixed resources, the more frequent the task switching, the more CPU time is consumed by switching itself, and thus the less CPU time actually remains for real-time task processing.

Conclusion: Frequent task switching is harmful to the system’s real-time performance; since frequent time-slicing can lead to a large number of unnecessary task switches, it is overall detrimental to real-time performance.

Inference: Task switching is necessary for real-time systems, but it must be kept to a minimum – reject unnecessary and flashy task switches, only perform those that are truly necessary.

To put it bluntly, many people have long mistaken concurrency, and even "time-slicing" (merely one way of implementing concurrency), for a silver bullet that guarantees real-time performance; not only do they dive headlong into it without self-awareness, they also pass their so-called successful experience on to those around them. It is truly lamentable.

Conclusion

The conclusion of this article essentially conveys one message: whether in bare metal or operating-system environments, multitasking is achievable; this is determined by the essence of concurrency technology. Time-slicing is merely a common, "mindless" way to implement concurrency in those environments. In other words, the role of time-slicing is only to achieve concurrency; it has nothing to do with ensuring real-time performance, and can even be harmful to it.

  

So, assuming that it has been mathematically proven that “there may exist a solution to meet the real-time requirements of the system,” what specific methods can be used to achieve it? For more details, please stay tuned for the next installment.


Disclaimer: This article is a network repost, and the copyright belongs to the original author. If there are copyright issues, please contact us, and we will confirm the copyright based on the copyright certificate you provide and pay for the manuscript or delete the content.
