First, threads within the same process have no communication problem: they can freely access each other's data. The real work is the opposite—actively "isolating" threads from one another to avoid dirty reads and writes.
Second, multi-threaded programming (as well as multi-process programming) requires a foundation in operating systems. Without understanding the operating system, multi-threaded collaboration cannot be done well.
Specifically for your case, simply put, do not use polling.
The action of polling itself determines that your program will have a high CPU usage, generate a lot of heat, and run slowly.
Your current case happens to have very simple program logic; once it gets slightly more complex, this polling style will inevitably produce "CPU pegged at 100% while the program logic makes no progress at all," which is no better than a useless busy loop.
Step 1: First, set up a global, standard lock (mutex).
Note:
the first thread, which wants to modify the shared data, must acquire the lock first, to ensure the second thread is not reading the data;
the second thread, on finding data available, must also acquire the lock first, to ensure the first thread will not keep modifying it.
This plays a role similar to the "global variable" you used before; but you must use a standard lock, and use its standard acquire call to request read/write access to the shared data.
This is because a standard mutex is provided by the operating system. When your thread fails to acquire the mutex, the OS places it on a wait queue and stops allocating it time slices until the mutex becomes available—this avoids busy waiting. Once the mutex is available, the thread is moved back to the ready queue, and only then may it receive a time slice again.
This avoids a great deal of wasted CPU time.
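A minimal sketch of Step 1, using C++'s std::mutex (the same idea applies to pthread_mutex_t; the names below are illustrative, not from your program):

```cpp
#include <mutex>
#include <thread>

// One global, standard lock guarding the shared data (illustrative names).
std::mutex g_lock;
long g_shared = 0;

// Each thread must acquire the lock before touching g_shared. A thread
// that fails to acquire it is put to sleep by the OS, not busy-waiting.
void add_many(int n) {
    for (int i = 0; i < n; ++i) {
        std::lock_guard<std::mutex> guard(g_lock);
        ++g_shared;  // safe: only one thread mutates at a time
    }
}
```

Without the lock, two threads running add_many concurrently would race and the final count would usually come up short; with it, the OS simply parks whichever thread loses the race until the lock is free.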
Step 2: Carefully analyze the business logic. Draw a state transition diagram for the two threads, and determine how many locks there should be and what states they carry (for example, whether read/write locks are needed); ensure that "when a thread acquires a lock, it can definitely make progress; if a thread cannot make progress, it must be in a suspended state."
Note that you cannot know when the second thread is reading the data, or whether it has been blocked elsewhere for a long time without reading at all. Therefore, you must use enough flags to keep states such as "data uninitialized, data initializing, data initialized and awaiting reading, data being read, data reading completed" clearly distinguishable. Otherwise, data may be lost (thread 1 produces data, thread 2 has not yet been scheduled, and thread 1 overwrites the previous data with new data), or dirty reads and writes may occur.
Of course, depending on business needs, a lock with only true/false states may be enough, but you must carefully evaluate and fully discuss before doing so—your problem description is too brief to determine if it will work.
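A sketch of what "enough flags" can look like: an enum of states guarded by one mutex plus a condition variable (the state names and struct are assumptions for illustration, not from your code):

```cpp
#include <condition_variable>
#include <mutex>

// Illustrative state flags for the shared buffer.
enum class BufState { Empty, Writing, Ready, Reading, Consumed };

struct Shared {
    std::mutex m;
    std::condition_variable cv;
    BufState state = BufState::Empty;
    int data = 0;
};

// Producer: only overwrites once the previous datum has been consumed,
// so no result is silently lost.
void produce(Shared& s, int value) {
    std::unique_lock<std::mutex> lk(s.m);
    s.cv.wait(lk, [&] { return s.state == BufState::Empty ||
                               s.state == BufState::Consumed; });
    s.state = BufState::Writing;
    s.data = value;
    s.state = BufState::Ready;
    s.cv.notify_all();  // wake a waiting consumer
}

// Consumer: sleeps until data is Ready, marks it Consumed when done.
int consume(Shared& s) {
    std::unique_lock<std::mutex> lk(s.m);
    s.cv.wait(lk, [&] { return s.state == BufState::Ready; });
    s.state = BufState::Reading;
    int v = s.data;
    s.state = BufState::Consumed;
    s.cv.notify_all();  // wake a waiting producer
    return v;
}
```

The condition variable is what makes "thread must enter a suspended state when it cannot proceed" happen: cv.wait releases the lock and parks the thread until the predicate holds.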
Step 3: Redesign the shared data structure to minimize “lock time”.
From your description, thread 1 cannot stop: it needs to constantly generate computation results. But the default behavior of a mutex is that a thread which fails to acquire the lock is suspended.
Thus, while thread 1 generates computation results, thread 2 can only wait; and while thread 2 processes the computation results for 5ms, thread 1 can only wait…
If the operating system does not allocate a time slice promptly, thread 1 may wait 200 ms before thread 2 even gets to run; while thread 2 executes, thread 1 sits in the wait queue, only returning to the ready queue after thread 2 releases the lock, and may wait another 200 ms before executing… That adds up to over 400 ms of delay.
If you do it this way, you actually should not use multi-threading at all. Just run everything in the same thread, collect data for 50ms, and then execute processing for 5ms—simple, less error-prone, efficient, and responsive.
If you want to use multi-threading to increase throughput, then you must create a more usable data structure.
For example, a linked list.
Each node of the linked list is large enough to hold 50 ms of data. Thread 1 first allocates a node and writes data into it; after 50 ms, it acquires the global list lock to append the node to the list. While holding the lock, it only needs to point the current tail's next at the new node (it may also maintain head and tail pointers, so it never has to traverse the list to find the end).
Similarly, after being scheduled, thread 2 applies for the lock on the linked list, then removes the first node from the chain, records the pointer locally, and immediately releases the lock; then it can process the data carried by this node without interruption.
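A sketch of this list under the two rules just described—fill the node outside the lock, splice or detach it under the lock (Node, head, tail are illustrative names):

```cpp
#include <mutex>

struct Node { int payload; Node* next = nullptr; };

// One lock guarding the list's head/tail pointers (illustrative globals).
std::mutex list_lock;
Node* head = nullptr;
Node* tail = nullptr;

// Thread 1: the node is already filled OUTSIDE the lock; under the lock
// we only adjust two pointers.
void push_node(Node* n) {
    std::lock_guard<std::mutex> g(list_lock);
    if (tail) tail->next = n; else head = n;
    tail = n;  // tail pointer avoids walking the list to find the end
}

// Thread 2: detach the first node under the lock, release immediately,
// then process the node's payload without holding anything.
Node* pop_node() {
    std::lock_guard<std::mutex> g(list_lock);
    Node* n = head;
    if (n) {
        head = n->next;
        if (!head) tail = nullptr;
    }
    return n;  // nullptr if the list is empty
}
```

The critical section is a handful of pointer assignments, so each thread holds the lock for microseconds rather than for the full 50 ms / 5 ms of real work.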
Note that if you still use the plainest mutex here—since every operation on the list must lock first and only then check for data—thread 2 can fall into a busy loop of locking, finding no data, unlocking, and immediately locking again… so that most of its execution time is spent on locking and unlocking.
Therefore, at this point we need a slightly richer mechanism, such as a lock that carries a count—in effect, a counting semaphore.
While the count is non-zero, thread 2 may take a node from the list, decrementing the count by one; when the count reaches zero, thread 2 must sleep and stop touching the list. Each time thread 1 successfully appends a node to the list, it increments the count by one…
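Such a counting lock can be built from a mutex and a condition variable—a sketch, with illustrative names (C++20 also ships a ready-made std::counting_semaphore):

```cpp
#include <condition_variable>
#include <mutex>

// A "lock with a count": thread 1 posts after appending a node,
// thread 2 waits (sleeping, not spinning) until at least one node exists.
class CountingSem {
    std::mutex m;
    std::condition_variable cv;
    int count = 0;  // number of nodes currently in the list
public:
    void post() {   // thread 1: +1 after a successful append
        { std::lock_guard<std::mutex> g(m); ++count; }
        cv.notify_one();
    }
    void wait() {   // thread 2: sleeps while the count is zero
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return count > 0; });
        --count;    // claim one node's worth of permission
    }
    bool try_wait() {  // non-blocking variant
        std::lock_guard<std::mutex> g(m);
        if (count == 0) return false;
        --count;
        return true;
    }
};
```

The key difference from bare mutex polling: when the count is zero, cv.wait parks thread 2 on the OS wait queue instead of letting it churn through lock/check/unlock cycles.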
However, because threads 1 and 2 may read and write very frequently, if all reading and writing of the data happens while holding the lock, the lock will be held for the full 50 ms / 5 ms, and the window in which the other thread can access the data becomes tiny. The other thread may then never actually get at the data—unless one thread fills the buffer, or drains all buffered data, and only then blocks; otherwise the other thread may never get a chance to execute. This is what is called "starvation," and it must be avoided.
Therefore, keep data preparation outside the locked region; once locked, only manipulate the next/head/tail pointers to add or remove nodes, then release the lock immediately. Find ways to keep the shared data "available most of the time," so the two threads can cooperate.
You can implement this shared data structure as a general-purpose queue that supports multi-threaded access, allowing data access only through push/pop interfaces and placing the locking and unlocking inside those two interfaces; this simplifies the calling logic and prevents erroneous access.
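A sketch of such an encapsulated queue—callers never see the lock, and a consumer calling pop on an empty queue sleeps rather than spins (the class name is illustrative):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Locking lives only inside push/pop, so callers cannot misuse the lock.
template <typename T>
class BlockingQueue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> q;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> g(m); q.push(std::move(v)); }
        cv.notify_one();  // wake one sleeping consumer
    }
    T pop() {             // blocks (thread sleeps) while the queue is empty
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        T v = std::move(q.front());
        q.pop();
        return v;
    }
};
```

Thread 1 calls push, thread 2 calls pop, and neither ever touches a lock directly—the whole Step 1 through Step 3 discipline is enforced by the interface.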
In fact, your case can be optimized further.
For example, if there are only these two threads, with thread 1 the producer and thread 2 the consumer (the single-producer/single-consumer model), then you may not need locks at all: a standard circular array (ring buffer) suffices. This is also the classic introductory example of lock-free programming.
But if there are more participants and the logic is more complex, then locks are necessary; even read/write locks, flags, events, etc., must be fully utilized.
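A sketch of that lock-free single-producer/single-consumer ring buffer using C++ atomics—one slot is deliberately kept empty to distinguish "full" from "empty," and the class name and capacity are illustrative:

```cpp
#include <atomic>
#include <cstddef>

// Safe ONLY for exactly one producer thread and one consumer thread.
template <typename T, std::size_t N>  // N slots; N-1 are usable
class SpscRing {
    T buf[N];
    std::atomic<std::size_t> head{0};  // next slot to read  (consumer-owned)
    std::atomic<std::size_t> tail{0};  // next slot to write (producer-owned)
public:
    bool push(const T& v) {            // called only by the producer
        std::size_t t = tail.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % N;
        if (next == head.load(std::memory_order_acquire))
            return false;              // full: producer must back off
        buf[t] = v;
        tail.store(next, std::memory_order_release);  // publish the slot
        return true;
    }
    bool pop(T& out) {                 // called only by the consumer
        std::size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire))
            return false;              // empty: consumer must back off
        out = buf[h];
        head.store((h + 1) % N, std::memory_order_release);  // free the slot
        return true;
    }
};
```

It works without locks precisely because each index has exactly one writer: the producer alone advances tail, the consumer alone advances head, and the acquire/release pairs make each slot's contents visible before its index moves. With more than one producer or consumer, this guarantee collapses and you are back to locks.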
