Asynchronous programming produces programs that are both fast and resource-efficient. It enables high concurrency in a single-threaded environment and can implement a TCP/IP protocol stack without an operating system. Because it wastes neither time nor resources, it keeps power consumption to a minimum, making asynchronous programming the best programming model for low-power design.
Three Realms
There are three realms in Zen: seeing mountains as mountains and water as water; seeing mountains as not mountains and water as not water; seeing mountains as mountains and water as water again.
Programming also has three realms: foreground/background (super-loop) systems; real-time operating systems; asynchronous foreground/background systems.
Asynchronous Programming vs Foreground/Background Systems
Unlike a plain foreground/background system, asynchronous programming never blocks indefinitely: it puts every clock cycle of the system to use and can efficiently implement a TCP/IP protocol stack by making full use of the system's concurrently working peripherals.
Asynchronous Programming vs Operating Systems
Asynchronous programming achieves concurrency in a single-threaded environment, saving a significant amount of thread stack space and eliminating thread context switching; this saves run time and avoids the mutex and deadlock problems of multi-threading.
Synchronization and Asynchrony
Synchronization and asynchrony pertain to I/O operations. A synchronous I/O operation waits for the I/O to complete after initiating the request. In the absence of an operating system (i.e., in a foreground/background system), the I/O API continuously polls the I/O status until the I/O completes or times out; with an operating system, the I/O API suspends the current thread until the I/O completes or a timeout timer wakes the suspended thread.
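As a rough illustration, a synchronous send routine in a system without an operating system might look like the following sketch; the `uart_tx_ready`, `uart_tx_write`, and `ticks` helpers are hypothetical names standing in for whatever the hardware and driver actually provide.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hardware helpers; real names depend on the MCU and driver. */
extern bool     uart_tx_ready(void);          /* polls a status register   */
extern void     uart_tx_write(uint8_t byte);  /* writes the data register  */
extern uint32_t ticks(void);                  /* free-running tick counter */

/* Synchronous send: initiate the transfer and poll the status until it
 * finishes or times out; the caller is blocked for the whole duration. */
int uart_send_sync(const uint8_t *buf, uint32_t len, uint32_t timeout_ticks)
{
    uint32_t start = ticks();

    for (uint32_t i = 0; i < len; i++) {
        while (!uart_tx_ready()) {                 /* busy-poll the status */
            if (ticks() - start > timeout_ticks)
                return -1;                         /* timed out            */
        }
        uart_tx_write(buf[i]);
    }
    return 0;                                      /* all bytes sent       */
}
```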
Asynchronous I/O operations return immediately after initiating a request without waiting for I/O completion. The program can continue executing other tasks, including initiating other I/O requests. There are two ways for a program to determine if an asynchronous I/O request has completed: either by polling the I/O status during idle time or by registering a callback function when requesting the I/O operation, which is called when the I/O operation completes.
The distinction between synchronous and asynchronous in I/O APIs is: synchronous initiates a request and waits for completion, while asynchronous returns immediately after initiating a request. Synchronous I/O data processing follows closely after the API call, while asynchronous I/O data processing often occurs in different locations from the API call. In simple terms, synchronous means the I/O request code and data processing code are adjacent, typically within the same function; asynchronous means they are loosely coupled, not in the same place.
Synchronous I/O aligns with human cognitive habits, making it easy to understand. Asynchronous I/O can achieve high concurrency even without multi-threading support, is more resource-efficient than multi-threading, and avoids issues such as mutexes and deadlocks.
Polling-Based Asynchronous I/O
In terms of implementation, asynchronous I/O can be divided into polling-based and callback-based asynchronous I/O. Polling-based asynchronous I/O requires the program to check the I/O completion status at a suitable time after initiating the request, for example during idle time, and it may have to check repeatedly until the I/O completes or times out. If an operating system is available, the thread can instead be suspended until the I/O operation completes or times out.
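A minimal sketch of the polling-based style, assuming a hypothetical DMA-backed SPI driver with `spi_start_rx` and `spi_rx_done` functions; the request returns immediately and completion is checked from the idle loop.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical driver interface: the transfer runs in the background. */
extern void spi_start_rx(uint8_t *buf, uint32_t len);   /* returns at once */
extern bool spi_rx_done(void);                          /* completion flag */
extern void process(const uint8_t *data, uint32_t len);

static uint8_t rx_buf[64];

void app_start_read(void)
{
    spi_start_rx(rx_buf, sizeof rx_buf);   /* initiate and return immediately */
}

/* Called from the idle loop: poll whether the earlier request has finished. */
void app_idle(void)
{
    if (spi_rx_done())
        process(rx_buf, sizeof rx_buf);    /* data processing happens here    */
}
```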
Callback-Based Asynchronous I/O
Callback-based asynchronous I/O requires registering a callback function when initiating an I/O request; the callback is invoked when the I/O completes. Callbacks eliminate the need for the program to check the I/O status repeatedly, so in terms of execution efficiency the callback-based style is superior to the polling-based style; the trade-off is that the I/O request and the data processing are completely separated into two different functions.
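The same read expressed in the callback-based style, again with hypothetical driver names; note that the request and the data processing now live in two different functions.

```c
#include <stdint.h>

typedef void (*io_done_cb)(uint8_t *data, uint32_t len);

/* Hypothetical driver call: starts the transfer, remembers the callback,
 * and invokes it (typically from an ISR) once the transfer completes. */
extern void spi_read_async(uint8_t *buf, uint32_t len, io_done_cb on_done);
extern void process(uint8_t *data, uint32_t len);

static uint8_t rx_buf[64];

/* Data processing: a separate function from the request. */
static void on_rx_done(uint8_t *data, uint32_t len)
{
    process(data, len);
}

void app_start_read(void)
{
    spi_read_async(rx_buf, sizeof rx_buf, on_rx_done);  /* returns immediately */
}
```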
Interrupts and Callbacks
Interrupts are a special case of callbacks: when the hardware completes an operation, it can raise an interrupt, and that interrupt invokes the interrupt service function, which plays the role of the callback.
Binding Callback Functions
Callback functions can be bound at compile time or at runtime. If the callback function is known in advance, it can be bound at compile time: the caller and callee agree on a function prototype, the caller implements that function, and the callee usually provides an empty implementation of it as a weak symbol. With the weak-symbol empty function in place, there are no link errors regardless of whether the caller actually implements the callback. Runtime binding is more flexible, allowing different callback functions for the same operation, and can be implemented using function pointers.
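A sketch of both binding styles in C, assuming a GCC-style toolchain for the `__attribute__((weak))` default; the UART names are illustrative only.

```c
#include <stddef.h>

/* --- Compile-time binding via a weak symbol -------------------------------
 * The library provides an empty default; the application may override it by
 * defining a strong function with the same prototype. Either way, the link
 * succeeds. */
__attribute__((weak)) void uart_rx_callback(unsigned char byte)
{
    (void)byte;                    /* default: do nothing */
}

/* --- Runtime binding via a function pointer ------------------------------ */
typedef void (*rx_handler_t)(unsigned char byte);

static rx_handler_t rx_handler;    /* NULL until the application registers one */

void uart_set_rx_handler(rx_handler_t handler)
{
    rx_handler = handler;          /* can be changed at any time */
}

/* Called by the driver (e.g., from the RX interrupt) when a byte arrives. */
void uart_on_rx(unsigned char byte)
{
    uart_rx_callback(byte);        /* compile-time bound callback            */
    if (rx_handler != NULL)
        rx_handler(byte);          /* runtime bound callback, if registered  */
}
```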
Event-Driven Model
The event-driven model is essentially equivalent to the callback-based asynchronous programming model. It registers callback functions for the events of interest and calls each callback when its event occurs. An event-driven framework provides the main loop, predefines the various events, and may include an event scheduler. The event-driven model can thus be seen as a concrete application of callback-based asynchronous programming.
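A minimal sketch of such a framework: interrupt handlers post events into a small queue, and the main loop dispatches each event to its registered callback. All of the names here are illustrative, not from any particular framework.

```c
#include <stddef.h>
#include <stdint.h>

typedef enum { EV_TIMER, EV_UART_RX, EV_BUTTON, EV_COUNT } event_id;
typedef void (*event_handler)(void);

static event_handler handlers[EV_COUNT];

void event_register(event_id id, event_handler h)
{
    handlers[id] = h;                      /* bind a callback to an event */
}

/* A tiny event queue: ISRs post events, the main loop drains them. */
static volatile uint8_t queue[16];
static volatile uint8_t head, tail;

void event_post(event_id id)               /* typically called from an ISR */
{
    queue[head++ & 15] = (uint8_t)id;
}

void event_loop(void)
{
    for (;;) {
        while (tail != head) {
            event_id id = (event_id)queue[tail++ & 15];
            if (handlers[id] != NULL)
                handlers[id]();            /* dispatch to the callback */
        }
        /* idle: a real framework could enter a low-power sleep here */
    }
}
```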
Classification of Events
Events can be classified into primitive events and derived events. Primitive events are generated by hardware, such as I/O events or timer events. Derived events are those created by the program based on the current state and primitive events. The program can combine multiple primitive events into a single derived event or derive multiple derived events from a single primitive event.
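As an illustration (the names and the 30-tick window are made up), a handler for the primitive button-press event might derive a double-click event from two presses that arrive close together:

```c
#include <stdint.h>

extern uint32_t ticks(void);                   /* free-running tick counter */
extern void     event_post_double_click(void); /* posts the derived event   */

/* Handler for the primitive "button pressed" event. */
void on_button_press(void)
{
    static uint32_t last_press;
    uint32_t now = ticks();

    if (now - last_press < 30)                 /* two presses within 30 ticks   */
        event_post_double_click();             /* derive a "double click" event */

    last_press = now;
}
```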
Events can also be classified into asynchronous events and synchronous events. Asynchronous events can occur at any time and in an uncertain quantity, such as user input, serial port input, and network input, which are generally asynchronous input events. Synchronous events only occur within a specified time after initiating an I/O request and in a determined quantity, such as active SPI data transmission or reception; synchronous communication usually generates synchronous events. Asynchronous events occur beyond the control of the program, often lacking a clock signal or having a passive input clock signal; synchronous events are generated under the program’s control, usually having an active clock output signal.
Interrupts and Events
Interrupts are a type of event: they are generated by the hardware to notify the program that something has happened.
Sources of Events
Primitive events can be generated by hardware interrupts or by the program polling the I/O status. Derived events are generated by the program based on the current state while processing primitive events.
Interrupt-Driven Model
The interrupt-driven model is a special case of the event-driven model, using the hardware interrupt controller as the event scheduler. After initialization, the main program enters a sleep state and performs no work; when an interrupt occurs, execution enters the corresponding interrupt service function. Each interrupt is an event, and each interrupt service function is a callback. For simple systems, the interrupt-driven model is the most efficient and power-saving implementation, making it very well suited to IoT sensors.
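A minimal sketch of an interrupt-driven sensor node, assuming an ARM Cortex-M-style MCU where the `wfi` instruction puts the core to sleep; the sensor and radio functions and the ISR name are hypothetical.

```c
#include <stdint.h>

extern void     sensor_init(void);      /* configure the sensor and its interrupt */
extern uint16_t sensor_read(void);
extern void     radio_send(uint16_t v); /* hypothetical radio driver              */

int main(void)
{
    sensor_init();
    for (;;)
        __asm volatile ("wfi");         /* sleep until the next interrupt */
}

/* Each interrupt is an event, and each interrupt service function is the
 * callback for that event: all of the work happens here. */
void SENSOR_IRQHandler(void)
{
    radio_send(sensor_read());
}
```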