In microcontroller programming, a well-designed clock interrupt lets a single CPU do the work of two, greatly simplifying program development and improving the system's efficiency and responsiveness. Routine, time-sensitive tasks can be placed in the clock interrupt, and the interrupt can also help the main program carry out timing and delay operations.
Below, we will use the 6MHz clock of the AT89C51 system as an example to illustrate the application of clock interrupts.
The timer initial value and interrupt period: the clock interrupt does not need to be very frequent; a period of 20 ms (50 Hz) is generally sufficient, or 10 ms (100 Hz) if a 0.1-second timing signal is needed. Here we take 20 ms and use timer T0 in 16-bit timer mode (mode 1). In this mode T0 increments by 1 every machine cycle; when it overflows past 0FFFFh an interrupt is generated, and the hardware sets the corresponding flag bit, which software can also query. So if we preload T0 with a suitable value and start it, an interrupt occurs after the timer has counted from that value up through 0FFFFh and overflowed, that is, after (0FFFFh - initial value + 1) machine cycles; this preloaded number is called the "initial value". Now we calculate the initial value we need: with a 6 MHz clock, 12 clock cycles make one machine cycle, so 20 ms contains 10000 machine cycles. (10000)10 = (2710)16, so 0FFFFh - 2710h + 1 = 0D8F0h. Because responding to the interrupt, saving the context and reloading the initial value take roughly another 7 to 8 machine cycles, we add 7 to this value, and the initial value to load into T0 is 0D8F7h. On entering each interrupt, first push A and PSW onto the stack, then reload T0 with 0D8F7h.
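As a concrete illustration of this setup, a minimal sketch in 8051 assembly might look like the following. The labels MAIN and T0_ISR are chosen here only for the example, and the 20 ms tasks would go where the comment indicates.
ORG 0000H
LJMP MAIN ;reset vector
ORG 000BH ;timer T0 interrupt vector
LJMP T0_ISR
MAIN: MOV TMOD,#01H ;T0 in mode 1, 16-bit timer
MOV TH0,#0D8H ;load the initial value 0D8F7h
MOV TL0,#0F7H
SETB ET0 ;enable the T0 interrupt
SETB EA ;enable interrupts globally
SETB TR0 ;start T0
SJMP $ ;main program would continue here
T0_ISR: PUSH ACC ;save the context
PUSH PSW
MOV TH0,#0D8H ;reload 0D8F7h (the extra 7 compensates for
MOV TL0,#0F7H ;the response and reload time)
;... the 20 ms tasks go here ...
POP PSW
POP ACC
RETI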
A unit that increments by 1 on each interrupt: take an internal RAM location and name it INCPI (INCrease Per Interrupt). In the interrupt, after reloading T0 with the initial value, an INC INCPI instruction increments it. From this unit, both the interrupt routine and the main program can derive timing signals of any whole multiple of the 20 ms period, from 1 to 256 times. For example, suppose a routine that sends data to a seven-segment (digital tube) display must run every 0.5 seconds to refresh it. We can set up a unit (call it a wait unit) W_DI and use the instructions MOV A,INCPI / ADD A,#25 / MOV W_DI,A to make it 25 greater than the current INCPI value, then check in each interrupt whether it equals INCPI. If it does, 25 interrupt periods (0.5 s) have passed, so we run the display routine and add 25 to W_DI again to wait for the next 0.5 seconds. Several wait units can be set up to obtain various timing signals: the interrupt routine checks each wait unit against INCPI on every interrupt, and when one matches it performs the corresponding processing and re-arms that wait unit; otherwise it skips it. For example, a 0.5-second signal can refresh or blink the display, a 1-second signal can drive a real-time clock, and other signals can output a square wave of a given frequency or poll input devices periodically.
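To illustrate, the interrupt body for this scheme could be sketched as follows; the internal RAM addresses 30H and 31H used for INCPI and W_DI, and the DISP display routine, are assumptions made only for this example (the main program first sets W_DI to INCPI + 25 as described above).
INCPI EQU 30H ;20 ms tick counter in internal RAM
W_DI EQU 31H ;wait unit for the 0.5 s display refresh
T0_ISR: PUSH ACC ;save the context
PUSH PSW
MOV TH0,#0D8H ;reload the initial value 0D8F7h
MOV TL0,#0F7H
INC INCPI ;one more 20 ms tick
MOV A,INCPI
CJNE A,W_DI,ISR_X ;reached the scheduled tick?
LCALL DISP ;yes: refresh the display (assumed routine)
MOV A,W_DI
ADD A,#25 ;re-arm W_DI for the next 0.5 s
MOV W_DI,A
ISR_X: POP PSW ;restore the context
POP ACC
RETI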
Reading keys in the interrupt: the usual approach is to read the keyboard in the main program, with these steps: scan the keyboard; if a key is pressed, delay a few milliseconds for debouncing; confirm the key really is pressed; then carry out the task assigned to that key, and repeat. This has two shortcomings: 1. while the key's task is being processed, new key input cannot be latched, so keystrokes may be missed; 2. during the debounce delay the CPU can do nothing else, which is inefficient. Moving key reading into the clock interrupt avoids both problems. The method: if the same key is found pressed in two consecutive interrupts, the key is considered valid (which also accomplishes the debouncing) and is latched into a first-in, first-out (FIFO) keyboard buffer for the main program to process. While the main program handles one key, the system can still accept keyboard input. A buffer depth of 8 is usually enough; if more than 8 keys accumulate, new keys are discarded and a warning can tell the user that further keystrokes will be ignored. And if the keyboard buffer stays stalled for much longer than the maximum time the main program needs to process a key, the main program has probably gone wrong or hung, and an instruction in the interrupt can reset the system, acting as a watchdog.
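A possible sketch of such an interrupt-driven key read is given below; it would be called once from the 20 ms interrupt. KEYSCAN is an assumed routine that returns the code of the pressed key in A (0FFH meaning no key), and the RAM addresses, the lock bit and the 8-level buffer at 40H are likewise chosen only for illustration; reading keys out of the buffer in the main program and the watchdog reset are not shown.
KEYLAST EQU 32H ;key code sampled in the previous interrupt
KCNT EQU 33H ;number of key codes waiting in the FIFO
KBUF EQU 40H ;8-level keyboard FIFO, 40H-47H
KEYLOCK BIT 00H ;set once the current press has been latched
KEYCHK: LCALL KEYSCAN ;assumed: key code in A, 0FFH = no key
CJNE A,#0FFH,KC_DN
MOV KEYLAST,#0FFH ;key released: re-arm for the next press
CLR KEYLOCK
RET
KC_DN: CJNE A,KEYLAST,KC_1 ;same code as in the previous interrupt?
JB KEYLOCK,KC_R ;already latched this press, ignore it
MOV A,KCNT
CJNE A,#8,KC_PUT ;room left in the 8-level buffer?
SJMP KC_R ;buffer full: drop the key (warn the user here)
KC_PUT: PUSH 00H ;save R0 (register bank 0 assumed)
MOV A,#KBUF
ADD A,KCNT
MOV R0,A ;R0 points to the next free slot
MOV @R0,KEYLAST ;latch the debounced key code
INC KCNT
SETB KEYLOCK ;latch only once per press
POP 00H ;restore R0
KC_R: RET
KC_1: MOV KEYLAST,A ;new key seen: confirm it at the next interrupt
CLR KEYLOCK
RET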
Delays in the main program: because the clock interrupt runs all the time, it should be temporarily disabled when the main program needs a short, high-precision delay. For longer delays where less precision is required, we can follow the pattern below and avoid multi-level nested delay loops.
Example: Output a 1-second high-level pulse on P1.1
MOV A,INCPI
INC A
CJNE A,INCPI,$ ;Wait for the next interrupt, so the pulse starts on a tick boundary
SETB P1.1 ;Set P1.1 high, pulse starts
ADD A,#50 ;50 times 20ms equals 1 second
CJNE A,INCPI,$ ;Wait until the interrupt has incremented INCPI 50 more times
CLR P1.1 ;Set P1.1 low, pulse ends
From the above it can be seen that to apply clock interrupts flexibly, tasks should be allocated sensibly between the interrupt and the main program, with a clear division of labor and simple interfaces; the techniques involved take practice and exploration. It is also important to keep the interrupt service routine as short as possible, and it must never take longer than the 20 ms interrupt period.