Advanced Software Architecture Design for Embedded Systems: Part 2


Continuing from the previous installment, The Software Architecture Design of Embedded Systems (Part 1), this part picks up where we left off. Keep it up~

5. Language-Oriented Programming (LOP)

5.1. Advancing Automated Code Generation

The popular definition of language-oriented programming is: integrating domain-specific knowledge into a specialized computer language to improve the efficiency of communication between humans and computers.

Automated code generation is essentially language-oriented programming. Language does not equate to programming languages; it can be diagrams, tables, or any medium that establishes a communication channel between humans and machines. One of the great leaps in productivity in software development history was the invention of high-level languages, which let us implement more complex functionality in simpler ways. However, high-level languages have their drawbacks: because they are designed for general purposes, they sit far from the problem domain, and the path from problem domain to program instructions remains complex. For instance, to create a graphical interface, I can tell another engineer: place a button here, an input box there, and when the button is pressed, display ‘Hello World’ in the input box. I could even sketch it out for him.

For our direct communication, this is sufficient, taking just 5 minutes. But how long does it take to convert that into a language that a computer can understand?

If it’s assembly language? (Telling the computer how to operate registers and memory)

If it’s C++? (Telling the computer how to draw on the screen and respond to mouse and keyboard events)

If there’s a good graphics library? (Telling the computer to create Button and Label objects, manage these objects, place them, and handle messages)

If there’s a good development framework + IDE? (Using WYSIWYG tools to design classes, member variables, and write message response functions)

If there were a language specifically for graphical interface development?

It could look something like:

Label l {Text=""}
Button b{Text="ok",action=l.Text="hello world"}

General computer languages are based on concepts like variables, classes, branches, loops, linked lists, and messages. These concepts are quite distant from the problem itself and have very limited expressive power. Natural language has strong expressive capabilities, but it is rife with ambiguity and redundancy, making it difficult to standardize and format. Traditional thinking tells us that computer languages are just a series of instructions, and programming is writing down these instructions. However, the idea of language-oriented programming is to describe problems in ways that are as close to the problem and human thinking as possible, thereby reducing the difficulty of translating human thoughts into computer software.

Consider an example from game development. Nowadays, network games commonly use C++ or C to develop game engines. The specific game content is created using a series of secondary development tools and languages. A map editor is one such domain-specific language. Lua or similar scripts are embedded within the game to write weapons, skills, quests, etc. Lua itself does not have the capability to develop standalone applications; however, game engine designers provide Lua with a series of interfaces at various levels, embedding domain knowledge into the script, significantly improving the efficiency of game secondary development. The ancestor of network games, MUD, designed LPC as its development language. The relationship between the MUD engine MudOS and LPC is illustrated below:

Creating an NPC with LPC looks something like this:

inherit NPC;

void create()
{
  set_name("菜花蛇", ({ "caihua she", "she" }));       // name: "rat snake"
  set("race", "野兽");                                  // race: wild beast
  set("age", 1);
  set("long", "一只青幽幽的菜花蛇,头部呈椭圆形。\n");    // long description
  set("attitude", "peaceful");
  set("str", 15);
  set("cor", 16);
  set("limbs", ({ "头部", "身体", "七寸", "尾巴" }));    // head, body, vitals, tail
  set("verbs", ({ "bite" }));
  set("combat_exp", 100 + random(50));
  set_temp("apply/attack", 7);
  set_temp("apply/damage", 4);
  set_temp("apply/defence", 6);
  set_temp("apply/armor", 5);
  setup();
}

void die()
{
  object ob;
  message_vision("$N抽搐两下,$N死了。\n", this_object()); // "$N twitches twice and dies."
  ob = new(__DIR__"obj/sherou");                         // spawn a snake-meat object
  ob->move(environment(this_object()));
  destruct(this_object());
}

LPC has nurtured a large number of amateur game developers and has even become the starting point for many people entering the IT industry. The reason is its simplicity, its ease of understanding, and the fact that it was designed 100% for game development. This is the charm of LOP.

5.2. Advantages and Disadvantages

The most important advantage of LOP is that it embeds domain knowledge into the language, thereby:

  1. Improving development efficiency.

  2. Optimizing team structure, lowering communication costs, allowing domain experts and programmers to collaborate better.

  3. Reducing coupling and making maintenance easier.

Secondly, since LOP is not a general-purpose language, its scope is much narrower, which means:

  • Easier to achieve stable systems

  • Easier to port

Correspondingly, LOP also has its disadvantages:

  1. LOP requires a higher level of abstraction of domain knowledge than frameworks.

  2. The cost of developing a new language itself. Fortunately, designing a new language is not too difficult now, especially with the support of languages like Lua for “specialized secondary development”.

  3. Performance loss. However, compared to the savings in development costs, using LOP in non-performance-critical parts is still worthwhile.

5.3. Applications in Embedded Systems

For example, consider the web server of an embedded device. Many devices provide web services for configuration, such as routers and ADSL modems. A typical use case for these services: the user fills in some parameters and submits them to the web server, which writes the parameters into the hardware and generates a page with the operation results or other information to return to the browser. Because the typical Apache + MySQL + PHP combination is too large and hard to port, web services in embedded systems are usually written directly in C/C++, with everything from socket management and the HTTP protocol down to specific hardware operations and page generation handled in one monolithic program. For complex functionality with demanding web interfaces, however, writing pages in C becomes inefficient.

shttpd is a small web server, compact enough to consist of only one .c file with over 4000 lines of code. Despite its small size, it has the most basic functionalities, such as CGI. It can run independently or be embedded into other applications. shttpd compiles and runs smoothly on most platforms. Lua is a lightweight scripting language specifically designed for embedding and extension. It has good interoperability with C/C++ code.

Embedding the Lua engine into shttpd and using C to write one or more hardware driver extensions registered as Lua functions results in the following system structure:

This type of application is quite representative in embedded systems, where core functionalities are implemented in C, while the variable parts of the system are implemented in scripts. Everyone can consider whether they can use this technique in their development process. This is a specific application model of LOP. (Not creating a brand new language, but using Lua)
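The "stable core plus scripted variable parts" division can be illustrated, by analogy, with a short Python sketch: the host registers a fixed set of "driver" functions into an embedded script environment, and the variable behavior lives in a script. In the real system the host would be C code embedding a Lua state; the names here (`set_ip_address`, `run_config_script`) are hypothetical.

```python
written = []  # records driver writes so the sketch is observable

def set_ip_address(addr):
    # Stands in for a C driver function registered with the script engine.
    written.append(addr)
    return True

# The "engine" exposes only a chosen set of host functions to scripts.
HOST_API = {"set_ip_address": set_ip_address}

def run_config_script(source):
    """Execute a device-configuration script against the host API only."""
    env = {"__builtins__": {}}   # strip general-purpose builtins
    env.update(HOST_API)
    exec(source, env)            # the script is the "variable part"

# A configuration script someone might edit without touching the C core:
run_config_script('set_ip_address("192.168.1.200")')
```

The design point is the registration table: the core decides exactly which capabilities the scripts may use, which is how the game engines above embed domain knowledge into Lua.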

6. Testing

6.1. Testability as a Metric of Software Quality

Good software is designed, and good software must also be easy to test. The quality of software that is difficult to test is hard to guarantee. In today’s trend of increasing software scale, the following issues are common:

  1. Testing can only be done manually, regression testing is extremely costly, and practically only point testing can be executed, making quality hard to guarantee.

  2. Each module can only be tested once integrated.

  3. Code is integrated without any unit testing.

The root of these problems lies in the lack of good software design. A good software design should make unit testing, module testing, and regression testing easier, thus ensuring both the breadth and depth of testing, ultimately leading to high-quality software. In addition to functionality, non-functional requirements must also be testable. Therefore, testability is an important metric in software design that system architects need to consider seriously.

6.2. Test-Driven Software Architecture

This section discusses test-driven software architecture, not test-driven development. TDD (Test Driven Development) is a development methodology and a coding practice. In contrast, test-driven architecture emphasizes architectural design from the perspective of improving testability. Software testing is divided into multiple levels:

6.3. System Testing

System testing refers to the testing performed by testers to verify that the software correctly implements the requirements. In this type of testing, testers act as users and operate through the program's interface. In most cases this work is done manually, and in a standard process it usually accounts for more than one third of the entire software development time. When a new version is released, even if only part of the software has changed, the testing department still needs to retest the entire software; this retest is called regression testing. It is made necessary by the "side effect" characteristic of code: fixing one bug can sometimes introduce more bugs, breaking previously working code. For larger software, regression testing can take a long time. In cases where new features and bug fixes are minimal, it can account for more than half of the entire development process, severely impacting software delivery and making the testing department a bottleneck in the development process. Automating the testing process is one way to partially solve this problem.

As an architect, it is necessary to consider how to achieve the automated testability of software.

6.3.1. Automated Interface Testing

Before graphical interfaces existed, character-based interfaces were relatively easy to automate testing. A well-written script could implement input and output checks. However, for graphical interfaces, human involvement seems indispensable. There are some automated testing tools for interface testing, such as WinRunner, which can record the actions of testers and convert them into scripts, then replay these scripts to achieve automation. For embedded devices, there is Test Quest, which can be used by running a remote desktop-like agent on the device, allowing PC-side testing tools to use image recognition to identify different components and send corresponding user inputs. The basic working principle of such tools is illustrated in the figure below:

However, this process faces three issues in practice:

  1. Poor reliability, often interrupted during execution. Writing a reliable script can be even more difficult than developing software. For example, pressing a button may sometimes immediately trigger a dialog box, or it may take several seconds, or it may not appear at all; the operation recording tool cannot automatically make these judgments and requires manual modifications.

  2. Judging the results of operations is difficult, especially for non-standard controls.

  3. When the interface is modified, the existing code can easily become invalid.

To apply graphical interface automation testing tools, architects should consider during the architecture design:

  1. How to maintain consistency in interface style. This should be determined by the architecture, not by the programmers. This includes layout, control sizes, relative positions, text, response methods to operations, timeout durations, etc.

  2. How to compromise between the interface that is most suitable for testing tools and the interface preferred by users. For example, Test Quest is based on image recognition, so a black-and-white interface is most favorable, whereas users prefer gradient colors, which are very unfavorable. Perhaps having the interface possess automatic switching capabilities is the best approach.

For products that have already been completed, if the architecture did not consider automated testing, the applicable range will be very limited; however, there are still some ideas worth considering:

  1. Implement small-scale automated scripts. Test a specific operation process rather than trying to use one script to test the entire software. A series of small test scripts form a collection covering part of the system’s functionality. These test scripts can all use the state at software startup as a baseline, making state handling relatively simple.

  2. “Monkey testing” has certain value. Monkey testing refers to randomly operating the mouse and keyboard. This type of testing does not understand the software’s functionality and can uncover errors that normal testing cannot find. According to internal Microsoft data, 15% of errors in some Microsoft products were discovered through “monkey testing.”

In general, interface-based automated testing is still immature. Architects must avoid making functionality accessible only through the interface. The interface should merely be the interface, while the majority of the software’s functions should be independent of the interface and accessible in other ways. The design in the example of the above framework reflects this point.

Consider: how to enable the interface to have self-testing capabilities?

6.3.2. Message-Based Automated Testing

If the software provides message-based interfaces, automated testing becomes much simpler. The TL1 interface for firmware has already been mentioned. For the interface part, it should be designed to separate the pure “interface” as much as possible, keeping it thin, while other parts continue to provide services based on messages.

Based on messages, wrapping them into function forms using scripting languages makes it easy to call and cover various parameter combinations of messages, thereby increasing testing coverage. For how to wrap messages as scripts, you can refer to the implementation of SOAP. If XML is not used, similar automatic code generation can be implemented.

These test scripts should be written by developers; whenever a new interface (i.e., a new message) is implemented, corresponding test scripts should be written and stored in the codebase as part of the project. When regression testing is needed, simply run the test scripts, greatly improving the efficiency of regression testing.

Thus, to achieve automated testing for software, providing message-based interfaces is a good approach, allowing us to independently write test scripts outside the software. This factor can be considered in the design, appropriately increasing the software’s message support. Of course, TL1 is just an example; depending on project needs, any suitable protocol such as SOAP can be chosen.
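As a sketch of wrapping messages into callable functions, the following Python fragment hides a simplified TL1-style message behind an ordinary function. The transport is faked with a lambda so the wrapper itself can be exercised; all names are illustrative, not part of any real TL1 library.

```python
import itertools

_ctag = itertools.count(1)  # CTAG sequence shared by all wrapped commands

def build_tl1(cmd, payload=""):
    """Build a simplified TL1 message 'CMD:CTAG:PAYLOAD;'."""
    ctag = next(_ctag)
    return ctag, f"{cmd}:{ctag}:{payload};"

def get_ip_config(send):
    """A test script sees an ordinary function; the message is hidden inside."""
    ctag, msg = build_tl1("GET-IP-CONFIG")
    return send(msg)

# A fake transport lets the wrapper be tested without a device:
sent = []
reply = get_ip_config(lambda m: sent.append(m) or "COMPLD")
```

With wrappers like this, covering many parameter combinations of a message is just calling a function in a loop, which is the coverage gain the text describes.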

6.3.3. Automated Testing Framework

When writing automated testing scripts, there are many repetitive tasks, such as establishing socket connections, logging, error handling, report generation, etc. Moreover, for testers, these tasks can be quite challenging. Therefore, designing a framework to implement and hide these repetitive and complex technologies allows test script writers to focus on specific testing logic.

A framework like this should implement the following functions:

  1. Complete the initialization of connections and other foundational work.

  2. Capture all errors, ensuring that errors in Test Cases do not interrupt the execution of subsequent Test Cases.

  3. Automatically detect and execute Test Cases. New Test Cases are independent script files that do not require modifications to the framework’s code or configuration.

  4. Message encoding and decoding, providing them as callable functions for Test Case writers.

  5. Convenient tools, such as reports, logs, etc.

  6. Automatically count the results of Test Case executions and generate reports.

The idea behind an automated testing framework is consistent with that of general software frameworks: to avoid repetitive work and reduce development difficulty.

The following diagram illustrates the structure of an automated testing framework:

Each Test Case must define a specified Run function, which the framework will call in sequence, providing corresponding library functions for Test Cases to send commands and obtain results. This way, test case writers can focus solely on the testing itself. For example:

def run():
  open_laser()
  assert(get_laser_state() == ON)
  insert_error(BIT_ERROR)
  assert(get_error_bit() == BIT_ERROR)

The knowledge held by the test case writer is, “the laser must be opened first before an error can be inserted into the line.” What the architect can provide is message sending and receiving, encoding and decoding, error handling, report generation, etc., isolating these from the test case writers.
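The framework responsibilities listed above can be sketched in a few lines of Python. This is only an illustration of the idea, not the framework from the text: it discovers nothing from disk, but it shows error capture (so one failing Test Case cannot abort the rest) and automatic result counting.

```python
def run_all(cases):
    """cases: mapping of Test Case name -> zero-argument run function."""
    results = {}
    for name, run in cases.items():
        try:
            run()
            results[name] = "PASS"
        except Exception as exc:     # a Case's error must not stop the suite
            results[name] = f"FAIL: {exc}"
    return results                   # a real framework would format a report

# Two hypothetical cases in the style of the laser example above:
def case_ok():
    assert 1 + 1 == 2

def case_bad():
    assert False, "laser did not open"

report = run_all({"case_ok": case_ok, "case_bad": case_bad})
```

A real framework would also locate new Test Case files automatically, so adding a case requires no change to the framework's code or configuration.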

Questions: Who wrote the functions open_laser, get_laser_state? How can we further achieve knowledge decoupling? Can there be a more convenient language for writing Test Cases?

6.3.4. Regression Testing

With automated test scripts and frameworks, regression testing becomes quite simple. Whenever a new version is released, just run the existing Test Cases, analyze the test reports, and if there are any failed test cases, the regression testing fails, requiring modifications until all cases pass completely. Complete regression testing is an important guarantee of software quality.

6.4. Integration Testing

Integration testing aims to verify whether the interfaces of various components of the system work correctly. This is a lower-level test than system testing, usually completed jointly by developers and testers.

For example, in a typical embedded system, FPGA, firmware, and interface are common modules. Each module can also be divided into smaller modules to reduce complexity. A common issue in embedded software module testing is that hardware cannot operate without firmware, and firmware cannot drive the interface; conversely, the interface cannot run completely without firmware, and firmware cannot run without hardware. Thus, modules that have not been tested can only run completely at integration, and once problems arise, all modules need to be considered, making locating and resolving issues costly. Assuming there are modules A and B, each with ten bugs. If neither has undergone module testing and they are directly integrated, the debugging workload can be estimated as 10 * 10, which equals 100.

For firmware, because it is in the middle of the system, the problem is more complex. On the one hand, the firmware must be callable through means other than the GUI; on the other hand, it must simulate the functionality of the hardware. Regarding the first point, during the design phase, one should separate the interface from the implementation. The interface can be changed freely; for example, the interface with the GUI might be JSON, while also providing telnet’s TL1 interface, but the implementation remains identical. Thus, before integrating with the GUI, the firmware can be thoroughly tested via TL1. For the second point, the hardware abstraction layer should be extracted during design, isolating the main implementation of the firmware from factors like registers and memory addresses. A hardware simulation layer can be implemented when there is no hardware or the hardware design is undecided, ensuring the firmware can run and be tested completely.
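The hardware abstraction idea can be sketched as follows, with hypothetical register addresses and names: the firmware logic calls a HAL interface, and a simulated implementation lets it run and be tested when no hardware exists or the hardware design is undecided.

```python
class Hal:
    """Interface the firmware logic depends on; real or simulated underneath."""
    def read_reg(self, addr):
        raise NotImplementedError
    def write_reg(self, addr, value):
        raise NotImplementedError

class SimulatedHal(Hal):
    """Stands in for real registers so the firmware can run and be tested."""
    def __init__(self):
        self.regs = {}
    def read_reg(self, addr):
        return self.regs.get(addr, 0)
    def write_reg(self, addr, value):
        self.regs[addr] = value

def enable_laser(hal):
    # Firmware logic: unaware of whether the HAL is real or simulated.
    hal.write_reg(0x10, 1)           # hypothetical control register
    return hal.read_reg(0x10) == 1

ok = enable_laser(SimulatedHal())
```

Swapping `SimulatedHal` for a real implementation changes nothing above the HAL, which is exactly the isolation the design calls for.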

6.5. Unit Testing

Unit testing is the most basic unit of software testing, executed by developers to ensure the correctness of the code they developed. Developers should submit tested code. Code that has not undergone unit testing, once integrated into the software, not only makes it difficult to locate issues but also makes it hard to achieve complete coverage of code branches through system testing. TDD is a development model based on this level.

The granularity of unit testing is generally at the function or class level; for example, the following commonly used function:

int atoi(const char *nptr);

This function has a single, narrow purpose, so unit testing is very effective for it. Unit testing can verify the following situations:

  1. General normal calls, such as “9”, “1000”, “-1”, etc.

  2. Empty nptr pointer

  3. Non-numeric strings, “abc”, “@#!123”, “123abc”

  4. Strings with decimal points, “1.1”, “0.111”, “.123”

  5. Excessively long strings

  6. Excessively large numbers, “999999999999999999999999999”

  7. Multiple negative signs and incorrectly positioned negative signs, "--1", "-1-", "-1-2"

If atoi passes these tests, we can confidently integrate it into the software. The probability of it causing further issues is very low (not completely absent, as we cannot traverse all possibilities; we only test representative exceptional situations).
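As an illustration of the cases listed above, here is a toy atoi in Python together with a table-driven unit test. C's atoi leaves NULL pointers and overflow undefined, so this sketch simply returns 0 for None and relies on Python's unbounded integers; it is a teaching sketch, not a faithful reimplementation.

```python
def my_atoi(s):
    if s is None:                        # case 2: C atoi(NULL) is undefined
        return 0
    s = s.lstrip()                       # skip leading whitespace, like atoi
    i, sign = 0, 1
    if i < len(s) and s[i] in "+-":
        sign = -1 if s[i] == "-" else 1
        i += 1
    start = i
    while i < len(s) and s[i].isdigit(): # stop at the first non-digit
        i += 1
    return sign * int(s[start:i]) if i > start else 0

# The unit test is a table of representative inputs and expected outputs:
cases = {"9": 9, "1000": 1000, "-1": -1,   # 1. normal calls
         "abc": 0, "123abc": 123,          # 3. non-numeric strings
         "1.1": 1, ".123": 0,              # 4. decimal points
         "--1": 0, "-1-2": -1}             # 7. misplaced negative signs
assert all(my_atoi(k) == v for k, v in cases.items())
```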

The above example can be considered a model for unit testing, but in practice, this is often not the case. We often find that well-written functions are difficult to unit test, not only requiring significant effort but also yielding poor results. The fundamental reason is that the functions do not adhere to certain principles:

  1. A single responsibility

  2. Low coupling

Contrast this with the atoi example: it has a single clear responsibility and almost no coupling with other functions. (I have not shown an implementation of atoi; try writing one yourself, ideally with zero coupling.)

Now I will provide a practical example.

This is a simple TL1 command sending and parsing software, with the functional requirements described as follows:

  • Communicate with the TL1 server via telnet

  • Send TL1 commands to the TL1 server

  • Parse responses from the TL1 server

TL1 is a widely used protocol in the communications industry. To simplify matters for those unfamiliar with TL1, I define a simplified format:

CMD:CTAG:PAYLOAD;

CMD - The name of the command, which can be any string starting with a letter and composed of letters and underscores.

CTAG - A number used to mark the command's sequence.

PAYLOAD - Can be any format of content.

; - End delimiter

Correspondingly, the TL1 server's response also has a format:

DATE
CTAG COMPLD
PAYLOAD
;

DATE – Date and time

CTAG – A number, matching the CTAG carried by the TL1 command

COMPLD – Indicates that the command was executed successfully

PAYLOAD - The result returned, which can be any format of content.

; - End delimiter

For example:

Command: GET-IP-CONFIG:1:;

Result:

2008-7-19 11:00:00
1 COMPLD
ip address: 192.168.1.200
gate way: 192.168.1.1
dns: 192.168.1.3
;

Command: SET-IP-CONFIG:2:172.31.2.100,172.31.2.1,172.31.2.3;

Result:

2008-7-19 11:00:05
2 COMPLD
;

The software’s top layer might look like this:

Dict* ipconf = GET_IP_CONFIG();
ipconf->set("ipaddr", "172.31.2.100");
ipconf->set("gateway", "172.31.2.1");
ipconf->set("dns", "172.31.2.3");
SET_IP_CONFIG(ipconf);

Taking GET_IP_CONFIG as an example, the functionalities this function should accomplish include:

  • Establishing a telnet connection if one has not yet been established

  • Constructing the TL1 command string

  • Sending it

  • Receiving the response

  • Parsing the response and copying it into the IP_CONF structure

  • Returning

We certainly do not want to re-implement all of this for every such function, so we define several modules:

  1. Telnet connection management

  2. TL1 command construction

  3. TL1 result parsing

Here we analyze TL1 result parsing, assuming it is designed as a function with the following prototype:

Dict* TL1Parse(const char* tl1response)

This function’s role is to accept a string, and if it is a valid and known TL1 response, extract the results and place them into a dictionary object.

This would normally be a very unit-test-friendly example: input various strings and check whether the returned results are correct. However, there is a very specific problem in this software:

TL1Parse must know which command’s response it is processing when parsing a string. But note that the TL1 response does not include the command name. The only way is to use CTAG, which is a number corresponding to each command and response. TL1Parse first extracts CTAG and then looks up which command corresponds to this CTAG. This introduces an external call, which is coupling.

An object maintains a table mapping CTAGs to command names, allowing the lookup of command names based on CTAGs, thus knowing how to parse this TL1 response.

As a result, TL1Parse cannot be unit tested, or at least not easily. Typically, the stubbing method is ineffective.

What to do?

Redesign to eliminate coupling.

Split TL1Parse into two functions:

Tl1_header TL1_get_header(const char* tl1response)
Dict* TL1_parse_payload(const char* tl1name, const char* tl1payload)

Both of these functions can be independently and thoroughly unit tested. The code of these two functions is essentially a split of TL1Parse, but their testability is greatly improved, significantly increasing the likelihood of obtaining a reliable parser.
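A Python sketch of the split (function names follow the C prototypes above) shows why testability improves: each half is pure string-in/value-out and can be exercised in isolation. The parsing rules assume the simplified TL1 format defined earlier; a real parser would handle error responses and malformed input.

```python
def tl1_get_header(response):
    """Extract (ctag, payload) from a simplified TL1 response."""
    lines = response.strip().rstrip(";").strip().splitlines()
    # lines[0] is DATE, lines[1] is "CTAG COMPLD", the rest is PAYLOAD
    ctag = int(lines[1].split()[0])
    payload = "\n".join(l.strip() for l in lines[2:])
    return ctag, payload

def tl1_parse_payload(cmd_name, payload):
    """Parse the payload of a known command into a dictionary."""
    if cmd_name == "GET-IP-CONFIG":
        return dict(line.split(": ", 1) for line in payload.splitlines() if line)
    return {}

# Using the GET-IP-CONFIG example from earlier in the text:
resp = "2008-7-19 11:00:00\n1 COMPLD\nip address: 192.168.1.200\n;"
ctag, payload = tl1_get_header(resp)
conf = tl1_parse_payload("GET-IP-CONFIG", payload)
```

The CTAG-to-command lookup stays outside both functions, so neither needs a stub of the command table to be tested.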

This example demonstrates how design can enhance the testability of code—here referring to unit testing. Arbitrary design and implementation of software will make unit testing a nightmare; only by considering the need for unit testing during design can true unit testing be conducted.

6.5.1. Cyclomatic Complexity Measurement

The complexity of a module directly affects the coverage of unit testing. The most well-known method for measuring code complexity is cyclomatic complexity measurement.

The calculation formula is V(F) = e - n + 2, where e is the number of edges in the flow graph and n is the number of nodes. A simple approximation is to count the number of if, while, do, and switch-case statements and add 1. Code whose complexity does not exceed 10 is generally considered suitable for unit testing.
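A small worked example of the simple counting rule, using Python's ast module to count decision points (here if/while/for) and add 1. This is an approximation for illustration, not a full cyclomatic-complexity tool.

```python
import ast

def cyclomatic(source):
    """Approximate V(F): one per decision point, plus 1."""
    decisions = sum(isinstance(n, (ast.If, ast.While, ast.For))
                    for n in ast.walk(ast.parse(source)))
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "neg"
    while x > 10:
        x -= 10
    return "small"
"""
v = cyclomatic(code)   # one if + one while + 1 = 3
```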

6.5.2. Fan-In and Fan-Out Measurement

Fan-in refers to the number of other modules that reference a module. Fan-out refers to the number of other modules that a module references. We all know that good design should be high cohesion and low coupling, meaning high fan-in and low fan-out. A module with a fan-out exceeding 7 is generally considered poorly designed. Modules with excessive fan-out are challenging to unit test, whether in terms of stubbing or coverage. Combining the number of outgoing and incoming couplings of the system forms another metric: instability.

Instability = Fan-out / (Fan-in + Fan-out)

This value ranges from 0 to 1. The closer the value is to 1, the more unstable it is. When designing and implementing architecture, one should try to rely on stable packages, as these packages are less likely to change. In contrast, relying on unstable packages increases the likelihood of being affected when changes occur.
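Applied to two hypothetical modules, the formula makes the design guidance concrete: a utility library that many modules depend on scores near 0 (stable, safe to depend on), while an application layer that depends on many others scores near 1.

```python
def instability(fan_in, fan_out):
    """Instability = fan-out / (fan-in + fan-out), in [0, 1]."""
    return fan_out / (fan_in + fan_out)

util_i = instability(fan_in=20, fan_out=1)  # widely used utility: near 0
app_i = instability(fan_in=1, fan_out=9)    # top-level application: near 1
```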

6.5.3. The Significance of Frameworks for Unit Testing

The application of frameworks can greatly assist in unit testing. This is because secondary developers are limited to implementing specific interfaces, which must be clear, simple, and low-coupling. The previously provided example code for a framework also illustrates this point. This again emphasizes that frameworks designed by high-level engineers can compel junior engineers to produce high-quality code.

7. Maintaining Architectural Consistency

In actual development, it is common for code to deviate from the carefully designed architecture. For example, the following diagram illustrates an MVC pattern designed within an embedded device:

View depends on Controller and Model, Controller depends on Model, while Model serves as the underlying service provider, not depending on View or Controller. This is a suitable architecture that can largely separate business, data, and interface. However, a programmer might implement a call from Model to View, thereby breaking the architecture.

This phenomenon often occurs during the maintenance phase of a product and sometimes during the implementation phase of the architecture. To add a feature or fix a bug, programmers, due to misunderstanding the original architecture or simply taking shortcuts, may take a “shortcut”. If such implementations are not discovered and corrected in time, a well-designed architecture can gradually be undermined, which we refer to as “architectural decay.” Typically, an aging software product has this problem. How to monitor and prevent such issues involves both technical and managerial measures.

Technically, tools can be used to analyze the dependencies of system components, as the external manifestation of architecture is mainly the coupling relationships between various parts. Some tools can count the fan-in and fan-out of software components. Testing code can be written to check the fan-out of components, and if a test fails, it indicates that the architecture has been compromised. This check can be integrated into some IDEs, performed synchronously during compilation, or conducted during check-ins. More advanced tools can perform reverse engineering to generate UML, providing further information. However, checking fan-in and fan-out is usually sufficient.
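Such a dependency check can itself be written as a test. The Python sketch below counts a module's fan-out by parsing its import statements and asserts that the limit is not exceeded, so architectural decay shows up like any other test failure; the module names are hypothetical.

```python
import ast

def fan_out(source):
    """Count the distinct top-level modules a source file imports."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return len(mods)

# An architecture test for a hypothetical view module:
view_src = "import model\nfrom controller import dispatch\n"
assert fan_out(view_src) <= 7   # the fan-out limit mentioned above
```

Run as part of the build or at check-in, this kind of test flags a forbidden dependency the moment it is introduced, before it can spread.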

By establishing a code review process, the code checked in by programmers can also be prevented from such issues. Code review is a very important part of development, serving as a crucial measure in the later stages of development to prevent bad code from entering the system. Code reviews typically focus on the following issues:

  1. Whether the requirements have been correctly and completely fulfilled

  2. Whether the system architecture has been followed

  3. The testability of the code

  4. Whether error handling is complete

  5. Code standards

Code reviews are usually conducted in a meeting format, scheduled at project milestones when code needs to be checked in. For iterative development, this can be organized before the end of an iteration cycle. Participants may include architects, project managers, project members, and senior engineers from other projects. The meeting should generally not last too long, ideally no more than two hours. Notifications and the relevant documents and code should be sent out about two days before the meeting, and participants must familiarize themselves with the content and prepare.

During the meeting, the code author first explains the functionality the code needs to implement and their implementation approach. The code is then walked through, and participants raise questions and improvement suggestions based on their experience.

The biggest taboo in such meetings is making the author feel blamed or belittled. The meeting organizer should therefore set the tone up front: the success of the meeting is measured not by the quality of the author's code but by whether the participants provided useful suggestions. After the meeting, the author rates the participants, rather than the other way around.

8. The Evolution of an Actual Embedded System Architecture

In the 1990s, the rapid development of the internet greatly advanced communication testing equipment. During that time, the ability to achieve certain measurements in hardware was the core of competition, and the purpose of software was merely to drive the hardware and provide a simple interface. Thus, the initial software structure of products was very simple, resembling the aforementioned urban rail gate control system.

Advantages: The program straightforwardly fulfills user needs, and a single programmer can handle everything.

Disadvantages: There is no module division at all, leading to severe coupling between the lower and upper layers.

8.1. Data Processing

Users requested the ability to save measurement results and reopen them. The data storage module and interface were separated.

The main loop logic above was kept, but the interface could now display not only real-time data but also data read back from ResultManager.

Advantages: The embryonic form of separating data and interface begins to emerge.

Disadvantages: ResultManager only exists as a tool, responsible for saving and loading historical data. The interface and data source are still tightly coupled, with different interfaces requiring different data being hard-coded.
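This first separation step can be sketched in a few lines. The following Python is an illustrative sketch, not the product's code: the class names `ResultManager` and `View` come from the text, while the JSON file format, method names, and sample values are assumptions.

```python
import json
import os
import tempfile


class ResultManager:
    """Illustrative sketch: saves measurement results so the UI can reopen them."""

    def __init__(self, path):
        self.path = path

    def save(self, results):
        # Persist results; the real device would use its own storage format.
        with open(self.path, "w") as f:
            json.dump(results, f)

    def load(self):
        with open(self.path) as f:
            return json.load(f)


class View:
    """The interface can show live data or data loaded back from ResultManager."""

    def __init__(self, result_manager):
        self.rm = result_manager

    def show_live(self, sample):
        return f"live: {sample}"

    def show_saved(self):
        return [f"saved: {r}" for r in self.rm.load()]


path = os.path.join(tempfile.gettempdir(), "results.json")
rm = ResultManager(path)
rm.save([1.25, 1.3])
view = View(rm)
```

Note that the view still knows exactly which manager to call and how: this is the tight coupling the text describes.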

8.2. Window Management

As functionality became increasingly complex, the original approach of drawing every interface from a single class no longer scaled. The concept of windows was therefore introduced: each interface was viewed as a window, with the elements inside it being controls. The opening, closing, and hiding of windows were handled by a window manager.

Advantages: Interface functionality is separated by window units, no longer forming a massive collection.

Disadvantages: Although a window manager has been established, the interface is still directly coupled with the underlying layers, maintaining a large loop structure.
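A minimal sketch of such a window manager might look like the following. This is an assumption-laden illustration: the policy that opening one window hides the others, and the names `Window`, `WindowManager`, `Setup`, and `Measure`, are invented for the example.

```python
class Window:
    """One interface screen; its contents would be controls (buttons, labels...)."""

    def __init__(self, name):
        self.name = name
        self.visible = False

    def draw(self):
        return f"[{self.name}]"


class WindowManager:
    """Owns all windows and manages opening, closing, and hiding centrally."""

    def __init__(self):
        self.windows = {}

    def register(self, window):
        self.windows[window.name] = window

    def open(self, name):
        # Example policy: the opened window is shown, all others are hidden.
        for w in self.windows.values():
            w.visible = (w.name == name)

    def visible_windows(self):
        return [w.name for w in self.windows.values() if w.visible]


wm = WindowManager()
wm.register(Window("Setup"))
wm.register(Window("Measure"))
wm.open("Measure")
```

Interface code is now partitioned per window, but each window still calls down into the hardware layer directly, which is the coupling the next step removes.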

8.3. MVC Pattern

As the scale further expanded, the initial large loop structure could no longer meet the increasingly complex requirements. The standard MVC pattern was introduced, leading to a significant restructuring.

The DataCenter served as the Model, storing the most recent data. The View was placed in an independent task, periodically polling data from the DataCenter. User operations were sent from the View to the Controller, which in turn invoked hardware drivers to execute them. The results of hardware execution were propagated from the driver to the Controller and then to the DataCenter. Interface, data, and commands were thus essentially decoupled. ResultManager became a component of the DataCenter, and the View no longer communicated with it directly.

The introduction of the MVC pattern marked the first time that this product had a truly clear division of responsibilities and independent functionality in its architecture.
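The data flow just described (View → Controller → driver, and driver → Controller → DataCenter → View) can be sketched as follows. This is an illustrative Python sketch under assumptions: `FakeDriver`, the `MEASURE` command, and the `power_dbm` value stand in for real hardware and are not from the original system.

```python
class DataCenter:
    """Model: holds the most recent measurement data."""

    def __init__(self):
        self.data = {}

    def update(self, key, value):
        self.data[key] = value

    def query(self, key):
        return self.data.get(key)


class FakeDriver:
    """Stand-in for a hardware driver; reports results through a callback."""

    def __init__(self, on_result):
        self.on_result = on_result

    def execute(self, command):
        # Pretend the hardware performed a measurement and reported a result.
        self.on_result("power_dbm", -12.5)


class Controller:
    """Receives user operations from the View, drives hardware, updates the Model."""

    def __init__(self, data_center):
        self.driver = FakeDriver(data_center.update)

    def handle(self, command):
        self.driver.execute(command)


class View:
    """Polls the DataCenter for display; sends user operations to the Controller."""

    def __init__(self, controller, data_center):
        self.controller = controller
        self.dc = data_center

    def press_measure(self):
        self.controller.handle("MEASURE")

    def refresh(self):
        return f"power: {self.dc.query('power_dbm')} dBm"


dc = DataCenter()
view = View(Controller(dc), dc)
view.press_measure()
```

Note that the View never touches the driver: it only issues commands to the Controller and reads results from the DataCenter, which is precisely what makes a remote-control "View" possible later.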

8.4. Numerous Similar Modules, Inefficient Reuse

At this stage, the architecture could basically meet the requirements of a standalone embedded device. However, as the market expanded, more and more devices were designed. Although these devices performed different measurement tasks, they shared the same operating methods, similar interfaces, and, more importantly, the same problem domain. For a long time, copy-and-paste was the only method of reuse, often without even changing class and variable names. A defect fixed in one device might not be fixed in time in the other devices that carried the same copied code. As the team grew, some new devices did not even adhere to the basic MVC architecture.

Ultimately, a framework was introduced for this series of products. The framework established the following:

  1. The basic architecture of the MVC pattern

  2. Window manager and component layout algorithms

  3. Multi-language solutions (string manager)

  4. Logging system

  5. Memory allocator and memory leak detection
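As one concrete example, the multi-language string manager (item 3 in the list) might look roughly like this sketch. The class name comes from the text; the table format, the fallback-to-default-language behavior, and the sample strings are assumptions made for illustration.

```python
class StringManager:
    """Illustrative multi-language string manager: UI code asks for a key,
    and gets the text in the currently selected language."""

    def __init__(self, default_lang="en"):
        self.tables = {}
        self.default = default_lang
        self.lang = default_lang

    def add_table(self, lang, table):
        self.tables[lang] = table

    def set_language(self, lang):
        self.lang = lang

    def text(self, key):
        # Fall back to the default language when a translation is missing,
        # and to the key itself as a last resort.
        table = self.tables.get(self.lang, {})
        return table.get(key, self.tables.get(self.default, {}).get(key, key))


sm = StringManager()
sm.add_table("en", {"START": "Start"})
sm.add_table("de", {"START": "Starten"})
sm.set_language("de")
```

Centralizing strings this way means adding a language to every device in the series is a data change, not a code change.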

8.5. Remote Control

Clients wished to install the device at a fixed location on the network, use it as a "probe", and access it remotely from the office. This posed a challenge for a system originally designed to be purely handheld. Fortunately, the MVC architecture proved considerably flexible, and the early investment paid off.

The TL1 Server provides a remote control interface based on Telnet. Within the system, it functions similarly to the View, only communicating with the existing Controller and DataCenter.

8.6. Automated TL1 Interpreter

As TL1 commands multiplied, and TL1 was often not a client's primary requirement, many devices' TL1 command sets became incomplete. The underlying reason was that hand-writing TL1 command interpreters was too laborious. The later introduction of Bison and Flex improved matters somewhat, but not enough. Automated code generation was introduced at this stage: by defining TL1 commands in the following format, a tool could automatically generate the TL1 encoding and decoding code.

CMD_NAME
{
  cmd = "SET-TIME-CONFIG::<ctag>::<year>,<month>,<day>,<hour>,<minute>,[<second>]"
  year = 1970..2100
  month = 1..12
  day = 1..31
  hour = 0..23
  minute = 0..59
  second = 0..59
}
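The runtime side of such generated code might look roughly like the following Python sketch. The real tool generated code for the device, and its output would differ; here the function name, the regex, and the decoding logic are illustrative assumptions, while the field ranges are transcribed from the definition above.

```python
import re

# Field ranges transcribed from the SET-TIME-CONFIG definition above.
# <second> is bracketed in the definition, i.e. optional.
FIELDS = [("year", 1970, 2100), ("month", 1, 12), ("day", 1, 31),
          ("hour", 0, 23), ("minute", 0, 59), ("second", 0, 59)]


def decode_set_time(command):
    """Decode 'SET-TIME-CONFIG::<ctag>::<year>,<month>,...' and range-check
    every field, as generated decoding code would."""
    m = re.match(r"SET-TIME-CONFIG::(\w+)::([\d,]+)$", command)
    if not m:
        raise ValueError("malformed command")
    values = [int(v) for v in m.group(2).split(",")]
    result = {"ctag": m.group(1)}
    for (name, lo, hi), value in zip(FIELDS, values):
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} out of range {lo}..{hi}")
        result[name] = value
    return result
```

The point of the generator is that this validation logic never has to be written by hand again: every command gets it for free from its definition.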

8.7. Testing Challenges

After decades of accumulation, the product has grown into a series of dozens of devices. Most have entered the maintenance phase, with clients frequently requesting minor improvements or defect fixes. Heavy manual regression testing had become a nightmare.

Automated testing based on TL1 has greatly liberated testers. Through test scripts running on PCs, regression testing has become simple and reliable. The only downside is that the interface portion cannot be validated.
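A TL1-driven regression script running on a PC might be skeletoned as follows. To keep the sketch self-contained it uses a fake session object in place of a real Telnet connection to the device's TL1 Server; the command string `RTRV-STATUS:::1;` and the response text are invented for the example.

```python
class FakeTelnetSession:
    """Stand-in for a Telnet connection to the device's TL1 Server, so this
    skeleton runs without hardware. A real session would use a socket."""

    def __init__(self):
        # Canned device replies, keyed by command (illustrative only).
        self.responses = {"RTRV-STATUS:::1;": "M 1 COMPLD\n;\n"}

    def send(self, command):
        return self.responses.get(command, "M 0 DENY\n;\n")


def run_regression(session, cases):
    """Run (command, expected-substring) pairs and collect pass/fail results."""
    results = []
    for command, expected in cases:
        reply = session.send(command)
        results.append((command, expected in reply))
    return results


cases = [("RTRV-STATUS:::1;", "COMPLD")]
report = run_regression(FakeTelnetSession(), cases)
```

Because the TL1 interface exercises the same Controller and DataCenter as the on-device View, such scripts cover everything except the interface rendering itself, which is exactly the gap noted above.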

Automated tools based on TestQuest require remote-desktop-like software to be developed on the device, which runs pSOS, and that is not an easy task on pSOS. The good news is that because the framework fixed the interface's style and layout algorithms, TestQuest-based tools can achieve high recognition efficiency.

8.8. Summary

This real-world reconstruction journey of an embedded product shows that the introduction of the MVC pattern in the third step and the framework in the fourth step were crucial. The mature MVC pattern gave the series its extensibility, while the framework guaranteed that this architecture was reused faithfully across all products.

9. Conclusion

This article discusses the architectural design thoughts and methods tailored to the characteristics of embedded software development. It attempts to provide a mindset and inspire thought among readers. Frameworks, automated code generation, and test-driven architecture are core content, with frameworks being a consistent element throughout.

Someone asked me, what is an architect, and how can one become an architect? I replied: Code, code, and code again; correct errors, correct errors, and correct errors again. When you feel bored, stop and think about how to complete this work faster and better. Architects emerge from practice, and they come from those who think diligently and are lazy about repetition.

Source:Embedded Application Research Institute

This article is sourced from the internet, and all rights belong to the original author. If there is any infringement, please contact for deletion.
