Software testing is important in embedded systems


In the world of embedded systems, it’s not just the technology that continues to evolve; the tools and methods used to develop that technology are maturing and improving too.

In the early 1980s, I developed software for a small metrology company that applied engineering mathematics to coordinate measuring machines (CMMs). Our development lifecycle essentially treated production software as a sandbox: we would start with the production code, add functionality, perform some fairly basic functional testing, and deliver.

In such a small company, our engineering team naturally included both software and hardware experts. In hindsight, it’s striking that while the software we developed required extensive customer support, there was rarely any such demand for the hardware it ran on.

Software development is an engineering discipline

Some of the differences between hardware and software support stem from the original development process. However, the malleability of software and the consequent expansion of functionality also play an important role. In short, there are far more ways to build software wrong than right, and that characteristic demands that software development be treated as an engineering discipline.

None of this is new. Leading aerospace, automotive, and industrial functional safety standards such as DO-178, ISO 26262, and IEC 61508 have demanded this approach for many years. But to benefit from today’s state-of-the-art development and testing tools, an engineering-discipline mindset is critical.

The importance of software testing has been shown recently with the development of ISO/IEC/IEEE 29119, a set of international software testing standards that can be used in any software development life cycle or organization.


Electrical system design usually starts with a state machine that captures the different modes of operation of a specific product. Engineers can often map state machine functionality onto logic quickly and easily, and as the state machine grows more complex, it is usually implemented in software instead.
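In software, such a state machine typically becomes a switch over an enumerated state. The sketch below uses hypothetical modes and events (invented for illustration, not from the article) to show why this shape is attractive: the transition function is pure, so it can be tested exhaustively against the requirements that define each state.

```c
/* Hypothetical operating modes and events for a simple device;
 * the names are illustrative only. */
typedef enum { MODE_IDLE, MODE_RUNNING, MODE_FAULT } op_mode;
typedef enum { EV_START, EV_STOP, EV_ERROR, EV_RESET } op_event;

/* Pure transition function: no side effects, so every
 * state/event pair can be checked against the requirements. */
op_mode next_mode(op_mode current, op_event ev)
{
    switch (current) {
    case MODE_IDLE:
        return (ev == EV_START) ? MODE_RUNNING : MODE_IDLE;
    case MODE_RUNNING:
        if (ev == EV_STOP)  return MODE_IDLE;
        if (ev == EV_ERROR) return MODE_FAULT;
        return MODE_RUNNING;
    case MODE_FAULT:
        return (ev == EV_RESET) ? MODE_IDLE : MODE_FAULT;
    }
    return MODE_FAULT; /* defensive default for out-of-range input */
}
```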

High-level requirements are essential to ensure the proper functioning of the system. Such requirements characterize the business logic and expected functionality, and provide a basis for assessing whether the system is working as expected. Best practice follows a process from high-level requirements through analysis to coverage, and requirements traceability tools are naturally designed to support it.

In a state machine model, the requirements that characterize each state are examples of high-level requirements. Tracing the execution path through the code to confirm that each requirement is correctly interpreted is one way to verify the implementation.
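One common way to support that tracing is to tag the code with requirement identifiers so a traceability tool can link each function back to the high-level requirement it implements. The requirement ID, its wording, and the function below are all hypothetical, purely to show the tagging style:

```c
#include <stdbool.h>

/* Traces to HLR-012: "The heater shall be disabled whenever the
 * measured temperature exceeds the configured limit."
 * (Requirement ID and text are invented for illustration.) */
bool heater_allowed(int temperature_c, int limit_c)
{
    return temperature_c <= limit_c;   /* satisfies HLR-012 */
}
```

A traceability tool can then report which requirements have no implementing code, and which code implements no requirement, in both directions along the V-model.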

Functional safety standards extend this into the concept of requirements traceability. They typically require that all code exercised by the high-level requirements be executed, and that low-level requirements and tests account for any cases not otherwise covered. More recently, this message has been echoed by the “shift left” movement in cybersecurity, as shown in the V-model in Figure 1.


Figure 1 As the name suggests, the V-model embodies a product development process, showing the connections between test specifications at each development stage. Source: LDRA

Test components, then test systems

In any engineering discipline, it is important to ensure that components work properly on their own before being integrated into a system. To apply this thinking to software, engineers need to define lower-level requirements and ensure that each feature set is working. Engineers also need to ensure proper interfaces are provided to the rest of the system.

Unit testing involves parameterizing inputs and outputs at the function and module level, checking that inputs map to the expected outputs, and measuring coverage of the underlying logic. Unit test tools can provide validated test harnesses and graphical representations that connect individual inputs and outputs to execution paths and verify their correctness.

It is also important to understand the interface at the function and module level. Static analysis tools can expose these interfaces and connect logic at different levels.

Identify problems early

An engineer in any discipline will tell you that the sooner problems are identified, the less money it will cost to fix them.

Static analysis performs source code analysis to model the execution of a system without actually running the system. Available as soon as code is written, static analysis helps developers maximize code clarity, maintainability, and testability. The main features of static analysis tools include:

Code-complexity analysis: Understand where your code is unnecessarily complex so engineers can perform appropriate mitigations.

Program flow analysis: Draw a design-review flowchart of program execution to ensure that the program executes as expected.

Predictive runtime error detection: Model code execution through as many executable paths as possible and look for potential errors such as array bounds overflow and division by zero.

Adherence to coding standards: Coding standards are usually chosen to ensure a focus on cybersecurity, functional safety, or, in the case of the MISRA standards, both. Coding standards help ensure that code follows best programming practice regardless of the application, and adopting one is always a good idea.
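To make the predictive run-time error detection point concrete: a naive `sum / count` average can divide by zero when the count is 0 and overrun its buffer when the count exceeds the array size. These are exactly the paths a static analyzer models before the code ever runs. A defensively written version (a hypothetical example, not from the article) removes both defects:

```c
#include <stddef.h>

#define N_SAMPLES 8

/* Defensive average: rejects the zero-divide and out-of-bounds
 * paths that predictive run-time error detection would flag in
 * an unchecked implementation. */
int average_checked(const int samples[N_SAMPLES], size_t count, int *out)
{
    if (count == 0 || count > N_SAMPLES || out == NULL)
        return -1;                 /* reject invalid inputs */

    int sum = 0;
    for (size_t i = 0; i < count; i++)
        sum += samples[i];

    *out = sum / (int)count;       /* count is provably nonzero here */
    return 0;
}
```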


Figure 2 Activities like static analysis are an overhead early in the development lifecycle, but they pay off in the long run. Source: LDRA

Code of sufficient quality

It’s no surprise that well-engineered products cost more: adhering to any development process comes at a price, and developing the best possible product may not always be commercially viable.

Where safety is important, functional safety standards typically require an analysis of the cost and likelihood of failure. This risk assessment is required for every system, subsystem, and component to ensure that appropriate mitigations are implemented, and it makes sense whether the concern is functional safety or security. If every part of a system is tested with the same level of rigor, the result is overinvestment in the less risky parts and inadequate mitigation of failures in the higher-risk ones.

Sound software safety practice begins with understanding what happens if a component or system fails, and then traces those potential failures into activities appropriate to mitigate the associated risk. For example, consider a system that controls the guidance of an aircraft, where failure can be catastrophic. The most rigorous mitigation activities must be applied, including structural coverage down to the level of individual sub-conditions, to ensure the code is thoroughly exercised.
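That extra rigor can be illustrated with a single decision. Exercising every sub-condition independently is the intent of modified condition/decision coverage (MC/DC), the coverage level aviation standards demand at the highest integrity levels. The interlock below is a hypothetical example:

```c
#include <stdbool.h>

/* Hypothetical interlock: deploy only if the system is armed
 * AND the altitude check passes. */
bool deploy_allowed(bool armed, bool altitude_ok)
{
    return armed && altitude_ok;
}

/* MC/DC for (armed && altitude_ok) needs, at minimum, test cases
 * in which each condition independently flips the outcome:
 *   (true,  true)  -> true    baseline
 *   (false, true)  -> false   armed alone changes the result
 *   (true,  false) -> false   altitude_ok alone changes the result
 * Plain decision coverage would be satisfied by just two of these,
 * leaving one condition's independent effect untested. */
```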

However, if the in-flight entertainment system fails, the plane will not crash, so testing it demands less rigor than testing a system whose failure could cause immediate loss of life.

Software’s plasticity is both a blessing and a curse. It is very easy to make a system do almost anything within reason, but that same flexibility can be an Achilles’ heel when it comes to ensuring the software doesn’t fail.

Even in the commercial world, where not every software failure is catastrophic, failures are never desirable. Many developers work in safety-critical industries and have little choice but to adhere to the strictest standards. But the principles those standards advocate exist because they have been shown to make the final product work better, so it makes perfect sense to apply them, scaled appropriately, no matter how critical the application.

Despite the confusing plethora of functional safety standards that apply to software development, their similarities outweigh their differences. All are rooted in the premise that software development is an engineering discipline, one that requires us to establish requirements, design and develop against them, and test against those requirements as early as possible.

Adopting this mindset will open the door to support tools across the industry, allowing for higher quality software to be developed more efficiently.

Mark Pitchford is a technologist at LDRA Software Technology who works with development teams looking to enable compliant software development in environments where functional safety and information security are paramount.
