COTS Journal

Security/Safety Analysis Tools Smooth Path to MISRA-C Compliance

By: Jay Thomas, Director of Field Engineering, and Chris Tapp, Field Applications Engineer, LDRA

The MISRA C coding standard and its new security amendment help provide assurance of protection against attack. Multiple analysis approaches are needed for complex systems.

Today's military embedded systems are increasingly rich in functionality while running on small, low-power multicore processors that are almost universally connected to the Internet of Things. These applications are vital to the operation of military systems both on the ground and in aerospace operations, making them critical to the safety of their operators and the general public as well as to the success of the mission. For these reasons, the requirements for safety, reliability and security are interdependent. These requirements must be adhered to, and often compliance must also be proven and certified. In the case of military systems, this means adherence to, and certifiable compliance with, DO-178C, which the FAA describes as an "acceptable means, but not the only means, for showing compliance with the applicable airworthiness regulations for the software aspects of airborne systems and equipment certification" (Figure 1).

While DO-178C does not specify software security and is not itself a coding standard, it does require that a coding standard be used. The most attractive and up-to-date C coding standard is MISRA C, developed for the automotive industry by the Motor Industry Software Reliability Association (MISRA). It defines a subset of the C language for developing applications with high-integrity and high-reliability requirements. The MISRA guidelines aim to facilitate code safety, security, portability and reliability in the context of embedded systems. Recently, MISRA C:2012 has been enhanced to provide greater security for embedded systems in general. This push recognizes that while software may be functionally correct and reliable, it is neither safe nor secure if it is vulnerable to outside attack.

MISRA Security Enhancements

The MISRA coding standard is a great help to developers trying to avoid errors and oversights that can quickly compromise security and safety. Closely adhering to its guidelines is vital, and they must be part of the design process from the very beginning; they cannot be brought in as an afterthought. Automated tools are a necessary and important element in helping programmers follow such guidelines, as well as in certifying that the code complies with the safety and security provisions of MISRA C and other standards.

Because the software running these systems is also large and complex, even one coding error can bring down an entire system or open it to malicious hackers. The well-publicized Heartbleed vulnerability is a case in point. A seemingly minor error failed to verify the length of a specific piece of data in the "heartbeat" extension of the SSL/TLS exchange. Hackers took advantage of this missing check to request a far longer block of memory than the payload they had actually sent. By doing this repeatedly, they were able to pull practically all the data on a server off as cleartext and analyze it to gain access to millions of customers' personal data. It is not a leap to imagine the effects such an error could have in a military context.
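The flaw can be sketched in a few lines of C. This is a simplified, hypothetical model, not OpenSSL's actual code; all names are illustrative. The point is that the reply length comes from the request itself and must be checked against the payload actually received:

```c
/* Hypothetical sketch of a Heartbleed-style flaw. The attacker controls
 * claimed_len; the vulnerable version trusted it and copied that many
 * bytes, leaking whatever memory followed the real payload. */
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Copies claimed_len bytes of the request payload into the reply.
 * Returns the number of bytes written, or -1 if the request is rejected. */
static int build_heartbeat_reply(const uint8_t *payload, size_t payload_len,
                                 size_t claimed_len,
                                 uint8_t *reply, size_t reply_cap)
{
    /* The vulnerable version omitted this check, so a claimed_len far
     * larger than payload_len echoed adjacent memory back to the attacker. */
    if ((claimed_len > payload_len) || (claimed_len > reply_cap)) {
        return -1;                 /* reject the malformed request */
    }
    memcpy(reply, payload, claimed_len);
    return (int)claimed_len;
}
```

With the check in place, a request that sends 4 bytes but claims 64,000 is rejected instead of answered with 64,000 bytes of server memory. Static analysis tools flag exactly this pattern: a length used in a copy without prior validation.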

Variety of Standards

To address this, the software world and various industrial segments have developed standards and specifications for coding practice, security and functional safety which, if adhered to, go a long way toward providing the needed assurance and protection. Standards such as CERT C, MISRA C, CWE, ISO 26262 and DO-178C address coding practice, security and safety. Approaches such as automated requirements traceability can aid in generating tests against these standards, both automatically and in support of manually created tests. MISRA C, for example, is designed to eliminate unsafe and insecure coding practices and the undefined behaviors that can lead to exploitable vulnerabilities and unreliable applications.
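A small example gives the flavor of the defects MISRA C targets. An implicit narrowing conversion (the area covered by MISRA C:2012 Rule 10.3) silently truncates a value; the compliant pattern validates the range and then converts explicitly. The function below is a minimal sketch with invented names:

```c
/* Non-compliant pattern (implicit narrowing, MISRA C:2012 Rule 10.3 area):
 *     uint8_t small = wide;    -- silently truncates values above 255
 * Compliant pattern: check the range, then convert with an explicit cast. */
#include <assert.h>
#include <stdint.h>

static int narrow_u16_to_u8(uint16_t wide, uint8_t *out)
{
    if (wide > 255u) {
        return -1;            /* out of range: report, don't truncate */
    }
    *out = (uint8_t)wide;     /* explicit, range-checked conversion */
    return 0;
}
```

A static analysis tool checking the MISRA rules flags the implicit form automatically, rather than relying on a reviewer to spot it.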

There is a pressing need for systems to be impenetrable, whether by IT-based hackers or by devices and equipment under the control of those with malicious intent. It is also important to guard against the unintentional exposure of code that could end up in unpredictable places, including on a misplaced mobile device. MISRA C:2012 has 143 rules that can be checked using static analysis tools, plus 16 directives, which are somewhat more open to interpretation. The recent Amendment 1 adds 13 rules and one directive aimed specifically at security concerns.
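Several of the amendment's rules concern the disciplined use of errno around standard library calls. The sketch below illustrates the idea behind Rules 22.8 and 22.9 (clear errno before an errno-setting call, test it afterwards) using strtol; the wrapper function and its name are our own illustration, not part of the standard:

```c
/* errno discipline per MISRA C:2012 Amendment 1, Rules 22.8/22.9:
 * zero errno before an errno-setting function and test it after,
 * rather than trusting the return value alone. */
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Parses a decimal long from 'text'.
 * Returns 0 on success, -1 on any parse or range error. */
static int parse_long(const char *text, long *out)
{
    char *end = NULL;
    long value;

    errno = 0;                      /* Rule 22.8: clear before the call */
    value = strtol(text, &end, 10);

    if (errno != 0) {               /* Rule 22.9: test after the call */
        return -1;                  /* e.g. ERANGE on overflow */
    }
    if ((end == text) || (*end != '\0')) {
        return -1;                  /* no digits, or trailing junk */
    }
    *out = value;
    return 0;
}
```

Without the errno check, an overflowing input such as a 30-digit number would be silently accepted as LONG_MAX.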

However, these standards are themselves so complex and detailed that a human programmer alone cannot be expected to comply with them thoroughly, let alone prove compliance to an outside person or agency. For this, developers must have the help of automated tools. An automated tool suite that incorporates standards such as CERT C and MISRA C for use in static analysis can help developers identify and correct coding errors, and also certify compliance with the functional standards. DO-178C, for example, requires that for certification, verification must be done by someone other than the person who authored the item.

Beyond that, tools are also needed to perform extensive foundational tests based on static analysis, dynamic coverage analysis and unit/integration testing. The goals underlying this suite of testing tools are assurance of both security and functional safety, compliance with the standards mentioned above, and the ability to trace requirements from the requirements document down into the code and from the code back to the individual requirements (Figure 2).

Secure from the Start

In assuring security, static and dynamic analysis often go hand in hand, since one of the main concerns of security is data and the other is control. The questions include: Who has access to what data? Who can read from it and who can write to it? What is the flow of data to and from which entities, and how does access affect control? Here the "who" can refer to persons such as developers and operators (and hackers), and it can also refer to software components either within the application or located somewhere in a network architecture.

On the static analysis side, the tools work with the uncompiled source code, checking it against the selected rules, which can be any combination of the supported standards plus any custom rules and requirements that the developer or a company may specify. Static analysis can also look for certain types of software constructs that can compromise security, and can check memory protection, for example by determining who has access to which memory regions and by tracing pointers that may traverse a memory location. Results are presented in graphical screen displays for easy review and assessment of coding standards compliance.

Dynamic analysis tests the compiled code, which is linked back to the source using the same type of symbolic data used by source-level debuggers. Dynamic analysis, especially code coverage analysis, requires extensive testing. Developers may be able to manually generate and manage their own test cases, working from a requirements document, and stimulate and monitor sections of the application with varying degrees of effectiveness. That, however, will probably not be enough to achieve certain required certifications.

If the coverage analysis requirements include statement or branch/decision coverage, procedure/function call coverage or, in more rigorous environments, MC/DC (modified condition/decision coverage), they can require both source and object code analysis, automated test generation, and a means of measuring the effectiveness of the testing. Security demands rigorous and thorough testing for functional vulnerabilities as well as for adherence to the MISRA C rules and directives in the running application. The latter depends on automatic test generation for completeness.
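What MC/DC demands beyond branch coverage is easiest to see on a small compound decision. The function and its names below are invented for illustration; the test vectors, however, follow directly from the definition of MC/DC, which requires showing that each condition independently affects the decision's outcome:

```c
/* A compound decision with three conditions. MC/DC requires each
 * condition to be shown to independently change the outcome. */
#include <assert.h>
#include <stdbool.h>

static bool launch_permitted(bool door_closed, bool speed_ok, bool override)
{
    return (door_closed && speed_ok) || override;
}

/* A minimal MC/DC set for this decision: 4 vectors for 3 conditions.
 *   (T,T,F) -> T  vs  (F,T,F) -> F : door_closed independently flips it
 *   (T,T,F) -> T  vs  (T,F,F) -> F : speed_ok independently flips it
 *   (T,F,T) -> T  vs  (T,F,F) -> F : override independently flips it
 * Branch/decision coverage alone would be satisfied by just 2 vectors
 * (one true outcome, one false); MC/DC forces all 4. */
```

This is why MC/DC in rigorous environments is impractical without automated test generation and coverage measurement: the vector sets grow with every decision in the code, and their sufficiency must be demonstrated, not assumed.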

Benefits of Static Analysis

Automatic test generation is based on static analysis of the code, in other words, analysis before compilation. The information provided by static analysis helps the automatic test generator create the proper stimuli for the software components in the application. In addition, functional tests can be created manually from the requirements document. These should include any functional safety tests, such as ensuring that a robot arm cannot swing into an area where it might endanger humans.

One of the things that static and dynamic analysis can reveal is areas of "dead code," which can be a source of danger or may simply be taking up space. It is necessary to properly identify such code and deal with it, usually by eliminating it. This is important for a number of reasons. First, although it is ideal to implement security from the ground up, most projects include pre-existing code that may not have been subjected to the same rigorous testing as the current project, and that code may contain segments that are simply not needed by the application under development. More ominously, areas of dead code may be lying in wait to be activated for malicious purposes by a hacker or by some obscure event in the system.
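Two common flavors of dead code are statements that can never be reached and branches whose condition can never be true. Both compile cleanly, which is exactly why analysis tools are needed to find them. The functions below are invented examples of each pattern:

```c
/* Two flavors of "dead" code that static analysis will flag.
 * Function names and constants are illustrative. */
#include <assert.h>

#define SENSOR_MAX 100

static int clamp_reading(int raw)
{
    if (raw > SENSOR_MAX) {
        return SENSOR_MAX;
    }
    return raw;
    raw = 0;    /* unreachable: control flow can never get here */
}

static int legacy_mode_active(void)
{
    /* With SENSOR_MAX fixed at 100, this condition is always false, so
     * the true branch is dead: harmless today, but a hiding place for
     * logic that a future configuration change could quietly enable. */
    if (SENSOR_MAX > 1000) {
        return 1;
    }
    return 0;
}
```

Dynamic coverage analysis complements the static view: a branch that never executes across the entire, requirements-derived test suite is either dead or pointing at a missing test.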

Two-Way Requirements Traceability

Of course, there is the need to distinguish between truly dead code and seldom-used code. This is one good reason why two-way requirements traceability is important: the ability to check that requirements are met by code in the application, and also to trace code back to the requirements it implements. If neither of those routes shows a connection, the code almost certainly does not belong there (Figure 3).

Static analysis, therefore, serves both to analyze source code for proper programming practices and to help dynamic analysis set up for coverage testing, functional testing, and control and data flow analysis. The latter is essential for highlighting and correcting potential problem areas and for producing software quality metrics. Companies developing to stringent industrial or medical standards may be required to demonstrate analysis of data flow and control flow coupling for software certification. In the case of DO-178C, verification is also required at the object code level. For example, an optional tool in the LDRA tool suite Version 10 called TBobjectBox provides the object code verification (OCV) capability described in DO-178B/C. This offers a direct way to relate code coverage at the source code level with that achieved at the object code level, and provides a mechanism to extend code coverage at the object code level where necessary. This can be especially helpful for certification at DO-178C Level A, where a software failure could be catastrophic, resulting in loss of aircraft and/or loss of life.

Unit Testing and Integration

Due to the previously mentioned characteristics of code size, processor power and connectivity (primarily in terms of the Internet of Things), a great deal of initial software development is done on host workstations, with different project members working on different components of the overall application. Much of this work starts well before the target hardware is available. Development can begin in the host OS environment or on a target simulated on the host, moving ultimately to the actual target hardware.

Analysis tools need to provide the means to perform functional testing as well as static analysis at the unit and integration levels, both on the host and on the target hardware when it becomes available. Developers can perform test generation (test harnesses, test vectors, code stubs) and result capture on a wide range of host and target platforms. This means that, when moving from workstation to target, components developed by different team members can come together in the larger system configuration with a major part of the requirements, test generation, coverage analysis, and data and control flow work already done.
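Code stubs are what make host-side unit testing possible before the hardware exists. A minimal sketch of the idea, with entirely invented names: the unit under test reaches its sensor through an injected function pointer, so on the workstation a stub replays canned values, and on the target the same unit is linked against the real driver.

```c
/* Host-side unit testing with a stub. All names are illustrative. */
#include <assert.h>

/* Unit under test: averages two samples taken via the injected reader.
 * On target, read_sensor would be the real hardware driver function. */
static int average_two(int (*read_sensor)(void))
{
    int a = read_sensor();
    int b = read_sensor();
    return (a + b) / 2;
}

/* Stub for the host: replays canned values instead of touching hardware. */
static int stub_values[2] = { 10, 30 };
static int stub_index = 0;

static int stub_read_sensor(void)
{
    return stub_values[stub_index++ % 2];
}
```

Because the unit's interface is unchanged between host and target, the test vectors and coverage results gathered on the workstation carry over directly when the real hardware arrives.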

In order for systems to be safe, they must also be secure. For that, they must be coded to comply not only with language rules but also with the standards that assure safety and security. And all of this must be verifiable, which means the ability to trace the flow of data and control from requirements to code and back again.

LDRA
Monks Ferry, Wirral, UK
+44 (0)151 649 9300
www.ldra.com


© 2009 RTC Group, Inc., 905 Calle Amanecer, Suite 150, San Clemente, CA 92673