The purpose of this article is to help developers create RTOS-based embedded software with a minimum of frustration. While there is some technical material here, this is very much a process-oriented discussion which does not require a deep knowledge of embedded system programming to understand.
The rest of this section covers the scope of the embedded development process and the kinds of frustrations that developers and their employers can encounter along the way. Pitfalls encountered by other developers, and how to work with reviewers as part of the development process, are also described in this article.
Any code change should undergo a stringent review process before and after merging into the mainline. No matter how strong the original developer’s skills are, the review process will find ways in which the code can be improved; reviews often turn up severe bugs and security problems. This is especially true for code which has been developed by a single developer or within a small team; such code benefits strongly from review by outside developers.
About the development process:
A solid understanding of how the process works is required in order to be an effective part of a dynamic embedded development project. Embedded software development can follow a rolling development model in which major changes are continually integrated.
A relatively straightforward discipline should be followed with regard to the merging of patches for each release. At the beginning of each development cycle, the “merge window” opens. During that time, code which is sufficiently stable (and which has been accepted by the development team) can be merged into the mainline project.
(As an aside, it is worth noting that the changes ready to integrate during the merge window do not come out of thin air; they should have been collected, tested, and staged ahead of time.)
The merge window should remain open for whatever period of time the project requires. At the end of this time, the project leader can declare the window closed and release the first “rc” (release candidate). Once the merge window is closed, no more significant changes should be allowed until the final release of the software. As fixes make their way into the mainline, the patch rate will slow over time. Incremental rc’s can be released depending upon the severity of the remaining patches.
The developers’ goal should be to fix all known regressions before the stable software release is made. In the real world, this kind of perfection is hard to achieve. At the same time, delaying the final release makes the situation even worse: the pile of changes waiting for the next merge window grows larger, which can lead to even more regressions the next time around.
The life cycle of a patch to the release candidate:
Patches shall not go directly from the developer’s keyboard into the mainline kernel. There is, instead, a somewhat involved (if somewhat informal) process designed to ensure that each patch is reviewed for quality and that each patch implements a change which is desirable to have in the mainline. This process can happen quickly for minor fixes.
The stages that a patch goes through are, generally:
– Design. This is where the real requirements for the patch – and the way those requirements will be met – are laid out.
– Early review. Patches should go through an early review by cognizant developers within the team. If all goes well, this process should turn up any major problems with a patch.
– Wider review. When the patch is getting close to ready for mainline inclusion, it should be accepted by a relevant subsystem maintainer – though this acceptance is not a guarantee that the patch will make it all the way to the mainline. When the process works, this step leads to more extensive review of the patch and the discovery of any problems resulting from the integration of this patch with work being done by others.
– Merging into the mainline. Eventually, a successful patch will be merged into the mainline repository. More comments and/or problems may surface at this time; it is important that the developer be responsive to these and fix any issues which arise.
– Long-term maintenance. While it is certainly possible for a developer to forget about code after merging it, that sort of behavior tends to leave a poor impression in the development team. Merging code eliminates some of the maintenance burden, in that others will fix problems caused by API changes. But the original developer should continue to take responsibility for the code if it is to remain useful in the longer term.
At times it is necessary to keep track of any pending work on a particular functionality or feature which is already staged for an interim release. It is advisable to keep a “TODO” folder containing a text file describing the activities which are pending for a particular functionality or feature. This practice will help other developers who may be working on the feature at a future time.
Choose the right project management tools for easy management and tracking of the project. A few of the common tools include SVN, Git, Rational Team Concert, Rational Synergy, Rational CM, Rational DOORS, etc. Consider the best option based on cost, maintainability, support availability, and project size and/or complexity.
When contemplating an RTOS-based embedded development project, it can be tempting to jump right in and start coding. As with any significant project, though, much of the groundwork for success is best laid before the first line of code is written. Some time spent in early planning and communication can save far more time later on.
Specifying the problem:
Like any engineering project, a successful software enhancement starts with a clear description of the problem to be solved. In some cases, this step is easy: when a driver is needed for a specific piece of hardware, for example. In others, though, it is tempting to confuse the real problem with the proposed solution, and that can lead to difficulties.
Consider an example from Linux kernel development:
some years ago, developers working with Linux audio sought a way to run applications without dropouts or other artifacts caused by excessive latency in the system. The solution they arrived at was a kernel module intended to hook into the Linux Security Module (LSM) framework; this module could be configured to give specific applications access to the realtime scheduler. This module was implemented and sent to the linux-kernel mailing list, where it immediately ran into problems.
To the audio developers, this security module was sufficient to solve their immediate problem. To the wider kernel community, though, it was seen as a misuse of the LSM framework (which is not intended to confer privileges onto processes which they would not otherwise have) and a risk to system stability. Their preferred solutions involved realtime scheduling access via the rlimit mechanism for the short term, and ongoing latency reduction work in the long term.
The audio community, however, could not see past the particular solution they had implemented; they were unwilling to accept alternatives. The resulting disagreement left those developers feeling disillusioned with the entire kernel development process; one of them went back to an audio list and complained bitterly about the experience.
The reality of the situation was different; the kernel developers were far more concerned about system stability, long-term maintenance, and finding the right solution to the problem than they were with a specific module. The moral of the story is to focus on the problem – not a specific solution – and to discuss it with the development community before investing in the creation of a body of code.
So, when contemplating an RTOS-based embedded development project, one should obtain answers to a short set of questions:
– What, exactly, is the problem which needs to be solved?
– Who are the users affected by this problem? Which use cases should the solution address?
– How does the existing implementation fall short in addressing that problem?
Only then does it make sense to start considering possible solutions.
When planning a software implementation, it makes great sense to hold discussions with the development team before launching into coding. Early communication can save time and trouble in a number of ways:
– It may well be that the problem is addressed by the system already, in ways which the developer has not understood.
– There may be elements of the proposed solution which will not be acceptable for mainline merging. It is better to find out about problems like this before writing the code.
– It’s entirely possible that other developers have thought about the problem; they may have ideas for a better solution, and may be willing to help in the creation of that solution.
Embedded code which is designed and developed behind closed doors invariably has problems which are only revealed when the code is released to the team for review.
It is worth citing a few examples from the Linux kernel development community:
– The Devicescape network stack was designed and implemented for single-processor systems. It could not be merged into the mainline until it was made suitable for multiprocessor systems. Retrofitting locking and such into code is a difficult task; as a result, the merging of this code (now called mac80211) was delayed for over a year.
– The Reiser4 filesystem included a number of capabilities which, in the core kernel developers’ opinion, should have been implemented in the virtual filesystem layer instead. It also included features which could not easily be implemented without exposing the system to user-caused deadlocks. The late revelation of these problems – and refusal to address some of them – has caused Reiser4 to stay out of the mainline kernel.
– The AppArmor security module made use of internal virtual filesystem data structures in ways which were considered to be unsafe and unreliable. This concern (among others) kept AppArmor out of the mainline for years.
In each of these cases, a great deal of pain and extra work could have been avoided with some early discussion with the Linux kernel developers.
Getting the code right:
It is the code which will be examined by other developers and merged (or not) into the mainline tree. So it is the quality of this code which will determine the ultimate success of the project. The coding style should be clearly defined in the Documentation/CodingStyle file within the project.
Adding new code to the project is very difficult if that code does not conform to the standard; many developers will request that the code be reformatted before they will even review it. A code base can be very large, and some uniformity is required to make it possible for developers to quickly understand any part of it. So there is no room for strangely-formatted code.
* Abstraction layers:
Computer Science professors teach students to make extensive use of abstraction layers in the name of flexibility and information hiding. Certainly a complex embedded project makes extensive use of abstraction; no project involving several million lines of code could do otherwise and survive. But experience has shown that excessive or premature abstraction can be just as harmful as premature optimization. Abstraction should be used to the level required and no further.
At a simple level, consider a function which has an argument which is always passed as zero by all callers. One could retain that argument just in case somebody eventually needs to use the extra flexibility that it provides. By that time, though, chances are good that the code which implements this extra argument has been broken in some subtle way which was never noticed – because it has never been used. Or, when the need for extra flexibility arises, it does not do so in a way which matches the programmer’s early expectation. Developers will routinely submit patches to remove unused arguments; they should, in general, not be added in the first place.
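As a sketch of this point (function and parameter names here are hypothetical), consider removing a flags argument that every caller passes as zero:

```c
#include <stddef.h>

/* Before: every caller passes flags == 0, so the parameter is
 * untested dead weight that can silently rot. */
int buf_init_old(char *buf, size_t len, unsigned int flags)
{
    (void)flags;               /* never actually used by any caller */
    if (buf == NULL || len == 0)
        return -1;
    buf[0] = '\0';
    return 0;
}

/* After: the unused argument is removed. If the flexibility is ever
 * really needed, it can be added back when a real user appears. */
int buf_init(char *buf, size_t len)
{
    if (buf == NULL || len == 0)
        return -1;
    buf[0] = '\0';
    return 0;
}
```

The callers become simpler too, and the compiler will flag every call site if the argument ever does need to be reintroduced.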
On the other hand, if you find yourself copying significant amounts of code from another subsystem, it is time to ask whether it would, in fact, make sense to pull out some of that code into a separate library or to implement that functionality at a higher level. There is no value in replicating the same code throughout the project.
* #ifdef and preprocessor use in general:
The C preprocessor seems to present a powerful temptation to some C programmers, who see it as a way to efficiently encode a great deal of flexibility into a source file. But the preprocessor is not C, and heavy use of it results in code which is much harder for others to read and harder for the compiler to check for correctness. Heavy preprocessor use is almost always a sign of code which needs some cleanup work.
Conditional compilation with #ifdef is, indeed, a powerful feature, and it can be used within the project. But there is little desire to see code which is sprinkled liberally with #ifdef blocks. As a general rule, #ifdef use should be confined to header files whenever possible. Conditionally-compiled code can be confined to functions which, if the code is not to be present, simply become empty. The compiler will then quietly optimize out the call to the empty function. The result is far cleaner code which is easier to follow.
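One common way to apply this rule, sketched here with a hypothetical CONFIG_TRACE build option, is to provide an empty stub when the feature is compiled out, so that call sites need no #ifdef at all:

```c
#include <stdio.h>

/* Hypothetical build option; in a real project this would come from
 * the build system rather than being defined in place. */
#define CONFIG_TRACE 1

#if CONFIG_TRACE
static void trace_event(const char *msg)
{
    printf("trace: %s\n", msg);
}
#else
/* When tracing is disabled, the stub compiles to nothing and the
 * compiler quietly optimizes the call away. */
static void trace_event(const char *msg)
{
    (void)msg;
}
#endif

static int do_work(int x)
{
    trace_event("do_work entered");   /* no #ifdef at the call site */
    return x * 2;
}
```

All of the conditional logic lives in one place; the rest of the code reads the same regardless of configuration.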
C preprocessor macros present a number of hazards, including possible multiple evaluation of expressions with side effects and no type safety. If you are tempted to define a macro, consider creating an inline function instead. The code which results will be the same, but inline functions are easier to read, do not evaluate their arguments multiple times, and allow the compiler to perform type checking on the arguments and return value.
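The multiple-evaluation hazard can be demonstrated directly. In this sketch, a helper with a visible side effect is evaluated twice by the macro but only once by the inline function:

```c
/* The classic hazard: the macro evaluates its argument twice. */
#define SQUARE_MACRO(x) ((x) * (x))

/* The inline function evaluates its argument exactly once and lets
 * the compiler type-check it. */
static inline int square(int x)
{
    return x * x;
}

/* Helper with a visible side effect, used to count evaluations. */
static int call_count;
static int fetch_three(void)
{
    call_count++;
    return 3;
}

static int demo_macro(void)
{
    call_count = 0;
    return SQUARE_MACRO(fetch_three());  /* fetch_three() runs twice */
}

static int demo_inline(void)
{
    call_count = 0;
    return square(fetch_three());        /* fetch_three() runs once */
}
```

If the argument had been something like `*ptr++`, the macro version would also have advanced the pointer twice, a bug that is notoriously hard to spot in review.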
* Inline functions:
Inline functions present a hazard of their own, though. Programmers can become enamored of the perceived efficiency inherent in avoiding a function call and fill a source file with inline functions. Those functions, however, can actually reduce performance. Since their code is replicated at each call site, they end up bloating the size of the compiled image. That, in turn, creates pressure on the processor’s memory caches, which can slow execution dramatically. Inline functions, as a rule, should be quite small and relatively rare. The cost of a function call, after all, is not that high; the creation of large numbers of inline functions is a classic example of premature optimization.
In general, embedded programmers ignore cache effects at their peril. The classic time/space tradeoff taught in beginning data structures classes often does not apply to contemporary hardware. Space *is* time, in that a larger program will run slower than one which is more compact.
More recent compilers take an increasingly active role in deciding whether a given function should actually be inlined or not. So the liberal placement of “inline” keywords may not just be excessive; it could also be irrelevant.
Any resource (data structures, hardware registers, etc.) which could be accessed concurrently by more than one thread must be protected by a lock. New code should be written with this requirement in mind; retrofitting locking after the fact is a rather more difficult task. Developers should take the time to understand the available locking primitives well enough to pick the right tool for the job. Code which shows a lack of attention to concurrency will have a difficult path into the mainline.
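As a minimal illustration, with POSIX threads standing in for whatever mutex primitive the RTOS actually provides (names are hypothetical), a shared counter might be protected like this:

```c
#include <pthread.h>

/* Shared statistic protected by a mutex; even a simple increment is
 * a read-modify-write sequence that must not be interleaved. */
static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long packets_seen;   /* protected by stats_lock */

static void record_packet(void)
{
    pthread_mutex_lock(&stats_lock);
    packets_seen++;
    pthread_mutex_unlock(&stats_lock);
}

static unsigned long get_packet_count(void)
{
    unsigned long n;

    pthread_mutex_lock(&stats_lock);
    n = packets_seen;
    pthread_mutex_unlock(&stats_lock);
    return n;
}
```

Writing the accessors this way from the start is far easier than retrofitting locking once unprotected accesses are scattered throughout the code.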
One final hazard worth mentioning is this: it can be tempting to make a change (which may bring big improvements) which causes something to break for existing users. This kind of change is called a “regression,” and regressions have become most unwelcome in the mainline release. With few exceptions, changes which cause regressions will be backed out if the regression cannot be fixed in a timely manner. Far better to avoid the regression in the first place.
It is often argued that a regression can be justified if it causes things to work for more people than it creates problems for. Why not make a change if it brings new functionality to ten systems for each one it breaks? The best answer to this question was expressed by Linus in July, 2007 (Linux kernel development community):
So we don’t fix bugs by introducing new problems. That way lies madness, and nobody ever knows if you actually make any real progress at all. Is it two steps forwards, one step back, or one step forward and two steps back?
An especially unwelcome type of regression is any sort of change to the user-space API. Once an interface has been exported to user space, it must be supported indefinitely. This fact makes the creation of user-space interfaces particularly challenging: since they cannot be changed in incompatible ways, they must be done right the first time. For this reason, a great deal of thought, clear documentation, and wide review for user-space interfaces is always required.
Code checking tools:
For now, at least, the writing of error-free code remains an ideal that few of us can reach. What we can hope to do, though, is to catch and fix as many of those errors as possible before our code goes into the mainline release. Any problem caught by the computer is a problem which will not afflict a user later on, so it stands to reason that the automated tools should be used whenever possible.
The first step is simply to heed the warnings produced by the compiler. Contemporary versions of gcc can detect (and warn about) a large number of potential errors. Quite often, these warnings point to real problems. Code submitted for review should, as a rule, not produce any compiler warnings. When silencing warnings, take care to understand the real cause and try to avoid “fixes” which make the warning go away without addressing its cause.
Locking checker tools should be used to detect locking issues within an RTOS-based multithreaded system. One such debugging tool, from the Linux kernel, is “lockdep.” This tool tracks the acquisition and release of every lock (spinlock or mutex) in the system, the order in which locks are acquired relative to each other, the current interrupt environment, and more. It can then ensure that locks are always acquired in the same order, that the same interrupt assumptions apply in all situations, and so on. In other words, lockdep can find a number of scenarios in which the system could, on rare occasion, deadlock. This kind of problem can be painful (for both developers and users) in a deployed system; lockdep allows such problems to be found in an automated manner ahead of time. Code with any sort of non-trivial locking should be run with lockdep enabled before being submitted for inclusion.
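The kind of rule such a tool enforces can be sketched directly (hypothetical names, POSIX mutexes standing in for RTOS primitives): whenever two locks must be held together, they are always taken in the same order, which closes off the classic ABBA deadlock:

```c
#include <pthread.h>

/* Documented rule: lock_a is always acquired before lock_b. A lock
 * validator can check this ordering automatically at run time. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static int balance_a = 100, balance_b = 50;

static void transfer_a_to_b(int amount)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    balance_a -= amount;
    balance_b += amount;
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}

static void transfer_b_to_a(int amount)
{
    /* Still a before b, even though b is the source. Taking lock_b
     * first here would let one thread hold A-wanting-B while another
     * holds B-wanting-A: a deadlock that strikes only rarely. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    balance_b -= amount;
    balance_a += amount;
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}
```

A validator complains the first time the ordering is violated in testing, rather than waiting for the unlucky interleaving to occur in the field.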
As a diligent embedded programmer, you will, beyond doubt, check the return status of any operation (such as a memory allocation) which can fail. The fact of the matter, though, is that the resulting failure recovery paths are probably completely untested. Untested code tends to be broken code; you would be much more confident of your code if all those error-handling paths had been exercised a few times.
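Keeping the recovery paths short and uniform makes them easier to exercise. A common C idiom is goto-based unwinding, sketched below with a hypothetical device context:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical device context; the point is the unwind order in the
 * error paths, which deserves testing, not just reading. */
struct device_ctx {
    char *rx_buf;
    char *tx_buf;
};

static int device_ctx_init(struct device_ctx *ctx, size_t bufsize)
{
    ctx->rx_buf = malloc(bufsize);
    if (ctx->rx_buf == NULL)
        goto err_rx;

    ctx->tx_buf = malloc(bufsize);
    if (ctx->tx_buf == NULL)
        goto err_tx;

    memset(ctx->rx_buf, 0, bufsize);
    memset(ctx->tx_buf, 0, bufsize);
    return 0;

err_tx:
    free(ctx->rx_buf);      /* undo the earlier allocation */
    ctx->rx_buf = NULL;
err_rx:
    return -1;
}

static void device_ctx_free(struct device_ctx *ctx)
{
    free(ctx->tx_buf);
    free(ctx->rx_buf);
    ctx->tx_buf = ctx->rx_buf = NULL;
}
```

Because each failure label undoes exactly the steps completed so far, a fault-injection test that forces each allocation to fail in turn can walk every recovery path mechanically.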
Other kinds of errors can be found with the static analysis tools. Some time spent installing and using these tools will help avoid embarrassment later.
Documentation has often been more the exception than the rule with embedded software development. Even so, adequate documentation will help to ease the merging of new code into the mainstream, make life easier for other developers, and will be helpful for your users. In many cases, the addition of documentation has become essentially mandatory.
The first piece of documentation for any patch is its associated changelog. Log entries should describe the problem being solved, the form of the solution, the people who worked on the patch, any relevant effects on performance, and anything else that might be needed to understand the patch. Be sure that the changelog says *why* the patch is worth applying; a surprising number of developers fail to provide that information.
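An entirely invented changelog following that outline might look like this (every detail below is hypothetical):

```
uart: drain the receive FIFO completely in the ISR

The receive interrupt handler copied at most one character per
interrupt, so bursts longer than the FIFO depth were silently
truncated at high baud rates. Drain the FIFO in a loop instead.

This fixes dropped characters for users of the serial console;
there is no behavior change for FIFO-less parts.
```

The first line summarizes the change, the middle explains the problem and the form of the solution, and the last line says why the patch is worth applying and who is affected.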
Any new configuration options must be accompanied by help text which clearly explains the options and when the user might want to select them.
Anybody who reads through a significant amount of existing embedded code will note that, often, comments are most notable by their absence. Once again, the expectations for new code are higher than they were in the past; merging uncommented code will be harder. That said, there is little desire for verbosely-commented code. The code should, itself, be readable, with comments explaining the more subtle aspects.
Certain things should always be commented. Uses of memory barriers should be accompanied by a line explaining why the barrier is necessary. The locking rules for data structures generally need to be explained somewhere. Major data structures need comprehensive documentation in general. Non-obvious dependencies between separate bits of code should be pointed out. Anything which might tempt a code janitor to make an incorrect “cleanup” needs a comment saying why it is done the way it is. And so on.
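Here is a sketch of the kind of commenting being described, using a hypothetical ring buffer whose locking rule and non-obvious invariant are spelled out where a future "cleanup" might otherwise break them:

```c
#include <pthread.h>
#include <stddef.h>

#define RING_SIZE 16

struct ring {
    pthread_mutex_t lock;   /* protects every field below */
    size_t head;            /* next slot to write; wraps mod RING_SIZE */
    size_t tail;            /* next slot to read; head == tail => empty */
    unsigned char buf[RING_SIZE];
};

/* One slot is deliberately sacrificed so that "full" and "empty" can
 * be distinguished; without this comment, a janitor might "fix" the
 * + 1 below and silently break the buffer. */
static int ring_put(struct ring *r, unsigned char c)
{
    int ret = -1;

    pthread_mutex_lock(&r->lock);
    if ((r->head + 1) % RING_SIZE != r->tail) {
        r->buf[r->head] = c;
        r->head = (r->head + 1) % RING_SIZE;
        ret = 0;
    }
    pthread_mutex_unlock(&r->lock);
    return ret;
}

static int ring_get(struct ring *r, unsigned char *c)
{
    int ret = -1;

    pthread_mutex_lock(&r->lock);
    if (r->head != r->tail) {
        *c = r->buf[r->tail];
        r->tail = (r->tail + 1) % RING_SIZE;
        ret = 0;
    }
    pthread_mutex_unlock(&r->lock);
    return ret;
}
```

Note that the comments document rules and invariants, not the obvious mechanics; the code itself remains the primary source of what happens.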
Internal API Changes:
The binary interface provided by the RTOS kernel to user space cannot be broken except under the most severe circumstances. The RTOS’s internal programming interfaces, instead, are highly fluid and can be changed when the need arises. If you find yourself having to work around a kernel API, or simply not using a specific functionality because it does not meet your needs, that may be a sign that the API needs to change.
There are, of course, some catches. API changes can be made, but they need to be well justified. So any patch making an internal API change should be accompanied by a description of what the change is and why it is necessary. This kind of change should also be broken out into a separate patch, rather than buried within a larger patch.
The other catch is that a developer who changes an internal API is generally charged with the task of fixing any code within the kernel tree which is broken by the change. For a widely-used function, this duty can lead to literally hundreds or thousands of changes – many of which are likely to conflict with work being done by other developers. Needless to say, this can be a large job, so it is best to be sure that the justification is solid.