Common interview questions for embedded software developer

Let’s see some of the common interview questions asked during an embedded C developer interview.

EMBEDDED SYSTEM

1. What changes need to be made to the software when it is ported from a 32-bit architecture to a 64-bit architecture?

2. Can you call a static function that belongs to a different file? If so, how?

3. Explain the volatile qualifier and its use in embedded software.

4. How do interrupts work in embedded software?

5. Explain how you would implement an Ethernet receiver driver using interrupts.

6. Explain ADCs.

7. Explain interrupt latency.

8. Difference between the ARM and PowerPC architectures.

9. Explain how the stack works.

10. Memory layout of an executable.

11. Explain the I2C, SPI and CAN protocols.

12. What mechanism determines the baud rate of a CAN network?

RTOS

1. Priority inversion, priority inheritance, priority ceiling.

2. What are deadlock and livelock?

3. Difference between Semaphore and Mutex.

4. Explain a situation where you have used a Semaphore and a Mutex in your past projects.

5. Process synchronization methods in RTOS.

GENERAL

1. Why are you looking for a change?

2. What is your expectation from the new job, if offered?

PROJECT

1. Explain your recent project.

2. Draw the software architecture of your recent project.

3. Challenges you faced during your project.

C CODING

1. How to find the Nth node from the end in a singly linked list.

2. Code to modify the even indexed members in an array.

3. Implement XOR without using the operator.

4. Reverse a string.

5. Convert little endian to big endian.

6. Write a C code to print a string without using a semicolon in the program.

7. Write a program to find the merge point of two singly linked lists.

A Quick refresher on the CAN protocol

This article is intended to give you a quick refresher on the protocol. The following paragraphs assume that the reader already has a basic understanding of the CAN protocol.

The Controller Area Network (CAN) is a serial communications protocol which efficiently supports distributed real-time control with a very high level of security. Communication speeds of up to 1 Mbps can be achieved on a CAN network.

The CAN protocol with reference to the OSI model:

1. The object layer:
The scope of the object layer includes finding which messages are to be transmitted, deciding which messages received by the transfer layer are actually to be used, and providing an interface to the application-layer related hardware.
2. The transfer layer:
The scope of the transfer layer is mainly the transfer protocol, i.e. controlling the framing, performing arbitration, error checking, error signalling and fault confinement. Some general features of the bit timing are also regarded as part of the transfer layer. It is in the nature of the transfer layer that there is no freedom for modifications.
3. The physical layer:
The scope of the physical layer is the actual transfer of the bits between the different nodes with respect to all electrical properties. Within one network the physical layer, of course, has to be the same for all nodes.

The CAN bus is a broadcast type of bus. There is no way to send a message to just a specific node; all nodes will invariably pick up all traffic.

Layered Structure of a CAN Node:

Basic Configuration:

The physical layer of CAN is a differential transmission on a twisted pair bus. The bus uses non-return-to-zero (NRZ) with bit stuffing. The nodes are connected to the network in a Wired-AND fashion.

A maximum of 8 bytes can be transferred per message in CAN. The messages are protected by a CRC-type checksum. CAN bus access is handled via a serial communication scheme of Carrier Sense Multiple Access/Collision Detection with Non-Destructive Arbitration.

There are no explicit addresses used in CAN messages; instead, each message carries a numeric value which determines its priority and also serves as the identification of the message.

CAN uses an elaborate error handling scheme that results in re-transmitted messages if they are not properly received. Also there are effective means for isolating faults and removing faulty nodes from the bus.

About the higher layer protocols using CAN:
Higher layer protocols such as DeviceNet, CAN Kingdom, CANopen etc. are used to standardize the network startup process, to distribute ‘addresses’ among participating nodes, to determine the layout of the messages and to provide routines for error handling at the system level. The use of higher layer protocols is, however, entirely optional.

Message transfer is manifested and controlled by four different frame types:
1. DATA FRAME – Carries data from a transmitter to the receivers.
2. REMOTE FRAME – To request the transmission of the DATA FRAME with the same IDENTIFIER.
3. ERROR FRAME – Transmitted by any unit on detecting a bus error.
4. OVERLOAD FRAME – To provide for an extra delay between the preceding and the succeeding DATA or REMOTE FRAMEs.

When the bus is free, any unit may start to transmit a message. The unit with the highest-priority message gains bus access. If two or more units start transmitting messages at the same time, the bus access conflict is resolved by bitwise arbitration using the IDENTIFIER. The mechanism of arbitration guarantees that neither information nor time is lost.
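The following C sketch simulates this bitwise arbitration on a wired-AND bus; the node identifiers and the simple bus model are assumptions made only for the illustration, not part of any CAN controller API.

#include <stdio.h>

#define ID_BITS   11        /* standard frame identifier length */
#define NUM_NODES 3

/* Simulate bitwise CAN arbitration on a wired-AND bus: a '0' bit is
 * dominant, a '1' bit is recessive, so the node whose identifier has
 * the lowest numeric value wins. */
static int arbitrate(const unsigned ids[], int n)
{
    int active[NUM_NODES];
    int winner = -1;
    int i, bit;

    for (i = 0; i < n; i++)
        active[i] = 1;                          /* all nodes start transmitting */

    for (bit = ID_BITS - 1; bit >= 0; bit--) {
        unsigned bus = 1;                       /* recessive unless someone drives dominant */

        for (i = 0; i < n; i++)
            if (active[i])
                bus &= (ids[i] >> bit) & 1;     /* wired-AND of all transmitted bits */

        for (i = 0; i < n; i++)                 /* a recessive sender that reads dominant backs off */
            if (active[i] && ((ids[i] >> bit) & 1) != bus)
                active[i] = 0;
    }

    for (i = 0; i < n; i++)
        if (active[i])
            winner = i;
    return winner;
}

int main(void)
{
    const unsigned ids[NUM_NODES] = { 0x65A, 0x123, 0x3FF };   /* example identifiers */

    printf("Node %d wins arbitration\n", arbitrate(ids, NUM_NODES));
    return 0;
}

With the identifiers above, the node sending 0x123 wins, since it has the lowest identifier value and therefore the highest priority.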

There are two different frame formats which differ in the length of the IDENTIFIER field: frames with an 11-bit IDENTIFIER are denoted Standard Frames, while frames containing a 29-bit IDENTIFIER are denoted Extended Frames.

CAN Standard Data Frame:

CAN Extended Data Frame:

Some of the interesting features of the C Language

Here in this article we list a few of the interesting features of the C language. Some of the points listed here are not normally used in production code, but these tips are worth noting down. The more we understand the C language, the more confident we become in coding with this magnificent language.

The features listed below will be updated with more information and points going forward. Also, feel free to post your views as comments on this article.

So let’s start with….  🙂

 

1. Function called with multiple arguments

In C, if a function definition does not specify any parameters, the function can be called with any number of arguments, or with none at all. For example, see the below sample code:
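A minimal sketch (this relies on pre-C23 behaviour, where empty parentheses do not introduce a prototype; the function name is chosen only for illustration):

#include <stdio.h>

/* No parameters are specified (the empty parentheses do not form a
 * prototype), so the compiler does not check the arguments passed at
 * the call sites. */
void print_message()
{
    printf("Hello from print_message()\n");
}

int main(void)
{
    print_message();            /* called with no arguments    */
    print_message(10);          /* called with one argument    */
    print_message(10, 20, 30);  /* called with three arguments */
    return 0;
}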

2. The terms “parameters” and “arguments”

The term ‘parameter’ refers to any declaration within the parentheses following a function name in a function declaration or definition; the term ‘arguments’ refers to any expression within the parentheses of a function call.

3. Order of evaluation of function parameters

There is no guarantee about the order of evaluation of function arguments in C. The order of evaluation is purely implementation dependent, so while writing programs we should not pass argument expressions whose side effects depend on that order. This is better explained with the below example:
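A minimal sketch (the function and variable names are chosen only for illustration):

#include <stdio.h>

static int counter = 0;

static int next(void)
{
    return ++counter;               /* side effect: increments the shared counter */
}

static int combine(int a, int b)
{
    return a * 10 + b;
}

int main(void)
{
    int x, y;

    /* The order in which the two next() calls are evaluated is not
     * specified, so x may end up as 12 or as 21. */
    x = combine(next(), next());

    counter = 0;
    y = combine(next(), next());

    printf("x = %d, y = %d\n", x, y);
    return 0;
}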

In the above example, the result values of x and y will be implementation dependent.

4. Multiple declaration of global variable

C allows a global variable to be declared multiple times; having said so, there should only be one definition for the variable. See the below example:
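A minimal sketch:

#include <stdio.h>

/* Multiple declarations (tentative definitions) of the same global
 * variable are allowed; only the single initialized one acts as the
 * definition. */
int x;
int x;
int x = 10;

int main(void)
{
    printf("%d\n", x);   /* prints 10 */
    return 0;
}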

In the above example you will get the answer 10.
But if you try to re-declare a local variable, you will get a “redeclaration of ‘x’ with no linkage” error during compilation.

A Guide to the RTOS based Embedded Development Process

The purpose of this article is to help developers to create RTOS based embedded software with a minimum of frustration. While there is some technical material here, this is very much a process-oriented discussion which does not require a deep knowledge of embedded system programming to understand.

The rest of this section covers the scope of the embedded development process and the kinds of frustrations that developers and their employers can encounter there. Pitfalls encountered by other developers and working with reviewers as part of the development process are also described in this article.

Any code change should undergo a stringent review process before and after merging into the mainline. No matter how strong the original developer’s skills are, the review process will find ways in which the code can be improved. Often, review finds severe bugs and security problems. This is especially true for code which has been developed by a single developer or within a small team; such code benefits strongly from review by outside developers.

About the development process:

A solid understanding of how the process works is required in order to be an effective part of a dynamic embedded development project. Embedded software development can be executed in a rolling development model which continually integrates major changes.

A relatively straightforward discipline is followed with regard to the merging of patches for each release. At the beginning of each development cycle, the “merge window” is opened. At that time, code which is considered sufficiently stable (and which has been accepted by the development team) can be merged into the mainline project.

(As an aside, it is worth noting that the changes ready-to-integrate during the merge window do not come out of thin air; they should be collected, tested, and staged ahead of time.)

The merge window is kept open for a particular period of time, as required for the project. At the end of this time, the project leader declares that the window is closed and releases the first “rc” (release candidate). Once the merge window is closed, no more significant changes are allowed until the final release of the software. As fixes make their way into the mainline, the patch rate slows over time. Incremental rc’s can be released depending upon the severity of the patches.

The developers’ goal is to fix all known regressions before the stable software release is made. In the real world, this kind of perfection is hard to achieve. At the same time, delaying the final release only makes the situation worse: the pile of changes waiting for the next merge window grows larger and can lead to even more regressions the next time around.

The life cycle of a patch to the release candidate:

Patches shall not go directly from the developer’s keyboard into the mainline. There is, instead, a somewhat involved (if somewhat informal) process designed to ensure that each patch is reviewed for quality and that each patch implements a change which is desirable to have in the mainline. This process can happen quickly for minor fixes.

The stages that a patch goes through are, generally:

– Design. This is where the real requirements for the patch – and the way those requirements will be met – are laid out.

– Early review. Patches should go through an early review by cognizant developers within the team. If all goes well, this process should turn up any major problems with the patch.

– Wider review. When the patch is getting close to ready for mainline inclusion, it should be accepted by a relevant subsystem maintainer – though this acceptance is not a guarantee that the patch will make it all the way to the mainline. When the process works, this step leads to more extensive review of the patch and the discovery of any problems resulting from the integration of this patch with work being done by others.

– Merging into the mainline. Eventually, a successful patch will be merged into the mainline repository. More comments and/or problems may surface at this time; it is important that the developer be responsive to these and fix any issues which arise.

– Long-term maintenance. While it is certainly possible for a developer to forget about code after merging it, that sort of behavior tends to leave a poor impression on the development team. Merging code eliminates some of the maintenance burden, in that others will fix problems caused by API changes. But the original developer should continue to take responsibility for the code if it is to remain useful in the longer term.

Documenting “TODO”:

At times it is necessary to keep track of pending work on a particular functionality or feature which is already staged for an interim release. It is advisable to keep a “TODO” folder containing a text file describing the activities which are pending for that functionality or feature. This practice will help other developers who may be working on the feature at a future time.

Tools:

Choose the right project management tools for easy management and tracking of the project. A few of the common tools include SVN, Git, Rational Team Concert, Rational Synergy, Rational CM, Rational DOORS etc. Consider the best option based on cost, maintainability, support availability, project size and/or complexity.

Early-stage planning:

When contemplating an RTOS based embedded development project, it can be tempting to jump right in and start coding. As with any significant project, though, much of the groundwork for success is best laid before the first line of code is written. Some time spent in early planning and communication can save far more time later on.

Specifying the problem:

Like any engineering project, a successful software enhancement starts with a clear description of the problem to be solved. In some cases, this step is easy: when a driver is needed for a specific piece of hardware, for example. In others, though, it is tempting to confuse the real problem with the proposed solution, and that can lead to difficulties.

Consider an example from Linux kernel development:

Some years ago, developers working with Linux audio sought a way to run applications without dropouts or other artifacts caused by excessive latency in the system. The solution they arrived at was a kernel module intended to hook into the Linux Security Module (LSM) framework; this module could be configured to give specific applications access to the realtime scheduler. This module was implemented and sent to the linux-kernel mailing list, where it immediately ran into problems.

To the audio developers, this security module was sufficient to solve their immediate problem. To the wider kernel community, though, it was seen as a misuse of the LSM framework (which is not intended to confer privileges onto processes which they would not otherwise have) and a risk to system stability. Their preferred solutions involved realtime scheduling access via the rlimit mechanism for the short term, and ongoing latency reduction work in the long term.

The audio community, however, could not see past the particular solution they had implemented; they were unwilling to accept alternatives. The resulting disagreement left those developers feeling disillusioned with the entire kernel development process; one of them went back to an audio list and posted this:

(http://lwn.net/Articles/131776/).

The reality of the situation was different; the kernel developers were far more concerned about system stability, long-term maintenance, and finding the right solution to the problem than they were with a specific module. The moral of the story is to focus on the problem – not a specific solution – and to discuss it with the development community before investing in the creation of a body of code.

So, when contemplating an RTOS based embedded development project, one should obtain answers to a short set of questions:

– What, exactly, is the problem which needs to be solved?

– Who are the users affected by this problem? Which use cases should the solution address?

– How does the existing implementation fall short in addressing that problem?

Only then does it make sense to start considering possible solutions.

Early discussion:

When planning a software implementation, it makes great sense to hold discussions with the team before launching into coding. Early communication can save time and trouble in a number of ways:

– It may well be that the problem is already addressed by the system in ways which you have not understood.

– There may be elements of the proposed solution which will not be acceptable for mainline merging. It is better to find out about problems like this before writing the code.

– It’s entirely possible that other developers have thought about the problem; they may have ideas for a better solution, and may be willing to help in the creation of that solution.

Embedded code which is designed and developed behind closed doors invariably has problems which are only revealed when the code is released into the team for reviews.

It is worth stating a few examples from the Linux kernel development community:

– The Devicescape network stack was designed and implemented for single-processor systems. It could not be merged into the mainline until it was made suitable for multiprocessor systems. Retrofitting locking and such into code is a difficult task; as a result, the merging of this code (now called mac80211) was delayed for over a year.

– The Reiser4 filesystem included a number of capabilities which, in the core kernel developers’ opinion, should have been implemented in the virtual filesystem layer instead. It also included features which could not easily be implemented without exposing the system to user-caused deadlocks. The late revelation of these problems – and refusal to address some of them – has caused Reiser4 to stay out of the mainline kernel.

– The AppArmor security module made use of internal virtual filesystem data structures in ways which were considered to be unsafe and unreliable. This concern (among others) kept AppArmor out of the mainline for years.

In each of these cases, a great deal of pain and extra work could have been avoided with some early discussion with the Linux kernel developers.

Getting the code right:

It is the code which will be examined by other developers and merged (or not) into the mainline tree. So it is the quality of this code which will determine the ultimate success of the project. The coding style shall be clearly defined in the Documentation/CodingStyle folder within the project.

Adding new code to the project is very difficult if that code is not coded according to the standard; many developers will request that the code be reformatted before they will even review it. A code base can be very large and requires some uniformity of code to make it possible for developers to quickly understand any part of it. So there is no longer room for strangely-formatted code.

* Abstraction layers:

Computer Science professors teach students to make extensive use of abstraction layers in the name of flexibility and information hiding. Certainly a complex embedded project makes extensive use of abstraction; no project involving several million lines of code could do otherwise and survive. But experience has shown that excessive or premature abstraction can be just as harmful as premature optimization. Abstraction should be used to the level required and no further.

At a simple level, consider a function which has an argument which is always passed as zero by all callers. One could retain that argument just in case somebody eventually needs to use the extra flexibility that it provides. By that time, though, chances are good that the code which implements this extra argument has been broken in some subtle way which was never noticed – because it has never been used. Or, when the need for extra flexibility arises, it does not do so in a way which matches the programmer’s early expectation. Developers will routinely submit patches to remove unused arguments; they should, in general, not be added in the first place.

On the other hand, if you find yourself copying significant amounts of code from another subsystem, it is time to ask whether it would, in fact, make sense to pull out some of that code into a separate library or to implement that functionality at a higher level. There is no value in replicating the same code throughout the project.

* #ifdef and preprocessor use in general:

The C preprocessor seems to present a powerful temptation to some C programmers, who see it as a way to efficiently encode a great deal of flexibility into a source file. But the preprocessor is not C, and heavy use of it results in code which is much harder for others to read and harder for the compiler to check for correctness. Heavy preprocessor use is almost always a sign of code which needs some cleanup work.

Conditional compilation with #ifdef is, indeed, a powerful feature, and it can be used within the project. But there is little desire to see code which is sprinkled liberally with #ifdef blocks. As a general rule, #ifdef use should be confined to header files whenever possible. Conditionally-compiled code can be confined to functions which, if the code is not to be present, simply become empty. The compiler will then quietly optimize out the call to the empty function. The result is far cleaner code which is easier to follow.
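A sketch of this pattern (CONFIG_DEBUG_TRACE, trace_event() and handle_rx_packet() are hypothetical names used only for this illustration):

/* In a header file: the #ifdef is kept in one place, and the
 * conditionally compiled function simply becomes empty when the
 * feature is disabled. */
#ifdef CONFIG_DEBUG_TRACE
void trace_event(int event_id);                 /* real implementation elsewhere */
#else
static inline void trace_event(int event_id)
{
    (void)event_id;                             /* compiles away to nothing */
}
#endif

/* In the .c files, callers stay free of #ifdef clutter: */
void handle_rx_packet(void)
{
    trace_event(42);    /* the call is optimized out when tracing is disabled */
    /* ... normal packet handling ... */
}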

C preprocessor macros present a number of hazards, including possible multiple evaluation of expressions with side effects and no type safety. If you are tempted to define a macro, consider creating an inline function instead. The code which results will be the same, but inline functions are easier to read, do not evaluate their arguments multiple times, and allow the compiler to perform type checking on the arguments and return value.
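A small illustration of the difference, using a simple squaring operation chosen only for the example:

#include <stdio.h>

/* Macro version: the argument appears twice in the expansion, so a side
 * effect in the argument happens twice as well (here it is in fact
 * undefined behaviour, since 'a' is modified twice without an
 * intervening sequence point). */
#define SQUARE_MACRO(x)  ((x) * (x))

/* Inline function version: the argument is evaluated exactly once and
 * its type is checked by the compiler. */
static inline int square(int x)
{
    return x * x;
}

int main(void)
{
    int a = 3, b = 3;

    int m = SQUARE_MACRO(a++);   /* 'a' modified twice: undefined behaviour */
    int f = square(b++);         /* 'b' incremented once: f is 9            */

    printf("macro result: %d, a = %d\n", m, a);
    printf("inline result: %d, b = %d\n", f, b);
    return 0;
}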

* Inline functions:

Inline functions present a hazard of their own, though. Programmers can become enamored of the perceived efficiency inherent in avoiding a function call and fill a source file with inline functions. Those functions, however, can actually reduce performance. Since their code is replicated at each call site, they end up bloating the size of the compiled image. That, in turn, creates pressure on the processor’s memory caches, which can slow execution dramatically. Inline functions, as a rule, should be quite small and relatively rare. The cost of a function call, after all, is not that high; the creation of large numbers of inline functions is a classic example of premature optimization.

In general, embedded programmers ignore cache effects at their peril. The classic time/space tradeoff taught in beginning data structures classes often does not apply to contemporary hardware. Space *is* time, in that a larger program will run slower than one which is more compact.

More recent compilers take an increasingly active role in deciding whether a given function should actually be inlined or not. So the liberal placement of “inline” keywords may not just be excessive; it could also be irrelevant.

* Locking:

Any resource (data structures, hardware registers, etc.) which could be accessed concurrently by more than one thread must be protected by a lock. New code should be written with this requirement in mind; retrofitting locking after the fact is a rather more difficult task. Developers should take the time to understand the available locking primitives well enough to pick the right tool for the job. Code which shows a lack of attention to concurrency will have a difficult path into the mainline.
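A minimal sketch of the idea, using a POSIX mutex purely as a stand-in for whatever locking primitive the target RTOS provides:

#include <pthread.h>

/* Shared resource and the lock that protects it. */
static unsigned int rx_packet_count = 0;
static pthread_mutex_t rx_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called from more than one thread: every access to the shared counter
 * happens with the lock held. */
void note_rx_packet(void)
{
    pthread_mutex_lock(&rx_lock);
    rx_packet_count++;
    pthread_mutex_unlock(&rx_lock);
}

unsigned int get_rx_packet_count(void)
{
    unsigned int count;

    pthread_mutex_lock(&rx_lock);
    count = rx_packet_count;
    pthread_mutex_unlock(&rx_lock);
    return count;
}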

* Regressions:

One final hazard worth mentioning is this: it can be tempting to make a change (which may bring big improvements) which causes something to break for existing users. This kind of change is called a “regression,” and regressions have become most unwelcome in the mainline release. With few exceptions, changes which cause regressions will be backed out if the regression cannot be fixed in a timely manner. Far better to avoid the regression in the first place.

It is often argued that a regression can be justified if it causes things to work for more people than it creates problems for. Why not make a change if it brings new functionality to ten systems for each one it breaks? The best answer to this question was expressed by Linus Torvalds in July 2007 (to the Linux kernel development community):

So we don’t fix bugs by introducing new problems. That way lies madness, and nobody ever knows if you actually make any real progress at all. Is it two steps forwards, one step back, or one step forward and two steps back?

(http://lwn.net/Articles/243460/).

An especially unwelcome type of regression is any sort of change to the user-space API. Once an interface has been exported to user space, it must be supported indefinitely. This fact makes the creation of user-space interfaces particularly challenging: since they cannot be changed in incompatible ways, they must be done right the first time. For this reason, a great deal of thought, clear documentation, and wide review for user-space interfaces is always required.

Code checking tools:

For now, at least, the writing of error-free code remains an ideal that few of us can reach. What we can hope to do, though, is to catch and fix as many of those errors as possible before our code goes into the mainline release. Any problem caught by the computer is a problem which will not afflict a user later on, so it stands to reason that the automated tools should be used whenever possible.

The first step is simply to heed the warnings produced by the compiler. Contemporary versions of gcc can detect (and warn about) a large number of potential errors. Quite often, these warnings point to real problems. Code submitted for review should, as a rule, not produce any compiler warnings. When silencing warnings, take care to understand the real cause and try to avoid “fixes” which make the warning go away without addressing its cause.

Locking checker tools shall be used to detect any locking issues within the RTOS multithread system. One such debugging tool is the “lockdep.” This tool will track the acquisition and release of every lock (spinlock or mutex) in the system, the order in which locks are acquired relative to each other, the current interrupt environment, and more. It can then ensure that locks are always acquired in the same order, that the same interrupt assumptions apply in all situations, and so on. In other words, lockdep can find a number of scenarios in which the system could, on rare occasion, deadlock. This kind of problem can be painful (for both developers and users) in a deployed system; lockdep allows them to be found in an automated manner ahead of time. Code with any sort of non-trivial locking should be run with lockdep enabled before being submitted for inclusion.

As a diligent embedded programmer, you will, beyond doubt, check the return status of any operation (such as a memory allocation) which can fail. The fact of the matter, though, is that the resulting failure recovery paths are, probably, completely untested. Untested code tends to be broken code; you could be much more confident of your code if all those error-handling paths had been exercised a few times.

Other kinds of errors can be found with the static analysis tools. Some time spent installing and using these tools will help avoid embarrassment later.

Documentation:

Documentation has often been more the exception than the rule with embedded software development. Even so, adequate documentation will help to ease the merging of new code into the mainstream, make life easier for other developers, and will be helpful for your users. In many cases, the addition of documentation has become essentially mandatory.

The first piece of documentation for any patch is its associated changelog. Log entries should describe the problem being solved, the form of the solution, the people who worked on the patch, any relevant effects on performance, and anything else that might be needed to understand the patch. Be sure that the changelog says *why* the patch is worth applying; a surprising number of developers fail to provide that information.

Any new configuration options must be accompanied by help text which clearly explains the options and when the user might want to select them.

Anybody who reads through a significant amount of existing embedded code will note that, often, comments are most notable by their absence. Once again, the expectations for new code are higher than they were in the past; merging uncommented code will be harder. That said, there is little desire for verbosely-commented code. The code should, itself, be readable, with comments explaining the more subtle aspects.

Certain things should always be commented. Uses of memory barriers should be accompanied by a line explaining why the barrier is necessary. The locking rules for data structures generally need to be explained somewhere. Major data structures need comprehensive documentation in general. Non-obvious dependencies between separate bits of code should be pointed out. Anything which might tempt a code janitor to make an incorrect “cleanup” needs a comment saying why it is done the way it is. And so on.

Internal API Changes:

The binary interface provided by the RTOS kernel to user space cannot be broken except under the most severe circumstances. The RTOS’s internal programming interfaces, instead, are highly fluid and can be changed when the need arises. If you find yourself having to work around a kernel API, or simply not using a specific functionality because it does not meet your needs, that may be a sign that the API needs to change.

There are, of course, some catches. API changes can be made, but they need to be well justified. So any patch making an internal API change should be accompanied by a description of what the change is and why it is necessary. This kind of change should also be broken out into a separate patch, rather than buried within a larger patch.

The other catch is that a developer who changes an internal API is generally charged with the task of fixing any code within the kernel tree which is broken by the change. For a widely-used function, this duty can lead to literally hundreds or thousands of changes – many of which are likely to conflict with work being done by other developers. Needless to say, this can be a large job, so it is best to be sure that the justification is solid.

Bitwise operators in C: 2 Simple Programs to start with

Bitwise operators in C can be very handy for writing simple and efficient code to solve many logical requirements.
Just to get an overview of how the bitwise operators in C can solve logical problems easily, here we give two examples which have been tested in a Linux environment.

1. Find whether the given input number is even or odd.
2. Find the missing number from the series of natural numbers starting from 1.

Note: To make the code easily understandable, some of the common defensive programming techniques are omitted.
So let’s start with the coding,
1. Find whether the given input number is even or odd.
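A minimal version of the program (defensive checks are omitted, as noted above):

#include <stdio.h>

#define TRUE 1

int main(void)
{
    int input = 0;
    int res = 0;

    while (1)
    {
        printf("Enter a number (0 to exit): ");
        if (scanf("%d", &input) != 1)
            goto end;               /* stop on invalid input */

        if (input == 0)
            goto end;               /* zero exits the program */

        res = !(input & 1);         /* TRUE when the last bit is 0, i.e. the number is even */

        if (res == TRUE)
            printf("%d is even\n", input);
        else
            printf("%d is odd\n", input);
    }

end:
    return 0;
}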

Here in this simple program, the input is read from ‘stdin’ into the variable ‘input’. If the entered value is zero, we jump to ‘end’ and exit the program.

The statement res = !(input & 1); computes whether the given number is even or odd. The variable ‘res’ will be set to ‘TRUE’ if the variable ‘input’ contains an even number. Since the last bit of an even number is zero, ANDing ‘input’ with 1 results in zero ((input & 1)), thereby making the whole right-hand expression evaluate to ‘TRUE’ (!(input & 1)).

2. Find the missing number from the series of natural numbers starting from 1.
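A minimal version of the program, using the names described below:

#include <stdio.h>

/* XOR all natural numbers 1..count+1 with all numbers present in the
 * list; the matching pairs cancel out and only the missing number
 * remains. */
static int GetTheOneMissingNumber(const int input_list[], int count)
{
    int xor_res1 = 0;
    int xor_res2 = 0;
    int i;

    for (i = 0; i < count; i++)
        xor_res1 ^= input_list[i];      /* XOR of the given numbers          */

    for (i = 1; i <= count + 1; i++)
        xor_res2 ^= i;                  /* XOR of the full series 1..count+1 */

    return xor_res2 ^ xor_res1;         /* everything cancels except the missing one */
}

int main(void)
{
    /* 4 is missing from the series 1..10 */
    int real_input[] = { 1, 2, 3, 5, 6, 7, 8, 9, 10 };
    int count = sizeof(real_input) / sizeof(real_input[0]);

    printf("The missing number is %d\n", GetTheOneMissingNumber(real_input, count));
    return 0;
}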

In this program we find the missing number from a given list of natural numbers starting from 1. The input list of numbers is stored in the array variable ‘real_input’. The function ‘GetTheOneMissingNumber()’ finds the missing number from the given list.

In this program we XOR together all the natural numbers from 1 to ‘count+1’ and save the result in the local variable ‘xor_res2’. Similarly, all the elements of the given list ‘input_list’ are XOR’ed and saved in the local variable ‘xor_res1’. A final XOR between the values of ‘xor_res2’ and ‘xor_res1’ gives the missing number.

Computation explained,
xor_res1 = X1 ^ X2 ^ X3 ^ X5 ^ X6 ^ X7 ^ X8 ^ X9 ^ X10
xor_res2 = X1 ^ X2 ^ X3 ^ X4 ^ X5 ^ X6 ^ X7 ^ X8 ^ X9 ^ X10
xor_res1 ^ xor_res2 = (X1 ^ X1) ^ (X2 ^ X2) ^ (X3 ^ X3) ^ (X4) ^ (X5 ^ X5) ^… (X10 ^ X10) = X4

PIC16F886 Assembly example code for UART echo test using interrupt

This example assembly code for the PIC16F886 can be used to learn the working of the UART module within the PIC MCU. The assembly project comprises two *.asm files and can be compiled using MPLAB IDE.

Working:
The program will echo back the characters which were entered from a PC (Personal Computer).
Source files:

1. main.asm

2. uart.asm

PS: The above assembly program has a special behavior when it is executed. Readers are encouraged to find out what this behavior is and are welcome to comment on it. 🙂

LED Blink Assembly code for PIC16F886 using Timer0 interrupt

Below is a working example of an LED blink program in PIC assembly code which can be compiled with MPLAB IDE.

Abstract:

The LED is connected to RC4 of the PIC16F886 and the circuit is set up as active LOW. Here the LED will glow when the RC4 output is driven to ‘0’.
The PIC configurations used are,

There are 3 global variables used for the project, which are shown below.

There are also 2 labels used, as shown below.

Complete Assembly program: