Monday, January 18, 2016

Breadboarding Embedded Code - Part II


Following on the discovery of a really good Windows C compiler to replace my very old DOS version of Quick C (See: Link To Part I), I can now share an actual example of how the Embedded Breadboarding process can be used.

A lot of times I want to implement some sort of control algorithm for an embedded project, and developing the code on the target may not be possible for a variety of reasons: the target PCB is out being built and I don't have one yet; it simply takes longer to debug on the actual target system; and it is often not possible to generate worst case test data (or Test Vectors) in real life. In these cases simulation is the best route to take.

Here I want to implement an “Analog Meter Like” response for an LCD display using a microprocessor's built-in 10 bit ADC. What I want to do is,

1) Simulate some actual ADC bits (from a file).
2) Test a prototype Analog Meter Averaging Routine.
3) Save the results to a file for analysis.

This is where Pelles C [1] comes in. Before I could accomplish this task, however, I needed to write a couple of routines to read the simulated ADC bits (or the Test Vector) from a text file on my PC, feed them to a simulated ADC call, and lastly write the output back to another text file so analyzing and graphing the results will be easy.

The routines in Listings 1-3 below show one way to read the simulated ADC data from a file.

 uint16_t ReadFileLength(char *filename)  
 {  
      FILE* fh = fopen(filename, "r");  
      int32_t ch;  
      int32_t last_ch = '\n';  
      uint16_t number_of_lines = 0;  
      while ((ch = fgetc(fh)) != EOF)  
      {  
           if(ch == '\n')  
                number_of_lines++;  
           last_ch = ch;  
      }  
      // Count a last line that doesn't end with a newline  
      if(last_ch != '\n')  
           number_of_lines++;  
      fclose(fh);  
      return(number_of_lines);  
 }  
Listing 1 – First I read in the file and count the number of lines.


 void ReadTextFileToArray(char *file_name, uint16_t lines, uint16_t array[])  
 {  
      FILE* fh = fopen(file_name, "r");  
      int current_value;  
      uint16_t i;  
      for(i = 0; i < lines; i++ )  
      {  
           fscanf(fh, "%d", &current_value);  
           array[i] = (uint16_t)current_value;  
      }  
      fclose(fh);  
 }  
Listing 2 – When I have read the number of lines, I then allocate memory for an array (see the main() function) and read the test data file again into the uint16_t array.


 int main(void)  
 {  
      uint16_t length;  
      char *test_vector_file = "C:\\Users\\Public\\Documents\\AdcTestVector.txt";  
      char *output_vector_file = "C:\\Users\\Public\\Documents\\AveragedTestVector.txt";  
      length = ReadFileLength(test_vector_file);  
      uint16_t input_values[length];  
      uint16_t output_values[length];  
      ReadTextFileToArray(test_vector_file, length, input_values);  
      // Call the Simulated Embedded Code  
      Simulation(input_values, output_values, length);  
      WriteArrayToTextFile(output_vector_file, output_values, length);  
 }  
Listing 3 – A simple Main Program that wraps up reading in a text file and putting it in an array of uint16's.

The text file is just a simple file that contains a list of any number of simulated ADC bits to act as a Test Vector. The routines in the listings above count the lines in the file, then allocate memory for a uint16 array and finally read the text file data into the array value by value.

At the end of all this I have a nice array of uint16_t values that can be used as simulated ADC data to test my simulated Analog Meter averaging code.


Averaging Code in C

The Analog Meter Averaging code is based on the well known and loved Temporal Averaging method [2]. Basically this method adds a small portion of the latest ADC reading to a long running average value, hence implementing a classic RC Low Pass filter in software. The key is selecting the proper “small portion” to use. If the small portion is too large, then the filter doesn't filter very well; if the small portion is too small, then the response is too slow. This is exactly like changing the “C” value in an RC filter.

The classic form of the averaging equation used is,

           AVG_new = (Xnew * Alpha) + (AVG_old * (1 - Alpha))

          Where: Alpha can range from 0 to 1.

As can be seen, the apparent averaging increases (Equivalent RC time constant gets longer) as Alpha gets smaller. If Alpha is one, then there is no averaging.

For this simple example I will use a value of Alpha of 1/16, which makes (1 - Alpha) equal to 15/16. This lets the integer C code use right shifts instead of division. So my averaging function becomes,

      AVG = (X >> 4) + ((AVG * 15) >> 4)

Remember that right shifting by 4 is like dividing by 16.

Since this example is using a 16 bit uint variable to hold 10 bit simulated ADC data (as it would in the actual application) the multiply by 15 will not cause an overflow of the variable [3].
Listing 4 below puts the entire program together, including reading in the test data, processing it and writing the results back out to a file.


Simulating the Averager

My first test used some multipurpose test data that includes a step, for step response testing, and some noise generated with Octave [4] to test the filter. The noise was simulated with random values ranging from 0 to 1023, or full scale for a 10 bit ADC. The results of the test are shown in Figure 1.

Figure 1 – The results of running my simulated averaging filter. First a step response, then some very large signal random noise. This gives me a good feel for how the filter will work in the actual application. More simulations can be run in just a few seconds by writing another test vector to be used as simulated ADC bits and then analyzing the results.


Conclusion

With just one simulation run I was able to test my averaging algorithm for step response and filtering quickly and without using any actual hardware. At the same time I used the actual C code and bit widths that will be used in the final application. Now I can start simulating more realistic test signals that the application might see to check the system response. After this simulation phase I will have a very good idea of how the actual system will perform and there should be no surprises in the actual application. At the end of this design and test process I can just clip my averaging code and paste it in my actual application knowing it will work as expected (and tested).


Listing 4 (Below) – The entire Pelles C source code for the project.

 #include <stdio.h>  
 #include <stdlib.h>  
 #include <stdint.h>  
 uint16_t ReadFileLength(char *filename)  
 {  
      FILE* fh = fopen(filename, "r");  
      int32_t ch;  
      int32_t last_ch = '\n';  
      uint16_t number_of_lines = 0;  
      while ((ch = fgetc(fh)) != EOF)  
      {  
           if(ch == '\n')  
                number_of_lines++;  
           last_ch = ch;  
      }  
      // Count a last line that doesn't end with a newline  
      if(last_ch != '\n')  
           number_of_lines++;  
      fclose(fh);  
      return(number_of_lines);  
 }  
 void ReadTextFileToArray(char *file_name, uint16_t lines, uint16_t array[])  
 {  
      FILE* fh = fopen(file_name, "r");  
      int current_value;  
      uint16_t i;  
      for(i = 0; i < lines; i++ )  
      {  
           fscanf(fh, "%d", &current_value);  
           array[i] = (uint16_t)current_value;  
      }  
      fclose(fh);  
 }  
 void WriteArrayToTextFile(char *file_name, uint16_t array[], uint16_t lines)  
 {  
      FILE* fh = fopen(file_name, "w");  
      uint16_t i;  
      for(i = 0; i < lines; i++ )  
      {  
           fprintf(fh, "%d\n", (int)array[i]);  
      }  
      fclose(fh);  
 }  
 // Simulated embedded code  
 void Simulation(uint16_t test_vector[], uint16_t output_vector[], uint16_t length)  
 {  
      uint16_t x, avg;  
      avg = 0; // Initialize the average  
      // Loop for all the data  
      for(int i = 0; i < length ; i++)  
      {  
           // This simulates a GetADC() call  
           x = test_vector[i];  
           // Apply Averaging  
           avg = (x >> 4) + ((avg * 15) >> 4);  
           // Save the result in an array  
           output_vector[i] = avg;  
      }  
 }  
 int main(void)  
 {  
      uint16_t length;  
      char *test_vector_file = "C:\\Users\\Public\\Documents\\AdcTestVector.txt";  
      char *output_vector_file = "C:\\Users\\Public\\Documents\\AveragedTestVector.txt";  
      length = ReadFileLength(test_vector_file);  
      uint16_t input_values[length];  
      uint16_t output_values[length];  
      ReadTextFileToArray(test_vector_file, length, input_values);  
      // Call the Simulated Embedded Code  
      Simulation(input_values, output_values, length);  
      WriteArrayToTextFile(output_vector_file, output_values, length);  
 }  


Caveat Emptor:

This example has been implemented as an LCD meter driver and filter with good results. It is fast, has a low memory footprint and is accurate enough for that application. However, a closer look will reveal that this simple implementation loses bits of precision during the X shift operation: the current X value is shifted down before being added to the result, and the bits that are shifted out are lost. The practical limit for this filter is around 1/32 for Alpha, because beyond this too much precision is lost for even a simple meter driver application.
 
Floating point in small embedded systems should be reserved for where nothing else will do, because of its memory footprint, its time consuming calculations, and precision problems of its own (a small embedded system is not a PC with nearly unlimited resources). A fixed point solution, or a more normal box car type of averaging, can be implemented if needed to preserve bits of precision as required (also see Reference 5).


Simplification:

There are always a number of ways that a mathematical equation can be simplified. Sometimes approximations can be used; other times rearranging terms may be put to good use. The averaging equation here can be simplified to a single right shift (or divide), which reduces the instructions required per average but somewhat obscures the original formula. If you are tight for clock cycles you can use this simplified form of the equation,

      avg = (x >> 4) + ((avg * 15) >> 4); // Two Shifts

is equivalent to (to within the loss of precision),

      avg = (x + (avg * 15)) >> 4;       // Only one shift


Extra Credit:

Just in case you are wondering how this simple filter performs with different Alpha values, I have plotted the obvious ones that are powers of 2. These divisors are: 4 (>>2), 8 (>>3) and 16 (>>4), corresponding to sampling 1/4, 1/8th and 1/16th of the current X value. Certainly one of these should work for nearly every simple need.
References:

[1] Pelles C homepage: www.pellesc.de
 
[2] I first ran into Temporal Averaging in an Analog Devices article in the 1980s, and first used it on an Apple ][ computer running Apple Basic. This technique goes by a lot of different names; on Wikipedia it is called Exponential Smoothing.

[3] Be careful here, I have one embedded compiler that gets confused and does not automatically promote intermediate results to the proper output width by itself. For instance, the expression,

     int16 = int8 * int8;

will result in an overflow with this particular compiler. Most compilers will promote the int8's to the final result bit width (here an int16) before multiplying, thus avoiding the overflow. This is one case where the simulation will fail to tell you how the actual embedded code will run. Even with simulation, you still have to thoroughly test the final application code running on the target!
 
[4] GNU Octave is a matrix / numerical oriented interpreted language much like Matlab.
 
[5] Richard Lyons' book, "Understanding Digital Signal Processing", contains a number of clever implementations of this type of averager, and several more averager designs as well. Highly recommended, as it is as understandable as any DSP book gets.

By: Steve Hageman / www.AnalogHome.com
We design custom electronic Analog, RF and Embedded systems for all sorts of industrial products and clients. We would love to hear from you if we can assist you on your next project.

Monday, January 11, 2016

Breadboarding Embedded Code, The Easy Way...


Those of us who work in the Analog World are intimately familiar with the concept of breadboarding little bits of circuits so that we can better understand their limitations before we commit them to a PCB layout. We just get our trusty Soldering Iron, some parts and build away.

In the small embedded world we have much the same wants. Many times we don't have access to a full emulator for the processor we are using, and debugging is painful using writes to a serial debugging port, etc. Not at all like writing code on a PC with something like Visual Studio, where debugging is easy and quick.

This came up recently (again) when I was developing a new PID control algorithm for a controller project. I wanted to Breadboard the C code on my PC so that I could quickly play some What-If's with the code and input parameters before committing the code to the actual board for integration, where the debugging is at least 10 times slower.

Using Visual Studio C# is out of the question for developing and testing simple integer C code, because arithmetic on small integer types in C# is promoted to signed 32 bit math. If you use C# like this you end up with casts all over the place to keep your variables in the right format, like uint8's! That's ugly and not useful at all.

Visual C++ is an option, but it isn't quick or disk efficient, what with its multi-gigabyte disk footprint. Too much overhead; it is just overkill.

In the past I used a 1990 DOS version of Microsoft Quick C; unfortunately it doesn't play very well with 64 bit Windows anymore, so I looked around for another option. Some folks have ported DOS Turbo C to run on Windows in a DOS box, but that looked too clunky. After all, who wants to give up a nice simple Windows GUI?

I have also tried to use Microchip's MPLAB simulator, but it is very slow to simulate native PIC machine code, even on a multi GHz PC. No… What is needed is not a target simulator but a simple, basic, but complete “Standard C” compiler to test straight C code on a PC.

After a few minutes of searching I found Pelles C [1], a wonderfully simple but complete freeware C compiler and fully functional IDE written by Swedish developer Pelle Orinius. It is a native implementation of C on Windows with a very easy to use native debugger, and you can easily spit stuff out to a console window using printf(). Just like with Quick C, but modernized.

The footprint is very light at 18 MB for the basic compiler, or 51 MB if you want to include the Windows extensions to write x64 Windows apps with it.

There are some tutorials on YouTube, most notably one on how to get debugging going, and in 20 minutes I was writing, compiling and debugging projects on my own. That perfectly fit my needs.

This is truly a very easy to use C compiler and fully functional IDE (with code completion), requiring no funny headers or setups to run on Windows. The only headache I had was with the usual Windows User Account Control (UAC): on my first attempt to compile, I was warned that the files could not be written to disk. A quick setting of the directory permissions to give myself full read/write access solved the problem.

The simplest program you can write in C is literally,

int main(void)
{
    int var;
    var = 10;
}

No #includes needed and you can single step through this program to see the execution as simple as it is.

To get way more complex and printf() something to the output window is no harder than this,

#include <stdio.h>

int main(void)
{
    int var = 0xab;
    printf("Var = %0x\n", var);
}

I always write my code with standard C typedef's for my variables just so it is clear what type I mean (and will actually get). In my code, variables are always declared like this,

uint8_t var;
int16_t var2;
uint32_t var3;

I thought that I would have to write a small <stdint.h> include file for these definitions, but no, there it was, in the Pelles C include directory. Now that's nice, I didn't even have to write <stdint.h>, that's a first!

As I said, no funny Windows setups required, just plain and simple “C”, with all the features of a modern Windows IDE (read: code completion). As a plus, the IDE even uses Visual Studio like icons and shortcut keys, such as F9 to toggle a breakpoint.

Using Pelles C we have a simple, modern way to Breadboard our embedded C code in a time effective manner, using all the resources of our PCs and not depending on actual hardware and its constrained debugging.

This is really good and useful software...


References:

[1] Pelles C homepage: www.pellesc.de




Monday, January 4, 2016

An old article by Jim Williams

Analog Designs Aren't Dead Yet

I was looking over some old Analog Dialogues that are available on the web and I ran across this very old article by Jim Williams [1] (one of his first),



It is from 1976 and it covers the classic "Weighing Scale" application [2]. It is a reprint of an article that first appeared in EDN Magazine on October 5th, 1976. The title of the original article was,

"This 30ppm scale proves that analog designs aren't dead yet" [3].

It is interesting that way back in 1976 the main thing on some Analog Designers' minds was: “Analog isn't dead yet!” That still sounds familiar today. A recent issue of a popular magazine had a cover that boldly stated that “Analog is not dead”, yet sadly the issue contained no real Analog articles at all. So Analog may still not be dead, it just isn't written about much anymore.

Back to Jim's Article

I called this a classic weighing scale design because numerous clever circuits have been presented over the years to make a high resolution scale settle to very high precision very fast, a difficult problem that usually involves the use of a Nonlinear Filter. This article (last figure) shows that function as a Black Box and provides very little detail about it, other than to describe its function as,

“...Carefully designed nonlinear filter permits large changes of weight to be stably registered within 5 seconds, small changes within one second and fast disturbances … To be rejected”

This is the age old problem in electronics, that is,

“How do I make a very high resolution measurement in the presence of noise quickly?”

Numerous articles over the years have presented solutions to this problem one way or another. The simplest way is to put back to back diodes across the resistor in an RC Low Pass filter, as shown in Figure 1. While this works, it is pretty limited in its adjustability: you have the choice of a 0.6 volt or 0.3 volt forward voltage drop depending on whether you use Silicon or Schottky barrier diodes. Over the years numerous other circuits have been presented to increase the utility of this simple circuit even more. Perhaps the ultimate form of the purely Analog nonlinear filter was presented by Burr-Brown engineers Stitt and Burt in 1991 (see Figure 2 and Reference 4).



Figure 1 – The simplest form of a nonlinear filter is to put bypass diodes around an RC filter. It is simple, but it isn't very adaptable or stable over temperature.

Hewlett-Packard also faced a similar problem with their Microwave Power Meters, starting as I recall with the digital Model 436 series and continuing to this day. The first digital power meter, the Model 436 [5], used a hybrid approach, incorporating an analog filter along with a digital box car, or moving average, filter. The combination of the two allows fast settling to a low noise value. Later models in this power meter series also incorporate a non-linear digital filter by resetting the moving average if the input changes by more than around 12% of the average value, improving on purely analog filtering even more.



Figure 2 – Reference 4 shows how to make a much improved nonlinear filter. By the addition of an OPAMP, the diode threshold can be made adjustable over a very large range. Stability over temperature issues remain, however.

Analog Design is now Hybridized

This hybrid trend continues unabated. While there may be relatively few purely analog designs anymore, there are many examples of hybrid designs that take the best of both the Analog and Digital worlds.

If your latest design digitizes the signal of interest you can apply an infinite variety of digital filtering both linear and non-linear. You can even change the filtering characteristics on the fly when the system is running.

One of the hybrid approaches I have used in noisy industrial settings is a purely analog filter set to the maximum bandwidth needed. This acts like a roofing filter and, if properly designed, will even tame Radio Frequency interference and add ESD suppression. The signal is then digitized by a microprocessor and the samples are put in a buffer. To non-linearly remove impulse noise, a median-style filter is applied: the largest and lowest recorded values are discarded, then the rest of the buffer is averaged, yielding an average with large peaks removed from the measurement.

Even here a fair amount of Analog knowledge is needed to make the front end work as a low pass filter and reject RF and ESD as required by the system usage. Careful experimentation and testing to optimize the digital filter is also required for best performance. The degree of filter memory is adjustable, from remembering all samples to remembering no samples between averaging cycles; an RC filter always has some memory, and this leads to slow settling times. Best of all, if the digitizer is stable over temperature, all the digital filtering will be drift free as well.

So I wouldn’t say Analog Design is dead, or even sick! Analog Design has just been hybridized by Digital Processing to make our systems even more bulletproof, and lower cost to boot; we should take advantage of this whenever possible.



Original Article Snippet from Analog Dialogue [2]



References

[1] Analog Dialogue was distributed in printed form starting in 1967, today it continues as an online electronic publication.
http://www.analog.com/library/analogdialogue/archives.html

[2] Analog Dialogue, Vol 10, No 2, 1976
http://www.analog.com/library/analogDialogue/cd/vol10n2.pdf
The screen captures here are reproduced from this source and are copyright and owned by Analog Devices, Inc.

[3] Thanks to Kent Lundberg of MIT for cataloging all of Jim's articles, which is where I found that this was indeed one of Jim's first articles.
http://web.mit.edu/klund/www/jw/jwbib.pdf

[4] Stitt, Mark and Burt, Rod, “Fast Settling Low-Pass Filter”, Burr-Brown Corporation, AB-022, January 1991
http://www.ti.com/lit/an/sboa011/sboa011.pdf

[5] HP Journal, October, 1975.
http://www.hpl.hp.com/hpjournal/pdfs/IssuePDFs/1975-10.pdf


By: Steve Hageman
www.AnalogHome.com