Wednesday, September 12, 2018

How I spent my Summer Vacation

I remember in the 5th Grade, at the beginning of the school year, we were tasked with writing 500 or some such words on that very subject. I didn't like the assignment, perhaps because I didn't really do anything that I thought was noteworthy. I do remember listening to broadcasts from many faraway lands on my Knight Kit and Magnavox Shortwave radios, plus looking at the Stars and Planets with my trusty Tasco Telescope.

But this year was way more fun than that!

I built this small footprint Data Acquisition System.

  • 32 Bit MCU @ 250 MHz with Floating Point Unit
  • 5 x 16 Bit ADC's
  • 3 x 16 Bit DAC's
  • 3 x DDS Chips
  • +/- 1 Amp Precision Current Source 
  • 6 Analog IO Pins
  • 8 Digital IO Pins
  • Thermometer + Barometer
  • Real Time Clock
  • Onboard EEPROM
  • MicroSD Card for Data Logging
  • 6-30 VDC Input Power @ 1 Watt
  • Reduced 2 x 4 Inch Footprint

Article By: Steve Hageman

We design custom: Analog, RF and Embedded systems for a wide variety of industrial and commercial clients. Please feel free to contact us if we can help on your next project.

Note: This Blog does not use cookies (other than the edible ones).

Sunday, June 10, 2018

Improving Code Quality with Microchip's MPLAB-X IDE and XC32 Compiler

Authors Michael Barr [1], Les Hatton [2], Robert Martin [3] and a host of others basically plead with programmers to use more formal code analysis techniques, such as “Static Analysis”, during the design phase of all embedded projects.

Adding these steps is not time consuming or, in most cases, even expensive; the payback is usually justified by preventing even a single problem in shipped code.

Recently I have moved all my Microchip PIC development to their MPLAB-X IDE and XC32 compiler. This saved a nominal sum per year in compiler updates, because this Microchip tool chain is free. It also makes my clients happy, since they can get the entire development tool chain for just the cost of a very simple programmer / debugger.

The XC32 compiler is a version of the venerable GNU GCC compiler geared especially for the MIPS based PIC32 product line. This tool chain also includes their rather all-encompassing ‘Harmony’ software framework that handles all the driver and peripheral initialization and provides a consistent Hardware Abstraction Layer when programming any of the PIC32 devices.

Note: This article applies to MPLAB-X version 4.15 and later and XC32 version 2.05 and later, the current versions as of June 2018.

Improved Static Analysis for Free:

The first step to better code is to turn on all the GCC warnings. This is accomplished by supplying the ‘-Wall’ switch in the compiler options window [4], as shown below.

   Add All Warnings to the XC32 compiler options

‘-Wall’, or ‘Warnings All’, will enable most of the common compiler warnings, but it doesn’t go out of its way to mark every library function as a problem. I find that it only adds three or four warnings to a previously cleanly compiled project. The warnings it finds are mostly relevant and easy to deal with. You will find unused variables, bad definitions and improperly initialized variables.

I’m not sure what level of warnings is enabled by default in XC32, but you should definitely not compile without ‘all warnings’ turned on. It’s absolutely free, so use it.

GCC supplies a push/pop mechanism to disable specific warnings at various places in the code.

For example the function,

void foo(void)
{
    int32_t unused = 0;
}

will produce a warning: “warning: unused variable 'unused' [-Wunused-variable]”

This warning (or other warnings) can be disabled at the ‘function scope’ like this,

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
void foo(void)
{
    int32_t unused = 0;
}
#pragma GCC diagnostic pop

The above works to disable this specific warning at the ‘function level’.

Trying to disable this warning inside a function will not always work and may be ignored.

void foo(void)
{
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-variable"

    int32_t unused = 0;

    #pragma GCC diagnostic pop
}

The above does NOT work to disable this specific warning inside a function.

If you think about the above example, this makes sense. How could the compiler know that the variable is unused until it hits the final closing brace ‘}’ ? So by extension this example does work,

void foo(void)
{
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-variable"

    int32_t unused = 0;
}
#pragma GCC diagnostic pop

The above does work to disable a specific warning inside a function, but not for a single variable. This will disable the ‘unused variable’ warnings for all variables between the ‘ignored’ and the ‘pop’.

For more information on the GCC diagnostic pragmas see Reference 5. Also note that Microchip makes no mention of this functionality in their documentation, so it should be considered: ‘Undocumented’.

Improved Static Analysis for very little cost:

The next level of static analysis is to use a ‘Lint’ program [6]. Lint was first developed for UNIX in the mid 1970’s, but was not widely used anywhere else because of the lack of suitable executables.

In the mid 1980’s when I first started writing C code, there was a program advertised called PC-Lint [7] and it sold for probably $300 then. That was a lot of money. Today the same program, updated over 30+ years, sells for $400. That’s not so much money when you consider the cost of shipping even one bug to your customers. In fact it’s downright cheap and will pay for itself if it finds a single problem! There are a few open source versions of something like Lint, but nothing that compares to a reasonably priced, finished commercial product like PC-Lint.

MPLAB-X contains an optional plug-in to enable PC-Lint functionality inside the MPLAB-X environment. It is fairly easy to get going, but it took me probably 3 hours to get it working inside MPLAB-X the first time. I might be able to save you some time with the following instructions.

Installing the MPLAB-X Plug-In:

Start MPLAB-X and install the PC-Lint Plug-In
    Tools - > Plugins - > Available Plugins
    Select: “PC-Lint Plugin”, and finally select: “Install”.

Installing PC-Lint:

With the advent of User Account Control (UAC) in Windows 7 and later it is much harder to get software to play properly, because many programs need elevated privileges to have access to certain directories, etc.

The default install location for PC-Lint is “C:\Program Files (x86)\PC-Lint”, which is a problematic directory when running with UAC enabled. I wasted an hour trying to get configuration files properly written until I gave up and re-installed PC-Lint off the root directory here: “C:\PC-Lint\”, and then had no problems at all. Just do this and save yourself the time!

Even with the very latest PC-Lint version you won’t get the latest XC32 compiler initialization files, or ‘.lnt’ files. You will need to go to the Gimpel website and get this file,


Save the file in this directory,


Don’t run the PC-Lint supplied configuration file builder (CONFIG.exe); it won’t know about the latest co-xc32.lnt file and would just need to be manually modified anyway.

Instead just make a file named ‘XC32-std.lnt’ in the directory: “C:\PC-Lint\”

Place these lines in the file,

//  Microchip MPLAB XC32 C, -si4 -sp4,
//  Custom lint options file for XC32 V2.x


C:\PC-Lint\options.lnt  -si4 -sp4
-i"C:\Program Files (x86)\Microchip\xc32\v2.05\pic32mx\include\lega-c"

You may need to modify the exact path and name depending on your circumstances. Here you can see that I am using XC32 version 2.05 and the ‘lega-c’ library and include files.

If you are in doubt as to the exact path to use, just open up your project's ‘stdio.h’ or some other well known system header file and see what the path is by hovering on the file name tab in the GUI, then use that.

Make sure that there is also a file called “Options.lnt” in this directory: “C:\PC-Lint\”. For now this file can just have a single empty line in it.

Now setup the PC-Lint interface in MPLAB-X. Select: Tools -> Options -> Embedded and then select the PC-Lint tab at the right hand side as shown below. Then click: OK.

Set these options as above on the PC-Lint configuration screen.

Right click on your main project's name, then select: “Generate PCLint Dependency Files”. You should see this in the PC-Lint output window.

If the directory is not writable these files will not be written correctly and you will waste another hour trying to figure out why. To be safe, re-generate these files every time you switch and start working on another XC32 project. Just think of it as “Getting Latest” from your source control.

Let’s go Linting:

This step will seem daunting and you will wonder why you did all this in the first place. Why? Because your first ‘Lint’ will produce about 20 pages of warnings.

Let’s try it – Select a small ‘c’ file from your project, like “App.c” for example, right click on its name and select: “Lint this file”.

Have no fear – 99% of those warnings will be the same complaint about some naming problem or some header file, etc. These warnings can be easily suppressed.

Just open the file “C:\PC-Lint\options.lnt” and start adding error numbers to suppress the spurious warnings. In 30 minutes, by checking half a dozen of my code modules, I was easily able to get rid of all the ‘chaff’ and get down to those few simple things that should be checked or fixed.

Here is the “Options.lnt” file that I ended up with,

// Please note -- this is a representative set of error suppression
//                options.  Please adjust to suit your own policies
//                See the manual (chapter LIVING WITH LINT)
//                for further details.

-w3     // Overall warning level (3=default) Settings 2 and 3 are OK.

-e586   // function 'printf' is deprecated.
-e793   // suppress message about extern identifiers being more than 31 chars long
-e9045  // non-hidden definition of type 'struct
-e9058  // unused outside of typedefs
-e970   // Use of modifier or type '_Bool' outside of a typedef
-e9071  // defined macro '_APP_H' is reserved to the compiler
-e537   // repeated include
-e950   // Note 950: Non-ANSI reserved word or construct: like: '_nop()'

-passes(2) // Make 2 passes on the code

Now that wasn’t so bad after all. Now I can just Lint my files one by one and see the warnings / errors that really show up. I had a number of iffy initialized variables, some missing ‘defaults’ in Switch statements and a few of the dreaded,


instead of,



Most importantly I found a few real problems (like copy and paste errors) that would have prevented proper program operation and would have cost me debugging time to fix.

Gimpel offers this sage advice for first time “Linters”,

“If you've never linted your code before, you may want to modify the warning level initially.  Try running with -w1 to report only errors.  After you've fixed the resulting errors (or suppressed those that you don't want reported), run with -w2, and so forth. “

Further Reading:
Hopefully this tutorial will get you at least 80% of the way to where you want to be in a very short period of time. For more information, the Gimpel website has a FAQ list that provides helpful hints, and the PC-Lint manual is full of further in-depth information.

You may also want to consider "Breadboarding" your C code on your PC before placing it on your target Embedded System, these two posts show how I accomplish this,

* Breadboarding Embedded Code
* Breadboarding Embedded Code - Part II


[1] Michael Barr, The Barr Group. Prolific writer about best ‘C’ practices.

[2] Les Hatton, Author of: “Safer C”, McGraw-Hill, 1994

[3] Robert Martin is a frequent speaker at programming conferences. Many of his talks can be viewed on Youtube.

[4] There are a number of other compiler options available; I find that this one is the best for catching stupid mistakes without marking every single library function as non-compliant. For other possible options see the XC32 Compiler User’s Guide – yes, there is actually a manual and, as near as I can tell, Microchip keeps it up to date.

[5] GCC Diagnostic Pragmas manual page,

[6] LINT

[7] Gimpel Software PC-Lint, Version 9.00L is the current version.

PC-Lint has been called the longest continuously available software product in the history of mankind, since it has been available from the 1980’s until now.


Monday, December 18, 2017

Custom Build-A-Box

One thing that many Analog Engineers who work on very low noise and low frequency circuits figure out very quickly is that even simple room air currents destroy the achievable accuracy of our circuits.

Most of us first figured this out when we hooked a Strip Chart recorder to our design's output for a day and, much to our surprise (and dismay), the output would invariably track the day and night room temperature variations very well. We managed to build a thermometer when all we really wanted was a very low drift amplifier.

Air is not only good to breathe, it also has a thermal mass, a temperature and a thermal transfer coefficient. When air circulates around a low noise / low frequency design it heats or cools the circuit, normally in an uneven manner. This gives rise to all sorts of thermocouple effects that are exacerbated by the air-induced temperature gradients.

The first thing that most of us grabbed when we observed this effect was some cardboard box, which was dutifully placed over the circuit in question to keep the room air currents off of it.

But the ‘Fun’ has just begun...

Modern semiconductor packages are so small and thin now that many designs are sensitive to IR light. Yes, the IR light that your fluorescent lights give off is enough to bombard the ICs' transistors with photons, because the packages are now transparent to IR light. This can also cause unexplained circuit drifts. Again a ‘Cardboard Box’ is the solution (keep the circuit in the dark!).

Even light and air currents take a back seat to Long Wave and AM radio stations messing with your circuits. If you have never worked a few blocks from an AM radio transmitter, well, let’s just say you have missed some fun hours of debugging [1]. Those fluorescent lights are major EMI ‘aggressors’ here too. Not the ones on the ceiling, as they are usually far enough away. The real troublemakers are the ones on your lab bench, or even the microscope light. Yes, my microscope light is a real circuit ‘destabilizer’ - a ‘double whammy’ of IR and EMI bombardment!

Even a “Dark Cardboard Box” won’t filter out RF, so naturally, we just wrapped the box with aluminum foil or copper tape! There! A job sorta well done...

Then Vs. Now

That was then, when our breadboards were spread all over the lab bench. Today our breadboards are finished products and may well ship to customers. This makes finding a suitable “Cardboard Box” to ‘tighten up’ the finished design very difficult indeed.

3D Printing to the Rescue

A 3D printer can quickly make any size or shape “Plastic Box” that you want, when you want it.

In a current design, I needed some air current isolation between the main circuit and the very sensitive, high gain Analog Front End. Years ago I would have hacked something together with scissors, an X-Acto knife and the cardboard off the back of a paper tablet. This would have taken a while, and it would have looked pretty amateurish by the time I wrapped it with Electrical or Kapton tape.

For this design I just imported the STEP model of the Analog PCB [2] into Design Spark Mechanical [3] as shown in the figure below. In less than an hour I had drawn the top and bottom of the box I wanted fabricated right around the PCB model and exported it as a STL file to my 3D printer.

The Custom Box was designed around a 3D accurate STEP model produced by my Altium PCB software. In less than 2 hours the box was printed as shown below.

The finished design as viewed from Design Spark Mechanical. The design is simply 'drawn' around the 3D accurate exported Altium PCB Model.

The actual finished two halves of the 3D Printed Box. Note the cutouts where the IO connectors have to go. For a real design I would have used Black Material because it looks more ‘professional’ but yellow shows up better in the photos.  

Bottom of the custom 3D box installed on the main board. The Analog Front End IO connectors fit in the blue connectors that poke through the cutouts that I made in the 3D printed box bottom.

The base and top of the 3D box as it fits on the main electronics PCB. A way better fit than cardboard ever was.

The custom 3D box covers the analog front end board well. The Analog Input connectors in this particular design are BNC connectors.

Since I wanted EMI shielding also (you never know where your design may end up [1]), I sealed the box together with some of my favorite copper tape. Be sure to always use the copper tape with conductive adhesive. The copper tape enclosure is ‘grounded’ by electrically connecting the copper tape to the BNC’s with an overwrap.


3D printers are certainly handy, not only in Robotics Projects but in making custom, 21st Century “Cardboard Box” replacements in less than 3 hours start to finish – and they look better too.

Certainly a more professional looking finished job and better working instrumentation to boot!

References / Notes:

[1] I once worked for a company making instrumentation products that had a manufacturing plant in Puerto Rico. About a mile from the plant was a 1000 foot tall Navy Communications antenna. It did not broadcast continually, but when it did you couldn’t test anything at the plant site! Thank goodness the broadcasts happened very infrequently, otherwise we would have had to move!

[2] Altium has native 3D PCB design. Exporting a 3D STEP model of any design is a “one click” operation. Then that model can be imported into any modern 3D drafting package.

[3] Design Spark Mechanical – A very easy to use, free program from RS / Allied.


Wednesday, December 13, 2017

An Interesting Breadboard and Circuit – The “Greatbatch Pacemaker”

My good friend Mike Seibel recently reminded me of a breadboard that he saw in my cube when we worked together at HP in the late 1990’s. The breadboard is still around and it intrigued me then as it does now… Here is the complete story,

Decades ago in 1995 I saw an interesting “Breadboarding Technique” that I had never seen before in an issue of the IEEE Spectrum magazine [1]. There was this full page picture of a chap named Wilson Greatbatch holding a manila folder with a simple circuit drawn AND attached to it. Mr. Greatbatch had built the breadboard right on the folder where he drew the schematic. That circuit was one of the first Pacemaker Circuits ever developed. The scan below is a poor quality reproduction, because in the original full page picture I could clearly make out the circuit and component values (Figure 1).

I had never seen anyone build a breadboard on a manila folder like this, and I was intrigued by the design's oscillator and charge pump voltage doubler, so I built one of my own using my most common breadboarding technique – parts soldered directly to a piece of copper clad FR-4 (Figure 2).

I really didn’t think the circuit would run all that long, so to “prove my point” I powered it with a used Lithium cell taken from a flash camera when it would no longer power the camera.

Then I hung the running circuit on my lab wall, and every few months I would see it, pull it down and test it to see if it still worked. To my unbelievable surprise it continued to work until about mid-2010.

That’s around 15 years, even when I gave it every opportunity to fail early by powering it from a nearly dead battery in the first place. As usual, the circuit: “Had the last laugh” on me!

The article about Mr. Greatbatch went on to tell how he developed the first Pacemaker by removing the wrong part from a 1 kHz oscillator that he was trying to build; it started squegging at about a 1 Hz rate with a narrow output pulse – and it hit him: “This is a Heartbeat!”. Again, a circuit ‘fail’ had the last laugh, and a multi-billion dollar industry was born.

I could only hope that more of my “circuit failures” would lead to good things, but alas, most of my failures just lead to smoke or other parts that die sympathetically in one big chain reaction. At least I have never set a Lab on fire (yet!).

So take note of interesting circuits you see, give them every chance to fail, and they will surprise you every time!

Figure 1 – A poor scan of the original article picture. I was easily able to see the circuit and values in the original picture. What caught my eye originally was the breadboard Mr. Greatbatch drew and built on a Manila folder! (Picture originally copyright 1995, IEEE Spectrum).

Figure 2 – My prototype built with my usual breadboarding style – parts soldered directly to a piece of FR-4 Copper clad. 

Figure 3 – The schematic of Mr. Greatbatch’s original circuit. I especially liked the clever voltage doubler at the output. Mr. Greatbatch’s original circuit had a DC blocking capacitor on the output that I did not include, since I was not going to actually ‘pacemake’ anything. The original circuit used 2N-something transistors; I used the very popular 2N3904 and 2N3906 transistors in my version. The current source “IG1” at the input is just a ‘kick starter’ that pulses 100 mA for 10 nanoseconds to get the oscillator running for the Spice Simulation.

Figure 4 – The output pulse is 2 x 2.8 V peak to peak because of the clever voltage doubler. My circuit ran at around 53 Beats per Minute. The width of the pulse is around 1.5 milliseconds.

[1] Adam, John A., “Profile: Wilson Greatbatch”, IEEE Spectrum, March 1995


Saturday, December 2, 2017

Li-Ion Notebook Batteries :: The facts of life….

The latest generation of Li-Ion batteries as used in notebook computers are quite long lived. I currently use an HP ZBook Laptop that has a 60 Watt-hour flat Li-Ion battery pack. HP supplies a battery test utility that I have used every month since getting the PC. It records the number of charge cycles and the current full charge capacity of the battery.


Battery pack design as used in my HP ZBook Notebook.

My normal usage is that the PC is plugged in 50% of the time, and I might drain it nominally 20 to 50% when it is unplugged. About once a month I will drain it pretty far down doing something offline for a number of hours (like running some experiment in the lab).

The “Total charge cycles” counter counts a charge cycle whenever a full charge capacity has been put into the battery. It doesn’t matter if that is one full charge of 100% or ten charges of 10%; both count as one full charge cycle.

Over the past 27 months I have averaged about 10 charge cycles per month; that's probably less than a real “Road warrior” would do, I'm sure.

The battery capacity has decreased more slowly than advertised. The rule of thumb is that these Li-Ion batteries will lose around 10% of their capacity per year. This battery has been losing just a little less than 6% per year.

The loss of capacity in this particular battery as measured every month.


Thursday, October 5, 2017

Friends Don't Let Friends Use Un-Shielded Inductors

Yikes – I saw another one of these power supplies in a high sensitivity receiver. You've seen them too: those sweet little problem-solver DC/DC converter modules (figure 1).


Figure 1 – These sweet little DC/DC modules sure solve power problems, but...

Yes, they solve power problems but they cause all sorts of havoc with the sensitive Analog Signals, especially in wideband systems. I have seen many issues over the years with unshielded inductors and in fact my standard design catalog of parts only lists shielded inductors for this very reason.

An unshielded inductor does not enclose the magnetic field (or the electric field for that matter) and ‘will’, notice I didn't say ‘may’, couple switching noise into other parts of the system.

Into PLL Synthesizers, RF front ends and IF circuits these little beasts spew their evil. In analog radios the noise level was high enough that you would probably never notice the issue, but with wideband digital radios and other measuring circuits that have dynamic ranges of 90 dB and more the problems are hard to miss.

There are a few alternatives,

1) Only buy a DC/DC module with shielded inductors. Downside: That's hard to do because cost is everything in these designs and shields cost money, so shielded designs are not easy to find.

2) Build your own DC/DC using good shielding. Downside: Your design will almost certainly be bigger and probably cost more too.

3) Add secondary shielding to the power modules: Downside: Size and cost.

4) Don’t do anything and hope for the best. Downside: Your product is cheap, but doesn't work very well, if at all.

To demonstrate the problem, I setup a magnetic / electric field probe [1] a few inches from the operating DC/DC module as shown in figure 2.


Figure 2 – A measurement of the ‘spew’ was made with a small probe placed a few inches from the operating DC/DC converter module.

Measuring the resulting field produced the oscilloscope trace shown in figure 3.


Figure 3 – The probe measurement for the unshielded inductor of the DC/DC converter.

To demonstrate how simply shielding the inductor works, I located a Ferrite ring of the proper size in my junk box of parts. The Ferrite ring was a bit tall but otherwise it was a perfect fit diameter wise as can be seen in figure 4.


Figure 4 - I found a Ferrite ring in my junk box of parts that fit this inductor nearly perfectly.

With the probe in the same location as it was when the measurement of figure 3 was made, the Ferrite ring was added to the DC/DC operating on the PCB and another measurement was made as shown in figure 5.


Figure 5 – The probe measurement is vastly improved simply by adding a Ferrite shield to the inductor of the DC/DC converter.

The reduction in measured field strength is obvious; it’s nearly a 5:1 peak-peak voltage reduction, just by adding a Ferrite ring around the switching inductor!

As noted above, it may be difficult to find commercial DC/DC modules that use any shielding, especially in the low cost open frame market segment. In these cases, enclosing the power section in a steel can type of shield on your PCB will help. A small steel can PCB shield like the one shown in figure 6 is effective at shielding electric and magnetic emissions over a broad frequency range. With the use of these shields, especially when combined with proper analog grounding techniques [2], noise reductions of 25 to 50 dB are typically obtained.


Figure 6 – These low cost, semi-custom shields are a real lifesaver when designing with high noise switching (or digital) circuits around sensitive analog. Place these shields over both the analog sections and the noise generating circuits for maximum effectiveness.

Remember: Be a good friend and don't let your friends do the wrong things, if they do anyway, be sure to hand them a shielding can!


[1] This homemade probe consists of a toroid cut in half and wound with 10 turns of magnet wire. The signal is fed through a 50 Ohm coax line to a 20 MHz bandwidth limited oscilloscope input that is terminated in 50 Ohms at the oscilloscope.

[2] Proper analog grounding is defined as: Adequate separation between analog traces AND maximum, unbroken ground planes everywhere else.



Wednesday, August 9, 2017

FTDI / FT232R USB to Serial Bridge Throughput Optimization

Using a USB / UART Bridge IC like the FTDI FT232R is kind of a funny thing. What with “Latency Timers”, “Packets”, “Buffers” and whatnot, you soon find that it is not like a traditional RS232 port on a PC. Meaning: at higher BAUD rates you will probably notice that it is slower than your old PC with a dedicated Serial port.

Problem is, you can't get a PC with a serial port anymore, so we are kind of forced to use the USB equivalent.

There are ways to speed up the USB / UART bridge however, so read on (for the remainder of the article I will call the USB / UART bridge by the simple name: Bridge).

USB Transfer Background

A very popular, reliable and low cost Bridge chip is the FTDI FT232R. These FTDI chips have various settings: Baud rate, Packet Size, Latency Timer, Data Buffer and Flow Control pins and these all conspire together to meter data flowing across the USB link [1].

Baud Rate – This is the rate at which the FT232R UART will communicate with the attached downstream serial port. Normally the downstream side will connect to a processor's UART port. The maximum BAUD rate is dependent on the application. If the end user is going to use some prewritten application, like a terminal program, then there will be some constraints that you won’t be able to exceed. 115.2k is the minimum BAUD rate that every modern application would be expected to support. Most recent PC applications will support several BAUD rates above this, usually up to 921.6k.

The FTDI driver also has the ability to ‘alias’ higher rates to lower numbers to fool PC Applications into supporting faster BAUD rates. See reference 2 for more information on how to do this.

If you are going to write your own application and plan on using the FTDI DLL interface, instead of the Virtual Com Port (VCP) then you can use any BAUD rate that the processor and FT232R will mutually support. The maximum BAUD rate for the FT232R chip is 3M BAUD, which is easily supported on all modern 32 bit processors.

Note that there is no physical UART on the PC side, so the Baud Rate means nothing there. This parameter is passed to the FTDI chip, and that is how it sets its downstream BAUD rate to the attached processor, which does have a physical UART.

Packet Size – In these kinds of USB transfers the basic data packet size is 64 bytes. FTDI reserves 2 bytes for its own use, so the user gets 62 bytes of data in every USB packet transfer. The packet size is fixed by the way USB works and can’t be changed.

The Data Buffer – The data buffer is set to 4k bytes by default (in the driver .INF file), and its part in metering data is this: the driver requests a certain amount of data from the device and will continue to receive data until the buffer fills or the latency timer times out; then a USB transfer to the upstream program in the PC takes place. Valid buffer sizes are from 64 to 65536 bytes in steps of 64 bytes. In the FTDI DLL Driver [3] the buffer size can be set by the command,

    FT_SetUSBParameters (FT_HANDLE ftHandle, DWORD dwInTransferSize, DWORD dwOutTransferSize)

Note: Only the InTransferSize can be set, the OutTransferSize parameter is only a placeholder and with the FT232 does nothing [3].

Note: The data buffer is physically on the upstream PC Side and is held in the USB Host Controller Driver.

The Latency Timer – This timer is set to 16 milliseconds (mSec) by default. If there is data in the Bridge when the latency timer times out, that data is sent. So at worst (with default settings) there is a 16 mSec delay in getting small amounts of data across the link. Valid latency values are from 2 to 255 mSec. In the FTDI DLL Driver [3] the latency can be set by the command,

    FT_SetLatencyTimer (FT_HANDLE ftHandle, UCHAR ucTimer)

Note: The Latency Timer is physically on the upstream PC Side and is implemented in the USB Host Controller Driver.

Flow Control Pins – For the FT232 chip, the control pins have special meanings and functions. If the downstream side changes the state of one of the flow control pins, then the buffer, empty or not, is sent at the next possible moment. The downstream processor can use this to advantage to signal the end of a transmission and get the data sent to the PC ASAP. The PC can also control the downstream flow control lines. For instance, the DTR line can be connected to the DSR line at the FT232R chip; the PC can then change the state of the DTR line, which causes the DSR line to change state, and the transfer is immediately initiated. All of the downstream to upstream flow control pins operate at the same priority, so there is no advantage to using one over another if you are just using them to initiate a data transfer quickly.

Note: If you are using the Virtual Com Port (VCP) interface, the Baud Rate, Latency and Buffer size can be controlled by editing the USB driver ".INF" file, changing the appropriate registry keys, or using the Device Manager. There is no simple programming interface that I am aware of, which is yet another reason to use the FTDI DLL for interfacing instead of the VCP, especially for applications that use custom programming on the PC side.

Understanding The Problem

Maximum throughput occurs when the maximum number of packets are sent in the minimum time. As a designer you have control over the: Baud Rate, Latency Timer, Buffer Size and the Flow Control pins.

So it is obvious that some combination of these four parameters will result in the fastest possible data transfer rate. The question then becomes: What are the optimum settings?

First we should study how my particular downstream processor and the PC communicate.

My application for this example is the control of an external digitizer. The external digitizer instrument has a 32 bit processor that is connected to the FT232R chip through a UART in the processor. My command structure is always a “Command / Response” type of communication.

Case 1: To start a digitizing data capture I can send a trigger command from the PC like: "TRIG:IMM", terminated with a "Linefeed" character. This command is decoded in the instrument processor as a TRIGger / IMMediate command, and the downstream processor starts the data acquisition process.

To keep the PC and instrument in sync, the digitizer then sends back an acknowledgment when the command has finished. The acknowledgment chosen is the same ‘Linefeed’ character.

This way the PC and the downstream processor can always stay in sync and they both know when the communication has finished because they both wait for the ‘Linefeed’ character before proceeding on. I typically don’t use any other handshaking (or flow control).

In my simplest case (as above) the command from the PC might be 1 to 20 characters long and the response is always just a single character (the ‘Linefeed’).

Case 2: The digitizer has captured all the data and the data is sent back to the PC for analysis. Again a simple command is sent from the PC to the instrument, like: "TRAC:A?", meaning: get the data TRACe for channel A. This is followed by the "Linefeed" terminator and then the fun starts. There might be a lot of data captured in my instrument that has to be transferred back to the PC. The standard capture is 1024 values of 16 bit ADC data. These ASCII values are separated by commas, so a worst case transfer might be something like,

    “65535,” repeated 1024 times and terminated with a ‘Linefeed’

This is 6 x 1024 + 1 characters, or 6145 characters total. With a setting of 3M BAUD the processor can pump this data out to the FT232 chip in a little over 20 mSec. This was confirmed with a scope; the processor can easily pass this amount of data in this time without interruption or gaps.

The minimum case would be if the ADC Data was all zeros. In this case the transfer would be,

    “0,” repeated 1024 times and terminated with a ‘Linefeed’

This is 2 x 1024 + 1 characters or 2049 characters total.

It can be seen that even with a fixed number of data points to send back to the PC, if leading zeros are suppressed then the data can be anywhere from 2049 to 6145 characters total. Any optimization has to take this into account.

Optimizing The Parameters

For Case 1: Where the command size is around 10 characters and the return is simply the "Linefeed" character, the buffer will never fill, and the only way to minimize the transfer time is to set the latency to 2 mSec or use one of the Flow Control lines to force a transfer.

For Case 2: Where the upstream data is large the proper choice of Buffer and Latency is not so clear.

Naturally, as an engineer, I read the FTDI optimization application note [1], took its suggestions for setting the latency and buffer size, and tested them by measuring and averaging 100 transfers. To my surprise the "improved" settings gave about the same average transfer speed as the default settings.

So then I started hacking at the settings, changing the parameters 20% either way and looking at the results: still nothing conclusive, and I wasn't able to converge on a higher transfer rate. I started to wonder: if the maximum transfer rate were a sharp function of some combination of the latency and buffer size, how would I find it? I would likely miss it by hacking a few values at random, and this wasn't getting me anywhere anyway.

I turned to my old friend "Monte Carlo", as in the "Monte Carlo Method". This is a trusty way of randomly picking a lot of values, applying them, and then seeing what the result is. Monte Carlo analysis is useful when you don't have a clear understanding of the underlying functions that control something: you are less likely to miss some narrow response by randomly picking enough values than by stepping the values in an orderly way.

I wrapped a loop around my benchmarking routine and set it out to capture 5000 random Latency and Buffer Size parameter variations. I also ran the test program as an EXE, not in the development environment, to remove that as a source of variation, and I didn't use the test PC for anything else during the run.

Just looking at the raw data, the fastest to slowest transfer times ranged from 0.0306 to 0.348 seconds, an 11X speed difference. The default data rate with a 16 mSec Latency Timer and 4k Buffer was 0.054 seconds. Changing the default to the fastest setting could therefore yield a 54/30, or 1.8X, speed increase. That's worth pursuing.

Looking at the raw data some more, the fastest 21 transfer times all had buffer sizes of 64 bytes. There is a conclusion right there without even having to plot the data!

Being curious, I did plot the entire 5000 points of data and the results are shown in Figure 1. There are some outliers which can probably be explained by the Windows Operating System going off and doing something else during a data transfer, but the large majority of points fall in a small band of values.


Figure 1 – A random selection of points was made for the Buffer Size and Latency, at each point an average of 10, 6145 byte transfers was made and recorded (Vertical Axis). A few features can be seen: A ‘rift’ is visible along the Buffer Size axis. Generally the minimum transfer time is with small Buffer and Latency values (lower front corner of the plot).

The ‘rift’ is an interesting feature of Figure 1. Figure 2 is a zoomed in look at that feature with the plot rotated so that the Transfer Time variation is flattened.

Figure 2 – A zoomed-in and rotated view of Figure 1. Now the effect of buffer size on transfer speed (Vertical Axis) can be clearly seen. It does in fact have a minimum at around 6145 bytes and sub-multiples of that; however, a minimum can also be seen at the smallest buffer sizes. Note: Since the 3D curve was rotated down to take the slope out of the curve, the transfer time (Vertical Axis) values are no longer valid; it is only a relative measure: lower on the graph is a faster overall transfer time.

Figure 2 shows the 3D plot flattened on the Buffer Size axis. A few clear trends are present: the overall transfer time is minimized as the transfer buffer is reduced, reaching a minimum at the right end of the scale, or 64 bytes. Also, there is a minimum at around 6145 bytes and sub-multiples of 6145 bytes, as predicted by the FTDI application note [1].

Figure 3 – A zoomed in view of small buffer sizes and latency numbers with a 6145 character transfer. Here the minimum can be clearly seen: setting the buffer size to 64 bytes and the latency to less than 10 mSec results in a transfer time nearly 10% lower than the other cases. The rightmost curve in the plot above tells the story.

Figure 3 shows a zoomed in portion of Figure 1, for small buffer sizes and small latencies. Here it can be seen that the lowest transfer times occur when the latency is set to the minimum. The lowest transfer time group is when the buffer is 64 bytes (the rightmost curve in Figure 3).

To analyze these results fully, I sorted the data and found the 24 fastest transfers. These all had a 64 byte buffer, as figure 3 predicted. Then I plotted the transfer time versus latency as shown in figure 4.

Figure 4 – The 24 fastest transfers of 6145 characters all had a 64 byte buffer. Plotting this set of data versus Latency showed that there is a very small local minimum here, but the difference in transfer time from 2 to 7 mSec Latency setting is less than 1 part in 30.

Figure 4 showed that there is indeed a local minimum in transfer speed, but the difference is so small that there really isn’t any appreciable difference for any Latency Value from 2 to 10 mSec.

Figure 5 – As a verification, I also did a summary plot of 2049 character transfers to make sure that the optimization worked for the smallest typical data set too. This plot follows the same trend as Figure 4. As before, any Latency value from 2 to 10 mSec results in a very low transfer time.


It is really easy, for this example, to minimize the USB transfer time for the two use cases in my project: just set the buffer size to 64 bytes and the Latency to 2 mSec.

This is far simpler than reference 1 would lead you to believe, but it has been proven by actual measurements. Using these settings also eliminates the need to use the control lines to force a transfer, as that won't shave any time off the transfer when the latency is already set to its minimum.

As for PC performance: Even on my low end i7 core based notebook, running Windows 7 / x64,  I don’t notice any operating system sluggishness or excessive processor loading using these settings. So there don’t seem to be any downsides to this.

If any sluggishness is noticed, the Latency can be set as high as 10 mSec (5X higher) with no appreciable reduction in the large data transfer rate and only an 8 mSec penalty in response time for the single character case (Case 1), which may not be noticeable or important in the overall scheme of things.

If a higher Latency option was selected it might be wise to wire up one of the FT232R’s flow control lines and to have the downstream processor toggle this at the end of every command to maximize the speed of the single character transfer case.

As a final note: The FTDI Latency and Buffer size settings can be changed at any time after a FTDI USB device is opened for communication, and they take effect immediately. The elapsed time to set both parameters is less than 1 mSec so there is not much time penalty in actively managing the settings as a program runs.

This exercise lowered the transfer time in my application for 6145 characters from 0.052 to 0.031 seconds. That is an 8X improvement for Case 1 (a small, single character upstream transfer) and a 1.6X speed improvement for Case 2 (6145 character upstream data transfers). I can now achieve an overall 1.9 million bits per second transfer rate without changing the hardware at all. That's time well spent tweaking a few software parameters.

Caveat Emptor

It should be noted that the USB bus is a cooperative, possibly multi-device system with limited overall bandwidth. My application requirements always specify that my devices are the only devices on the bus consuming any bandwidth, for maximum speed. This may not always be the case; for instance, there may be a wireless mouse attached, or a disk drive, etc.

Even though you may think that your "widget" is the only device in the world, you can never be sure what your customers may try to inter-operate with it. It is wise, therefore, to test and make sure that your application and its settings can withstand other things going on in the PC at the same time. I personally have written a USB disk drive file mover application that I run on the PC while stress testing my applications. It consumes a large amount of USB bandwidth by copying large files back and forth to a USB disk drive in the background while I run my application in the foreground, looking for transfer issues.


[1] “AN232B-04 Data Throughput, Latency and Handshaking”, Published by: Future Technology Devices International Limited.

[2] “AN120 Aliasing VCP Baud Rates”, Published by: Future Technology Devices International Limited.

[3] “D2XX Programmers Guide”, Published by: Future Technology Devices International Limited.

Article By: Steve Hageman     