Data Acquisition and Control

by Tom Ligon, Assorted Technical Expertise

My Roots in Data Acquisition and Computer Control

I have programmed computers since about 1969, when I learned Fortran in high school (that was quite unusual at the time). A few years later, I began to dabble in computer interfacing to the real world using a PDP-8, a stone-axe little machine which, at the time, was considered a "mini-computer", but which had less capability than some kitchen appliances sold today. Nevertheless, it was fascinating that I was able to connect wires to an actual computer, to use it to sense external voltages and switches, and turn external devices on and off.

Shortly after I started work at ARTECH Corp, I was tasked with adapting a 4 MHz, 65kbyte, Z-80-based S-100 computer, a NorthStar Horizon, for data acquisition and control. The Z-80 was a big improvement over the PDP-8, although the NorthStar was still rather primitive by today's standards. How primitive? The computer was made of wood -- at least, the case was, a nice walnut veneer over plywood! But it was soon one of the most powerful computers in Northern Virginia. True, it was no Cray, but it controlled a 20 HP hydraulic system, and could bench press 20,000 pounds!

The NorthStar's primary job was running tests on an MTS 810 Servohydraulic Test Machine, essentially a stout frame holding a hydraulic cylinder which could be precisely controlled, with associated electronics. The motion or force produced by the cylinder followed electrical waveforms, tracked by several transducers. By selecting the feedback transducer, the force, cylinder movement (displacement), or other transducer signal could be made to follow the electrical waveform quite precisely, and generally fairly rapidly. Essentially, the MTS 810 was an electronic function generator with muscle. The NorthStar gave brains to the muscle.

The NorthStar was equipped with two accessory boards, one a 12-bit Analog to Digital (A/D) converter card which had 32 inputs, each measuring +/- 10 volts. The second card was a 12-bit Digital to Analog (D/A) card with 4 outputs, each producing +/- 10 volts. 12-bit means the card represented the voltage as an integer between 0 and 4095 (2 to the 12th power is 4096). To read a voltage, a simple sequence of operations selected the input channel, initiated a reading (the conversion time was 25 microseconds, allowing up to 40,000 readings a second), and input the integer from an I/O port (special-purpose input/output memory location), which could be converted to voltage or other engineering units by simple math. To output a voltage was comparable: convert the value to a 12-bit integer, and send it to the port of the desired channel.
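
The count-to-voltage arithmetic is simple enough to show. Here is a sketch in modern Python (the originals were a few lines of BASIC or assembly); note that real cards differ on exactly how the top code maps to full scale, so the scaling below is one common convention, not the NorthStar card's documented one:

```python
FULL_SCALE = 10.0    # the card measured +/- 10 volts
COUNTS = 2 ** 12     # 12 bits: integers 0..4095

def counts_to_volts(code):
    """Convert a raw 12-bit A/D reading (0..4095) to volts."""
    return (code / (COUNTS - 1)) * (2 * FULL_SCALE) - FULL_SCALE

def volts_to_counts(volts):
    """Convert a desired voltage to the nearest 12-bit D/A code."""
    code = round((volts + FULL_SCALE) / (2 * FULL_SCALE) * (COUNTS - 1))
    return max(0, min(COUNTS - 1, code))   # clamp to the card's range
```

From there, scaling to engineering units (pounds, inches) is just one more multiply by the transducer calibration factor.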

The first two major applications of these cards were remarkably sophisticated for such a simple computer, and turned out to be the models for a large family of programs I developed over the succeeding 15 years. These programs were used to test fatigue cracking and crack-related fracture properties of metals, and the little NorthStar became something of a pioneer in the field.

Fatigue Crack Growth Rate (ASTM E-647)

We utilized a standard specimen called a "compact tension" specimen for much of our fatigue and fracture work. This is a rectangular (almost square) metal plate, with two holes near adjacent corners, and a notch in between them. The specimen is mounted in a pair of clevises using pins thru the holes, and load applied to the holes tends to open the notch, applying a high "stress intensity" to the sharp tip of the notch. Cyclic load of sufficient magnitude soon produces a crack at the notch tip.

The standard method for determining fatigue crack growth of specimens is ASTM E-647, and my program was one of the first to conduct this sophisticated procedure under computer control. We monitored the "crack opening displacement" (COD) via a strain-gage-based transducer which fitted into the notch. By recording load and crack opening displacement, the specimen stiffness could be determined. Crack length and specimen stiffness are related by a high-order polynomial which has been determined for this particular specimen geometry. After crack length had been determined, stress intensity at the tip of the crack could be calculated. The program recorded many such readings, and produced a curve of the rate of crack growth as a function of stress intensity, or "da/dN versus delta K."

The sophistication and speed of this program, especially considering the computer's limitations, were quite remarkable. We did not have tools like LabTech Notebook or LabView at the time, and even if we had, it is doubtful that they could have handled the application. The data acquisition process started by taking a burst of load and COD data. The machine could run at a frequency of 30 Hz, and we needed to get a fair representation of the waveform, including the maximum and minimum load. I determined that a burst of 256 pairs of data readings would catch the peaks of a sine wave to within half a bit on a 12-bit system.
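
That half-a-bit figure is easy to verify: the worst case is the sine crest falling midway between two of the 256 samples, and the resulting amplitude shortfall comes out below half of one 12-bit count. A quick check (Python here purely for illustration):

```python
import math

SAMPLES = 256                             # readings per waveform period
phase_error = math.pi / SAMPLES           # worst-case distance from the crest
worst_miss = 1.0 - math.cos(phase_error)  # fraction of amplitude missed

half_bit = 0.5 / 2 ** 12                  # half of one count, full scale = 1
print(worst_miss < half_bit)              # about 7.5e-5 versus 1.2e-4
```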

I developed an assembly language driver (assembly language produces the computer's native "machine code", and causes the machine to run in the fastest and most efficient manner possible) which scanned in pairs of points at a rate calculated to give 256 point pairs in the period of one oscillation. At 30 Hz, that's 7680 pairs of readings per second, or over 15,000 points a second, less than half what the board could run full-out. For comparison, LabTech and LabView max out at 2000 points a second, using boards with over 10x the conversion speed and computers over 100x as fast!

This is progress? Well, thank you, Mr. Gates! If you start to get the impression that I rather dislike the Windows operating system for data acquisition, I'd have to say you're perceptive.

And now for the nitty-gritty of the process ... skip this if it bores you, but it does serve to illustrate the incredible power of custom-programming, as opposed to the common drag-and-drop datalogging applications most people use today.

The assembly language driver utilized an interesting interleave technique which most modern A/D boards are capable of, but which is rarely implemented in the pre-written drivers which come with the boards. These boards typically use a single converter which is "multiplexed" (switched) to each incoming channel. After switching, the buffer amplifier must be allowed to "settle", or the reading will be skewed by the previous channel read. For these boards, the settling time was 20 microseconds. However, once the conversion was initiated, the reading was made by charging a "sample and hold" capacitor, downstream of the buffer amplifier, that gave a stable voltage for the converter to process. Thus, once the conversion started, you could switch to the next channel, which would settle while the conversion proceeded. This technique nearly doubled the number of points which could be read while switching channels, and is a forgotten secret of the industry!
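
The arithmetic behind "nearly doubled" is worth a moment. With a 20 microsecond settling time and a 25 microsecond conversion, the naive driver pays for both on every point, while the interleaved driver hides the settling inside the conversion. A sketch of the timing, using the figures quoted above:

```python
SETTLE_US = 20.0    # mux switch + buffer amplifier settling
CONVERT_US = 25.0   # A/D conversion, input frozen by the sample-and-hold

# Naive driver: switch, wait out the settling, then convert.
naive_us = SETTLE_US + CONVERT_US            # 45 us per reading

# Interleaved driver: switch the mux to the NEXT channel the moment the
# sample-and-hold freezes the current one; the longer interval dominates.
interleaved_us = max(SETTLE_US, CONVERT_US)  # 25 us per reading

print(1e6 / naive_us)        # about 22,000 readings/s
print(1e6 / interleaved_us)  # 40,000 readings/s -- the card's full rate
```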

After a waveform had been acquired using the assembly-language driver, the interpretation of the data proceeded in BASIC (later, implementing these programs on a PC, I switched to Pascal). The assembly-language driver had determined maximum and minimum load. As the data were scanned into the BASIC program, data above and below a certain percentage of maximum load were excluded, due to non-linearity of the data in these regions. During the scanning-in, the numbers were converted directly to engineering units, and various sums were accumulated for a "linear least-squares regression analysis", i.e. a fit of the data to a straight line. The slope of this straight line was the specimen stiffness.
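
A sketch of that regression step, in modern Python rather than the original BASIC (the 20%-80% load window shown is illustrative; the actual exclusion percentages came from the test procedure):

```python
def stiffness_from_burst(load, cod, lo_frac=0.2, hi_frac=0.8):
    """Least-squares slope of load versus crack-opening displacement,
    using only the middle of the load range (the tails are non-linear)."""
    pmax, pmin = max(load), min(load)
    lo = pmin + lo_frac * (pmax - pmin)
    hi = pmin + hi_frac * (pmax - pmin)
    # Accumulate the classic regression sums, just as the BASIC did.
    n = sx = sy = sxx = sxy = 0.0
    for p, d in zip(load, cod):
        if lo <= p <= hi:
            n += 1
            sx += d
            sy += p
            sxx += d * d
            sxy += d * p
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

Feed it a burst of perfectly linear data and it hands back the line's slope, which is the stiffness.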

Next, the stiffness was entered into a high-order polynomial equation that determined crack length. This value was, in turn, used with the load data, cranked thru another high-order polynomial equation, to determine stress intensity. These data were recorded on disk for later analysis. Meanwhile, the computer monitored the values and adjusted the machine (via the D/A outputs) and determined when to terminate the test.
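
Evaluating those high-order polynomials efficiently mattered on a 4 MHz machine; Horner's rule does an nth-order polynomial in n multiplies and n adds. A sketch (the coefficients shown are placeholders, not the real compact-tension calibration, which is specific to the specimen geometry):

```python
def poly_eval(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + coeffs[2]*x**2 + ... by
    Horner's rule -- the cheap way on a slow machine."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# Hypothetical placeholder coefficients; the real stiffness-to-crack-length
# fit comes from the published calibration for the specimen geometry.
CRACK_CAL = [0.5, -0.2, 0.03]
crack_length = poly_eval(CRACK_CAL, 0.8)   # stiffness in, crack length out
```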

After the test was completed, the data were run thru a second program, which used the number of elapsed cycles (N) versus the crack length (a), and the delta K stress intensity values, to determine da/dN versus delta K, or the rate of crack growth versus stress intensity. This process also used a curve fit, a sort of moving average using a second-order regression analysis, to smooth the data. The resulting data were plotted on a log graph, and could also be cranked thru a log-fit regression analysis. This gave a slope and intercept which characterized how fast a crack would grow in the material at any given stress intensity. The computer could measure crack growths below around half a millionth of an inch per cycle!
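
That final log-fit is an ordinary linear regression in log-log space, giving the constants of the familiar power-law relation da/dN = C (delta K)^m. A sketch (Python for illustration; the moving second-order smoothing pass is omitted here for brevity):

```python
import math

def paris_fit(delta_k, dadn):
    """Fit log(da/dN) = log(C) + m*log(delta K); return (C, m)."""
    xs = [math.log10(k) for k in delta_k]
    ys = [math.log10(r) for r in dadn]
    n = float(len(xs))
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    log_c = (sy - m * sx) / n
    return 10.0 ** log_c, m

# Synthetic check: data generated from da/dN = 1e-9 * dK**3 should give
# back C = 1e-9 and m = 3.
dk = [10.0, 15.0, 20.0, 30.0, 40.0]
c, m = paris_fit(dk, [1e-9 * k ** 3 for k in dk])
```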

Think about this -- about every 14 seconds, the computer would take 512 numbers, measurements from the real world, and produce measurements of crack length and stress intensity. For slow crack growth, it used a moving average of recent readings to overcome noise. It would determine if significant crack growth had occurred, and, if so, store the data. These stored values were elapsed cycles, stiffness, and load, just a few numbers produced from 512, and stored only when a significant change had occurred. A typical test might store a few hundred of these points, fitting easily on a low-density floppy disk. At the conclusion, the points were boiled down to a graph, and the key numbers from the graph, a slope and an intercept, could be determined. A typical test might last a day (tests in salt water, run at low frequency, might last a couple of months). So a day of testing might involve over 3 million readings, all boiled down to the two numbers that an engineer really needs to tell if the bridge is going to fall down!

Most remarkable, all of this was done with an inexpensive "dumb" data acquisition card. We did not have sophisticated on-board memory or direct-memory-access background processes, or programmed-in burst modes. The computer controlled the functions of the card all by itself. The computer, even a pokey 4 MHz machine, was plenty fast enough for this, capable of running the A/D card at full speed, with time to spare. This is even more true today, with cards running 10x as fast as the old ones, and the computers topping 100x as fast. And yet, writing custom drivers, capable of sophisticated custom applications beyond the built-in abilities of the modern cards, seems to be beyond the capability of most programmers these days. Too much time learning to make animated icons and devising viruses carried by Java scripts? Too much attention to becoming certified in an operating system that slows the computer down until it can't keep up with an 8088 running DOS?

My first exposure to this method was on a borrowed computer which took over two minutes to get a crack length estimate, due largely to a poorly-designed data processing strategy. The NorthStar could do the job in 12-14 seconds, a huge improvement, as the crack might grow significantly in a minute at the higher rates. The later implementation, using Pascal on an 8 MHz PC-XT, could do the job in about 2 seconds. Let's see, over a 60-day test, that's over a billion data readings taken, boiled down to something manageable without cluttering up a hard drive or requiring hours of post-processing!

I'm positive, from years of experience using it, that LabTech Notebook Pro cannot handle this application, and my exposure to LabView suggests that it cannot do so either, in spite of its being a big improvement over LabTech. The problem is that the "drag-and-drop" icon system these programs use does not allow the precise sequential control needed, the available data processing icons do not allow the exacting curve-fitting processes to be employed, and their intensely graphics-oriented, Windows-based, full-of-bells-and-whistles approaches hamper the computer so severely that even a Pentium clocking at cell-phone frequencies can't take the data fast enough! But the implementation on the NorthStar took a couple of pages of assembly language and about 8 pages of BASIC. The more sophisticated Pascal implementations run about 40 pages of code, most of which is re-used utilities picked up by every version, with only a few pages of code doing the key data processing.

Variants of the fatigue crack growth program have been numerous. It has been adapted, by changing just a page or two of code, to simple flexural stiffness degradation monitoring, to determining the phase angle between two sine waves, to integrating the energy absorbed by shock-absorbing materials during cyclic loading or even impact.

Fracture Toughness (ASTM E-399)

The second program of the pair originally developed was somewhat less technically challenging. The data were taken more slowly, by a method which might be executed with modern datalogging software, with some success. However, we didn't have commercial datalogging software available at the time, and programming for this application turned out to be a straightforward matter.

Often, rather than wishing to fully characterize the fatigue properties of a material, we simply pre-cracked the specimen, then loaded it to failure. This is the classic ASTM E-399 test, or a variant thereof, and determines a value known as K sub IC. However, our customers were more interested in a variant of this test which determined a value J sub IC, or the "J-integral" (ASTM E-813, now superseded by ASTM E-1820). This test was a bit more delicate, subjecting the cracked specimen to a series of progressively higher loads, enough to propagate the crack but not cause total failure. After each step in loading, the specimen was partly unloaded, the resulting load-displacement curve producing a stiffness, and thus allowing a crack length estimate. Integration of load versus displacement produced a value for mechanical energy input to the specimen, and comparison of that energy input to the crack growth produced data which high-end materials engineers could use to predict the behavior of materials as failure approached.
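
The energy-input step is just the area under the load-displacement curve, and the trapezoidal rule suffices. A sketch of that integration (Python for illustration):

```python
def absorbed_energy(load, disp):
    """Trapezoidal integration of a load-displacement record: the
    mechanical energy put into the specimen."""
    energy = 0.0
    for i in range(1, len(load)):
        energy += 0.5 * (load[i] + load[i - 1]) * (disp[i] - disp[i - 1])
    return energy
```

A linear ramp to 100 lb over 1 inch, for example, integrates to 50 inch-pounds, exactly the triangle area you would expect.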

This application fairly begged for a computer's controlling touch. However, per instructions from the customer and my employer, I instead designed and built a special-purpose hardware function generator that allowed the operator to manually initiate the unloadings and loadings. This process allowed a well-known phenomenon, "operator error", to muck up more than one test. The control process, while not difficult, required concentration, something computers do better than people, and I begged for the chance to teach the trick to the computer, which could have done it via one of the D/A channels with no problem whatsoever.

Alas, the data the test produced were beyond the capabilities of all but a tiny handful of engineers to utilize. It was a great test, but nobody knew what to do with the numbers, and the market suffered.

Nevertheless, the program was the seed for a large family of other, more useful programs. Among these are E-399 fracture toughness, standard tests for tension and compression per ASTM E-8 and E-9, tests of structures in which multiple strain gages or other transducers are used, long-term creep tests, and pressure burst tests. In many of these tests, the computer generated the signal to control the applied forces. Stripped down even further, it became a simple datalogger, recording such things as temperatures from arrays of thermocouples, or power utilization in consumer product tests. One customer even had us do compression failure tests on actual human knee bones.

Even such simple datalogging is frequently best done with software smarter than your average glitzy datalogging package. The usual datalogging software records at a fixed rate. You might set up "stages" of acquisition in which the rate is varied, but the recording process is still fairly "dumb." By contrast, with custom programming, it is easy to set up "triggers" on selected channels. In that way, the program may scan the inputs at a high rate, but only store data when something significant happens on a key channel. Let's say you're monitoring a test in which there will be little or no change for hours, then a few seconds in which all hell breaks loose: you could record 1000 16-bit points a second for hours, or you could be smart and record data only when changes are detected. In the old days, we much preferred the latter course, because our computers and mass storage devices were so limited. Today, computers have more RAM than we had hard drives, and 10 GB hard drives are considered kid's toys, so folks are more likely to squander memory. But it does not take much imagination to realize, given enough sufficiently long tests at high data scan rates, that storing hours upon hours of high-speed data that shows absolutely nothing useful is a foolish waste. Furthermore, custom software easily avoids the problem. The result: instead of data files that won't fit on a CD-ROM, you might record 10 tests on a floppy, and be able to e-mail them in seconds rather than tying up a DSL line for an hour.
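
The trigger logic itself is almost trivially simple, which is exactly the point. A sketch of the store-on-change strategy described above (Python for illustration):

```python
def triggered_log(samples, threshold):
    """Scan at full rate, but store an (index, value) pair only when the
    reading moves more than `threshold` from the last stored value."""
    stored, last = [], None
    for i, v in enumerate(samples):
        if last is None or abs(v - last) > threshold:
            stored.append((i, v))
            last = v
    return stored

# Hours of flat signal plus a brief event collapse to a handful of points.
quiet = [0.0] * 10000
event = [0.0, 1.0, 5.0, 2.0, 0.0]
log = triggered_log(quiet + event, threshold=0.5)
print(len(log))   # 5 stored points out of 10,005 samples
```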

High-Speed Tests

As fast as the 12-bit data acquisition cards were, and modern 16-bit cards are, sometimes they're simply not fast enough. This is especially true in impact situations. Certain of ARTECH's customers were very interested in the behavior of cracks during high-speed catastrophes.

One such test was a modification of the standard "dynamic tear" test, ASTM E-604. In this test, a large weight is dropped down the guides of a guillotine-like test frame. The weight is equipped with a "tup", or striking protrusion, and the specimen, a long, rectangular metal bar with a sharp notch in its middle, rests on a two-point support called the "anvil." The tup strikes the specimen in the middle, opposite the notch, breaking it at the notch. The energy absorbed by this delightfully violent test is determined by the loss of velocity of the weight, or by the resulting height of dead-soft aluminum absorber blocks that stop the final motion. The variant of this test which the customer needed involved pre-cracking the specimen at the notch (using a modification of the fatigue program above), and installation of a strain gage just off the tip of the crack. Propagation of the crack past the strain gage caused a drop in strain reading. We also applied strain gages to the tup, allowing it to be used as an impact load transducer.

The desired data was a record of the impact force and the strain gage readings. By a little clever math, the absorbed energy could be determined from the velocity and mass of the weight and the load versus time trace, and correlated nicely with the total velocity change measured by a pair of flag velocimeters on the machine. Thus, we were able to determine the exact impact energy that caused the final failure mode to begin.
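
The "clever math" is impulse-momentum bookkeeping: integrating the tup's load-time trace gives the weight's velocity change, and integrating force times the reconstructed velocity gives the energy delivered to the specimen. A sketch under those assumptions (Python for illustration; the function and variable names are mine):

```python
def velocity_change(force, dt, mass):
    """Impulse-momentum: trapezoidal integral of the load-time trace,
    divided by the drop weight's mass."""
    impulse = 0.0
    for i in range(1, len(force)):
        impulse += 0.5 * (force[i] + force[i - 1]) * dt
    return impulse / mass

def impact_energy(force, dt, mass, v0):
    """Work done on the specimen: integrate F*v, reconstructing the
    velocity v(t) from the same impulse integral as it goes."""
    energy, v = 0.0, v0
    for i in range(1, len(force)):
        f_avg = 0.5 * (force[i] + force[i - 1])
        v_next = v - f_avg * dt / mass
        energy += f_avg * 0.5 * (v + v_next) * dt
        v = v_next
    return energy
```

For a constant decelerating force the sum telescopes to the change in the weight's kinetic energy, which is the cross-check against the flag velocimeters.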

Related work was even more fun. We hauled the computer and electronics out in the Nevada desert, set steel plate specimens on supporting dies, and set off explosive charges above the specimens. Again, strain gages were situated adjacent to notches in the specimens, and would indicate crack growth. Typically, four strain gages were used per specimen.

Out in a desert, hundreds of miles from the nearest computer store, was a great place to sharpen my troubleshooting skills, when the equipment was rattled to pieces by the delivery company. I can troubleshoot even computers to the component level, if needed.

The data needed were recorded on what was essentially a pair of two-channel digital oscilloscopes. These primitive instruments cost $8000 each, represented the data only to 8 bits (0-255), and stored only about 4000 points per test. Still, they would sample two million times a second, something the 12-bit cards could never do. These instruments proved useful in myriad impact studies, including simulations of BBs striking human eyes, and impact forces imparted by paintballs.

Today, such tests are more easily conducted using digital oscilloscopes, costing a fraction as much, with far more memory, faster, and with more resolution. Such scopes can usually be equipped with an interface to allow the data to be downloaded to a computer, and may also have a removable disk drive of some sort, readable by a computer. Once the waveforms are in the computer, they can be processed, stored, printed, e-mailed, or any other thing the operator desires. Among the interesting options is Fast-Fourier-Transform analysis, the determination of the frequency content of the data.
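
Frequency analysis is worth a small illustration. The FFT is just a fast way of computing the discrete Fourier transform; the slow, obvious form below (Python) shows what the result means -- each bin's magnitude is the strength of one frequency in the record:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Brute-force discrete Fourier transform, returning one magnitude
    per frequency bin (an FFT computes the same thing, much faster)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A pure sine with 3 cycles in a 32-point record puts all of its energy
# into bin 3 (and its mirror image, bin 29).
wave = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
mags = dft_magnitudes(wave)
peak = max(range(32), key=lambda k: mags[k])
```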

Smart Instruments and Controllers

The number of smart gadgets is large and growing rapidly. Some which I have used include remote data acquisition modules, programmable temperature controllers, the SRS RGA-100 residual gas analyzer, and the Ocean Optics S2000 photospectrometer. In addition to computer-interfaced oscilloscopes, many meters today have computer interfaces of various descriptions.

Interfacing methods vary. One common, if primitive, approach is to use RS-232 serial communications, frequently on a really stone-axe level. One temperature controller I've used, and the RGA-100, use this approach. One simply transmits a few characters of information to the device to elicit a response. The response may be for the settings of the device to change, or may be for the device to return data to the computer. These operations were a piece of cake in the old days of MS-DOS, or earlier. You simply wrote about a page of code to transmit and receive data, put out the needed command strings, and waited for a response.
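
The flavor of that page of code survives in any language. A sketch of the framing and parsing half, which needs no hardware to demonstrate (Python; the command mnemonic and carriage-return terminator are hypothetical examples -- every instrument defines its own little language):

```python
def build_command(mnemonic, value=None):
    """Frame a command for a simple ASCII-protocol instrument.
    The mnemonics and the CR terminator here are hypothetical."""
    if value is None:
        return "{}\r".format(mnemonic).encode("ascii")
    return "{} {}\r".format(mnemonic, value).encode("ascii")

def parse_reading(response):
    """Pull a numeric reading out of a terminated ASCII reply."""
    return float(response.decode("ascii").strip())

# With a serial-port library, the whole exchange is write-then-read, e.g.:
#   port.write(build_command("TEMP?"))
#   temperature = parse_reading(port.readline())
```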

Windows messed this up. Windows thinks any activity on the serial ports is intended for it, and it will try to intercept such traffic. It does not do so reliably, however. Programs which communicate reliably under DOS may work for a short time under Windows, then the data degrade to trash as Windows wakes up and starts messing with them. Efforts to dissuade Windows from intercepting the data are often frustrated by Windows' "user-friendly" attempts to configure itself. These problems can be overcome: I bought myself a book on serial communications in the Windows environment. The book is two inches thick. The first few pages are basically a tirade by the author lambasting the way Windows has rendered what once was a simple process into a morass of technical horror.

When I obtained LabTech Notebook Pro for Windows, it was with the assurance that it had tools included for RS-232 communications. Well, it did, sort of. You could drop an icon onto the screen which would cheerfully listen for communications on a serial port. The problem was, there was no provision for transmitting scripts (in this case, a few characters) to request the external device to transmit the needed data! That took custom programming, using the C++ drop-in icons, and even then, the basic construction of LabTech compromised the results. The C-icon had to be basically an independent program triggered by LabTech, and did not integrate gracefully with LabTech itself.

At EMC2, we did use LabView to satisfactorily acquire data from the RGA-100 and S2000 devices. Each of these devices did come with software which operated them well, in their basic functions. However, such stand-alone operation is not always what you need them for. In our case, we wished to perform multi-channel data acquisition of various analog signals, while simultaneously watching emissions spectra and gas composition in the vacuum system, all of the data synchronized so that we could correlate it. We didn't want three computers, each with an operator poised over the keyboard to start the software; we wanted everything to run, in synchrony, at the touch of a single button, from one computer, if possible.

Finally, I've seen some laughable data acquisition solutions in which folks have bought a bunch of digital multimeters (DMMs) with computer interfaces, usually IEEE-488 (GPIB or HPIB). This is expensive, and frequently gives appallingly disappointing results. DMMs generally produce only 3-4 readings per second, so, compared to an A/D card with a 2 microsecond conversion time (up to half a million readings a second), DMMs are pathetically slow. This is especially true when you realize that a DMM with a computer interface typically costs as much as a good 16-channel 16-bit A/D card! That's before you even consider the cabling costs for IEEE-488, or the availability of suitable software to monitor more than one instrument at a time. Also, DMMs do their data conversion differently than the high-speed cards do. DMMs typically integrate over the measurement period, while the cards "freeze" the signal using a sample-and-hold capacitor. A DMM is not giving a snapshot of the instant of acquisition, and may give bizarre results if sampling a rapidly varying signal.
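
The difference between the two conversion schemes is easy to demonstrate numerically. A sketch (Python for illustration): a DMM averaging over its measurement window reports the DC level of a rippling signal, while a sample-and-hold card reports whatever instant it happened to freeze:

```python
import math

def dmm_reading(signal, t0, window, steps=1000):
    """A DMM integrates (averages) the input over its measurement window."""
    return sum(signal(t0 + window * i / steps) for i in range(steps)) / steps

def card_reading(signal, t0):
    """A sample-and-hold A/D card freezes the instantaneous value."""
    return signal(t0)

# 60 Hz ripple riding on a 5 V level: the DMM, averaging over one full
# cycle, reports 5 V; the card can catch the 7 V crest.
sig = lambda t: 5.0 + 2.0 * math.sin(2.0 * math.pi * 60.0 * t)
avg = dmm_reading(sig, 0.0, 1.0 / 60.0)
snap = card_reading(sig, 1.0 / 240.0)    # a quarter cycle in: the crest
```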

So in conclusion ...

I have years of experience in this field. I've used various approaches. Often, particularly if you're interested in simple datalogging, and like the convenience and glitzy graphics, using a package such as LabView makes good sense. However, for really sophisticated applications, custom programming is a powerful technique, able to do operations for which no drag'n'drop icon has ever been invented. It allows integration of many types of transducers and instruments, makes it easy to produce compact data files, and can reduce post-processing of data to a minimum.

People who have never tried it often resist. They fear the complexity. In fact, the process is not actually all that complex! The computer commands are simple, implemented in languages ranging from BASIC to C++, offering absolute control over the sequence of operations, executing any mathematical algorithm or process you can conceive, and the core programs are usually versatile and easily adapted to other uses. And editing a program for a new use is generally just a matter of dressing up a few lines here and there, done as quickly as moving a few icons on a screen.

I find programming to actually be simpler, now that I have at my disposal the basic data acquisition engines. Compared to the hieroglyphic icons of LabView, connected by barely-distinguishable color-coded lines, with boxes inside boxes defining processes, where the sequence in which you created the icons, rather than their organization on the screen, may define the actual order of events, there's something solid and comforting about:

      Repeat
        Step 1;
        Step 2;
        If Needed then Step 3;
      Until Finished;

Finally, Microsoft Windows is a travesty when it comes to data acquisition. Today's ordinary desktop computers would run like yesterday's supercomputers if not for this overly-complicated, graphics-intensive, unstable, unreliable, meddlesome, overbearing Lord of the Microprocessor. Without a doubt, we're stuck with it, and, undeniably, there is a ton of useful software available for it. Windows is part of our lives. And it is useful: I routinely use it to process data and write reports.

However, I simply despise Windows as a data-acquisition platform. First, the question of speed: between mysterious background processes delaying data-taking operations, and the compromised speed of a graphics-intensive environment, Windows degrades computer performance. I puzzled over one computer's tendency, about once every 15 minutes, to suddenly churn the hard drive when it was supposed to be concentrating its full attention on monitoring a critical system. The problem turned out to be a program called FindFast, routinely loaded in the background of Windows machines. Every 15 minutes, it scoured the drive to make a list of Microsoft Office files, just so you could access them a little faster. Never mind that the computer was never used to run these applications, the ordering had to be done! Telling My Computer to turn off this process did not help: it turned itself right back on at the next reboot! I finally renamed FINDFAST.EXE as FINDFAST.JUNK and the problem went away. That's just one of many examples of Windows' hidden agendas and meddlesome background processes.

My experience, and reports I've seen, suggest Windows crashes, on average, about once a day. I've had CP/M data acquisition systems, and even MS-DOS systems, run reliably for months without a failure or a reboot required. The tendency of Windows to turn parts of itself back on after you've disabled them is especially troublesome. And about the time you finally learn to work around one quirk, an upgrade comes along that replaces it with a dozen new ones!

I just got myself a copy of Red Hat Linux, and I can't wait to try it out!
