A Brief Exploration of Boot Time
Posted by: Tom - 04-05-2022, 04:10 PM - Forum: Start Here
- No Replies
In the 1960s, computers were room-sized devices. The IBM 1620 computer that I had access to used punched cards to load the operating system. When the computer crashed, it was usually because of a divide-by-zero error. You would then need to reload the operating system in order to continue. You placed a large stack of computer cards on a card reader to restart. I remember it as a very slow process. Maybe it took 10 minutes to get up and running again. Since our student access to the computer came in one-hour blocks, a crash and reboot always seemed like a major setback.
Fast forward to 1979 when the Apple][ plus was introduced. My Apple][ isn't running at the moment due to the inevitable failed switch-mode power supply, so I can't confirm, but I remember the boot time from the floppy disk to be about 10 seconds. I started Lawson Labs with that venerable Apple][ computer. As time went on, and the business grew, I added an IBM PC, an IBM XT and an IBM AT computer. At one point, I had the four computers in a row. Boot times went up as the computers became more complex. If I started all four at once, the Apple][ would be ready to use first, and the IBM AT last. That pattern mostly tracked for user software, too. If I launched a text editor, or a spreadsheet program, the older the computer, the sooner it was ready to use. The later programs had richer displays, handled bigger files, and were more feature-laden, but for most ordinary tasks, any editor or any spreadsheet would do equally well.
Back in those days of technical optimism, it would have been sacrilege to point out that older was faster. Plus, for a while, it did feel like real progress was being made and that productivity was going in the right direction. Lawson Labs then was focused on data logging. The old way was a guy with a clipboard walking around, reading dials, and recording the results on paper. The new way, using a computer to do the job automatically, was surely an improvement. By the numbers, US non-farm productivity growth was unusually strong in those years. But productivity growth peaked in 2002 and has not recovered as of this writing, twenty years later. We all know that it is far too easy to lie with statistics, but it may be that there is some truth in the idea that modern computers are actually slowing us down. How could that be?
First, let's take a step back and ask what the boot process amounts to. In early computers, permanent storage of computer code came at a premium. The Apple][ had a 2 kByte Read Only Memory chip called the boot ROM, or autostart ROM. It contained the code that would run at startup. One of its essential jobs was to allow more of the operating system to load from a floppy disk into volatile computer memory, i.e., RAM. The term “boot” refers to pulling oneself up by one's bootstraps, a process that may seem a bit mysterious, but Apple let in the light by printing the “firmware” programming code for the boot ROM in the manual. There were no Computer Science Departments in those days. Many of us studied that manual to learn the art of assembly language programming. Anybody know what 3D0G has to do with all this? (Hint: it is the command to start the Apple][ mini-assembler located in the monitor ROM. The code for the monitor ROM was also listed in the reference manual. 3D0 is the hexadecimal address, and G is the “go” command. Coders receive a pulse of adrenaline when they hit "G". With a mistake, you might end up anywhere.)
Early personal computers, including Apple and IBM, ran their operating systems primarily out of RAM. They did not necessarily even have a disk drive. It is easy to forget that the IBM PC had a tape recorder connector on the back to plug in a cassette for permanent storage. Access to data on a cassette was necessarily slow, and though the disk drives of that day were much superior to a cassette, they were neither fast nor high capacity. Nonetheless, back then, if you saved something to the disk, it was actually written to the disk.
In more recent operating systems, in order to speed up operation, data written to a disk file will usually go into a buffer, and not actually be saved until later, because writing one larger block after it has accumulated is faster than writing many smaller blocks as you go along. That particular change doesn't have much direct impact on boot time and start up, but it does slow shut down, because all file buffers must then be flushed to the disk, and it can indirectly affect start-up time in a major way. That buffering causes indeterminate conditions for any disk files that are open when the computer crashes or when the power is cut without warning. Recovering from that sort of event requires extensive file checking the next time the system is started. Scanning a large disk drive for that class of errors can take dozens of minutes. Rebooting in those circumstances becomes extremely slow.
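To make that buffering concrete, here is a minimal Python sketch (the file name is made up): data handed to write() is not on the disk until the buffers are explicitly flushed and synced, and that gap is exactly the window where a crash leaves a file in an indeterminate state.

import os

with open("datalog.txt", "w") as f:
    f.write("reading 1\n")   # sits in the program's own user-space buffer
    f.flush()                # hands the bytes to the operating system's cache
    os.fsync(f.fileno())     # asks the OS to commit them to the physical disk

Skipping those last two calls is what buys the speed, and what creates the cleanup work at the next boot.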
In a complex system with many interlocking parts, fractured data files can cause non-obvious symptoms. Further, the sequence necessary for error recovery may also be far from obvious. We are all familiar with the scenario where it takes many attempted restarts before a modern Windows operating system can recover after a crash. Here, we have situations where the boot time can be measured in hours, not seconds. We won't go into System Recovery Points here.
Next, a look at latency: Academics refer to Human-Computer Interaction, or HCI. The person hits a key on the keyboard. Then something happens. Maybe it is only that the keystroke is echoed to the screen, or maybe some other response is expected. The delay before the response is called System Response Time, or SRT. As a rule of thumb, a maximum keyboard SRT of 0.1 seconds is considered to be tolerable. An SRT of 1 second is a real annoyance, and can cause substantial reduction in productivity. Longer than a few seconds, the person's concentration is broken, and they will likely switch tasks while thinking dark thoughts.
Back when digital circuitry was first replacing analog, it was generally understood that latency needed to be unnoticeable in order to be acceptable. When you flip an analog switch, the result is expected to be immediate. (We can make an exception for fluorescent lights.) The computer keyboard has a buffer, not unlike that disk file buffer described above. The keystrokes first go into that buffer, and then are handled by the system. You won't notice any keypad latency on a pocket calculator, because the system was designed not to have any. Early PCs were much the same.
I just searched for the history of computer latency to see if I was missing something before posting these thoughts. I found a study of keyboard latency done in 2017. Dan Luu, in New York City, measured latency for twenty-two computers, going back to the Apple][e. The ][e came in fastest at 30 ms, while the newest computers show 90 to 170 ms latency. So it isn't just me.
Somehow, over the decades, we have become inured to computer latency and even outright non-responsiveness. Virus scans or system updates or who-knows-what in the background can bring Win10 to a halt. Worst is when you turn on the computer in the morning and after it takes forever to boot, it then ignores your input while taking care of some invisible, long-running background task. Psychologists study the resulting human stresses. Efficiency experts tear their hair. Telephone support personnel make small talk to avoid dead air. Ordinary folks get out their cell phones while they are waiting.
Maybe it is time to order a new power supply for that Apple][.
Tom Lawson
April 2022
The Fundamentals of Digital Electronics, Part 6 - Tri-state
Posted by: Tom - 03-17-2022, 04:34 PM - Forum: Start Here
- Replies (1)
The Fundamentals of Digital Electronics, Part 6 - Tri-state
Part 5 of this series got us to microprocessors. That is a big step. More than a few new approaches were needed in order to make microprocessors practical. One of those new concepts was that of the tri-state logic circuit. A tri-state output can be high or low, like ordinary logic outputs, but it also has a third condition. That third state is called high-impedance. In plain terms, that means the output is simply inactive, as if the output were disconnected from the circuit. Tri-state is necessary to employ a data bus, because outputs from different logic circuits will drive the bus at different times. Some say the invention of the number zero enabled modern mathematics. In something like that fashion, tri-state outputs upped the game for digital logic.
The concept of tri-state is inseparable from the idea of an interconnecting bus, but how did we get here? As digital circuitry matured, the individual digital chips offered more function, and had more pins for interconnection. The digital interconnection problem went way beyond the previous analog situation. A transistor has three terminals. A resistor or capacitor has two. It is not that difficult to learn how to arrange analog components on a circuit board in order to achieve the desired circuit. Even with the advent of integrated operational amplifiers, the interconnection problems were manageable because an op amp typically had 8 pins. Digital ICs usually started at 14 pins and went up. A 7400, the first part number in TTL logic, contains four NAND gates with three pins each, plus two power pins. The interconnections between a field of logic gates are often a chaotic tangle. Those four gates might be used in different parts of the circuit, while the analog equivalent tends more toward a flow from input to output.
To deal with hand-building a digital prototype, a technique called wire wrap was often used. Wire wrap was borrowed from telephone switching circuitry, which also requires a very large number of interconnections. You ended up with a pile of tangled wire, even if you were careful to be systematic. Wire wrap was not fully reliable unless done exactly right, and was hard to rework or troubleshoot. It has mostly disappeared since the 1970's, and is not widely missed.
Once you got your circuit working, and laid it out on a printed circuit board for production, needing all those connections made it hard to pack the logic gates into a reasonable area. Replacing an analog function with a digital function required fitting the equivalent circuitry into something like the same space. Digital couldn't be bulkier if it was going to be seen as better. That's where the data bus came in.
Individual digital packages could contain ever more complex functions, and they could all share a set of generalized parallel circuit traces on a circuit board, but in order to share, the digital circuits needed to be able to get out of each other's way. The means for achieving that cooperation is tri-state outputs.
In a microprocessor-based system, a Central Processing Unit, or CPU, controls the address bus. A particular location is written to the address bus, and a strobe is asserted by the CPU that tells the addressed device to read the bits on the data bus, or to write its own bits to the data bus. Let's dig a little deeper there.
The device mentioned above would be what is called a peripheral. That is, a circuit that works in concert with the CPU to provide some additional functionality. Let's say it is an external memory register. The register stores the last word that was written to it, and provides that word to the data bus when requested. So, a storage register will serve as our example device. Next, how does the address bus work? In a simple computer system, only the CPU drives the address bus. External devices connected to the address bus are inputs only. No tri-state needed. Decoders are used to determine which device is being addressed by the CPU. A decoder is basically a multi-bit digital comparator. It takes in a certain number of address lines and compares them to a predetermined pattern corresponding to a range of addresses. The ultimate output of an address decoder is a digital signal, or strobe, that says either, yes, the CPU is talking to this register, or no, it is not. In the case of a memory register, there would be a read strobe and a write strobe to tell the register whether to store a word from the data bus or to write the word previously stored to the bus.
No single peripheral device needs to fully decode the address bus. That would require too many connections and too much redundant logic. Instead, blocks of addresses are decoded together, and subsequent decoders separate smaller blocks until an individual address is identified. When a peripheral device is told to write to the data bus, it has to be the only device that is writing at that time. If more than one device writes simultaneously, the fight between outputs will result in indeterminate data. Everything else on the data bus must be in the tri-state condition for a selected device to write successfully.
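Here is a minimal Python sketch of that arrangement, generic rather than tied to any particular chip: a decoder compares the address to each register's assigned location, only the selected register drives the shared data bus, and None stands in for the high-impedance state.

class Register:
    def __init__(self, base_address):
        self.base = base_address
        self.value = 0

    def selected(self, address):
        return address == self.base          # the "decoder": compare the address lines

    def drive(self, address, read_strobe):
        # Return a byte only when selected and read; None models high impedance.
        return self.value if (self.selected(address) and read_strobe) else None

def bus_read(devices, address):
    driving = [v for v in (d.drive(address, True) for d in devices) if v is not None]
    if len(driving) > 1:
        raise RuntimeError("bus contention: two outputs fighting over the data bus")
    return driving[0] if driving else None   # nobody driving leaves the bus floating

registers = [Register(0x10), Register(0x11)]
registers[0].value = 0xA5
print(hex(bus_read(registers, 0x10)))        # 0xa5

If two devices ever drive at once, the model flags the same contention the real hardware would suffer.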
So, the tri-state driver is essential for the operation of a data bus, and a data bus is essential for the operation of generalized, miniaturized digital circuitry. These relatively simple concepts were quickly overwhelmed by more complex enhancements. For example, Direct Memory Access, DMA, was used to allow peripherals to take control of the address bus and read or write to memory independent of the CPU. That was done to increase the speed of data transfers, so the timing all needed to be worked out carefully to avoid extra delay, but also to prevent outputs fighting over the bus. Timing diagrams for early microprocessor systems taking all these details into account can appear overwhelmingly complex. Other enhancements involve what are called interrupts, which preempt what the CPU was doing and send it off temporarily to another task. More recently, more functionality has been integrated with the CPU, so the external interface can seem simpler.
For perspective, here is a list of microprocessor peripheral devices from a 1980's data book:
DMA Controller
Asynchronous Serial Communications Controller (serial ports)
Counter/Timer
Video System Controller
System Timing Controller
Numeric Data Co-processor
Universal Interrupt Controller
Hard Disk Controller
etc.
More and more integration has eased the designer's task, and shifted the playing field for how the user interacts with the system, but remember that you can build it all from the ground up by combining three simple logic functions, AND, OR, and NOT. After layer on layer of integration, the personal computer was the most visible result of that evolutionary process.
Apple and IBM became the main players for personal computers. PCs put the new microcomputers to work for ordinary users. Apple and IBM had different ideas about how to best go about it, which would be a good topic for future treatment here.
Tom Lawson
March 2022
The Fundamentals of Digital Electronics, Part 5 - The Microprocessor
Posted by: Tom - 03-15-2022, 08:32 PM - Forum: Start Here
- No Replies
The Fundamentals of Digital Electronics, Part 5 - The Microprocessor
Part 4 of this series got us to the digital calculator, the first widespread application of highly integrated digital logic. A calculator chip is dedicated to a single purpose. Input is from a keypad, and output is to a display, and the processing is basic arithmetic. The success of the pocket calculator was due to small size, low cost, and ease of use. The next big digital development was the generalized miniature digital processor, which could be programmed to do almost anything. One of the first high volume applications for microprocessors was gasoline pumps. When putting gas in your car, you might not notice the difference, but the new pumps were cheaper, more reliable and more flexible.
All that flexibility brought a huge step forward in possibility, but a big step back in the convenience of putting these new devices to work. Mainframe computers of the day used paper tape, punch cards, and modified electric typewriters for input. For output there was usually a noisy, wide-bed printer that required special fanfold, pin-feed paper. None of those input or output devices were practical for use with microprocessors. CRT computer monitors for displaying text didn't appear on the market until the 1970's. (It is easy to forget that television sets had only been available since the 1940's.) So when, at about that time, microprocessor chips became available and affordable, how could a person put one to use?
In the 70's I was working at a medical electronics company, one of whose products was called a cardiac output computer. The computations were all analog, but the marketing department wanted trendy digital computation. We didn't have the budget for a rarefied development system with a CRT display, even if one had been available. Instead, phase one was designing and building an affordable development system that we could get up and running quickly.
We used a National Semiconductor SC/MP, the first microprocessor selling for under $10. The processor was an 8-bit device. That is to say that data was handled 8 bits at a time. Memory was a block of 8-bit storage registers addressed by what is called an address bus. The SC/MP address bus was 16 bits wide, allowing access to 65,536 storage registers. The memory itself was not included, and memory chips were expensive. We had under one kByte of physical memory actually in place for data storage. There was a central processing unit which understood 56 simple instructions. Some instructions were arithmetic, like ADD, some were logical, like OR, and some were for flow control, like JZ - jump if zero.
We put LEDs on the data bus. That worked pretty well for an 8-bit word. With practice, you can mentally translate back and forth between binary and decimal when dealing with 8 bits at a time. But 16 LEDs on the address bus would have been too much to handle. We built a 13-bit binary-to-BCD encoder out of a pile of discrete TTL family logic encoder chips. (Low power TTL wasn't available yet.) The encoder enabled a 4-digit LED 7-segment display to show the lower 13 bits of the address bus, up to location 8191, in decimal. Eight telephone switches allowed for data entry. (Telephone switches are obsolete now, too. I imagine most people today have never seen one.) A button caused the processor to execute the current instruction, called single-stepping. We entered the program code one binary byte at a time. Care was essential, since a mistake could send you into a state beyond recovery. A car battery backed up the memory. If power was lost, you had to start over, since there was no non-volatile storage like a cassette tape or a disk drive. Even if we could have afforded a disk drive, there was no operating system, so it wouldn't have been any use without a lot more work. Non-volatile memory was then available in the form of PROM, Programmable Read Only Memory. You only got one try; a PROM with a mistake in it was useless. EPROM, or erasable PROM, became practical only a little while later.
Troubleshooting code was painfully slow. Either you single-stepped, one instruction at a time, or you inserted Halt commands, and allowed the system to run until it hit a Halt. We inserted blocks of NOPs, for no operation, to leave places for corrections and for later additions. A jump instruction would skip over the block of NOPs so you didn't have to step through. If there was a mistake, and the processor got lost, you could overwrite sections of your program. The more code we wrote and tested, the more time it took to recover from such a mistake. So much for ease of use!
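For a feel of what that was like, here is a toy fetch-and-execute loop in Python that single-steps a made-up instruction set (NOP, ADD, HALT); it is an illustration, not the real SC/MP.

NOP, ADD, HALT = 0x00, 0x01, 0xFF

memory = [ADD, 5, ADD, 7, NOP, NOP, HALT]    # the program, entered one byte at a time
pc, acc = 0, 0                               # program counter and accumulator

def single_step():
    """Execute exactly one instruction, the way the push button did."""
    global pc, acc
    op = memory[pc]
    if op == NOP:
        pc += 1
    elif op == ADD:
        acc = (acc + memory[pc + 1]) & 0xFF  # 8-bit wraparound
        pc += 2
    elif op == HALT:
        return False
    return True

while single_step():
    print(f"pc={pc:02d} acc={acc:3d}")       # standing in for the LEDs and 7-segment display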
There were many lessons to be learned from the process. Most obvious was the tradeoff between time spent on improving the development system and how long it took to write and test code. But the implications of the second major difference between a microprocessor-based digital system and the analog circuitry it replaced became increasingly important. Analog calculations are continuous. Digital calculations proceed in a series of discrete steps over a period of time. If the processor is fast enough, that difference doesn't appear to matter that much, but the more you give the processor to do, the more the processing time becomes a factor.
We bogged down our microprocessor early in the game. It had a 1 Megahertz clock and most instructions took several cycles. One task was establishing a baseline before the measurement. Another was taking the integral of an input signal minus the baseline. Another task was taking the differential of that input signal. A third task was combining the integral and the differential results and scaling them according to an earlier calibration. Input controls needed to be responded to, and outputs and status indicators needed to be updated and displayed. The processor had to keep up in real time while the data was being gathered at hundreds of points per second. Then, it had to complete its calculations and display results promptly. The doctors using the new digital version were accustomed to the earlier analog computers, so they expected to see their results immediately. We added circuitry to help out with speed.
By the end, we had added an analog integrator and an analog differentiator and an analog multiplier/divider. The microprocessor managed the system, and took care of the display. It turns out that a digital system doing 16-bit calculations with an 8-bit data bus requires a lot of time just to do the simple arithmetic. We actually considered interfacing a calculator chip to work the numbers. That turned out to be a general need. A few years later there were numeric co-processors paired with microcontrollers for speeding up calculations.
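A small sketch shows why. To add two 16-bit numbers, an 8-bit machine adds the low bytes, notes the carry, then adds the high bytes, and every byte crosses the data bus separately. The values below are arbitrary.

def add16(a_lo, a_hi, b_lo, b_hi):
    lo = a_lo + b_lo
    carry = 1 if lo > 0xFF else 0            # the carry bit that links the two 8-bit adds
    hi = a_hi + b_hi + carry
    return lo & 0xFF, hi & 0xFF

lo, hi = add16(0xF0, 0x12, 0x34, 0x01)       # 0x12F0 + 0x0134
print(hex((hi << 8) | lo))                   # 0x1424

Multiplication and division are far worse, which is why a hardware co-processor, or even a calculator chip, looked attractive.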
A whole range of other issues cropped up in the course of the microprocessor-based cardiac output computer project. Many of those issues are now commonplace and well-understood in what is now called computer science. In the mid 1970's, the entire enterprise was new and exotic.
Tom Lawson
March 2022
The Fundamentals of Digital Electronics, Part 4 - Counters
Posted by: Tom - 03-14-2022, 05:39 PM - Forum: Start Here
- No Replies
The Fundamentals of Digital Electronics, Part 4 - Synchronous and Asynchronous Counters
In part 2 we covered an edge-triggered flip/flop, generally called a type D flip/flop, or a type D latch. We also showed how a type D flip/flop can divide a frequency by two. In part 3, we showed a digital oscillator, or clock generator circuit. Here, we will examine counters for timing functions, and compare synchronous and asynchronous approaches to timing circuits.
A counter is a string of latches connected to a timing signal. If your clock runs at 1 Megahertz, and you divide by two with a D-flop, you have a 500 kHz signal. That can be fed to another divide-by-two stage for 250 kHz, then 125 kHz, etc. The simplest counter is called a ripple counter. A four-stage ripple counter will divide by 16. See below:
The waveforms look simple enough, too. Here we start from 1 MHz:
Each succeeding stage cuts the output frequency in half. Digital timing circuits generally use oscillators stabilized by crystals or resonators which are available at specific frequencies. Knowing the input frequency, and being able to count a certain number of pulses, allows accurate and reproducible timing.
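Here is an idealized Python model of that behavior (a sketch of the principle, not the schematic above): each stage toggles when the stage before it falls, so four stages divide the input frequency by 16.

stages = [0, 0, 0, 0]                 # stage outputs, stages[0] being the fastest

def clock_falling_edge():
    # The input clock's falling edge toggles stage 0; every stage that falls
    # from 1 to 0 ripples a toggle into the next stage.
    for i in range(len(stages)):
        stages[i] ^= 1
        if stages[i] == 1:            # this stage rose, so the ripple stops here
            break

for edge in range(16):
    clock_falling_edge()
print(stages)                          # [0, 0, 0, 0] again: 16 input edges per full cycle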
The analog timing circuits that digital timers replaced depended on resistors and capacitors to create timing delays. The delay is proportional to the resistance times the capacitance. Component tolerances and temperature effects alter the timing. Also, longer delays require higher component values. Measuring long periods with analog delays is especially difficult. You can't make a resistor value arbitrarily large. At some number of Megohms the leakage currents become dominant. In theory you can make a capacitor arbitrarily large, but it becomes expensive and bulky. Very long analog delays require voltage sensing with excellent long-term stability. The voltage sensor in the timing circuit will be waiting for the end of a long exponential tail. A little bit of offset drift can then cause a large timing error. With the advent of CMOS logic chips, we jumped at the chance to use ripple counters with many stages to count long delays.
A CD4020 is a ripple counter with 14 stages, which allows a 32,768 Hz crystal to be divided down by 2 to the 14th, for an output frequency of 2 Hz. It takes a lot of stages to get down to that half-second period, but it is only one 16 pin chip. The propagation delay in a CD4020B chip running on 5 volts could be as high as 360 ns. By the time a clock edge propagated through all 14 stages, 5 microseconds could pass. That can matter more than it might seem.
If you need a timing period that is not one of those available when dividing a standard crystal or resonator frequency by some power of two, you need to add gating for that. In the simple example above, let's say you put an AND gate on the 8 us period and the 1 us period, as below:
That should be straightforward, no? Well, actually not. See the waveforms:
The first stage of the counter acts a little bit earlier than the later stage, due to propagation delays. As a result, the first stage output rises before the third stage drops, so you see an unwanted slice at the transition. Unwanted slices are bad news in edge-triggered circuits. The more stages, the more delay, and the heftier the slice becomes. Problems of this sort can be hard to troubleshoot because they create marginal circumstances that can act inconsistently. Also, narrow, rare spikes are hard to see on an oscilloscope. The answer is a different kind of counter, called a synchronous counter.
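Before moving on, here is a generic Python sketch of where those slices come from (not the exact gates in the figure): on the transition from count 7 to count 8, the stages settle one propagation delay apart, so for a moment the outputs pass through counts that were never intended, and anything gating on those outputs can emit a brief false pulse.

T_PD = 360e-9                          # worst-case delay per stage, the CD4020B figure above

def ripple_transition(start_bits):
    """List (time, state) snapshots as a ripple counter settles after one clock edge."""
    bits = list(start_bits)            # bits[0] is the first (fastest) stage
    snapshots, t = [], 0.0
    for i in range(len(bits)):
        t += T_PD
        bits[i] ^= 1
        snapshots.append((t, tuple(bits)))
        if bits[i] == 1:               # this stage rose; the ripple stops here
            break
    return snapshots

for t, state in ripple_transition((1, 1, 1, 0)):    # count 7, about to become 8
    print(f"{t*1e9:5.0f} ns  {state}")
# The intermediate states read as counts 6, 4, and 0 for roughly 360 ns each -
# plenty wide enough to clock an edge-triggered circuit by mistake.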
A synchronous counter uses a common clock to update all the stages at the same clock edge. It adds quite a bit to the internal complexity of the counter to accomplish that. For example, an ordinary 74HC162 type 4-bit synchronous counter has 4 internal type D flip/flops, just like the ripple counter shown above, but it also requires 33 other logic gates of various descriptions to make it all work synchronously. Once the principles are understood, a designer uses the counter chip as a building block, so that extra complexity is hidden.
Various special-purpose digital counters were, and are, offered for particular applications. Those include decade counters with divide-by-ten gating built-in, up/down counters, and Binary Coded Decimal (BCD) counters. BCD counters are intended for use with digital displays where any given digit is a number from 0 to 9. A BCD counter wraps around from 9 back to zero. There are also counters with what are called 7-segment decoded outputs. That means the BCD or hexadecimal digit held in the 4-bit counter appears at the output pins as the state of the seven segments of a displayed character. For hexadecimal, above 9, a digit counts A, B, C, D, E, F, and then back to zero.
Using synchronous counters allows arbitrary gating of periods, but even then, care is still required to avoid very brief slices. Most ordinary synchronous counters come 4 stages to a package. It would take four four-bit chips to exceed those 14 stages available from the CD4020. Higher density digital circuitry came to the rescue. The 8254 timer chip (now obsolete), as used in the original IBM PC, is a 24 pin chip with three synchronous programmable 16 bit timers. With 48 stages of synchronous counting available, digital counting quickly took over most timing applications.
By now we have covered the fundamental building blocks needed for the evolution of digital processing. Simple logic functions can be combined to obtain complex logic functions. Latches store information. Oscillators and counters provide timing, and analog-to-digital circuits bring in data from sensors. Numeric or alphanumeric displays present the results. The first blockbuster digital computing application was the pocket calculator introduced in the early 1970's. The promise of the new digital technology was quickly evident. It took only 6 more years to get to the introduction of the Apple ][ personal computer.
Subsequent discussions might show how an early, simple microprocessor could be built and programmed to perform useful tasks. This isn't ancient history; it all took off in the 1970's.
Tom Lawson
March 2022
The Fundamentals of Digital Electronics, Part 3 - an Oscillator
Posted by: Tom - 03-10-2022, 07:23 PM - Forum: Start Here
- No Replies
The Fundamentals of Digital Electronics, Part 3 - an Oscillator
In part 1 we considered a simple flip/flop, which has two stable states, set and reset. (For that reason a flip/flop is also called a bistable.) In part 2 we looked at edge-triggered flip/flops. Those need a clock to run them. That will be our subject here.
A monostable is a variation on a bistable with one stable state and one transitory state. Monostables are also called one-shots. You trigger a one-shot and it changes state for some period of time, then reverts to the initial, stable state. An oscillator is a further extension. It continually switches states because neither state is stable. A digital oscillator can also be called an astable, or a multivibrator.
First, a bit of background. Once you begin building up more complex functions out of simple AND, OR and NOT functions you find that the tiny delays (called propagation delays) intrinsic to the logic functions begin to add up when you string a number of functions in series. A logic system with lots of independent logic functions (or logic gates) is called asynchronous, or unclocked. It is the designer's burden to make sure that no transitory indeterminate states can exist which might result in logical errors. Once a system gets complex enough, that task becomes over-burdensome. The solution is a clocked, or synchronous system, where state changes all happen at the same time. That way, any propagation delays can ripple through the logic before the next decision point. Modern computers are synchronous devices.
The point of the digression above is that a digital clock signal becomes an important building block before you can go too far with digital circuitry. A digital clock signal is an alternating series of ones and zeros at a particular rate or frequency. We can transform our bistable into an astable by adding two capacitors. Here is the original bistable from part one:
The concept is that a change of state should trigger the next change of state, but only after a short delay. The result is set, pause, reset, pause, set, pause, etc. No external inputs are required, so, we can remove the set and reset inputs. To establish the delay periods, we add two capacitors. The modified circuit now looks like this:
The bases of the two transistors are labeled BQ1 and BQ2. When Q1 turns on, Q2 is forced off by capacitor C1, which couples the negative edge at Q bar to the base of Q2, ensuring that Q2 stays off until resistor R4 can turn it on again. Once the current in R4 has drawn the voltage at BQ2 up, Q2 turns on, forcing Q1 off by the same mechanism in mirror image. If you find that hard to follow, think of the same circuit without the capacitors. You then would have a flip/flop with simultaneous set and reset, causing both transistors to be on. Adding in the capacitors simply forces the other transistor off for a short time after one transistor turns on. That drives the alternation. Here are the waveforms:
Q and Q bar are seen to toggle together, one always high while the other is low. The base of Q1, BQ1, shown in red, drops suddenly when Q2 turns on, then, voltage BQ1 climbs as capacitor C1 is charged by resistor R3. Once transistor Q1 turns on, at about 0.6 volts, the circuit toggles. The process is then repeated at BQ2. Note that the bases go quite a bit below zero volts here. If that is of concern, diodes can easily be attached to the bases to keep them from going more than slightly negative.
You may note that the Q and Q bar signals are not quite symmetrical. That is because they pull down hard when the transistors turn on, but they rise more slowly when they are pulled up by resistors R1 and R2. Smaller value pull-up resistors result in faster operation, but also increase the power supply current required. The tradeoffs between speed and power consumption have gotten a lot of attention over the years, resulting in more sophisticated oscillator circuitry, but the fundamental principles stay the same.
The frequency of operation is set by the RC time constants of R1 and C1 and R4 and C2. Using equal values for both pairs results in a duty cycle near 50%, which is usually desirable. If you want a shorter or longer on or off time, the ratios can be adjusted accordingly. As a practical matter, the stability of the frequency of oscillation over temperature and the sensitivity of that frequency to changes in power supply voltage are matters of some importance. Those subjects are beyond the scope of the discussion here.
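For a rough number, the usual estimate for this style of astable is that each half period is about 0.693 times the base resistor times its coupling capacitor. The sketch below uses generic names and example values, not the exact designators in the schematic above.

import math

def astable_frequency(r_base1, c1, r_base2, c2):
    period = math.log(2) * (r_base1 * c1 + r_base2 * c2)   # sum of the two half periods
    return 1.0 / period

# Equal 100 kilohm base resistors and 10 nF coupling capacitors, for example:
print(f"{astable_frequency(100e3, 10e-9, 100e3, 10e-9):.0f} Hz")   # about 721 Hz, near 50% duty cycle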
The other aspect of a digital oscillator that may complicate the issue is startup. In SPICE simulation, and occasionally on the bench, an oscillator may find an intermediate stable condition in between on and off. That is almost always bad news for a clocking circuit. Typically, it takes a little extra push to get an oscillator started, but once running it will continue indefinitely. That little extra push can be a bit of noise, or any little mismatch between the set and reset action. In SPICE, setting an initial condition is often required to start the circuit. The initial condition statement in the SPICE figure above, .ic v(Q)=5, is what gets it started. Otherwise, ramp the power supply to begin, and insert a slight mismatch in component values. Then the simulated oscillator should start reliably.
That edge-triggered flip/flop from part 2 would be a good place to use the digital clock signal generated here. See part 4, Counters.
Tom Lawson
March 2022
What Does It Take to Get 24 Real Bits of Resolution?
Posted by: Tom - 02-21-2022, 04:36 PM - Forum: Start Here
- No Replies
Here, we take a closer look at the Lawson Labs Model 201, and analyze the various design considerations that went into maximizing the usable resolution. As the core of a data acquisition system, the Model 201 performs A/D conversion, data handling, and digital input and output. Expansion boards can be added for analog output or other special functions. The questions addressed here are directed towards what you actually need to make high resolution and accuracy practical.
Analog inputs
24 bit A/D chips that are intended for digital audio are not specified for DC accuracy. You may be able to get reasonable reproducibility out of an audio chip, but guaranteeing absolute accuracy is out of the question without over-the-top calibration procedures. The AD7712 is specified for DC accuracy, so it is a good choice for instrumentation. It has built in self-calibration circuitry, which we employ. However, the input impedance is not constant, but rather depends on the data rate because of capacitive sampling of the input. That sampling can interact with the input voltage source and reduce the real resolution and accuracy. Anyway, the input impedance is not nearly high enough to take full advantage of the accuracy and resolution of the A/D converter when used with higher impedance voltage sources.
That means a high-impedance buffer must be placed in front of the A/D converter. That buffer will introduce small offset and gain errors, so the built-in self-calibration circuitry can no longer do the whole calibration job. We'll come back to that. The high-impedance buffer section needs to have excellent common mode rejection. Let's take a minute to explain what that means. A single-ended A/D input measures voltage in comparison to a ground potential, theoretically zero volts. The problem with that is that no two points can be relied on to be at the exact same potential. A 24-bit A/D converter with a +/-5 volt range, like the Model 201, resolves less than 1 microvolt. It doesn't take much resistance or current flow to create a 1 uV voltage drop across a wire. There are also inevitable AC voltages present, but we talk about AC noise in other places, so will stick to the DC effects here. In order for the last few bits of a 24 bit A/D to have real meaning, you need to make a differential measurement, instead of relying on a hypothetical zero-volt ground. As you might guess, a differential measurement responds to the difference between the plus input and the minus input. For an ideal differential input, you could connect both inputs to a +/-5 volt sine wave and the reading should stay zero, regardless.
In practice, the gain from the minus side will not perfectly match the gain from the plus side, so some small remnant of the common mode input sine wave would show in your readings. We need that remnant to be vanishingly small in order to be able to ignore common mode effects. In the Model 201, the residual common mode error is minimized with a hardware trim. That leaves the offset and gain errors that occur in the added high impedance differential buffer section. You can try to trim out these errors in the hardware, but the process is necessarily imperfect, especially when you measure over a large temperature range. Also, offset and gain trims interact, and trimpots can change resistance over time and as the result of vibration. Best is to include the offset and gain errors in the self-calibration process. Again, that means the built-in self-calibration cannot do the whole job, if you are going to maintain the maximum DC accuracy.
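Some quick arithmetic shows the scale of the problem. The 120 dB common mode rejection used below is an assumed, illustrative figure, not a Model 201 specification.

span = 10.0                            # the +/-5 volt input range
lsb = span / 2**24                     # the width of one 24-bit code
print(f"1 LSB = {lsb*1e6:.2f} uV")     # about 0.6 uV

cmrr_db = 120.0                        # assumed common mode rejection ratio
common_mode_swing = 5.0                # volts applied to both inputs at once
remnant = common_mode_swing / 10**(cmrr_db / 20)
print(f"remnant = {remnant*1e6:.1f} uV, about {remnant/lsb:.0f} LSB")

Even at 120 dB, a 5 volt common mode swing leaves a remnant of several counts, which is why the hardware trim matters.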
A multiplexer is placed in front of the new amplifier so that calibration signals can run through the input buffer to the A/D. In the case of the Model 201, there is an 8-channel differential multiplexer with six channels for general-purpose input and two channels dedicated for calibration. All six available inputs have series protection resistors. These do not introduce noticeable error because of the extra-high input impedance buffers that follow. Then, the entire system is calibrated by both the internal self-calibration, and by the external signals, as managed by the external microcontroller.
To sum it up, the front end of the circuitry includes a protected multiplexer, a high-impedance differential amplifier stage, and calibration signals. The output of all that is fed to the A/D chip, under microprocessor control. Actually, there are two more circuit blocks between the input section and the A/D chip.
Overvoltage protection
Over enough time, all sorts of mishaps at the analog inputs are bound to occur. The easy way to protect the A/D chip is with clamp diodes to the supply rails. Clamp diodes can leak enough current to add measurable error to a high impedance input. Also, clamp diodes turn on incrementally over a range of voltage and may not save the A/D chip when the transient hits. A better clamp, for more reliability and best accuracy, requires a comparator and an analog switch to guarantee proper protective clamping.
Programmable filter
Delta sigma converters like the AD7712 are extremely effective at rejecting most frequencies of input noise. Still, for any given data rate, there are certain frequencies that elude the digital filtration. The solution to that problem is an analog low pass pre-filter that will remove the unwanted frequencies. Active filters introduce DC errors, so we avoid them. A simple one-pole RC filter will do the low pass job, but because the Model 201 is digitally programmable over a wide range of data rates, different filter constants are appropriate for different circumstances. So, the Model 201 has three programmable analog filter time constants. That added functionality involves adding a high-quality polypropylene filter capacitor and a mechanism to switch in different series resistors.
Power input and power supplies
The power input is given special treatment. First, we protect against reversed connections, then against overvoltage. Third, we pre-regulate at 24 volts, for any case where the input voltage is higher than that. Then, a 5 volt standby supply is produced. The standby supply is always on to keep the microcontroller active and checking for serial commands. If there is serial activity, the microcontroller powers up the rest of the power supply chain. The preconditioned input voltage of 14 to 24 volts is regulated at 12 volts. From that, a charge pump produces a -12 volt supply. Those +/- 12 volt supplies are regulated, but not precisely regulated. The +/- 12 volts is re-regulated to +/-6.2 volts to power the sensitive analog circuitry. An analog 5 volt supply is also derived, using a precision 5 volt reference that runs from the +12 volts. There are two more reference supplies needed by the A/D converter in order to take full advantage of its dynamic range. Those are at +/- 2.5 volts. Finally, a precision 5 volt reference output is provided for off-board circuitry.
It adds up to 12 power supply voltages total. In addition, various connections to the supplies are decoupled from each other with small-value series resistors and capacitor filters. A typical decoupling network might be a 10 ohm series resistor with a 10 uF and a 0.1 uF shunt capacitor. Decoupling networks eliminate unwanted interactions that can propagate via the power supplies.
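Treating one of those networks as a one-pole low-pass filter gives a feel for what it does to noise arriving on the supply rail; the sketch uses the 10 ohm and 10 uF values mentioned above.

import math

r, c = 10.0, 10e-6
f_corner = 1.0 / (2 * math.pi * r * c)
print(f"corner frequency ~ {f_corner:.0f} Hz")   # about 1.6 kHz; the 0.1 uF capacitor
                                                 # handles the higher frequencies where a
                                                 # large capacitor is less effective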
To allow battery powered applications, total current drain must be kept low. The Model 201 draws just about 1.5 mA in sleep mode, and about 17 mA during normal operation.
Optical isolation
There are a few more power supply components on the Model 201 board, but the power for them comes from the host computer serial port. The RS232 interface provides plus and minus power to a chip which drives, and is driven by, the optocouplers for the serial communication. The reasons for needing this additional level of isolation are explained in detail in other posts on this discussion site. For now, just remember that isolation breaks ground loops and protects against potentially damaging fault currents.
The remaining circuit blocks on the Model 201 board are for digital input and output, and for expansion. There is also another layer of optical isolation for the four expansion outputs, A through D.
Overview of the microcontroller function:
Back in the '80s, affordable microcontrollers did not include peripherals like serial interfaces. The PIC processor selected for the Model 201 was fast and efficient for its day, but unadorned with extra features. We built a stripped-down real-time operating system that prioritized serial communications and getting the analog data out of the A/D chip and into a small buffer. Digital input and output, plus housekeeping, were handled as lower-priority tasks. For data logging applications, the microcontroller can keep time and send back a pre-defined data set as fast as 1000 times per second or as slowly as once a day. Alternatively, it can passively respond to commands from the host computer.
The Model 201 is still an active product 35 years after its introduction, so it has proven itself in the marketplace. If you need just a little more than 16 bits of real resolution, you can get away with a lot by starting with a 24-bit delta sigma A/D converter chip. But, if you want 22 or 23 bits of real, usable, reliable resolution, you need to do everything exactly right.
Tom Lawson
February 2022
Some History of Loop Control, and What it Teaches, Part 3
Posted by: Tom - 12-24-2021, 02:49 PM - Forum: Start Here
- No Replies
Picking up after parts 1 and 2, let's imagine you are a control loop operator following a predefined set of control rules. Your boss, the master control operator with a world of experience, does periodic rounds for performance review. He looks over your shoulder to see the state of your control loop. He observes all the input and output values at the moment. What does that tell him? Aside from the instantaneous error, basically nothing. Why is that? Because your boss needs some history of past behavior in order to judge how you are doing. So he watches for a while. Then he knows how the error term tracked over a time period, but that still does not allow him to effectively grade your performance, because chances are, not much is happening.
Instead of waiting a long time while looking over your shoulder, he may reach and turn the dial to disturb your carefully balanced loop. He can then learn a lot quickly by watching your attempts to recover stability and regain the set point. To judge your skill, the master controller will then observe not only a time sequence of the error term during restabilization, but also the pattern of your dial settings while attempting that recovery. Another day, instead of scrambling your dials, the boss might dump some cold water into your temperature-controlled frying pan to see your reaction. Your loop needs to respond appropriately to both control and process disturbances.
The lesson concealed in the above is that loop control, if it is to do an expert-level job, will need to track not only a history of the error term, but also a history of the correction applied. PID control doesn't provide a control term for that. Then again, PID control is always a step behind, trying to catch up. That is why there are no ideal PID control solutions, only compromises, some better, some worse.
Going back to a variation on the helmsman analogy, think about racing car video taken from inside the vehicle showing the driver's hands on the wheel. When the track is dry and the tires are good, the driver's hands are quiet, making smooth and steady corrections. If the tires are worn out, or the track is wet, the driver's hands will be making many quick corrections. Looking from outside, the path of the car may seem the same in the two cases, but the driver is working much harder to keep the car on the road when traction is limited. As a rule of thumb, the quieter the hands, the more skilled the driver.
Your boss, looking over your shoulder, will give you a higher score if you are not cranking the dial drastically one way and another in rapid succession. The longer you wait to act, the larger the required correction. We conclude the key to better control is to act strongly and promptly, but how does that fit in the PID scheme of things? It means the gain and differential terms are going to be large, and the integral term will be very small. Experienced PID hands can tell you that is an invitation for oscillation. For a race car driver, overcorrection means going off the track. For a helmsman, it means a crooked wake. For a control operator, it means a lousy performance review.
If you think about the typical driver, having a typical off-road incident, the usual sequence is that the wheel is turned hard in order to stay on the road, or to avoid an obstacle. That part goes OK. The accident happens when the needed counter-correction is delayed. Say the car is having trouble getting around a right turn, so the driver turns harder right, and in the end, goes off the right side of the road.
In intuitive terms, the question is what, exactly, is it that the skilled operator/driver/helmsman does differently to get the better result, and how do we reduce that behavior to a generalized set of rules which a computer can follow?
A good place to start is the concept of expected error vs instantaneous error. The skilled operator knows that disturbances will take time to correct, so you take the initial corrective action, and only make additional corrections to the extent that the system is not following the expected path back to the desired state. Consider the race driver in the wet. He feels the car start to rotate off the correct line. His flick of the wheel in the corrective direction is very brief. The steering wheel is back in its original position long before the full effect of the correction is felt. How do you quantify that skill?
Driving race cars is flashier, but for basic understanding, the frying pan example will be more useful. Let's say the dial is steady and the pan temperature is at the desired point. Then, we disturb the system and look to restabilize temperature quickly without inducing oscillation. Let's say you wanted to raise the temperature N degrees, so you would need to turn up the dial. If you knew the new setting for Temperature + N, you could go straight there and wait for the pan temperature to exponentially approach the new set point. That could take a long time. Better to turn the burner up further, then turn it back down to the new setting before the temperature overshoots.
If you had experience, maybe you would know that it takes XW extra watts at the burner to maintain the pan at Temperature + N. In addition, you might know the thermal mass of the pan. If it takes WH Watthours to heat the thermal mass by N degrees, then you could crank up the burner setting until you had injected an additional WH Watthours. Afterwards, you could return the burner setting to the original setting plus XW extra watts. During the interval, you expect the temperature to be rising, but you don't expect to be making any big corrections. Instead, you are waiting to see if your calculations were correct. If they were, the temperature will settle near the new value without any extra delay.
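Here is that calculation as a minimal Python sketch, with made-up pan numbers; it illustrates the energy-balance reasoning, not Lawson Labs' actual Predictive Energy Balancing algorithm.

thermal_mass = 500.0     # joules per degree C for the pan (assumed)
extra_watts  = 30.0      # XW: extra steady-state watts to hold Temperature + N (assumed)
n_degrees    = 10.0      # N: the desired temperature rise
boost_watts  = 800.0     # how far above the new steady setting the burner is cranked (assumed)

needed_joules = thermal_mass * n_degrees       # the WH term from the text, here in joules
boost_seconds = needed_joules / boost_watts    # hold the boost just long enough to inject that energy

print(f"inject {needed_joules:.0f} J: boost for {boost_seconds:.1f} s, "
      f"then settle at the old setting plus {extra_watts:.0f} W")

The controller commits to a plan up front and then mostly watches, instead of chasing the error term after the fact.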
It may sound complicated, but if there is a computer in the loop, it can do all the needed calculations more than fast enough to keep up. We call this process Predictive Energy Balancing. Advanced knowledge of system behavior is a help, but is not essential. Sophisticated PEB controls can learn to optimize themselves on the fly.
To be continued.
Tom Lawson
December, 2021
Some History of Loop Control, and What it Teaches, Part 2
Posted by: Tom - 12-17-2021, 04:23 PM - Forum: Start Here
- No Replies
As discussed earlier, an intuitive understanding of steering ships and heating frying pans can aid understanding of computerized control loops. That intuitive grasp tends to come unhinged when a control system is disturbed and falls out of regulation. I'll explain.
My mentor, in explaining the new solid state “operational amplifiers” back in the 1960's, said to imagine that an operator was working the amplifier. He would monitor two inputs, plus and minus, and adjust one output. If the plus input was larger he would do all he could to increase the output. If the minus input was larger, he would do all he could to decrease it. If you think of an op amp as an agent following a simple set of rules, that will help in understanding circuit behavior.
Like that operator, put yourself in the loop, looking at inputs and adjusting outputs. Take the simple case of the frying pan temperature described in part one. You watch the temperature, as measured at the burner, keeping in mind the desired temperature, and turn the power up and down to match the set point. Sounds easy, but complicating the situation is that the temperature sensor is in the stove, not in the pan itself. That adds delay, the heavier the pan, the more delay. If the pan had no thermal mass, it would immediately heat when the power went up, and would immediately cool when it went down. But the system does have thermal mass, so the control difficulty stems from the delayed response. You, the operator, determine the pan is too cool, so you turn up the heat. Nothing happens right away, so you turn it up more. Still too cool. Before the pan gets to the set point, you may have turned the burner all the way up. The pan temperature sails right past the desired heat and keeps on getting hotter. So, you turn down the heat more and more until the burner is off. We have here a classic oscillating control loop.
My mentor often quoted the engineer's variation on Murphy's Law - all amplifiers will oscillate and all oscillators will amplify. But that is another story. The question here is what you do, as the operative agent, in the burner control loop in order to stabilize the temperature. Intuitively, what you do is decide to make smaller adjustments. That helps limit the oscillation. We call that property the gain of an amplifier or control system. A proportional control system makes adjustments in proportion to the magnitude of the difference between the present state and the desired state. Too much gain tends to encourage oscillation. If you make really tiny adjustments, it will take a very long time to get the pan to the desired temperature. But if you make enough of those smaller adjustments quickly enough, you will still end up alternating between one extreme and the other instead of finding the correct middle setting. The gain is only part of the picture. Simply reducing the gain while using proportional control is not a complete solution.
There is also the issue of time. Like the ship's wheel, we need to slow down the response. In an operational amplifier circuit, you slow down response by adding filter capacitance. In a computer-in-the-loop system, you average. In addition to the proportional control, you can also pay attention to the average temperature. If the average is too high, you aim a little lower, and vice versa. Simple enough, but over what length of time does your average apply? The longer the period, the more stable the average, but also, the longer the system will take to stabilize if it is disturbed. What happens to the average when you change the set point? If you are aiming for the wrong point because of heavy averaging, that integral term can actually cause oscillation instead of curing it. The average is called an integral term in the language of control. If you pick a moderate time period for averaging, that helps the system settle to the right place without slowing it down too badly or inducing slow oscillation.
In the helmsman's case, the wake might be perfectly straight, but the ship could still be slightly off course. The integral term could serve to correct the direction, but that average term would make only small corrections, and only slowly, so as not to disturb that nice straight wake.
Back to the frying pan: you can now get the temperature to the right place, and limit oscillations, but a disturbance still causes an instability that takes a long while to settle out. That instability looks like undershoot and/or overshoot. To limit those effects, you need to pay attention to the rate of change, which, like the integral, is another time-related term. The rate of change is described as a differential term. If the temperature is too low, but is rapidly approaching the desired temperature, you could turn down the heat in advance of reaching the set point. That will limit overshoot. The idea is to approach the setpoint slowly enough to prevent the overshoot, but not so slowly that it takes a very long time to get there. This aspect of control turns out to be the hardest to get right, because the timing of adjustments is delayed by the integral and sped up by the differential. The operator can lose his bearings, like a helmsman in a fog.
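For reference, here is a bare-bones PID loop for the frying pan in Python, with invented gains and a crude first-order pan model; it shows the three terms working together, not a tuned controller.

kp, ki, kd = 40.0, 0.5, 60.0     # proportional, integral, differential gains (made up)
dt = 1.0                         # seconds per control step
setpoint = 180.0                 # desired pan temperature, degrees C

temp, integral, prev_error = 20.0, 0.0, setpoint - 20.0
for step in range(300):          # five simulated minutes
    error = setpoint - temp
    integral += error * dt                    # the averaging (integral) term
    derivative = (error - prev_error) / dt    # the rate-of-approach (differential) term
    prev_error = error
    watts = max(0.0, kp * error + ki * integral + kd * derivative)

    # crude pan model (assumed): burner heating minus cooling toward a 20 C room
    temp += (watts - 8.0 * (temp - 20.0)) * dt / 500.0

print(f"after five minutes: {temp:.1f} C")    # settles near the 180 C setpoint

With these particular numbers the loop settles smoothly, but less friendly gain choices produce exactly the hunting and overshoot described above.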
Once you put all these rules in place, in particular the differential term, it becomes more likely that you will lose your orientation. Let's say you have turned up the burner because the temperature was too low, then turned it down because the temperature was rapidly approaching the set point. You still want the temperature to rise, but you turned the burner down, hmm. Now say the temperature is still too low but rising, so do you turn it up assuming the overshoot correction was too much, or do you turn it down assuming the rise will continue? The sensible thing to do when you don't know whether your setting is too high or too low is to do nothing, because if you guess wrong, you are starting down the path to an ever increasing oscillation. It is easier for you, as an imagined operator, to know when you are lost than it is for a computer following a set of rules to conclude that it isn't sure what to do. You don't want your computer-in-the-loop system to end up on the virtual rocks.
Allow me to interject a personal anecdote to illustrate. I am driving on Interstate 15 in rural Idaho during a snowstorm. Nobody else is on the road, and the snow is accumulating on the road surface. The safe speed has fallen to 40 MPH, or so it seems. The road takes a very minor turn to the left. I turn the wheel a bit to follow, but soon I notice that the car is still going straight. Only my eyes tell me so; there is no change in the hiss of the tires or in the feel of the steering wheel. The road is wide and empty, and my progress toward the shoulder is barely perceptible. It occurs to me that there is no one else anywhere near, and that I am unlikely to be able to get the car back up on the road without help. So I begin to make very small corrections at the wheel, really tiny adjustments. No change. The scene progresses in super-slow motion. Maybe 15 seconds have gone by already. I make the intentional decision to slow my corrections down even further, since I still have plenty of time, though the car is nearing the shoulder. I finally find the exact spot to hold the wheel where the car is rolling, not skidding. Then I can very gently steer back toward the center of the lane. By that time I have slowed down some, and can safely keep the car on the road at the reduced speed. In context, the point is that I, the operator, did not know which way to turn the wheel in order to regain control. If I had tried larger corrections, in either direction, it is unlikely that I could have found just the right wheel position before the thicker snow on the shoulder sucked the car off the road. The hard part is knowing when to do nothing, or nearly nothing.
All of the above is aimed at an intuitive understanding of control loops in general, and of PID (Proportional, Integral, Differential) control in particular. There will be situations where the right control decision is not apparent, even given plenty of time to consider. Your controls need to be able to recover from those tricky situations, or else you will find yourself in the metaphorical ditch. We find there are better ways than PID to manage tricky control loops. We call the principle Predictive Energy Balancing. PEB was developed for regulating power converters, but it has much more general applicability. Stay tuned for more on the subject, another day.
Tom Lawson
December 2021
|
|
|
Troubleshooting Tips - Start From Nothing |
Posted by: Tom - 07-01-2021, 03:28 PM - Forum: Start Here
- No Replies
|
|
Troubleshooting Tips - Start from Nothing
Back in the last century, I discovered an entertaining and informative series of columns in Electronic Design Magazine about troubleshooting electronics. The author was a fellow named Bob Pease at National Semiconductor. He had been at Philbrick for the design of the first practical operational amplifiers, and he had designed several important analog ICs for National. He was also quite the character. Sadly, he passed away suddenly in 2011. His magazine columns were turned into a book, called Troubleshooting Analog Circuits, which is worth seeking out. I do not presume to rise to that level, but I can offer some advice to beginners which may have some value for more experienced sorts, as well.
It is often best to start from nothing. By that, I mean nothing in, nothing out. A baseline may seem an uninteresting area for exploration, but noise or drift or jumps in your baseline will permeate your system, and baseline problems may be difficult to identify when looking at your output. For a data acquisition system, get as close to the Analog-to-Digital converter as you can, and ground the inputs. With zero volts in, you are looking for nothing out. Be patient. Problems can be intermittent, and drift occurs over a long time frame.
Drift tends to be thermal in nature, so you can often find a sensitive spot with a little applied heat. Fast heating is never uniform, so it may make a problem seem severe when it is barely there. Bringing a light bulb into the vicinity may be a good way to heat gradually when looking for thermal drift issues. Note that some electronic components, like diodes, can be light sensitive, so it is best to interpose an opaque layer if you are using light for thermal troubleshooting.
Another technique is to blow your warm breath selectively around data acquisition circuitry. A plastic or rubber tube won't short anything out and allows warm, humid air to be directed at will. For very high impedance circuits, the humidity, not the warmth, may have the larger influence by increasing surface leakage. To distinguish, a heat gun or hair dryer set at a low heat can provide the warmth without the moisture.
First, take all reasonable steps to minimize the drift. Then, you can consider improving your data with active calibration techniques. These methods can be transparent to the user, or can be done explicitly as a post-processing step. A zero voltage is measured periodically and is subtracted from the signal. That operation adds a little short-term noise, but can remove the large majority of slow thermal drift.
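As a rough sketch of that post-processing step - the function and the sample numbers here are made up for illustration and are not part of any Lawson Labs software - the zero correction can be as simple as subtracting the most recent shorted-input reading:

# Periodic zero-offset correction as a post-processing step. The function
# and the sample data are invented for illustration.

def correct_drift(signal_readings, zero_readings):
    """Subtract the most recent zero (shorted-input) reading from each signal
    reading. Both lists hold (timestamp, volts) pairs in chronological order."""
    corrected = []
    zero_index, current_zero = 0, 0.0
    for t, value in signal_readings:
        # advance to the latest zero reading taken at or before this sample
        while zero_index < len(zero_readings) and zero_readings[zero_index][0] <= t:
            current_zero = zero_readings[zero_index][1]
            zero_index += 1
        corrected.append((t, value - current_zero))
    return corrected

# A slow upward drift shows up in the periodic zero readings and is removed:
signal = [(0, 1.000), (1, 1.002), (2, 1.005), (3, 1.007)]
zeros = [(0, 0.000), (2, 0.004)]
print([(t, round(v, 3)) for t, v in correct_drift(signal, zeros)])
# [(0, 1.0), (1, 1.002), (2, 1.001), (3, 1.003)]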
Other noise sources include popcorn noise, from the ICs themselves, and Johnson noise, which permeates literally everything. These noise sources are issues for circuit designers, and they establish the noise floor that you are going to have to live with. In a well-designed system, the intrinsic noise should be limited to one or two bits. Two bits of noise on a 24-bit system leave you with 22 bits, or one part in 4 million.
Once you have reduced drift problems to an acceptable minimum, look for other noise sources that you can do something about. Sometimes physical separation is a big help. Radiated noise falls off as an inverse-square law, so doubling the separation reduces the noise pickup by a factor of four. Maybe just moving that power brick further away will help. Shielding and grounding are your other main tools. See the extensive shielding and grounding guidance found elsewhere on this website.
All of the above is in the interests of achieving nothing in, nothing out. Once you are satisfied with your baseline, you can inject a signal and look for the proper corresponding output. Whenever possible, examine the intermediate results first. By tracing a signal step by step through your system, you can be confident that each stage is behaving properly. If you look instead at the final result, it is easy to fool yourself with a speculative diagnosis.
To summarize, the temptation is to fire it up and see if it works. If it doesn't, the tendency is to start working backwards from the end. If you don't immediately identify the problem, I recommend working from the input side instead. When you start at the beginning, you can progress through a working system, and when you reach a problem, you know it right away. Working back from the output of a system that is not behaving properly is much harder, and it may be nearly impossible to pinpoint multiple, interacting problems that way.
Troubleshooting a simple problem is no big deal, no matter how you approach it. The hard part is troubleshooting when more than one thing is going wrong. For that, you will reach your destination sooner if you start at the beginning.
Tom Lawson
July, 2021
|
|
|
Some History of Loop Control, and What it Teaches |
Posted by: Tom - 06-24-2021, 01:52 PM - Forum: Start Here
- No Replies
|
|
A ship's wheel was traditionally linked to the ship's rudder by cables. The helmsman needed strength as well as skill to keep the ship on course, especially in bad weather. The industrial revolution brought the possibility of using machinery to make the job easier. The first such commercial installation is said to have been fitted to the SS Great Eastern in 1866. The Great Eastern was exceptionally large, so unusually large forces were needed to turn her rudder. A steam-powered servo mechanism was built to rotate the rudder to match the position of the ship's wheel. Instead of making it easier to pilot the ship in a straight line, the new servo-powered steering caused a tendency to wander off course, first to one side and then to the other. No matter how practiced the helmsman, the oscillation problem persisted.
A helmsman takes pride in holding a steady course, and that skill is immediately evident when looking at the wake of the ship. Unlike a car on a road, the path of a ship through the water leaves a disturbance visible for an extended time and distance that serves as a record of the helmsman's success. A weaving wake was not acceptable.
Engineers thought their systems needed to be more powerful to make the rudder movements faster. That way the rudder position would more closely match the wheel position. In fact, increasing the power assist made the problem worse, but that unexpected result pointed the way to the solution. When the engineers reduced the amount of power assist available, the rudder responded more slowly, and the helmsman could again steer a straight course. The lesson for us here today is that a control system needs to respond at the correct rate. Faster is not automatically better.
Once a helmsman could steer an arrow-straight course using a servo-controlled system, then it became reasonable to contemplate an autopilot that could hold a ship on course. That feat was first accomplished in the 1920s on an oil tanker owned by The Standard Oil Company. A hundred years later, automatic control loops are so numerous that we take them for granted.
Still, the underlying lesson from those first marine servo systems is largely forgotten. Modern controls often involve a computer. These are called computer-in-the-loop controls. When the loop doesn't behave as desired, the first instinct is to conclude that the computer isn't keeping up, just as the first marine engineers concluded that their servo wasn't powerful enough. The need for slower, less powerful, loop response is, and has always been, counter-intuitive.
If you have never steered a boat with a tiller or wheel, the helmsman analogy may not contribute to your intuitive understanding. Take a more everyday example. Think of a heavy frying pan on an electric stove. You turn up the heat setting when the pan isn't hot enough. If you wait until the pan reaches the desired temperature before you turn the dial setting back down, the pan will overheat. Then, if you turn the setting down in response, the pan will cool off too much. The temperature will then oscillate around the desired set point, much as the Great Eastern was weaving around the desired bearing.
If you have prior experience with the particular stove and pan, you may know in advance where to set the dial. That advance knowledge is often not available in a computer-in-the-loop control, so the loop needs a way to achieve equilibrium. For starters, the best way is to slow it down. Speeding the loop up will only result in larger oscillations. Smart systems can learn, as a practiced cook learns, how to better adjust the pan temperature, but you may be surprised how easy it is to fool a “smart” system. The control doesn't know that you poured a cup of room temperature broth into the pan, or that you have just put a lid on.
The takeaway is that if you are having trouble stabilizing your loop, resist the urge to speed it up, and begin by slowing it down.
Tom Lawson
June, 2021
|
|
|
|