A Brief Exploration of Boot Time
In the 1960s, computers were room-sized devices. The IBM 1620 computer that I had access to used punched cards to load the operating system. A crash, usually caused by a divide-by-zero error, meant reloading the operating system in order to continue: you placed a large stack of computer cards on the card reader and restarted. I remember it as a very slow process, maybe 10 minutes to get up and running again. Since our student access to the computer came in one-hour blocks, a crash and reboot always seemed like a major setback.

Fast forward to 1979, when the Apple][ plus was introduced. My Apple][ isn't running at the moment due to the inevitable failed switch-mode power supply, so I can't confirm, but I remember the boot time from the floppy disk to be about 10 seconds. I started Lawson Labs with that venerable Apple][ computer. As time went on, and the business grew, I added an IBM PC, an IBM XT, and an IBM AT. At one point, I had all four computers in a row. Boot times went up as the computers became more complex. If I started all four at once, the Apple][ would be ready to use first, and the IBM AT last. That pattern mostly tracked for user software, too. If I launched a text editor or a spreadsheet program, the older the computer, the sooner it was ready to use. The later programs had richer displays, handled bigger files, and were more feature-laden, but for most ordinary tasks, any editor or any spreadsheet would do equally well.

Back in those days of technical optimism, it would have been sacrilege to point out that older was faster. Plus, for a while, it did feel like real progress was being made and that productivity was headed in the right direction. Lawson Labs then was focused on data logging. The old way was a guy with a clipboard walking around, reading dials, and recording the results on paper. The new way, using a computer to do the job automatically, was surely an improvement. By the numbers, US non-farm productivity growth was unusually strong in those years. But productivity growth peaked in 2002 and has not recovered as of this writing, twenty years later. We all know that it is far too easy to lie with statistics, but there may be some truth in the idea that modern computers are actually slowing us down. How could that be?

First, let's take a step back and ask what the boot process amounts to. In early computers, permanent storage of computer code came at a premium. The Apple][ had a 2 kByte read-only memory chip called the boot ROM, or Autostart ROM. It contained the code that would run at startup. One of its essential jobs was to load more of the operating system from a floppy disk into volatile computer memory, i.e. RAM. The term "boot" refers to pulling oneself up by one's bootstraps, a process that may seem a bit mysterious, but Apple let in the light by printing the "firmware" programming code for the boot ROM in the manual. Computer Science departments were scarce in those days. Many of us studied that manual to learn the art of assembly language programming. Anybody know what 3D0G has to do with all this? (Hint: it is the command to re-enter DOS from the Apple][ monitor. The code for the monitor ROM was also listed in the reference manual. 3D0 is the hexadecimal address, and G is the "go" command. Coders receive a pulse of adrenalin when they hit "G". With a mistake, you might end up anywhere.)
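To make the bootstrap idea concrete, here is a rough sketch in C of the chain of stages the boot ROM kicked off. It is conceptual only; the real Autostart ROM was 6502 assembly, and the function names and messages below are illustrative, not Apple's.

/* A conceptual sketch of the bootstrap chain: each stage knows just
   enough to load and start the next, larger stage. Names are made up. */
#include <stdio.h>

/* Stage three: the operating system proper, loaded from the floppy. */
static void operating_system(void) {
    puts("OS in RAM; ready for the prompt");
}

/* Stage two: the boot sector, read in from the disk by the ROM.
   It loads the rest of the operating system, then jumps to it. */
static void boot_sector(void) {
    puts("boot sector: loading the OS from floppy into RAM");
    operating_system();
}

/* Stage one: the boot ROM. On reset, the processor starts here; the
   ROM reads the first sector of the disk into RAM and jumps to it. */
static void boot_rom(void) {
    puts("ROM: reading track 0, sector 0");
    boot_sector();
}

int main(void) {
    boot_rom();   /* what happens when you flip the power switch */
    return 0;
}

The point of the chain is that only the tiny first stage has to live in expensive permanent storage; everything else can come off the cheap, slow floppy.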

Early personal computers, including Apple's and IBM's, ran their operating systems primarily out of RAM. They did not necessarily even have a disk drive. It is easy to forget that the IBM PC had a connector on the back for a cassette tape recorder for permanent storage. Access to data on a cassette was necessarily slow, and though the disk drives of that day were much superior to cassettes, they were neither fast nor high-capacity. Nonetheless, back then, if you saved something to the disk, it was actually written to the disk.

In more recent operating systems, in order to speed up operation, data written to a disk file usually goes into a buffer and is not actually saved until later, because writing one larger block after it has accumulated is faster than writing many smaller blocks as you go along. That particular change doesn't have much direct impact on boot time, but it does slow shutdown, because all the file buffers must then be flushed to the disk, and it can indirectly affect startup time in a major way. Buffering leaves any disk files that are open in an indeterminate state when the computer crashes or the power is cut without warning. Recovering from that sort of event requires extensive file checking the next time the system is started. Scanning a large disk drive for that class of errors can take tens of minutes. Rebooting in those circumstances becomes extremely slow.
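As a small illustration, here is a C sketch of that buffering behavior using the standard library's stdio; the file name is made up, and the actual buffer sizes are up to the C library and the operating system.

/* A minimal sketch of write buffering: data from fprintf() sits in a
   user-space buffer and reaches the disk only when flushed. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("readings.log", "w");   /* file name is made up */
    if (!f) return 1;

    /* These writes accumulate in a buffer rather than hitting the
       disk one line at a time; one big write later is much cheaper. */
    for (int i = 0; i < 1000; i++)
        fprintf(f, "reading %d\n", i);

    /* If the power were cut here, some or all of the lines could be
       lost, leaving the file in an indeterminate state. */

    fflush(f);   /* hand the buffered data to the operating system */
    fclose(f);   /* the OS itself may still delay the physical write */
    return 0;
}

Note that even after fflush(), the operating system keeps its own cache; on a POSIX system you would also need fsync() to ask that the data be forced out to the physical disk.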

In a complex system with many interlocking parts, fractured data files can cause non-obvious symptoms. Further, the sequence necessary for error recovery may also be far from obvious. We are all familiar with the scenario where it takes many attempted restarts before a modern Windows operating system can recover from a crash. Here, we have situations where the boot time can be measured in hours, not seconds. We won't even go into System Restore Points here.

Next, a look at latency. Academics refer to Human-Computer Interaction, or HCI. The person hits a key on the keyboard, and then something happens. Maybe the keystroke is only echoed to the screen, or maybe some other response is expected. The delay before the response is called System Response Time, or SRT. As a rule of thumb, a maximum keyboard SRT of 0.1 seconds is considered tolerable. An SRT of 1 second is a real annoyance and can cause a substantial reduction in productivity. Beyond a few seconds, the person's concentration is broken, and they will likely switch tasks while thinking dark thoughts.

Back when digital circuitry was first replacing analog, it was generally understood that latency needed to be unnoticeable in order to be acceptable. When you flip an analog switch, the result is expected to be immediate. (We can make an exception for fluorescent lights.) The computer keyboard has a buffer, not unlike the disk file buffer described above. Keystrokes first go into that buffer and are then handled by the system. You won't notice any keypad latency on a pocket calculator, because the system was designed not to have any. Early PCs were much the same.
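For the curious, here is a minimal C sketch of that kind of keyboard buffer, the classic ring buffer behind type-ahead. The 16-byte size echoes the IBM PC BIOS buffer; the names are illustrative.

/* A minimal sketch of a keyboard ring buffer. The interrupt side
   deposits keystrokes; the system picks them up when it is ready. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define KBD_BUF_SIZE 16

static uint8_t kbd_buf[KBD_BUF_SIZE];
static volatile unsigned head = 0;  /* filled by the interrupt handler */
static volatile unsigned tail = 0;  /* drained by the foreground code  */

/* Interrupt side: stash a keystroke, or drop it if the buffer is full
   (the source of the old PC's angry beep). */
static bool kbd_put(uint8_t code) {
    unsigned next = (head + 1) % KBD_BUF_SIZE;
    if (next == tail) return false;     /* buffer full */
    kbd_buf[head] = code;
    head = next;
    return true;
}

/* System side: pick up the next keystroke when ready to process it. */
static bool kbd_get(uint8_t *code) {
    if (tail == head) return false;     /* buffer empty */
    *code = kbd_buf[tail];
    tail = (tail + 1) % KBD_BUF_SIZE;
    return true;
}

int main(void) {
    for (uint8_t c = 'a'; c <= 'e'; c++)
        kbd_put(c);                     /* keystrokes arrive */
    uint8_t c;
    while (kbd_get(&c))                 /* the system catches up later */
        putchar(c);
    putchar('\n');
    return 0;
}

The latency the user feels is the gap between the keystroke going in and the system getting around to taking it out. On a pocket calculator or an early PC, that gap was too short to notice.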

I just searched for the history of computer latency to see if I was missing something before posting these thoughts. I found a study of keyboard latency done in 2017. Dan Luu, in New York City, measured latency for twenty-two computers, going back to the Apple][e. The ][e came in fastest at 30 ms, while the newest computers showed 90 to 170 ms of latency. So it isn't just me.

Somehow, over the decades, we have become inured to computer latency and even outright non-responsiveness. Virus scans, system updates, or who-knows-what in the background can bring Win10 to a halt. Worst of all is when you turn on the computer in the morning and, after it takes forever to boot, it ignores your input while taking care of some invisible, long-running background task. Psychologists study the resulting human stresses. Efficiency experts tear their hair. Telephone support personnel make small talk to avoid dead air. Ordinary folks get out their cell phones while they wait.
Maybe it is time to order a new power supply for that Apple][.


Tom Lawson
April 2022