Flash Memory and Cybercrime
#1
Back in the halcyon days of computing, we had hardware, software, and in between, firmware. Firmware was the programmable aspect of the hardware. Originally, firmware instructions could be written once into programmable parts, and that was that. Programmable Read-Only Memory was known as PROM. According to Wikipedia, it was invented in the '50s, but it became important in the '70s as the repository of the permanent code for microprocessors. Masked ROM could be manufactured with the programming built in, but that involved big up-front expenses and long lead times. Masked ROM has always been the lowest-cost firmware for high-volume applications, but it is prohibitively expensive for other uses.

Erasable parts, called EPROMs, also appeared in the '70s. These parts could be erased by UV light shining through a quartz window, and then the same part could be reprogrammed to suit. To reprogram a part, it was first placed in an eraser, which contained a strong UV source and usually a timer. After many minutes (depending on the part and the intensity of the UV light), the memory would be checked to see whether it was blank. If so, it was ready for reprogramming. Parts were capable of only a limited number of program/erase cycles, but the erasable technology made firmware development considerably less stressful. Still, there was a real premium on getting it right in the smallest number of iterations.
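
For anyone who never lived through that cycle, the blank check amounted to reading every location and confirming it held the erased value, which for a UV-erasable part is all ones, 0xFF. Here is a minimal sketch in C with the EPROM simulated by an array; a real programmer would read the device one address at a time over its own interface:

```c
#include <stdint.h>
#include <stdio.h>

#define EPROM_SIZE 2048u   /* a small 2 KB part, for illustration */

/* Simulated EPROM contents; on real hardware this would be read
   through the programmer, one address at a time. */
static uint8_t eprom[EPROM_SIZE];

/* Blank check: an erased UV EPROM reads 0xFF at every location.
   Returns 1 if the whole range is blank, 0 at the first failure. */
static int blank_check(const uint8_t *mem, uint32_t length)
{
    for (uint32_t i = 0; i < length; i++) {
        if (mem[i] != 0xFF) {
            printf("Not blank at 0x%04X: read 0x%02X\n",
                   (unsigned)i, (unsigned)mem[i]);
            return 0;
        }
    }
    return 1;
}

int main(void)
{
    /* Pretend the part just came back from the UV eraser fully erased. */
    for (uint32_t i = 0; i < EPROM_SIZE; i++)
        eprom[i] = 0xFF;

    printf("Blank: %s\n", blank_check(eprom, EPROM_SIZE) ? "yes" : "no");
    return 0;
}
```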

Prototype systems would be built with EPROM chips, but those were more expensive and could lose their data over time. Production systems would use PROM or masked ROM for economy and reliability. This arrangement worked pretty well for a number of years. A PROM or ROM would hold the BIOS for your home computer, and a character-generator PROM or ROM would hold the translation between character codes and pixels for the alphanumeric display.

EEPROM, or Electrically Erasable Programmable Read-Only Memory, was the next phase of development. Your USB memory stick is the modern form of this rather elegant technology. That USB stick uses flash memory, which is a form of EEPROM that was first commercialized in the '80s. Flash has been improved to the point that electrically erasable memory can replace a hard drive. That is a long way from 20 minutes in the UV eraser before rewriting.

When combined with internet accessibility, EEPROM memory enables field reprogrammability. Our computers are now almost all connected, and more and more of our infrastructure is network-connected, too. We have become accustomed to over-the-air firmware updates for our phones, computers, networking systems, etc. Those updates remove the need to complete product development before beginning production. Once you are close, you know you can fix it later, with an update. The developer can add features and fix bugs long after the product is shipped. The update process can be beneficial, but it is also corrosive. There is no longer a need to get it right the first time, or even the second time. In fact, the constant parade of patched problems has become a feature. It is called software as a service. The product is no longer an entity, it is a continually morphing work in progress.
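
As a rough illustration of what field reprogrammability looks like from the device's side, here is a minimal C sketch of an update check at boot. The version numbers and the stubbed download/flash step are hypothetical placeholders, not any particular vendor's mechanism, and a real updater would also authenticate the image before writing it:

```c
#include <stdint.h>
#include <stdio.h>

#define INSTALLED_VERSION  0x0103u   /* firmware currently in flash (hypothetical) */

/* Stub: in a real device this would query the vendor's update server. */
static uint16_t server_advertised_version(void)
{
    return 0x0104u;  /* pretend a newer build is available */
}

/* Stub: fetch the image, verify its integrity, erase the target
   flash sectors, program the new image, then reboot. */
static void download_and_flash(uint16_t version)
{
    printf("Updating to firmware version 0x%04X...\n", (unsigned)version);
}

int main(void)
{
    uint16_t available = server_advertised_version();

    if (available > INSTALLED_VERSION)
        download_and_flash(available);
    else
        printf("Firmware 0x%04X is current.\n", (unsigned)INSTALLED_VERSION);

    return 0;
}
```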

Along the way, it has become more and more difficult to draw the distinction between firmware and software. Maybe that distinction is becoming irrelevant, but the recent increase in the number of malicious hacks into supposedly secured systems makes me think otherwise. Mistakes find their way into the firmware that sits beneath the application code, and that firmware is rarely examined. The deeper code is buried, the harder it is to sort out the rules and assumptions behind its function. Code at the lower levels should be thoroughly debugged, tested, and documented; only then can it be safely forgotten. When a breach exposes that underlying code to hackers who can actually change it, good luck ever figuring out what went wrong after you have paid the ransom to recover your data.

Further, when bugs find their way into low-level code, they have a sneaky way of propagating. One of the early BASIC language implementations had a bug in the PEEK statement. PEEK was a command to read an 8-bit byte from a memory address. PEEK mistakenly treated the result as a signed 8-bit integer, so anything over 127 was returned as a negative number. That made a mess when reading A/D converter data. You might think a problem like that would be corrected promptly and permanently. In fact, the same problem turned up in several completely different versions of BASIC from different companies, even five years later. So when problems appear at the lowest levels of embedded code, watch for more trouble in the future. Thanks largely to flash memory, even the hackers don't know all the places their backdoor access may be installed.
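
Those interpreters were not written in C, of course, but the same class of bug is easy to demonstrate in C: interpret a raw byte as a signed 8-bit quantity and any A/D reading above 127 comes back negative.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t adc_reading = 200;   /* an 8-bit A/D result above 127 */

    /* Correct: treat the byte as unsigned, range 0..255. */
    int good = (uint8_t)adc_reading;

    /* The PEEK-style bug: the byte is interpreted as a signed
       8-bit value, so anything over 127 wraps negative. */
    int bad = (int8_t)adc_reading;

    printf("unsigned interpretation: %d\n", good);  /* 200 */
    printf("signed interpretation:   %d\n", bad);   /* -56 */
    return 0;
}
```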


Tom Lawson
May 2021
#2
Reading this article, the first thing that came to my mind was the small computer systems such as the Raspberry Pi, BeagleBone, and many others of the same type with IoT capability, running the well-trusted Linux kernel and various Linux-compatible OS options. People think that because they have such a system they have no worries, but that's not so - one still needs to exercise caution. The very thing mentioned in the article about flashing applies, since the OS is actually flashed onto an SD card either before or after purchase of such a "mini computer" system.

I make use of such a system for web hosting from my home - not for the actual web-site hosting aspect per se, but because the Apache server software also provides an interface for PHP, Python, and other scripting possibilities, as well as a MySQL database and cron jobs. That lets me communicate with and control "things" at my house, while away, via the web, without concern for the typical hacking sometimes possible with the usual home IoT systems which are so popular these days. Those are often managed in the cloud by reputable sources, but sometimes "those" sources may be hacked by outsiders, or, as I for one often wonder, by a disgruntled employee in charge of the management aspect of the systems.

I can't help but wonder if there is a problem these days with the "main" systems that "really" need to be concerned about hacking (power grids, pipeline management systems, etc.). Such an operation may have at least one or more "geek-type" employees who see a solution to a problem - monitoring a particular solenoid, regulator, video monitoring system, or whatever - in simply adding on a mini-computer to handle one or more such "simple" tasks, given its flexibility of I/O and network/internet connectivity. That makes it possible to deal with the task while at the same time communicating with the main system, or, in the worst case, carrying on undetected internet communications - all while on the "other" side of the main system's firewall or other security systems. Such "mini" systems, as already mentioned, can be configured to automatically update their software via the internet, which is one reason they may be connected to the internet without consideration of the risks, bypassing the main system's firewall without correct configuration of their own firewall.
#3
I agree. Code at the lowest levels is widely under-appreciated as a vulnerable area in connected systems.

Flash has been promoted as protection against obsolescence. Again, I'm not so sure. As you can tell, we are not big fans of over-the-air firmware updates. Our products use one-time-programmable parts. For the obsolescence case, look at the Lawson Labs Model 201, introduced 29 years ago and still very much viable. Flash wasn't an option then, but if the Model 201 had employed flash memory at its introduction, by now it very likely would have been hacked and copied and devalued, rendering it obsolete.

