Some synchronization arguments we've been having at work led me to read Lampson and Redell's classic synchronization paper. It's rightly famous for providing the first clear exposition of the priority inversion problem. There's also a good deal of implementation detail about Mesa and Pilot, which were a programming language and operating system, respectively, for PARC's famed Alto workstation.
Most modern readers gloss over this implementation section, since the system described is trivial by modern standards (the OS kernel is 24KLOC) and obsolete in many ways. However, the extremely tight coupling between the OS and the hardware ISA is fascinating to me. The hardware directly supports the kernel's run queue, providing a context switch in a single, one-byte instruction! Faults move the current process straight from the run queue to a hardware-supported "fault queue" and reschedule in hardware. Stacks consist of discontiguous, linked frames allocated and deallocated from a variable-sized "frame heap" maintained in, you guessed it, hardware. When the frame heap is exhausted, a "frame fault" sticks the current process on the fault queue. Not quite as trippy as the StackFrame-as-first-class-Object supported by the Smalltalk system running on the same hardware, but still pretty darned far from the post-RISC, all-C-code-all-the-time machines we're accustomed to.
This tight coupling of ISA to OS was made possible by a writable microcode store, whose internals were exposed to system software. Before I get all starry-eyed about how wonderful a microcode revival would be, I will admit up front that it's 2006. Even if it somehow were practical to expose the microcode stores of modern processors, no sane system software vendor would take advantage. Modern machines' microarchitectures are too complicated to be programmable by non-specialists, and it would be painful beyond belief trying to maintain correct implementations of the same ISA on different microarchitectures. Besides, the whole idea of adapting the ISA to a particular OS seems like a quaint holdover from the CISC days; the Alto folks were working in an environment where the hardware could execute a sequence of microcode instructions much faster than the equivalent sequence of macro-instructions, and that just isn't the world we live in anymore.
So, while writable microcode is probably a stupid thing to wish for if you're writing your own OS, it might still be useful for, say, changing the behavior of existing instructions... It's obvious Intel and AMD are relying on some microcode changes to provide the first generation of VT and SVM in minor revisions of their processors. I've complained that VT/SVM only allow software intervention on the far side of a heavyweight hardware context switch. Wouldn't it be lovely if, instead of software trap handlers for virtualization-sensitive events, the VMM provided "microcode" handlers for such events? Obviously, it wouldn't really be "microcode" proper; the ISA would have to be "architected", perhaps as a subset of the x86, and run with some vaguely SMM-esque constraints on its behavior. But, if it were possible to arrive at this "pseudo-microcode" in a more expeditious manner than the full context switch out to the VMM, it might provide an opportunity to handle some of the more frivolous exits much more efficiently.