Gems of SOSP
Well, it was certainly a stimulating conference. There was a wealth of interesting and well-presented research. Here are some of the papers that interested me most. Insert disclaimer here: I'm just some guy who writes code for a living, what do I know, I'm sure I missed the point of your amazing paper, etc.
Vigilante (Costa et al., Microsoft Research) is a system for automatically detecting internet worms and stopping their spread. While network security is not a field in which I can even attempt to sound intelligent, the paper and talk were very convincing; "OK," I thought at the talk's conclusion, "go ahead and implement this and fix the internet." See an intriguing blog post from someone who might know what they're talking about...
IntroVirt (Joshi, King, Dunlap, and Chen, University of Michigan) is another clever application of virtualization from Peter Chen's group. They've modified User-Mode Linux so that user-level processes within the UML guest can be instrumented with arbitrary code injected by the VM's administrator. They present this system in a very narrow light: they use it to detect intrusions in known-vulnerable, but as-yet-unpatched, software. It seems to me that the ability to run arbitrary code on arbitrary events in the guest is more generally powerful than this; I suspect the '*Virt*' crowd at Michigan has realized this, too.
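Mechanically, I picture the interface something like the sketch below. The names (GuestProcess, on, fire, detect_overflow) are entirely my own inventions, not IntroVirt's actual API; the shape — an administrator-supplied predicate attached to an event inside a guest process, with visibility into guest state — is just my reading of the idea.

```python
# Hypothetical sketch, NOT IntroVirt's real API: attach an
# administrator-supplied predicate to an event inside a guest process.

class GuestProcess:
    """Stand-in for a user-level process running inside the UML guest."""
    def __init__(self, pid):
        self.pid = pid
        self.hooks = {}          # event name -> list of predicates

    def on(self, event, predicate):
        """Ask the (instrumented) VMM to run `predicate` whenever `event`
        fires in this process, with read access to guest state."""
        self.hooks.setdefault(event, []).append(predicate)

    def fire(self, event, state):
        """Called by the VMM when the guest hits the instrumented point."""
        for predicate in self.hooks.get(event, []):
            predicate(state)

# Vulnerability-specific predicate: flag requests long enough to overflow
# a known 256-byte buffer in an unpatched server.
def detect_overflow(state):
    if len(state["request"]) > 256:
        print("possible intrusion attempt in pid", state["pid"])

httpd = GuestProcess(pid=1234)
httpd.on("recv_request", detect_overflow)
httpd.fire("recv_request", {"pid": 1234, "request": "A" * 300})
```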
Honeyfarm (Vrable et al., UCSD) is another intriguing application of virtualization. The described system uses a virtual machine "fork" primitive to create the external appearance of a very large network of vulnerable "honeypot" machines. Like the UNIX system call, this fork primitive returns twice, once in the calling virtual machine, and once in a newborn virtual machine that is in most respects a copy of its parent. Exploiting copy-on-write techniques and the fact that very few IP addresses are usually active at any given time allows them to achieve a very high virtual-to-physical resource ratio.
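The returns-twice behavior is easiest to see by analogy with fork(2). Here's a toy illustration; vm_fork, handle_packet, and the rest are my own stand-ins (built on a plain os.fork()), not the paper's interface:

```python
# Toy analogy only: os.fork() standing in for a hypothetical vm_fork().
# In the real system the "child" would be a copy-on-write clone of a
# reference VM, created when a packet arrives for an inactive IP address.
import os

def vm_fork():
    """Returns twice, like fork(2): 0 in the clone, the clone's id in
    the parent.  The clone shares pages copy-on-write with its parent."""
    return os.fork()

def handle_packet(dest_ip):
    clone = vm_fork()
    if clone == 0:
        # Newborn VM: impersonate dest_ip and play honeypot for it.
        print(f"clone serving honeypot at {dest_ip}")
        os._exit(0)
    else:
        # Parent/gateway VM: keep dispatching traffic; physical memory
        # is only consumed as the clone's pages diverge (copy-on-write).
        os.waitpid(clone, 0)

for ip in ("10.0.0.7", "10.0.0.42"):
    handle_packet(ip)
```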
Rx (Qin, Tucek, Sundaresan, and Zhou, UIUC) is a slightly quirky take on recovering from software failures. They observe that many software failures are easy to detect after the fact (e.g., they cause a SEGV), and that they are frequently caused by a small set of programmer errors (buffer overflows, timing assumptions, uninitialized variables, etc.). Rx takes a process-level checkpoint of the server at connection-establishment time and, in the case of failure, reverts to the checkpoint, randomly perturbing the execution environment in the hope of perturbing away the failure. (A proxy intermediates between client and server to provide the illusion of seamlessness.) According to the paper, this works much better than intuition suggests is possible. It's sort of like failure-oblivious computing, but without the, err, obliviousness.
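In rough pseudocode, my reading of the recovery loop looks something like the sketch below; the serve/handle_connection names are illustrative guesses, and the perturbations are merely examples of the kind the authors describe, not their implementation:

```python
# Toy sketch of Rx-style recovery (my reading of the idea, not the
# authors' system): checkpoint at connection time, and on failure roll
# back and re-execute under a perturbed execution environment.
import random

PERTURBATIONS = [
    "pad heap allocations",     # can mask buffer overflows
    "zero-fill new memory",     # can mask uninitialized reads
    "delay message delivery",   # can mask timing assumptions
    "change scheduling order",
]

def serve(request, perturbation=None):
    # Stand-in for the server: the (fake) bug only bites when the
    # perturbation fails to mask it.
    if "overflow" in request and perturbation != "pad heap allocations":
        raise MemoryError("SEGV")
    return f"ok ({perturbation or 'normal execution'})"

def handle_connection(request):
    checkpoint = request          # Rx takes a process-level checkpoint here
    try:
        return serve(request)
    except Exception:
        # Roll back and retry under perturbations in random order.
        for perturbation in random.sample(PERTURBATIONS, len(PERTURBATIONS)):
            try:
                return serve(checkpoint, perturbation)
            except Exception:
                continue
        raise                     # give up: fail the old-fashioned way

print(handle_connection("GET /overflow"))
```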
1 Comment:
Thanks for the heads-up about the link. The way I see it, Rx roughly follows a "do no harm" principle. Starting from a known-bad state, it will only roll back to a "not yet known bad" state, and will only roll forward if it arrives at a "not known bad" state. While it's possible that some undetectable failure is introduced in rolling forward, I fail to see how Rx would have made the situation worse.