David Chase (chase@world.std.com)
Thu, 18 Mar 1999 11:21:32 -0500
[My apologies if a prefix of this message was sent to the list. Seems like
a pretty dumb idea to have a keyboard shortcut for "Launch missiles", but
I didn't write any of this software.]

> Stopping threads can be hard. Not just in Java.
> Safe yet effective stops require that all parts of a JVM and all other
> application code except possibly the code you are cancelling need to
> be async-signal-safe, that is, capable of responding sensibly
> (normally via rollbacks of some sort) to an asynchronous
> interruption after every bytecode.

Unless you limit the number of asynchronous cancels to 1, it is actually
substantially worse than that; the rollback code must also be async-signal
safe, for an arbitrary number of signals. I do not know of any programming
language with the control-flow constructs that allow this.

Something like this was proposed for the Sparc version 9 ABI, but as far
as I know, it was dropped. There, the intent was to allow compilers to be
absolutely-positively sure that callee-saves registers were restored after
a non-local transfer of control, even if code elected not to use the
register windows in the usual way. The "control-flow construct" was simply
to allow the handler for a given PC-range to point to the PC-range itself;
thus, the register-restoring sequence is broken up into a series of chunks
of the form

   h1:  ld %r1,[%fp-K1]
   h2:  ld %r2,[%fp-K2]
   h3:  ld %r3,[%fp-K3]

with ranges of the form

   [h1,h1]:h1
   [h2,h2]:h2
   [h3,h3]:h3

And, as has also been pointed out, it does not take an especially clever
bad guy to circumvent Thread.stop():

   class Breeder extends Thread {
     Breeder() { start(); }
     public void run() {
       try {
         try {
           try {
             new Breeder();
           } catch (Throwable t) { new Breeder(); }
         } catch (Throwable t) { new Breeder(); }
       } catch (Throwable t) { new Breeder(); }
     }
   }

or (one of my favorites)

   class Breeder {
     ...
     static void breed() { new Breeder(); }
     public void finalize() {
       breed();
       breed();
       while (true);
     }
   }

or

   while (true) try {
     while (true) try {
       while (true) try {
         while (true) {}
       } catch (Throwable t) {}
     } catch (Throwable t) {}
   } catch (Throwable t) {}

etc. The probability of successfully killing such a thread can be made
arbitrarily small by adding more try-catchall clauses around the inner
loop. I don't know what the heck you do about the evil finalizer problem.

Now, here is what I regard as the good part. Suppose that you add the
async-safe control-flow construct, so that code can always recover no
matter how fast I type control-C or whatever it is that I do to tell it to
go away (I'm an impatient guy). Once you do that, it is now possible to
write 100% unkillable code, because it could take the form

   try {} absolutelypositivelycomplete {
     while (true);
   }

The semantics of "absolutelypositivelycomplete stmt" are that if an
exception arrives during the execution of stmt, it restarts from the
beginning, just like the register-restoring handlers in the example above.
Once that loop is entered, the a.p.c. clause will not complete until the
loop exits normally. And, as if this were not entertaining enough, it
appears that bytecodes expressing just this unkillable code are legal
today; I looked in the VM manual (quickly) and it did not say anything
about handlers lying outside the ranges that they protect.

So, it looks to me like Thread.stop() is in fact not very useful at all
for stopping malicious applets. I don't really know what sort of a club is
required to beat it into people's heads that Thread.stop() is both
dangerous and useless. Either the code cooperates, or it does not, and if
it cooperates, it is vastly easier to write code that checks for
interrupts than it is to write code that is async-signal safe.
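For contrast, here is a minimal sketch of what the cooperative style might
look like (the class and method names are made up for illustration): the
worker polls its own interrupt status at the top of each loop and does its
cleanup on its own terms, with no async-signal-safety required anywhere.

   class CooperativeWorker implements Runnable {
     public void run() {
       // Poll at the top of every potentially long-running loop.
       while (!Thread.currentThread().isInterrupted()) {
         doOneUnitOfWork();          // hypothetical bounded unit of work
       }
       cleanUp();                    // runs to completion, never torn mid-flight
     }
     private void doOneUnitOfWork() { /* ... */ }
     private void cleanUp()         { /* ... */ }
   }

Stopping such a thread is then just a call to interrupt() on it; the worst
case is one extra unit of work before it winds down.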
------

I can imagine, actually, certain ways to limit these
absolutelypositivelycomplete clauses. One is to calculate, before entering
one, the number of computational units (whatever THOSE are) that will be
consumed by the code, and if that number is exceeded, the code is
arbitrarily killed. One plausible measure of a computational unit is
"backedge or call", which corresponds to the places where a Java
implementation that polls for thread switches and interrupts must check
for these things (I am ignoring finalization and thread creation in my
accounting).

Of course, the amount of computing that must be performed might depend on
the inputs, so that number might itself depend upon some calculation,
whose length must also be bounded. There is probably some interesting
computational hierarchy here, i.e.

   b0: all the problems that consume constant resources
   b1: all the problems that consume resources for which the bounds
       calculation is an instance of b0
   b2: all the problems that consume resources for which the bounds
       calculation is an instance of b1
   etc.

but I don't have the time to figure it out today. I cannot tell if this is
of any more than theoretical interest, given the difficulty of writing
async-safe code. Actually, if you permit the "constant time" use of
multiplication and exponentiation, ignoring for the moment the
computational power inherent in those operations, then everything in P is
in b1, and in fact even problems that are inherently exponential are in
b1. However, if what you want is a somewhat tighter bound (for instance,
ML type inference is worst-case gruesome, common-case linear), this might
get more interesting.

This also gets more interesting if code is required to parcel out some of
that time to objects with finalizers; you might not know how much "time"
the finalizer is going to need to run. It seems like we need some sort of
an interface to add finalizer-time to an object as the program executes
(and, perhaps, to take that time back?). Dang, this sounds complicated,
and not too cheap.

On the other hand, it is a relatively easy problem (within a compiler) to
tell whether any given piece of code contains a call to poll for
interruptions in every potentially-infinite loop (and that, if the flag is
set, the code branches out of the loop), provided you first establish the
invariant that all subroutines poll for interrupts. I think that doing
this requires mighty good support for checking for interruptions, else it
will be too heavyweight for anyone to really consider doing.

Note that this still leaves you with the impatient-person problem; if all
polls leave their loops, then it is possible that I could interrupt some
important cleanup behavior by pounding on the ctrl-C key. I suppose, if
you wanted to, that these two things could be combined: all loops poll,
but once you enter cleanup code, a second interrupt will not be delivered
until that cleanup code has exceeded its computational limit. It might
also be wise to make the limit visible to the interrupt-generator, since
if the cleanup code declared that it would complete in 2 million seconds,
you might want to know about that.
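To make the combined scheme a little more concrete, here is a rough sketch
(the names and the little library are invented for the example; a real
implementation would keep this state per-thread and have the compiler
plant the poll calls at backedges and call sites):

   class Budget {
     static volatile boolean interruptRequested; // set asynchronously by the stopper
     static long cleanupUnits = 0;               // "backedge or call" units granted to cleanup;
                                                 // readable by the interrupt-generator
     static boolean inCleanup = false;

     // The compiler would insert a call like this at every backedge and call.
     static void poll() throws InterruptedException {
       if (inCleanup) {
         cleanupUnits--;                          // cleanup spends its budget on every poll
         if (interruptRequested && cleanupUnits < 0)
           throw new InterruptedException();      // second interrupt, limit exceeded
       } else if (interruptRequested) {
         throw new InterruptedException();        // ordinary code leaves its loop at once
       }
     }

     // Cleanup declares its computational limit up front; a later interrupt
     // is not delivered until that limit is exceeded.
     static void beginCleanup(long units) {
       interruptRequested = false;                // the first interrupt has been consumed
       cleanupUnits = units;
       inCleanup = true;
     }
     static void endCleanup() { inCleanup = false; }
   }

Ordinary loops would call Budget.poll() on every backedge and leave via
the exception; cleanup code would bracket itself with beginCleanup(n) and
endCleanup(), so a second ctrl-C only takes effect once the declared n
units have been spent, and n is there to be shown to the impatient person
who asked for the stop.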
David Chase
NaturalBridge LLC

----------------------------------------------------------------------------
To unsubscribe (or other requests) mailto:majordomo@media.mit.edu
List archives, FAQ, member list, etc. http://gee.cs.oswego.edu/dl/javares/
This archive was generated by hypermail 2.0b3 on Thu Mar 18 1999 - 11:39:54 EST