
Comparing jsr166/src/jsr166y/ForkJoinWorkerThread.java (file contents):
Revision 1.4 by dl, Wed Jan 7 20:51:36 2009 UTC vs.
Revision 1.39 by dl, Sat Jul 24 20:28:18 2010 UTC

# Line 5 | Line 5
5   */
6  
7   package jsr166y;
8 < import java.util.*;
8 >
9   import java.util.concurrent.*;
10 < import java.util.concurrent.atomic.*;
11 < import java.util.concurrent.locks.*;
12 < import sun.misc.Unsafe;
13 < import java.lang.reflect.*;
10 >
11 > import java.util.Random;
12 > import java.util.Collection;
13 > import java.util.concurrent.locks.LockSupport;
14  
15   /**
16   * A thread managed by a {@link ForkJoinPool}.  This class is
17   * subclassable solely for the sake of adding functionality -- there
18 < * are no overridable methods dealing with scheduling or
19 < * execution. However, you can override initialization and termination
20 < * cleanup methods surrounding the main task processing loop.  If you
21 < * do create such a subclass, you will also need to supply a custom
22 < * ForkJoinWorkerThreadFactory to use it in a ForkJoinPool.
23 < *
24 < * <p>This class also provides methods for generating per-thread
25 < * random numbers, with the same properties as {@link
26 < * java.util.Random} but with each generator isolated from those of
27 < * other threads.
18 > * are no overridable methods dealing with scheduling or execution.
19 > * However, you can override initialization and termination methods
20 > * surrounding the main task processing loop.  If you do create such a
21 > * subclass, you will also need to supply a custom {@link
22 > * ForkJoinPool.ForkJoinWorkerThreadFactory} to use it in a {@code
23 > * ForkJoinPool}.
24 > *
25 > * @since 1.7
26 > * @author Doug Lea
27   */
28   public class ForkJoinWorkerThread extends Thread {
29      /*
30 <     * Algorithm overview:
30 >     * Overview:
31 >     *
32 >     * ForkJoinWorkerThreads are managed by ForkJoinPools and perform
33 >     * ForkJoinTasks. This class includes bookkeeping in support of
34 >     * worker activation, suspension, and lifecycle control described
35 >     * in more detail in the internal documentation of class
36 >     * ForkJoinPool. And as described further below, this class also
37 >     * includes special-cased support for some ForkJoinTask
38 >     * methods. But the main mechanics involve work-stealing:
39       *
40 <     * 1. Work-Stealing: Work-stealing queues are special forms of
41 <     * Deques that support only three of the four possible
42 <     * end-operations -- push, pop, and deq (aka steal), and only do
43 <     * so under the constraints that push and pop are called only from
44 <     * the owning thread, while deq may be called from other threads.
45 <     * (If you are unfamiliar with them, you probably want to read
46 <     * Herlihy and Shavit's book "The Art of Multiprocessor
47 <     * programming", chapter 16 describing these in more detail before
48 <     * proceeding.)  The main work-stealing queue design is roughly
49 <     * similar to "Dynamic Circular Work-Stealing Deque" by David
50 <     * Chase and Yossi Lev, SPAA 2005
51 <     * (http://research.sun.com/scalable/pubs/index.html).  The main
52 <     * difference ultimately stems from gc requirements that we null
53 <     * out taken slots as soon as we can, to maintain as small a
54 <     * footprint as possible even in programs generating huge numbers
55 <     * of tasks. To accomplish this, we shift the CAS arbitrating pop
56 <     * vs deq (steal) from being on the indices ("base" and "sp") to
57 <     * the slots themselves (mainly via method "casSlotNull()"). So,
58 <     * both a successful pop and deq mainly entail CAS'ing a nonnull
59 <     * slot to null.  Because we rely on CASes of references, we do
60 <     * not need tag bits on base or sp.  They are simple ints as used
61 <     * in any circular array-based queue (see for example ArrayDeque).
62 <     * Updates to the indices must still be ordered in a way that
63 <     * guarantees that (sp - base) > 0 means the queue is empty, but
64 <     * otherwise may err on the side of possibly making the queue
65 <     * appear nonempty when a push, pop, or deq have not fully
66 <     * committed. Note that this means that the deq operation,
67 <     * considered individually, is not wait-free. One thief cannot
68 <     * successfully continue until another in-progress one (or, if
69 <     * previously empty, a push) completes.  However, in the
70 <     * aggregate, we ensure at least probablistic non-blockingness. If
71 <     * an attempted steal fails, a thief always chooses a different
40 >     * Work-stealing queues are special forms of Deques that support
41 >     * only three of the four possible end-operations -- push, pop,
42 >     * and deq (aka steal), under the further constraints that push
43 >     * and pop are called only from the owning thread, while deq may
44 >     * be called from other threads.  (If you are unfamiliar with
45 >     * them, you probably want to read Herlihy and Shavit's book "The
46 >     * Art of Multiprocessor Programming", chapter 16 describing these
47 >     * in more detail before proceeding.)  The main work-stealing
48 >     * queue design is roughly similar to those in the papers "Dynamic
49 >     * Circular Work-Stealing Deque" by Chase and Lev, SPAA 2005
50 >     * (http://research.sun.com/scalable/pubs/index.html) and
51 >     * "Idempotent work stealing" by Michael, Saraswat, and Vechev,
52 >     * PPoPP 2009 (http://portal.acm.org/citation.cfm?id=1504186).
53 >     * The main differences ultimately stem from gc requirements that
54 >     * we null out taken slots as soon as we can, to maintain as small
55 >     * a footprint as possible even in programs generating huge
56 >     * numbers of tasks. To accomplish this, we shift the CAS
57 >     * arbitrating pop vs deq (steal) from being on the indices
58 >     * ("base" and "sp") to the slots themselves (mainly via method
59 >     * "casSlotNull()"). So, both a successful pop and deq mainly
60 >     * entail a CAS of a slot from non-null to null.  Because we rely
61 >     * on CASes of references, we do not need tag bits on base or sp.
62 >     * They are simple ints as used in any circular array-based queue
63 >     * (see for example ArrayDeque).  Updates to the indices must
64 >     * still be ordered in a way that guarantees that sp == base means
65 >     * the queue is empty, but otherwise may err on the side of
66 >     * possibly making the queue appear nonempty when a push, pop, or
67 >     * deq have not fully committed. Note that this means that the deq
68 >     * operation, considered individually, is not wait-free. One thief
69 >     * cannot successfully continue until another in-progress one (or,
70 >     * if previously empty, a push) completes.  However, in the
71 >     * aggregate, we ensure at least probabilistic non-blockingness.
72 >     * If an attempted steal fails, a thief always chooses a different
73       * random victim target to try next. So, in order for one thief to
74       * progress, it suffices for any in-progress deq or new push on
75       * any empty queue to complete. One reason this works well here is
76       * that apparently-nonempty often means soon-to-be-stealable,
77 <     * which gives threads a chance to activate if necessary before
78 <     * stealing (see below).
77 >     * which gives threads a chance to set activation status if
78 >     * necessary before stealing.
79 >     *
80 >     * This approach also enables support for "async mode" where local
81 >     * task processing is in FIFO, not LIFO order; simply by using a
82 >     * version of deq rather than pop when locallyFifo is true (as set
83 >     * by the ForkJoinPool).  This allows use in message-passing
84 >     * frameworks in which tasks are never joined.
85 >     *
86 >     * When a worker would otherwise be blocked waiting to join a
87 >     * task, it first tries a form of linear helping: Each worker
88 >     * records (in field currentSteal) the most recent task it stole
89 >     * from some other worker. Plus, it records (in field currentJoin)
90 >     * the task it is currently actively joining. Method joinTask uses
91 >     * these markers to try to find a worker to help (i.e., steal back
92 >     * a task from and execute it) that could hasten completion of the
93 >     * actively joined task. In essence, the joiner executes a task
94 >     * that would be on its own local deque had the to-be-joined task
95 >     * not been stolen. This may be seen as a conservative variant of
96 >     * the approach in Wagner & Calder "Leapfrogging: a portable
97 >     * technique for implementing efficient futures" SIGPLAN Notices,
98 >     * 1993 (http://portal.acm.org/citation.cfm?id=155354). It differs
99 >     * in that: (1) We only maintain dependency links across workers
100 >     * upon steals, rather than use per-task bookkeeping.  This may
101 >     * require a linear scan of workers array to locate stealers, but
102 >     * usually doesn't because stealers leave hints (that may become
103 >     * stale/wrong) of where to locate them. This isolates cost to
104 >     * when it is needed, rather than adding to per-task overhead.
105 >     * (2) It is "shallow", ignoring nesting and potentially cyclic
106 >     * mutual steals.  (3) It is intentionally racy: field currentJoin
107 >     * is updated only while actively joining, which means that we
108 >     * miss links in the chain during long-lived tasks, GC stalls etc
109 >     * (which is OK since blocking in such cases is usually a good
110 >     * idea).  (4) We bound the number of attempts to find work (see
111 >     * MAX_HELP_DEPTH) and fall back to suspending the worker and if
112 >     * necessary replacing it with a spare (see
113 >     * ForkJoinPool.tryAwaitJoin).
114       *
115 <     * Efficient implementation of this approach currently relies on
116 <     * an uncomfortable amount of "Unsafe" mechanics. To maintain
115 >     * Efficient implementation of these algorithms currently relies
116 >     * on an uncomfortable amount of "Unsafe" mechanics. To maintain
117       * correct orderings, reads and writes of variable base require
118 <     * volatile ordering.  Variable sp does not require volatile write
119 <     * but needs cheaper store-ordering on writes.  Because they are
120 <     * protected by volatile base reads, reads of the queue array and
121 <     * its slots do not need volatile load semantics, but writes (in
122 <     * push) require store order and CASes (in pop and deq) require
123 <     * (volatile) CAS semantics. Since these combinations aren't
124 <     * supported using ordinary volatiles, the only way to accomplish
125 <     * these effciently is to use direct Unsafe calls. (Using external
126 <     * AtomicIntegers and AtomicReferenceArrays for the indices and
127 <     * array is significantly slower because of memory locality and
128 <     * indirection effects.) Further, performance on most platforms is
129 <     * very sensitive to placement and sizing of the (resizable) queue
130 <     * array.  Even though these queues don't usually become all that
131 <     * big, the initial size must be large enough to counteract cache
118 >     * volatile ordering.  Variable sp does not require volatile
119 >     * writes but still needs store-ordering, which we accomplish by
120 >     * pre-incrementing sp before filling the slot with an ordered
121 >     * store.  (Pre-incrementing also enables backouts used in
122 >     * joinTask.)  Because they are protected by volatile base reads,
123 >     * reads of the queue array and its slots by other threads do not
124 >     * need volatile load semantics, but writes (in push) require
125 >     * store order and CASes (in pop and deq) require (volatile) CAS
126 >     * semantics.  (Michael, Saraswat, and Vechev's algorithm has
127 >     * similar properties, but without support for nulling slots.)
128 >     * Since these combinations aren't supported using ordinary
129 >     * volatiles, the only way to accomplish these efficiently is to
130 >     * use direct Unsafe calls. (Using external AtomicIntegers and
131 >     * AtomicReferenceArrays for the indices and array is
132 >     * significantly slower because of memory locality and indirection
133 >     * effects.)
134 >     *
135 >     * Further, performance on most platforms is very sensitive to
136 >     * placement and sizing of the (resizable) queue array.  Even
137 >     * though these queues don't usually become all that big, the
138 >     * initial size must be large enough to counteract cache
139       * contention effects across multiple queues (especially in the
140       * presence of GC cardmarking). Also, to improve thread-locality,
141 <     * queues are currently initialized immediately after the thread
142 <     * gets the initial signal to start processing tasks.  However,
143 <     * all queue-related methods except pushTask are written in a way
144 <     * that allows them to instead be lazily allocated and/or disposed
145 <     * of when empty. All together, these low-level implementation
146 <     * choices produce as much as a factor of 4 performance
147 <     * improvement compared to naive implementations, and enable the
148 <     * processing of billions of tasks per second, sometimes at the
149 <     * expense of ugliness.
150 <     *
151 <     * 2. Run control: The primary run control is based on a global
152 <     * counter (activeCount) held by the pool. It uses an algorithm
153 <     * similar to that in Herlihy and Shavit section 17.6 to cause
154 <     * threads to eventually block when all threads declare they are
155 <     * inactive. (See variable "scans".)  For this to work, threads
156 <     * must be declared active when executing tasks, and before
157 <     * stealing a task. They must be inactive before blocking on the
158 <     * Pool Barrier (awaiting a new submission or other Pool
159 <     * event). In between, there is some free play which we take
160 <     * advantage of to avoid contention and rapid flickering of the
161 <     * global activeCount: If inactive, we activate only if a victim
162 <     * queue appears to be nonempty (see above).  Similarly, a thread
163 <     * tries to inactivate only after a full scan of other threads.
164 <     * The net effect is that contention on activeCount is rarely a
165 <     * measurable performance issue. (There are also a few other cases
166 <     * where we scan for work rather than retry/block upon
167 <     * contention.)
168 <     *
169 <     * 3. Selection control. We maintain policy of always choosing to
170 <     * run local tasks rather than stealing, and always trying to
171 <     * steal tasks before trying to run a new submission. All steals
172 <     * are currently performed in randomly-chosen deq-order. It may be
173 <     * worthwhile to bias these with locality / anti-locality
124 <     * information, but doing this well probably requires more
125 <     * lower-level information from JVMs than currently provided.
141 >     * queues are initialized after starting.  All together, these
142 >     * low-level implementation choices produce as much as a factor of
143 >     * 4 performance improvement compared to naive implementations,
144 >     * and enable the processing of billions of tasks per second,
145 >     * sometimes at the expense of ugliness.
146 >     */
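As a rough sketch of the deque mechanics described in the comment above (illustrative only: it substitutes AtomicReferenceArray for the Unsafe operations, which as the comment notes is measurably slower, and omits resizing; the class and field names are hypothetical):

    import java.util.concurrent.atomic.AtomicReferenceArray;

    // Both pop and deq (steal) CAS the slot itself from non-null to null;
    // base is volatile, sp is plain and written only by the owner thread.
    class DequeSketch {
        final AtomicReferenceArray<Runnable> slots =
            new AtomicReferenceArray<Runnable>(1 << 13);
        volatile int base;          // next slot to steal from
        int sp;                     // next slot to push to (owner only)

        void push(Runnable t) {     // owner only; no resize in this sketch
            slots.set(sp++ & (slots.length() - 1), t);
        }

        Runnable pop() {            // owner only: LIFO end
            int s = sp;
            if (s != base) {
                int i = (s - 1) & (slots.length() - 1);
                Runnable t = slots.get(i);
                if (t != null && slots.compareAndSet(i, t, null)) {
                    sp = s - 1;
                    return t;
                }
            }
            return null;
        }

        Runnable deq() {            // any thread: FIFO end (steal); also the
            int b = base;           // owner's own poll in "async mode"
            if (b != sp) {
                int i = b & (slots.length() - 1);
                Runnable t = slots.get(i);
                if (t != null && base == b && slots.compareAndSet(i, t, null)) {
                    base = b + 1;   // advance only after winning the slot CAS
                    return t;
                }
            }
            return null;            // empty, or lost the race: caller rescans
        }
    }

A failed deq simply returns null rather than retrying, matching the "probabilistic non-blockingness" point above: the thief just picks another victim.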
147 >
148 >    /**
149 >     * Generator for initial random seeds for random victim
150 >     * selection. This is used only to create initial seeds. Random
151 >     * steals use a cheaper xorshift generator per steal attempt. We
152 >     * expect only rare contention on seedGenerator, so just use a
153 >     * plain Random.
154 >     */
155 >    private static final Random seedGenerator = new Random();
156 >
157 >    /**
158 >     * The timeout value for suspending spares. Spare workers that
159 >     * remain unsignalled for more than this time may be trimmed
160 >     * (killed and removed from pool).  Since our goal is to avoid
161 >     * long-term thread buildup, the exact value of timeout does not
162 >     * matter too much so long as it avoids most false-alarm timeouts
163 >     * under GC stalls or momentarily high system load.
164 >     */
165 >    private static final long SPARE_KEEPALIVE_NANOS =
166 >        5L * 1000L * 1000L * 1000L; // 5 secs
167 >
168 >    /**
169 >     * The maximum stolen->joining link depth allowed in helpJoinTask.
170 >     * Depths for legitimate chains are unbounded, but we use a fixed
171 >     * constant to avoid (otherwise unchecked) cycles and bound
172 >     * staleness of traversal parameters at the expense of sometimes
173 >     * blocking when we could be helping.
174       */
175 +    private static final int MAX_HELP_DEPTH = 8;
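In outline, the bounded helping walk that this constant limits looks something like the following (a heavily simplified, hypothetical sketch, not the actual helpJoinTask; findStealer stands in for the hint-then-linear-scan lookup):

    // Follow stolen->joining links at most MAX_HELP_DEPTH times: find the
    // worker that stole the task we are joining, take a task back from its
    // queue and run it, then chase that worker's own currentJoin link.
    void helpSketch(ForkJoinTask<?> joinMe, ForkJoinWorkerThread[] ws) {
        ForkJoinTask<?> task = joinMe;
        for (int d = 0; d < MAX_HELP_DEPTH && task.status >= 0; ++d) {
            ForkJoinWorkerThread v = findStealer(task, ws); // hypothetical
            if (v == null)
                break;                 // hint stale or chain broken: give up
            ForkJoinTask<?> t = v.deqTask();
            if (t != null)
                t.tryExec();           // run work displaced by the steal
            if ((task = v.currentJoin) == null)
                break;                 // v is not joining: nothing to chase
        }
    }

When the walk gives up, the caller falls back to pool.tryAwaitJoin, which may suspend this worker and resume or create a spare.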
176  
177      /**
178       * Capacity of work-stealing queue array upon initialization.
179 <     * Must be a power of two. Initial size must be at least 2, but is
179 >     * Must be a power of two. Initial size must be at least 4, but is
180       * padded to minimize cache effects.
181       */
182      private static final int INITIAL_QUEUE_CAPACITY = 1 << 13;
183  
184      /**
185       * Maximum work-stealing queue array size.  Must be less than or
186 <     * equal to 1 << 30 to ensure lack of index wraparound.
186 >     * equal to 1 << 28 to ensure lack of index wraparound. (This
187 >     * is less than the usual bound, because we need the leftshift
188 >     * by 3, for byte offsets, to stay in int range).
189       */
190 <    private static final int MAXIMUM_QUEUE_CAPACITY = 1 << 30;
190 >    private static final int MAXIMUM_QUEUE_CAPACITY = 1 << 28;
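Spelling out that bound (a worked check, assuming the usual 8-byte reference scale, i.e. qShift == 3):

    // Largest slot index: MAXIMUM_QUEUE_CAPACITY - 1 = (1 << 28) - 1.
    // Largest scaled offset: ((1 << 28) - 1) << 3 = (1 << 31) - 8, which
    // still fits in a non-negative int; with the usual 1 << 30 bound the
    // shift would overflow.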
191  
192      /**
193 <     * Generator of seeds for per-thread random numbers.
193 >     * The pool this thread works in. Accessed directly by ForkJoinTask.
194       */
195 <    private static final Random randomSeedGenerator = new Random();
195 >    final ForkJoinPool pool;
196  
197      /**
198       * The work-stealing queue array. Size must be a power of two.
199 +     * Initialized in onStart, to improve memory locality.
200       */
201      private ForkJoinTask<?>[] queue;
202  
203      /**
152     * Index (mod queue.length) of next queue slot to push to or pop
153     * from. It is written only by owner thread, via ordered store.
154     * Both sp and base are allowed to wrap around on overflow, but
155     * (sp - base) still estimates size.
156     */
157    private volatile int sp;
158
159    /**
204       * Index (mod queue.length) of least valid queue slot, which is
205       * always the next position to steal from if nonempty.
206       */
207      private volatile int base;
208  
209      /**
210 <     * The pool this thread works in.
210 >     * Index (mod queue.length) of next queue slot to push to or pop
211 >     * from. It is written only by owner thread, and accessed by other
212 >     * threads only after reading (volatile) base.  Both sp and base
213 >     * are allowed to wrap around on overflow, but (sp - base) still
214 >     * estimates size.
215       */
216 <    final ForkJoinPool pool;
216 >    private int sp;
217  
218      /**
219 <     * Index of this worker in pool array. Set once by pool before
220 <     * running, and accessed directly by pool during cleanup etc
219 >     * The index of most recent stealer, used as a hint to avoid
220 >     * traversal in method helpJoinTask. This is only a hint because a
221 >     * worker might have had multiple steals and this only holds one
222 >     * of them (usually the most current). Declared non-volatile,
223 >     * relying on other prevailing sync to keep reasonably current.
224       */
225 <    int poolIndex;
225 >    private int stealHint;
226  
227      /**
228 <     * Run state of this worker. Supports simple versions of the usual
229 <     * shutdown/shutdownNow control.
228 >     * Run state of this worker. In addition to the usual run levels,
229 >     * tracks if this worker is suspended as a spare, and if it was
230 >     * killed (trimmed) while suspended. However, "active" status is
231 >     * maintained separately.
232       */
233      private volatile int runState;
234  
235 <    // Runstate values. Order matters
236 <    private static final int RUNNING     = 0;
237 <    private static final int SHUTDOWN    = 1;
238 <    private static final int TERMINATING = 2;
239 <    private static final int TERMINATED  = 3;
235 >    private static final int TERMINATING = 0x01;
236 >    private static final int TERMINATED  = 0x02;
237 >    private static final int SUSPENDED   = 0x04; // inactive spare
238 >    private static final int TRIMMED     = 0x08; // killed while suspended
239 >
240 >    /**
241 >     * Number of LockSupport.park calls to block this thread for
242 >     * suspension or event waits. Used for internal instrumentation;
243 >     * currently not exported but included because volatile write upon
244 >     * park also provides a workaround for a JVM bug.
245 >     */
246 >    volatile int parkCount;
247 >
248 >    /**
249 >     * Number of steals, transferred and reset in pool callbacks
250 >     * when idle. Accessed directly by pool.
251 >     */
252 >    int stealCount;
253 >
254 >    /**
255 >     * Seed for random number generator for choosing steal victims.
256 >     * Uses Marsaglia xorshift. Must be initialized as nonzero.
257 >     */
258 >    private int seed;
259  
260      /**
261       * Activity status. When true, this worker is considered active.
262 <     * Must be false upon construction. It must be true when executing
191 <     * tasks, and BEFORE stealing a task. It must be false before
192 <     * blocking on the Pool Barrier.
262 >     * Accessed directly by pool.  Must be false upon construction.
263       */
264 <    private boolean active;
264 >    boolean active;
265  
266      /**
267 <     * Number of steals, transferred to pool when idle
267 >     * True if use local fifo, not default lifo, for local polling.
268 >     * Shadows value from ForkJoinPool.
269       */
270 <    private int stealCount;
270 >    private final boolean locallyFifo;
271  
272      /**
273 <     * Seed for random number generator for choosing steal victims
273 >     * Index of this worker in pool array. Set once by pool before
274 >     * running, and accessed directly by pool to locate this worker in
275 >     * its workers array.
276       */
277 <    private int randomVictimSeed;
277 >    int poolIndex;
278  
279      /**
280 <     * Seed for embedded Jurandom
280 >     * The last pool event waited for. Accessed only by pool in
281 >     * callback methods invoked within this thread.
282       */
283 <    private long juRandomSeed;
283 >    int lastEventCount;
284  
285      /**
286 <     * The last barrier event waited for
286 >     * Encoded index and event count of next event waiter. Used only
287 >     * by ForkJoinPool for managing event waiters.
288       */
289 <    private long eventCount;
289 >    volatile long nextWaiter;
290 >
291 >    /**
292 >     * The task currently being joined, set only when actively trying
293 >     * to help a stealer (see helpJoinTask). Written only by current
294 >     * thread, but read by others.
295 >     */
296 >    private volatile ForkJoinTask<?> currentJoin;
297 >
298 >    /**
299 >     * The task most recently stolen from another worker (or
300 >     * submission queue).  Not volatile because always read/written in
301 >     * presence of related volatiles in those cases where it matters.
302 >     */
303 >    private ForkJoinTask<?> currentSteal;
304  
305      /**
306       * Creates a ForkJoinWorkerThread operating in the given pool.
307 +     *
308       * @param pool the pool this thread works in
309       * @throws NullPointerException if pool is null
310       */
311      protected ForkJoinWorkerThread(ForkJoinPool pool) {
222        if (pool == null) throw new NullPointerException();
312          this.pool = pool;
313 <        // remaining initialization deferred to onStart
313 >        this.locallyFifo = pool.locallyFifo;
314 >        // To avoid exposing construction details to subclasses,
315 >        // remaining initialization is in start() and onStart()
316 >    }
317 >
318 >    /**
319 >     * Performs additional initialization and starts this thread.
320 >     */
321 >    final void start(int poolIndex, UncaughtExceptionHandler ueh) {
322 >        this.poolIndex = poolIndex;
323 >        setDaemon(true);
324 >        if (ueh != null)
325 >            setUncaughtExceptionHandler(ueh);
326 >        start();
327      }
328  
329 <    // public access methods
329 >    // Public/protected methods
330  
331      /**
332 <     * Returns the pool hosting this thread
332 >     * Returns the pool hosting this thread.
333 >     *
334       * @return the pool
335       */
336      public ForkJoinPool getPool() {
# Line 239 | Line 342 | public class ForkJoinWorkerThread extend
342       * returned value ranges from zero to the maximum number of
343       * threads (minus one) that have ever been created in the pool.
344       * This method may be useful for applications that track status or
345 <     * collect results on a per-worker basis.
346 <     * @return the index number.
345 >     * collect results per-worker rather than per-task.
346 >     *
347 >     * @return the index number
348       */
349      public int getPoolIndex() {
350          return poolIndex;
351      }
352  
353 <    //  Access methods used by Pool
353 >    /**
354 >     * Initializes internal state after construction but before
355 >     * processing any tasks. If you override this method, you must
356 >     * invoke super.onStart() at the beginning of the method.
357 >     * Initialization requires care: Most fields must have legal
358 >     * default values, to ensure that attempted accesses from other
359 >     * threads work correctly even before this thread starts
360 >     * processing tasks.
361 >     */
362 >    protected void onStart() {
363 >        int rs = seedGenerator.nextInt();
364 >        seed = rs == 0? 1 : rs; // seed must be nonzero
365 >
366 >        // Allocate name string and arrays in this thread
367 >        String pid = Integer.toString(pool.getPoolNumber());
368 >        String wid = Integer.toString(poolIndex);
369 >        setName("ForkJoinPool-" + pid + "-worker-" + wid);
370 >
371 >        queue = new ForkJoinTask<?>[INITIAL_QUEUE_CAPACITY];
372 >    }
373 >
374 >    /**
375 >     * Performs cleanup associated with termination of this worker
376 >     * thread.  If you override this method, you must invoke
377 >     * {@code super.onTermination} at the end of the overridden method.
378 >     *
379 >     * @param exception the exception causing this thread to abort due
380 >     * to an unrecoverable error, or {@code null} if completed normally
381 >     */
382 >    protected void onTermination(Throwable exception) {
383 >        try {
384 >            cancelTasks();
385 >            setTerminated();
386 >            pool.workerTerminated(this);
387 >        } catch (Throwable ex) {        // Shouldn't ever happen
388 >             if (exception == null)      // but if so, at least rethrow it
389 >                exception = ex;
390 >        } finally {
391 >            if (exception != null)
392 >                UNSAFE.throwException(exception);
393 >        }
394 >    }
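A minimal sketch of the override contract that onStart and onTermination document (the subclass and factory names here are hypothetical; only the super-call placement is prescribed):

    class MyWorker extends ForkJoinWorkerThread {
        MyWorker(ForkJoinPool pool) { super(pool); }
        protected void onStart() {
            super.onStart();                 // required: first statement
            // per-thread initialization goes here
        }
        protected void onTermination(Throwable exception) {
            // per-thread cleanup goes here
            super.onTermination(exception);  // required: last statement
        }
    }

    // Installed via the custom factory the class javadoc calls for:
    ForkJoinPool.ForkJoinWorkerThreadFactory factory =
        new ForkJoinPool.ForkJoinWorkerThreadFactory() {
            public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
                return new MyWorker(pool);
            }
        };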
395  
396      /**
397 <     * Get and clear steal count for accumulation by pool.  Called
398 <     * only when known to be idle (in pool.sync and termination).
397 >     * This method is required to be public, but should never be
398 >     * called explicitly. It performs the main run loop to execute
399 >     * ForkJoinTasks.
400       */
401 <    final int getAndClearStealCount() {
402 <        int sc = stealCount;
403 <        stealCount = 0;
404 <        return sc;
401 >    public void run() {
402 >        Throwable exception = null;
403 >        try {
404 >            onStart();
405 >            mainLoop();
406 >        } catch (Throwable ex) {
407 >            exception = ex;
408 >        } finally {
409 >            onTermination(exception);
410 >        }
411      }
412  
413 +    // helpers for run()
414 +
415      /**
416 <     * Returns estimate of the number of tasks in the queue, without
263 <     * correcting for transient negative values
416 >     * Finds and executes tasks and checks status while running.
417       */
418 <    final int getRawQueueSize() {
419 <        return sp - base;
418 >    private void mainLoop() {
419 >        int emptyScans = 0; // consecutive times failed to find work
420 >        ForkJoinPool p = pool;
421 >        for (;;) {
422 >            p.preStep(this, emptyScans);
423 >            if (runState != 0)
424 >                return;
425 >            ForkJoinTask<?> t; // try to get and run stolen or submitted task
426 >            if ((t = scan()) != null || (t = pollSubmission()) != null) {
427 >                t.tryExec();
428 >                if (base != sp)
429 >                    runLocalTasks();
430 >                currentSteal = null;
431 >                emptyScans = 0;
432 >            }
433 >            else
434 >                ++emptyScans;
435 >        }
436      }
437  
438 <    // Intrinsics-based support for queue operations.
439 <    // Currently these three (setSp, setSlot, casSlotNull) are
440 <    // usually manually inlined to improve performance
438 >    /**
439 >     * Runs local tasks until queue is empty or shut down.  Call only
440 >     * while active.
441 >     */
442 >    private void runLocalTasks() {
443 >        while (runState == 0) {
444 >            ForkJoinTask<?> t = locallyFifo? locallyDeqTask() : popTask();
445 >            if (t != null)
446 >                t.tryExec();
447 >            else if (base == sp)
448 >                break;
449 >        }
450 >    }
451  
452      /**
453 <     * Sets sp in store-order.
453 >     * If a submission exists, tries to activate and take it.
454 >     *
455 >     * @return a task, if available
456       */
457 <    private void setSp(int s) {
458 <        _unsafe.putOrderedInt(this, spOffset, s);
457 >    private ForkJoinTask<?> pollSubmission() {
458 >        ForkJoinPool p = pool;
459 >        while (p.hasQueuedSubmissions()) {
460 >            if (active || (active = p.tryIncrementActiveCount())) {
461 >                ForkJoinTask<?> t = p.pollSubmission();
462 >                if (t != null) {
463 >                    currentSteal = t;
464 >                    return t;
465 >                }
466 >                return scan(); // if missed, rescan
467 >            }
468 >        }
469 >        return null;
470      }
471  
472 +    /*
473 +     * Intrinsics-based atomic writes for queue slots. These are
474 +     * basically the same as methods in AtomicReferenceArray, but
475 +     * specialized for (1) ForkJoinTask elements (2) requirement that
476 +     * nullness and bounds checks have already been performed by
477 +     * callers and (3) effective offsets are known not to overflow
478 +     * from int to long (because of MAXIMUM_QUEUE_CAPACITY). We don't
479 +     * need a corresponding version for reads: plain array reads are OK
480 +     * because they are protected by other volatile reads and are
481 +     * confirmed by CASes.
482 +     *
483 +     * Most uses don't actually call these methods, but instead contain
484 +     * inlined forms that enable more predictable optimization.  We
485 +     * don't define the version of write used in pushTask at all, but
486 +     * instead inline there a store-fenced array slot write.
487 +     */
488 +
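For orientation, the offset bookkeeping these methods assume looks roughly like this (the actual declarations live near the end of the file, outside this diff; this follows the usual jsr166 pattern rather than quoting it):

    // qBase: byte offset of slot 0 of a ForkJoinTask[]; qShift: log2 of
    // the per-slot scale, so slot i lives at byte (i << qShift) + qBase.
    private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
    private static final long qBase =
        UNSAFE.arrayBaseOffset(ForkJoinTask[].class);
    private static final int qShift;
    static {
        int s = UNSAFE.arrayIndexScale(ForkJoinTask[].class);
        if ((s & (s - 1)) != 0)
            throw new Error("queue slot scale not a power of two");
        qShift = 31 - Integer.numberOfLeadingZeros(s);
    }

(Production code typically obtains Unsafe via a doPrivileged fallback, since Unsafe.getUnsafe throws SecurityException outside the boot classpath.)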
489      /**
490 <     * Add in store-order the given task at given slot of q to
491 <     * null. Caller must ensure q is nonnull and index is in range.
490 >     * CASes slot i of array q from t to null. Caller must ensure q is
491 >     * non-null and index is in range.
492       */
493 <    private static void setSlot(ForkJoinTask<?>[] q, int i,
494 <                                ForkJoinTask<?> t){
495 <        _unsafe.putOrderedObject(q, (i << qShift) + qBase, t);
493 >    private static final boolean casSlotNull(ForkJoinTask<?>[] q, int i,
494 >                                             ForkJoinTask<?> t) {
495 >        return UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null);
496      }
497  
498      /**
499 <     * CAS given slot of q to null. Caller must ensure q is nonnull
500 <     * and index is in range.
499 >     * Performs a volatile write of the given task at given slot of
500 >     * array q.  Caller must ensure q is non-null and index is in
501 >     * range. This method is used only during resets and backouts.
502       */
503 <    private static boolean casSlotNull(ForkJoinTask<?>[] q, int i,
504 <                                       ForkJoinTask<?> t) {
505 <        return _unsafe.compareAndSwapObject(q, (i << qShift) + qBase, t, null);
503 >    private static final void writeSlot(ForkJoinTask<?>[] q, int i,
504 >                                              ForkJoinTask<?> t) {
505 >        UNSAFE.putObjectVolatile(q, (i << qShift) + qBase, t);
506      }
507  
508 <    // Main queue methods
508 >    // queue methods
509  
510      /**
511 <     * Pushes a task. Called only by current thread.
512 <     * @param t the task. Caller must ensure nonnull
511 >     * Pushes a task. Call only from this thread.
512 >     *
513 >     * @param t the task. Caller must ensure non-null.
514       */
515      final void pushTask(ForkJoinTask<?> t) {
516          ForkJoinTask<?>[] q = queue;
517 <        int mask = q.length - 1;
518 <        int s = sp;
519 <        _unsafe.putOrderedObject(q, ((s & mask) << qShift) + qBase, t);
520 <        _unsafe.putOrderedInt(this, spOffset, ++s);
521 <        if ((s -= base) == 1)
522 <            pool.signalNonEmptyWorkerQueue();
523 <        else if (s >= mask)
313 <            growQueue();
517 >        int mask = q.length - 1; // implicit assert q != null
518 >        int s = sp++;            // ok to increment sp before slot write
519 >        UNSAFE.putOrderedObject(q, ((s & mask) << qShift) + qBase, t);
520 >        if ((s -= base) == 0)
521 >            pool.signalWork();   // was empty
522 >        else if (s == mask)
523 >            growQueue();         // is full
524      }
525  
526      /**
527       * Tries to take a task from the base of the queue, failing if
528 <     * either empty or contended.
529 <     * @return a task, or null if none or contended.
528 >     * empty or contended. Note: Specializations of this code appear
529 >     * in locallyDeqTask and elsewhere.
530 >     *
531 >     * @return a task, or null if none or contended
532       */
533 <    private ForkJoinTask<?> deqTask() {
322 <        ForkJoinTask<?>[] q;
533 >    final ForkJoinTask<?> deqTask() {
534          ForkJoinTask<?> t;
535 <        int i;
536 <        int b;
537 <        if (sp != (b = base) &&
535 >        ForkJoinTask<?>[] q;
536 >        int b, i;
537 >        if ((b = base) != sp &&
538              (q = queue) != null && // must read q after b
539 <            (t = q[i = (q.length - 1) & b]) != null &&
540 <            _unsafe.compareAndSwapObject(q, (i << qShift) + qBase, t, null)) {
539 >            (t = q[i = (q.length - 1) & b]) != null && base == b &&
540 >            UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null)) {
541              base = b + 1;
542              return t;
543          }
# Line 334 | Line 545 | public class ForkJoinWorkerThread extend
545      }
546  
547      /**
548 <     * Returns a popped task, or null if empty.  Called only by
549 <     * current thread.
548 >     * Tries to take a task from the base of its own queue. Assumes active
549 >     * status.  Called only by current thread.
550 >     *
551 >     * @return a task, or null if none
552       */
553 <    final ForkJoinTask<?> popTask() {
341 <        ForkJoinTask<?> t;
342 <        int i;
553 >    final ForkJoinTask<?> locallyDeqTask() {
554          ForkJoinTask<?>[] q = queue;
555 <        int mask = q.length - 1;
556 <        int s = sp;
557 <        if (s != base &&
558 <            (t = q[i = (s - 1) & mask]) != null &&
559 <            _unsafe.compareAndSwapObject(q, (i << qShift) + qBase, t, null)) {
560 <            _unsafe.putOrderedInt(this, spOffset, s - 1);
561 <            return t;
555 >        if (q != null) {
556 >            ForkJoinTask<?> t;
557 >            int b, i;
558 >            while (sp != (b = base)) {
559 >                if ((t = q[i = (q.length - 1) & b]) != null && base == b &&
560 >                    UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase,
561 >                                                t, null)) {
562 >                    base = b + 1;
563 >                    return t;
564 >                }
565 >            }
566          }
567          return null;
568      }
569  
570      /**
571 <     * Specialized version of popTask to pop only if
572 <     * topmost element is the given task. Called only
573 <     * by current thread.
574 <     * @param t the task. Caller must ensure nonnull
571 >     * Returns a popped task, or null if empty. Assumes active status.
572 >     * Called only by current thread.
573 >     */
574 >    final ForkJoinTask<?> popTask() {
575 >        int s;
576 >        ForkJoinTask<?>[] q;
577 >        if (base != (s = sp) && (q = queue) != null) {
578 >            int i = (q.length - 1) & --s;
579 >            ForkJoinTask<?> t = q[i];
580 >            if (t != null && UNSAFE.compareAndSwapObject
581 >                (q, (i << qShift) + qBase, t, null)) {
582 >                sp = s;
583 >                return t;
584 >            }
585 >        }
586 >        return null;
587 >    }
588 >
589 >    /**
590 >     * Specialized version of popTask to pop only if topmost element
591 >     * is the given task. Called only by current thread while
592 >     * active.
593 >     *
594 >     * @param t the task. Caller must ensure non-null.
595       */
596      final boolean unpushTask(ForkJoinTask<?> t) {
597 <        ForkJoinTask<?>[] q = queue;
598 <        int mask = q.length - 1;
599 <        int s = sp - 1;
600 <        if (_unsafe.compareAndSwapObject(q, ((s & mask) << qShift) + qBase,
601 <                                         t, null)) {
602 <            _unsafe.putOrderedInt(this, spOffset, s);
597 >        int s;
598 >        ForkJoinTask<?>[] q;
599 >        if (base != (s = sp) && (q = queue) != null &&
600 >            UNSAFE.compareAndSwapObject
601 >            (q, (((q.length - 1) & --s) << qShift) + qBase, t, null)) {
602 >            sp = s;
603              return true;
604          }
605          return false;
606      }
607  
608      /**
609 <     * Returns next task to pop.
609 >     * Returns the next task, or null if empty or contended.
610       */
611      final ForkJoinTask<?> peekTask() {
612          ForkJoinTask<?>[] q = queue;
613 <        return q == null? null : q[(sp - 1) & (q.length - 1)];
613 >        if (q == null)
614 >            return null;
615 >        int mask = q.length - 1;
616 >        int i = locallyFifo ? base : (sp - 1);
617 >        return q[i & mask];
618      }
619  
620      /**
# Line 400 | Line 639 | public class ForkJoinWorkerThread extend
639              ForkJoinTask<?> t = oldQ[oldIndex];
640              if (t != null && !casSlotNull(oldQ, oldIndex, t))
641                  t = null;
642 <            setSlot(newQ, b & newMask, t);
642 >            writeSlot(newQ, b & newMask, t);
643          } while (++b != bf);
644 <        pool.signalIdleWorkers(false);
644 >        pool.signalWork();
645      }
646  
408    // Runstate management
409
410    final boolean isShutdown()    { return runState >= SHUTDOWN;  }
411    final boolean isTerminating() { return runState >= TERMINATING;  }
412    final boolean isTerminated()  { return runState == TERMINATED; }
413    final boolean shutdown()      { return transitionRunStateTo(SHUTDOWN); }
414    final boolean shutdownNow()   { return transitionRunStateTo(TERMINATING); }
415
647      /**
648 <     * Transition to at least the given state. Return true if not
649 <     * already at least given state.
648 >     * Computes next value for random victim probe in scan().  Scans
649 >     * don't require a very high quality generator, but also not a
650 >     * crummy one.  Marsaglia xor-shift is cheap and works well enough.
651 >     * Note: This is manually inlined in scan().
652       */
653 <    private boolean transitionRunStateTo(int state) {
654 <        for (;;) {
655 <            int s = runState;
656 <            if (s >= state)
424 <                return false;
425 <            if (_unsafe.compareAndSwapInt(this, runStateOffset, s, state))
426 <                return true;
427 <        }
653 >    private static final int xorShift(int r) {
654 >        r ^= r << 13;
655 >        r ^= r >>> 17;
656 >        return r ^ (r << 5);
657      }
658  
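One property worth making explicit (an observation, not stated in the source): zero is a fixed point of this recurrence, which is why onStart() forces the seed to be nonzero.

    // Every shift of 0 is 0, so xorShift(0) == 0 forever; any nonzero seed
    // instead cycles through all 2^32 - 1 nonzero ints before repeating
    // (Marsaglia's (13, 17, 5) triple is full-period).
    assert xorShift(0) == 0;
    assert xorShift(1) != 0;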
659      /**
660 <     * Ensure status is active and if necessary adjust pool active count
660 >     * Tries to steal a task from another worker. Starts at a random
661 >     * index of workers array, and probes workers until finding one
662 >     * with non-empty queue or finding that all are empty.  It
663 >     * randomly selects the first n probes. If these are empty, it
664 >     * resorts to a circular sweep, which is necessary to accurately
665 >     * set active status. (The circular sweep uses steps of
666 >     * approximately half the array size plus 1, to avoid bias
667 >     * stemming from leftmost packing of the array in ForkJoinPool.)
668 >     *
669 >     * This method must be both fast and quiet -- usually avoiding
670 >     * memory accesses that could disrupt cache sharing, etc., other than
671 >     * those needed to check for and take tasks (or to activate if not
672 >     * already active). This accounts for, among other things,
673 >     * updating random seed in place without storing it until exit.
674 >     *
675 >     * @return a task, or null if none found
676       */
677 <    final void activate() {
678 <        if (!active) {
679 <            active = true;
680 <            pool.incrementActiveCount();
677 >    private ForkJoinTask<?> scan() {
678 >        ForkJoinPool p = pool;
679 >        ForkJoinWorkerThread[] ws;        // worker array
680 >        int n;                            // upper bound of #workers
681 >        if ((ws = p.workers) != null && (n = ws.length) > 1) {
682 >            boolean canSteal = active;    // shadow active status
683 >            int r = seed;                 // extract seed once
684 >            int mask = n - 1;
685 >            int j = -n;                   // loop counter
686 >            int k = r;                    // worker index, random if j < 0
687 >            for (;;) {
688 >                ForkJoinWorkerThread v = ws[k & mask];
689 >                r ^= r << 13; r ^= r >>> 17; r ^= r << 5; // inline xorshift
690 >                if (v != null && v.base != v.sp) {
691 >                    if (canSteal ||       // ensure active status
692 >                        (canSteal = active = p.tryIncrementActiveCount())) {
693 >                        int b = v.base;   // inline specialized deqTask
694 >                        ForkJoinTask<?>[] q;
695 >                        if (b != v.sp && (q = v.queue) != null) {
696 >                            ForkJoinTask<?> t;
697 >                            int i = (q.length - 1) & b;
698 >                            long u = (i << qShift) + qBase; // raw offset
699 >                            if ((t = q[i]) != null && v.base == b &&
700 >                                UNSAFE.compareAndSwapObject(q, u, t, null)) {
701 >                                currentSteal = t;
702 >                                v.stealHint = poolIndex;
703 >                                v.base = b + 1;
704 >                                seed = r;
705 >                                ++stealCount;
706 >                                return t;
707 >                            }
708 >                        }
709 >                    }
710 >                    j = -n;
711 >                    k = r;                // restart on contention
712 >                }
713 >                else if (++j <= 0)
714 >                    k = r;
715 >                else if (j <= n)
716 >                    k += (n >>> 1) | 1;
717 >                else
718 >                    break;
719 >            }
720          }
721 +        return null;
722      }
723  
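A note on the circular sweep step (reasoning implied but not spelled out above): the workers array length n is a power of two here (the k & mask indexing depends on it), so the step (n >>> 1) | 1 is odd and hence coprime to n; repeatedly adding it modulo n therefore visits every index once per n steps.

    // e.g. n = 8: step = (8 >>> 1) | 1 = 5; starting at k = 0 the sweep
    // visits 0, 5, 2, 7, 4, 1, 6, 3 -- all eight indices, well scattered.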
724 +    // Run State management
725 +
726 +    // status check methods used mainly by ForkJoinPool
727 +    final boolean isTerminating() { return (runState & TERMINATING) != 0; }
728 +    final boolean isTerminated()  { return (runState & TERMINATED) != 0; }
729 +    final boolean isSuspended()   { return (runState & SUSPENDED) != 0; }
730 +    final boolean isTrimmed()     { return (runState & TRIMMED) != 0; }
731 +
732      /**
733 <     * Ensure status is inactive and if necessary adjust pool active count
733 >     * Sets state to TERMINATING, also resuming if suspended.
734       */
735 <    final void inactivate() {
736 <        if (active) {
737 <            active = false;
738 <            pool.decrementActiveCount();
735 >    final void shutdown() {
736 >        for (;;) {
737 >            int s = runState;
738 >            if ((s & SUSPENDED) != 0) { // kill and wakeup if suspended
739 >                if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
740 >                                             (s & ~SUSPENDED) |
741 >                                             (TRIMMED|TERMINATING))) {
742 >                    LockSupport.unpark(this);
743 >                    break;
744 >                }
745 >            }
746 >            else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
747 >                                              s | TERMINATING))
748 >                break;
749          }
750      }
751  
450    // Lifecycle methods
451
752      /**
753 <     * Initializes internal state after construction but before
454 <     * processing any tasks. If you override this method, you must
455 <     * invoke super.onStart() at the beginning of the method.
456 <     * Initialization requires care: Most fields must have legal
457 <     * default values, to ensure that attempted accesses from other
458 <     * threads work correctly even before this thread starts
459 <     * processing tasks.
753 >     * Sets state to TERMINATED. Called only by this thread.
754       */
755 <    protected void onStart() {
756 <        juRandomSeed = randomSeedGenerator.nextLong();
757 <        do;while((randomVictimSeed = nextRandomInt()) == 0); // must be nonzero
758 <        if (queue == null)
759 <            queue = new ForkJoinTask<?>[INITIAL_QUEUE_CAPACITY];
755 >    private void setTerminated() {
756 >        int s;
757 >        do {} while (!UNSAFE.compareAndSwapInt(this, runStateOffset,
758 >                                               s = runState,
759 >                                               s | (TERMINATING|TERMINATED)));
760 >    }
761  
762 <        // Heuristically allow one initial thread to warm up; others wait
763 <        if (poolIndex < pool.getParallelism() - 1) {
764 <            eventCount = pool.sync(this, 0);
765 <            activate();
766 <        }
762 >    /**
763 >     * Instrumented version of park used by ForkJoinPool.eventSync.
764 >     */
765 >    final void doPark() {
766 >        ++parkCount;
767 >        LockSupport.park(this);
768      }
769  
770      /**
771 <     * Perform cleanup associated with termination of this worker
476 <     * thread.  If you override this method, you must invoke
477 <     * super.onTermination at the end of the overridden method.
771 >     * If suspended, tries to set status to unsuspended and unparks.
772       *
773 <     * @param exception the exception causing this thread to abort due
480 <     * to an unrecoverable error, or null if completed normally.
773 >     * @return true if successful
774       */
775 <    protected void onTermination(Throwable exception) {
776 <        try {
777 <            clearLocalTasks();
778 <            inactivate();
779 <            cancelTasks();
780 <        } finally {
781 <            terminate(exception);
775 >    final boolean tryResumeSpare() {
776 >        int s = runState;
777 >        if ((s & SUSPENDED) != 0 &&
778 >            UNSAFE.compareAndSwapInt(this, runStateOffset, s,
779 >                                     s & ~SUSPENDED)) {
780 >            LockSupport.unpark(this);
781 >            return true;
782          }
783 +        return false;
784      }
785  
786      /**
787 <     * Notify pool of termination and, if exception is nonnull,
788 <     * rethrow it to trigger this thread's uncaughtExceptionHandler
787 >     * Sets suspended status and blocks as spare until resumed,
788 >     * shutdown, or timed out.
789 >     *
790 >     * @return false if trimmed
791       */
792 <    private void terminate(Throwable exception) {
793 <        transitionRunStateTo(TERMINATED);
794 <        try {
795 <            pool.workerTerminated(this);
796 <        } finally {
797 <            if (exception != null)
798 <                ForkJoinTask.rethrowException(exception);
792 >    final boolean suspendAsSpare() {
793 >        for (;;) {               // set suspended unless terminating
794 >            int s = runState;
795 >            if ((s & TERMINATING) != 0) { // must kill
796 >                if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
797 >                                             s | (TRIMMED | TERMINATING)))
798 >                    return false;
799 >            }
800 >            else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
801 >                                              s | SUSPENDED))
802 >                break;
803 >        }
804 >        int pc = pool.parallelism;
805 >        pool.accumulateStealCount(this);
806 >        boolean timed;
807 >        long nanos;
808 >        long startTime;
809 >        if (poolIndex < pc) { // untimed wait for core threads
810 >            timed = false;
811 >            nanos = 0L;
812 >            startTime = 0L;
813 >        }
814 >        else {                // timed wait for added threads
815 >            timed = true;
816 >            nanos = SPARE_KEEPALIVE_NANOS;
817 >            startTime = System.nanoTime();
818 >        }
819 >        lastEventCount = 0;      // reset upon resume
820 >        interrupted();           // clear/ignore interrupts
821 >        while ((runState & SUSPENDED) != 0) {
822 >            ++parkCount;
823 >            if (!timed)
824 >                LockSupport.park(this);
825 >            else if ((nanos -= (System.nanoTime() - startTime)) > 0)
826 >                LockSupport.parkNanos(this, nanos);
827 >            else { // try to trim on timeout
828 >                int s = runState;
829 >                if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
830 >                                             (s & ~SUSPENDED) |
831 >                                             (TRIMMED|TERMINATING)))
832 >                    return false;
833 >            }
834          }
835 +        return true;
836      }
837  
838 +    // Misc support methods for ForkJoinPool
839 +
840      /**
841 <     * Run local tasks on exit from main.
841 >     * Returns an estimate of the number of tasks in the queue.  Also
842 >     * used by ForkJoinTask.
843       */
844 <    private void clearLocalTasks() {
845 <        while (base != sp && !pool.isTerminating()) {
511 <            ForkJoinTask<?> t = popTask();
512 <            if (t != null) {
513 <                activate(); // ensure active status
514 <                t.quietlyExec();
515 <            }
516 <        }
844 >    final int getQueueSize() {
845 >        return -base + sp;
846      }
847  
848      /**
# Line 521 | Line 850 | public class ForkJoinWorkerThread extend
850       * thread.
851       */
852      final void cancelTasks() {
853 +        ForkJoinTask<?> cj = currentJoin; // try to kill live tasks
854 +        if (cj != null) {
855 +            currentJoin = null;
856 +            cj.cancelIgnoringExceptions();
857 +        }
858 +        ForkJoinTask<?> cs = currentSteal;
859 +        if (cs != null) {
860 +            currentSteal = null;
861 +            cs.cancelIgnoringExceptions();
862 +        }
863          while (base != sp) {
864              ForkJoinTask<?> t = deqTask();
865              if (t != null)
# Line 529 | Line 868 | public class ForkJoinWorkerThread extend
868      }
869  
870      /**
871 <     * This method is required to be public, but should never be
872 <     * called explicitly. It performs the main run loop to execute
873 <     * ForkJoinTasks.
871 >     * Drains tasks to given collection c.
872 >     *
873 >     * @return the number of tasks drained
874       */
875 <    public void run() {
876 <        Throwable exception = null;
877 <        try {
878 <            onStart();
879 <            while (!isShutdown())
880 <                step();
881 <        } catch (Throwable ex) {
882 <            exception = ex;
544 <        } finally {
545 <            onTermination(exception);
875 >    final int drainTasksTo(Collection<? super ForkJoinTask<?>> c) {
876 >        int n = 0;
877 >        while (base != sp) {
878 >            ForkJoinTask<?> t = deqTask();
879 >            if (t != null) {
880 >                c.add(t);
881 >                ++n;
882 >            }
883          }
884 +        return n;
885      }
886  
887 +    // Support methods for ForkJoinTask
888 +
889      /**
890 <     * Main top-level action.
890 >     * Gets and removes a local task.
891 >     *
892 >     * @return a task, if available
893       */
894 <    private void step() {
895 <        ForkJoinTask<?> t = sp != base? popTask() : null;
896 <        if (t != null || (t = scan(null, true)) != null) {
897 <            activate();
556 <            t.quietlyExec();
557 <        }
558 <        else {
559 <            inactivate();
560 <            eventCount = pool.sync(this, eventCount);
894 >    final ForkJoinTask<?> pollLocalTask() {
895 >        while (sp != base) {
896 >            if (active || (active = pool.tryIncrementActiveCount()))
897 >                return locallyFifo? locallyDeqTask() : popTask();
898          }
899 +        return null;
900      }
901  
564    // scanning for and stealing tasks
565
902      /**
903 <     * Computes next value for random victim probe. Scans don't
568 <     * require a very high quality generator, but also not a crummy
569 <     * one. Marsaglia xor-shift is cheap and works well.
903 >     * Gets and removes a local or stolen task.
904       *
905 <     * This is currently unused, and manually inlined
905 >     * @return a task, if available
906       */
907 <    private static int xorShift(int r) {
908 <        r ^= r << 1;
909 <        r ^= r >>> 3;
910 <        r ^= r << 10;
911 <        return r;
907 >    final ForkJoinTask<?> pollTask() {
908 >        ForkJoinTask<?> t = pollLocalTask();
909 >        if (t == null) {
910 >            t = scan();
911 >            currentSteal = null; // cannot retain/track
912 >        }
913 >        return t;
914      }
915  
916      /**
917 <     * Tries to steal a task from another worker and/or, if enabled,
918 <     * submission queue. Starts at a random index of workers array,
919 <     * and probes workers until finding one with non-empty queue or
920 <     * finding that all are empty.  It randomly selects the first n-1
921 <     * probes. If these are empty, it resorts to full circular
922 <     * traversal, which is necessary to accurately set active status
587 <     * by caller. Also restarts if pool barrier has tripped since last
588 <     * scan, which forces refresh of workers array, in case barrier
589 <     * was associated with resize.
590 <     *
591 <     * This method must be both fast and quiet -- usually avoiding
592 <     * memory accesses that could disrupt cache sharing etc other than
593 <     * those needed to check for and take tasks. This accounts for,
594 <     * among other things, updating random seed in place without
595 <     * storing it until exit. (Note that we only need to store it if
596 <     * we found a task; otherwise it doesn't matter if we start at the
597 <     * same place next time.)
917 >     * Possibly runs some tasks and/or blocks until the task is done.
918 >     * The main body is basically a big spinloop, alternating between
919 >     * calls to helpJoinTask and pool.tryAwaitJoin with increasing
920 >     * patience parameters, until either the task completes without
921 >     * waiting or, if necessary, a replacement for this thread has
922 >     * been created or resumed to cover for it while it blocks.
923       *
924 <     * @param joinMe if non null; exit early if done
925 <     * @param checkSubmissions true if OK to take submissions
601 <     * @return a task, or null if none found
924 >     * @param joinMe the task to join
925 >     * @return task status on exit
926       */
927 <    private ForkJoinTask<?> scan(ForkJoinTask<?> joinMe,
928 <                                 boolean checkSubmissions) {
929 <        ForkJoinPool p = pool;
930 <        if (p == null)                    // Never null, but avoids
931 <            return null;                  //   implicit nullchecks below
932 <        int r = randomVictimSeed;         // extract once to keep scan quiet
933 <        restart:                          // outer loop refreshes ws array
934 <        while (joinMe == null || joinMe.status >= 0) {
935 <            int mask;
936 <            ForkJoinWorkerThread[] ws = p.workers;
937 <            if (ws != null && (mask = ws.length - 1) > 0) {
938 <                int probes = -mask;       // use random index while negative
939 <                int idx = r;
616 <                for (;;) {
617 <                    ForkJoinWorkerThread v;
618 <                    // inlined xorshift to update seed
619 <                    r ^= r << 1;  r ^= r >>> 3; r ^= r << 10;
620 <                    if ((v = ws[mask & idx]) != null && v.sp != v.base) {
621 <                        ForkJoinTask<?> t;
622 <                        activate();
623 <                        if ((joinMe == null || joinMe.status >= 0) &&
624 <                            (t = v.deqTask()) != null) {
625 <                            randomVictimSeed = r;
626 <                            ++stealCount;
627 <                            return t;
628 <                        }
629 <                        continue restart; // restart on contention
630 <                    }
631 <                    if ((probes >> 1) <= mask) // n-1 random then circular
632 <                        idx = (probes++ < 0)? r : (idx + 1);
633 <                    else
634 <                        break;
635 <                }
636 <            }
637 <            if (checkSubmissions && p.hasQueuedSubmissions()) {
638 <                activate();
639 <                ForkJoinTask<?> t = p.pollSubmission();
640 <                if (t != null)
641 <                    return t;
642 <            }
643 <            else {
644 <                long ec = eventCount;     // restart on pool event
645 <                if ((eventCount = p.getEventCount()) == ec)
927 >    final int joinTask(ForkJoinTask<?> joinMe) {
928 >        int stat;
929 >        ForkJoinTask<?> prevJoin = currentJoin;
930 >        // Only written by this thread; only need ordered store
931 >        UNSAFE.putOrderedObject(this, currentJoinOffset, joinMe);
932 >        if ((stat = joinMe.status) >= 0 &&
933 >            (sp == base || (stat = localHelpJoinTask(joinMe)) >= 0)) {
934 >            for (int retries = 0; ; ++retries) {
935 >                helpJoinTask(joinMe, retries);
936 >                if ((stat = joinMe.status) < 0)
937 >                    break;
938 >                pool.tryAwaitJoin(joinMe, retries);
939 >                if ((stat = joinMe.status) < 0)
940                      break;
941 +                Thread.yield(); // tame unbounded loop
942              }
943          }
944 <        return null;
944 >        UNSAFE.putOrderedObject(this, currentJoinOffset, prevJoin);
945 >        return stat;
946      }
947  
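Assuming the usual wiring in this package, ForkJoinTask.join() on a worker thread funnels into the joinTask loop above: help first, block only once a replacement is arranged. A standard runnable fork/join example whose join() exercises that path:

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    class SumTask extends RecursiveTask<Long> {
        final long lo, hi; // sums the half-open range [lo, hi)
        SumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

        protected Long compute() {
            if (hi - lo <= 1000) {
                long s = 0;
                for (long i = lo; i < hi; ++i) s += i;
                return s;
            }
            long mid = (lo + hi) >>> 1;
            SumTask left = new SumTask(lo, mid);
            left.fork();                      // push to this worker's queue
            long right = new SumTask(mid, hi).compute();
            return right + left.join();       // enters joinTask when run in a pool
        }

        public static void main(String[] args) {
            long n = 1000000;
            System.out.println(new ForkJoinPool().invoke(new SumTask(0, n)));
            // prints n*(n-1)/2 = 499999500000
        }
    }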
948      /**
949 <     * Callback from pool.sync to rescan before blocking.  If a
950 <     * task is found, it is pushed so it can be executed upon return.
951 <     * @return true if found and pushed a task
952 <     */
953 <    final boolean prescan() {
954 <        ForkJoinTask<?> t = scan(null, true);
955 <        if (t != null) {
956 <            pushTask(t);
957 <            return true;
958 <        }
959 <        else {
960 <            inactivate();
961 <            return false;
949 >     * Runs tasks in the local queue until the given task is done.
950 >     *
951 >     * @param joinMe the task to join
952 >     * @return task status on exit
953 >     */
954 >    private int localHelpJoinTask(ForkJoinTask<?> joinMe) {
955 >        int stat, s;
956 >        ForkJoinTask<?>[] q;
957 >        while ((stat = joinMe.status) >= 0 &&
958 >               base != (s = sp) && (q = queue) != null) {
959 >            ForkJoinTask<?> t;
960 >            int i = (q.length - 1) & --s;
961 >            long u = (i << qShift) + qBase; // raw offset
962 >            if ((t = q[i]) != null &&
963 >                UNSAFE.compareAndSwapObject(q, u, t, null)) {
964 >                /*
965 >                 * This recheck (and similarly in helpJoinTask)
966 >                 * handles cases where joinMe is independently
967 >                 * cancelled or forced even though there is other work
968 >                 * available. Back out of the pop by putting t back
969 >                 * into slot before we commit by writing sp.
970 >                 */
971 >                if ((stat = joinMe.status) < 0) {
972 >                    UNSAFE.putObjectVolatile(q, u, t);
973 >                    break;
974 >                }
975 >                sp = s;
976 >                t.tryExec();
977 >            }
978          }
979 +        return stat;
980      }
981  
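The notable detail above is the back-out: a slot is CASed to null, the join target's status is rechecked, and on a race the task is restored before sp is committed. The same idiom rendered with AtomicReferenceArray instead of Unsafe (all names invented for illustration):

    import java.util.concurrent.atomic.AtomicReferenceArray;

    class BackOutDemo {
        // Claims slot i unless the join target has already completed;
        // on a lost race the slot is restored, as in localHelpJoinTask.
        static Runnable popUnlessDone(AtomicReferenceArray<Runnable> q,
                                      int i, boolean targetDone) {
            Runnable t = q.get(i);
            if (t != null && q.compareAndSet(i, t, null)) {
                if (targetDone) {
                    q.set(i, t);   // back out before committing the pop
                    return null;
                }
                return t;          // committed; caller executes t
            }
            return null;
        }

        public static void main(String[] args) {
            AtomicReferenceArray<Runnable> q = new AtomicReferenceArray<Runnable>(4);
            q.set(0, new Runnable() { public void run() { System.out.println("ran"); } });
            Runnable r = popUnlessDone(q, 0, false);
            if (r != null) r.run(); // prints "ran"
        }
    }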
669    // Support for ForkJoinTask methods
670
982      /**
983 <     * Scan, returning early if joinMe done
983 >     * Tries to locate and help perform tasks for a stealer of the
984 >     * given task, or in turn one of its stealers.  Traces
985 >     * currentSteal->currentJoin links looking for a thread working on
986 >     * a descendant of the given task and with a non-empty queue to
987 >     * steal back and execute tasks from. Restarts search upon
988 >     * encountering chains that are stale, unknown, or of length
989 >     * greater than MAX_HELP_DEPTH links, to avoid unbounded cycles.
990 >     *
991 >     * The implementation is very branchy to cope with the restart
992 >     * cases.  Returns void, not task status (which must be reread by
993 >     * caller anyway) to slightly simplify control paths.
994 >     *
995 >     * @param joinMe the task to join
996 >     * @param rescans the number of times to recheck for work
997       */
998 <    final ForkJoinTask<?> scanWhileJoining(ForkJoinTask<?> joinMe) {
999 <        ForkJoinTask<?> t = scan(joinMe, false);
1000 <        if (t != null && joinMe.status < 0 && sp == base) {
1001 <            pushTask(t); // unsteal if done and this task would be stealable
1002 <            t = null;
998 >    private void helpJoinTask(ForkJoinTask<?> joinMe, int rescans) {
999 >        ForkJoinWorkerThread[] ws = pool.workers;
1000 >        int n;
1001 >        if (ws == null || (n = ws.length) <= 1)
1002 >            return;                   // need at least 2 workers
1003 >        restart:while (rescans-- >= 0 && joinMe.status >= 0) {
1004 >            ForkJoinTask<?> task = joinMe;        // base of chain
1005 >            ForkJoinWorkerThread thread = this;   // thread with stolen task
1006 >            for (int depth = 0; depth < MAX_HELP_DEPTH; ++depth) {
1007 >                // Try to find v, the stealer of task, by first using hint
1008 >                ForkJoinWorkerThread v = ws[thread.stealHint & (n - 1)];
1009 >                if (v == null || v.currentSteal != task) {
1010 >                    for (int j = 0; ; ++j) {      // search array
1011 >                        if (task.status < 0 || j == n)
1012 >                            continue restart;     // stale or no stealer
1013 >                        if ((v = ws[j]) != null && v.currentSteal == task) {
1014 >                            thread.stealHint = j; // save for next time
1015 >                            break;
1016 >                        }
1017 >                    }
1018 >                }
1019 >                // Try to help v, using specialized form of deqTask
1020 >                int b;
1021 >                ForkJoinTask<?>[] q;
1022 >                while ((b = v.base) != v.sp && (q = v.queue) != null) {
1023 >                    int i = (q.length - 1) & b;
1024 >                    long u = (i << qShift) + qBase;
1025 >                    ForkJoinTask<?> t = q[i];
1026 >                    if (task.status < 0)          // stale
1027 >                        continue restart;
1028 >                    if (t != null) {
1029 >                        if (v.base == b &&
1030 >                            UNSAFE.compareAndSwapObject(q, u, t, null)) {
1031 >                            if (joinMe.status < 0) {
1032 >                                UNSAFE.putObjectVolatile(q, u, t);
1033 >                                return;           // back out on cancel
1034 >                            }
1035 >                            ForkJoinTask<?> prevSteal = currentSteal;
1036 >                            currentSteal = t;
1037 >                            v.stealHint = poolIndex;
1038 >                            v.base = b + 1;
1039 >                            t.tryExec();
1040 >                            currentSteal = prevSteal;
1041 >                        }
1042 >                    }
1043 >                    else if (v.base == b)          // producer stalled
1044 >                        continue restart;          // retry via restart
1045 >                    if (joinMe.status < 0)
1046 >                        return;
1047 >                }
1048 >                // Try to descend to find v's stealer
1049 >                ForkJoinTask<?> next = v.currentJoin;
1050 >                if (next == null || next == task || task.status < 0)
1051 >                    continue restart;             // no descendant or stale
1052 >                if (joinMe.status < 0)
1053 >                    return;
1054 >                task = next;
1055 >                thread = v;
1056 >            }
1057          }
680        return t;
1058      }
1059 <    
1059 >
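A toy, single-threaded rendering of just the chain walk described above: follow currentSteal/currentJoin links, up to MAX_HELP_DEPTH, looking for a stealer with queued work. Worker below is a hypothetical stand-in, not the class in this file:

    import java.util.ArrayDeque;

    class ChainWalkDemo {
        static final int MAX_HELP_DEPTH = 8;

        static class Worker {
            String currentSteal;                 // task this worker stole
            String currentJoin;                  // task this worker is joining
            ArrayDeque<String> queue = new ArrayDeque<String>();
        }

        // Follows steal->join links, as helpJoinTask does, to locate a
        // worker whose queue may hold work related to task.
        static Worker findHelpee(Worker[] ws, String task) {
            for (int depth = 0; depth < MAX_HELP_DEPTH; ++depth) {
                Worker v = null;
                for (Worker w : ws)
                    if (task.equals(w.currentSteal)) { v = w; break; }
                if (v == null) return null;           // no known stealer
                if (!v.queue.isEmpty()) return v;     // steal back from v
                if (v.currentJoin == null || v.currentJoin.equals(task))
                    return null;                      // no descendant to chase
                task = v.currentJoin;                 // descend one link
            }
            return null;                              // depth bound hit
        }

        public static void main(String[] args) {
            Worker a = new Worker(), b = new Worker();
            a.currentSteal = "T1"; a.currentJoin = "T2";
            b.currentSteal = "T2"; b.queue.push("T2.sub");
            System.out.println(findHelpee(new Worker[] { a, b }, "T1").queue.peek());
            // prints T2.sub
        }
    }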
1060      /**
1061 <     * Pops or steals a task
1062 <     * @return task, or null if none available
1061 >     * Returns an estimate of the number of tasks, offset by a
1062 >     * function of number of idle workers.
1063 >     *
1064 >     * This method provides a cheap heuristic guide for task
1065 >     * partitioning when programmers, frameworks, tools, or languages
1066 >     * have little or no idea about task granularity.  In essence by
1067 >     * offering this method, we ask users only about tradeoffs in
1068 >     * overhead vs expected throughput and its variance, rather than
1069 >     * how finely to partition tasks.
1070 >     *
1071 >     * In a steady state strict (tree-structured) computation, each
1072 >     * thread makes available for stealing enough tasks for other
1073 >     * threads to remain active. Inductively, if all threads play by
1074 >     * the same rules, each thread should make available only a
1075 >     * constant number of tasks.
1076 >     *
1077 >     * The minimum useful constant is just 1. But using a value of 1
1078 >     * would require immediate replenishment upon each steal to
1079 >     * maintain enough tasks, which is infeasible.  Further,
1080 >     * partitionings/granularities of offered tasks should minimize
1081 >     * steal rates, which in general means that threads nearer the top
1082 >     * of computation tree should generate more than those nearer the
1083 >     * bottom. In perfect steady state, each thread is at
1084 >     * approximately the same level of the computation tree. However,
1085 >     * producing extra tasks amortizes the uncertainty of progress and
1086 >     * diffusion assumptions.
1087 >     *
1088 >     * So, users will want to use values larger, but not much larger,
1089 >     * than 1 to both smooth over transient shortages and hedge
1090 >     * against uneven progress, as traded off against the cost of
1091 >     * extra task overhead. We leave the user to pick a threshold
1092 >     * value to compare with the results of this call to guide
1093 >     * decisions, but recommend values such as 3.
1094 >     *
1095 >     * When all threads are active, it is on average OK to estimate
1096 >     * surplus strictly locally. In steady-state, if one thread is
1097 >     * maintaining say 2 surplus tasks, then so are others. So we can
1098 >     * just use estimated queue length (although note that (sp - base)
1099 >     * can be an overestimate because of stealers lagging increments
1100 >     * of base).  However, this strategy alone leads to serious
1101 >     * mis-estimates in some non-steady-state conditions (ramp-up,
1102 >     * ramp-down, other stalls). We can detect many of these by
1103 >     * further considering the number of "idle" threads, that are
1104 >     * known to have zero queued tasks, so compensate by a factor of
1105 >     * (#idle/#active) threads.
1106       */
1107 <    final ForkJoinTask<?> pollLocalOrStolenTask() {
1108 <        ForkJoinTask<?> t;
689 <        return (t = popTask()) == null? scan(null, false) : t;
1107 >    final int getEstimatedSurplusTaskCount() {
1108 >        return sp - base - pool.idlePerActive();
1109      }
1110  
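The intended use is to compare the estimate against a small constant such as the 3 recommended above. Assuming the jsr166y/JDK7 public API, this estimate surfaces as ForkJoinTask.getSurplusQueuedTaskCount(); a sketch of threshold-driven splitting:

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.ForkJoinTask;
    import java.util.concurrent.RecursiveAction;

    class AdaptiveAction extends RecursiveAction {
        final int size;
        AdaptiveAction(int size) { this.size = size; }

        protected void compute() {
            // Stop splitting once enough surplus work is already queued.
            if (size <= 1 || ForkJoinTask.getSurplusQueuedTaskCount() > 3)
                return; // process `size` units sequentially here
            invokeAll(new AdaptiveAction(size / 2),
                      new AdaptiveAction(size - size / 2));
        }

        public static void main(String[] args) {
            new ForkJoinPool().invoke(new AdaptiveAction(1 << 20));
        }
    }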
1111      /**
1112 <     * Runs tasks until pool isQuiescent
1112 >     * Runs tasks until {@code pool.isQuiescent()}.
1113       */
1114      final void helpQuiescePool() {
1115          for (;;) {
1116 <            ForkJoinTask<?> t = pollLocalOrStolenTask();
1117 <            if (t != null) {
1118 <                activate();
1119 <                t.quietlyExec();
1116 >            ForkJoinTask<?> t = pollLocalTask();
1117 >            if (t != null || (t = scan()) != null) {
1118 >                t.tryExec();
1119 >                currentSteal = null;
1120              }
1121              else {
1122 <                inactivate();
1123 <                if (pool.isQuiescent()) {
1124 <                    activate(); // re-activate on exit
1125 <                    break;
1122 >                ForkJoinPool p = pool;
1123 >                if (active) {
1124 >                    active = false; // inactivate
1125 >                    do {} while (!p.tryDecrementActiveCount());
1126 >                }
1127 >                if (p.isQuiescent()) {
1128 >                    active = true; // re-activate
1129 >                    do {} while (!p.tryIncrementActiveCount());
1130 >                    return;
1131                  }
1132              }
1133          }
1134      }
1135  
1136 <    /**
713 <     * Returns an estimate of the number of tasks in the queue.
714 <     */
715 <    final int getQueueSize() {
716 <        int n = sp - base;
717 <        return n <= 0? 0 : n; // suppress momentarily negative values
718 <    }
1136 >    // Unsafe mechanics
1137  
1138 <    /**
1139 <     * Returns an estimate of the number of tasks, offset by a
1140 <     * function of number of idle workers.
1141 <     */
1142 <    final int getEstimatedSurplusTaskCount() {
1143 <        // The halving approximates weighting idle vs non-idle workers
1144 <        return (sp - base) - (pool.getIdleThreadCount() >>> 1);
1145 <    }
728 <
729 <    // Per-worker exported random numbers
730 <
731 <    // Same constants as java.util.Random
732 <    final static long JURandomMultiplier = 0x5DEECE66DL;
733 <    final static long JURandomAddend = 0xBL;
734 <    final static long JURandomMask = (1L << 48) - 1;
735 <
736 <    private final int nextJURandom(int bits) {
737 <        long next = (juRandomSeed * JURandomMultiplier + JURandomAddend) &
738 <            JURandomMask;
739 <        juRandomSeed = next;
740 <        return (int)(next >>> (48 - bits));
741 <    }
742 <
743 <    private final int nextJURandomInt(int n) {
744 <        if (n <= 0)
745 <            throw new IllegalArgumentException("n must be positive");
746 <        int bits = nextJURandom(31);
747 <        if ((n & -n) == n)
748 <            return (int)((n * (long)bits) >> 31);
1138 >    private static final sun.misc.Unsafe UNSAFE = getUnsafe();
1139 >    private static final long runStateOffset =
1140 >        objectFieldOffset("runState", ForkJoinWorkerThread.class);
1141 >    private static final long currentJoinOffset =
1142 >        objectFieldOffset("currentJoin", ForkJoinWorkerThread.class);
1143 >    private static final long qBase =
1144 >        UNSAFE.arrayBaseOffset(ForkJoinTask[].class);
1145 >    private static final int qShift;
1146  
1147 <        for (;;) {
1148 <            int val = bits % n;
1149 <            if (bits - val + (n-1) >= 0)
1150 <                return val;
1151 <            bits = nextJURandom(31);
755 <        }
756 <    }
757 <
758 <    private final long nextJURandomLong() {
759 <        return ((long)(nextJURandom(32)) << 32) + nextJURandom(32);
1147 >    static {
1148 >        int s = UNSAFE.arrayIndexScale(ForkJoinTask[].class);
1149 >        if ((s & (s-1)) != 0)
1150 >            throw new Error("data type scale not a power of two");
1151 >        qShift = 31 - Integer.numberOfLeadingZeros(s);
1152      }
1153  
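The static initializer above reduces the per-element scale to a shift, so slot i of a queue lives at qBase + (i << qShift). A quick check of that arithmetic with illustrative constants (not values read from a real VM):

    class QShiftDemo {
        public static void main(String[] args) {
            int scale = 4;                                   // e.g. arrayIndexScale for 4-byte refs
            int shift = 31 - Integer.numberOfLeadingZeros(scale);
            System.out.println(shift);                       // 2, i.e. log2(4)
            long base = 16;                                  // stand-in for arrayBaseOffset
            int i = 5;
            System.out.println(base + ((long) i << shift));  // 36 = 16 + 5*4
        }
    }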
1154 <    private final long nextJURandomLong(long n) {
1155 <        if (n <= 0)
1156 <            throw new IllegalArgumentException("n must be positive");
1157 <        long offset = 0;
1158 <        while (n >= Integer.MAX_VALUE) { // randomly pick half range
1159 <            int bits = nextJURandom(2); // 2nd bit for odd vs even split
1160 <            long half = n >>> 1;
1161 <            long nextn = ((bits & 2) == 0)? half : n - half;
770 <            if ((bits & 1) == 0)
771 <                offset += n - nextn;
772 <            n = nextn;
1154 >    private static long objectFieldOffset(String field, Class<?> klazz) {
1155 >        try {
1156 >            return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
1157 >        } catch (NoSuchFieldException e) {
1158 >            // Convert Exception to corresponding Error
1159 >            NoSuchFieldError error = new NoSuchFieldError(field);
1160 >            error.initCause(e);
1161 >            throw error;
1162          }
774        return offset + nextJURandomInt((int)n);
775    }
776
777    private final double nextJURandomDouble() {
778        return (((long)(nextJURandom(26)) << 27) + nextJURandom(27))
779            / (double)(1L << 53);
780    }
781
782    /**
783     * Returns a random integer using a per-worker random
784     * number generator with the same properties as
785     * {@link java.util.Random#nextInt}
786     * @return the next pseudorandom, uniformly distributed {@code int}
787     *         value from this worker's random number generator's sequence
788     */
789    public static int nextRandomInt() {
790        return ((ForkJoinWorkerThread)(Thread.currentThread())).
791            nextJURandom(32);
792    }
793
794    /**
795     * Returns a random integer using a per-worker random
796     * number generator with the same properties as
797     * {@link java.util.Random#nextInt(int)}
798     * @param n the bound on the random number to be returned.  Must be
799     *        positive.
800     * @return the next pseudorandom, uniformly distributed {@code int}
801     *         value between {@code 0} (inclusive) and {@code n} (exclusive)
802     *         from this worker's random number generator's sequence
803     * @throws IllegalArgumentException if n is not positive
804     */
805    public static int nextRandomInt(int n) {
806        return ((ForkJoinWorkerThread)(Thread.currentThread())).
807            nextJURandomInt(n);
1163      }
1164  
1165      /**
1166 <     * Returns a random long using a per-worker random
1167 <     * number generator with the same properties as
1168 <     * {@link java.util.Random#nextLong}
1169 <     * @return the next pseudorandom, uniformly distributed {@code long}
1170 <     *         value from this worker's random number generator's sequence
816 <     */
817 <    public static long nextRandomLong() {
818 <        return ((ForkJoinWorkerThread)(Thread.currentThread())).
819 <            nextJURandomLong();
820 <    }
821 <
822 <    /**
823 <     * Returns a random integer using a per-worker random
824 <     * number generator with the same properties as
825 <     * {@link java.util.Random#nextInt(int)}
826 <     * @param n the bound on the random number to be returned.  Must be
827 <     *        positive.
828 <     * @return the next pseudorandom, uniformly distributed {@code int}
829 <     *         value between {@code 0} (inclusive) and {@code n} (exclusive)
830 <     *         from this worker's random number generator's sequence
831 <     * @throws IllegalArgumentException if n is not positive
832 <     */
833 <    public static long nextRandomLong(long n) {
834 <        return ((ForkJoinWorkerThread)(Thread.currentThread())).
835 <            nextJURandomLong(n);
836 <    }
837 <
838 <    /**
839 <     * Returns a random double using a per-worker random
840 <     * number generator with the same properties as
841 <     * {@link java.util.Random#nextDouble}
842 <     * @return the next pseudorandom, uniformly distributed {@code double}
843 <     *         value between {@code 0.0} and {@code 1.0} from this
844 <     *         worker's random number generator's sequence
1166 >     * Returns a sun.misc.Unsafe.  Suitable for use in a 3rd party package.
1167 >     * Replace with a simple call to Unsafe.getUnsafe when integrating
1168 >     * into a jdk.
1169 >     *
1170 >     * @return a sun.misc.Unsafe
1171       */
1172 <    public static double nextRandomDouble() {
847 <        return ((ForkJoinWorkerThread)(Thread.currentThread())).
848 <            nextJURandomDouble();
849 <    }
850 <
851 <    // Temporary Unsafe mechanics for preliminary release
852 <
853 <    static final Unsafe _unsafe;
854 <    static final long baseOffset;
855 <    static final long spOffset;
856 <    static final long qBase;
857 <    static final int qShift;
858 <    static final long runStateOffset;
859 <    static {
1172 >    private static sun.misc.Unsafe getUnsafe() {
1173          try {
1174 <            if (ForkJoinWorkerThread.class.getClassLoader() != null) {
1175 <                Field f = Unsafe.class.getDeclaredField("theUnsafe");
1176 <                f.setAccessible(true);
1177 <                _unsafe = (Unsafe)f.get(null);
1174 >            return sun.misc.Unsafe.getUnsafe();
1175 >        } catch (SecurityException se) {
1176 >            try {
1177 >                return java.security.AccessController.doPrivileged
1178 >                    (new java.security
1179 >                     .PrivilegedExceptionAction<sun.misc.Unsafe>() {
1180 >                        public sun.misc.Unsafe run() throws Exception {
1181 >                            java.lang.reflect.Field f = sun.misc
1182 >                                .Unsafe.class.getDeclaredField("theUnsafe");
1183 >                            f.setAccessible(true);
1184 >                            return (sun.misc.Unsafe) f.get(null);
1185 >                        }});
1186 >            } catch (java.security.PrivilegedActionException e) {
1187 >                throw new RuntimeException("Could not initialize intrinsics",
1188 >                                           e.getCause());
1189              }
866            else
867                _unsafe = Unsafe.getUnsafe();
868            baseOffset = _unsafe.objectFieldOffset
869                (ForkJoinWorkerThread.class.getDeclaredField("base"));
870            spOffset = _unsafe.objectFieldOffset
871                (ForkJoinWorkerThread.class.getDeclaredField("sp"));
872            runStateOffset = _unsafe.objectFieldOffset
873                (ForkJoinWorkerThread.class.getDeclaredField("runState"));
874            qBase = _unsafe.arrayBaseOffset(ForkJoinTask[].class);
875            int s = _unsafe.arrayIndexScale(ForkJoinTask[].class);
876            if ((s & (s-1)) != 0)
877                throw new Error("data type scale not a power of two");
878            qShift = 31 - Integer.numberOfLeadingZeros(s);
879        } catch (Exception e) {
880            throw new RuntimeException("Could not initialize intrinsics", e);
1190          }
1191      }
1192   }

Diff Legend

Removed lines
+ Added lines
< Changed lines
> Changed lines