root/jsr166/jsr166/src/main/java/util/concurrent/ForkJoinWorkerThread.java
Revision: 1.37
Committed: Thu Nov 18 00:39:15 2010 UTC by jsr166
Branch: MAIN
Changes since 1.36: +1 -1 lines
Log Message:
whitespace

File Contents

# User Rev Content
1 jsr166 1.1 /*
2     * Written by Doug Lea with assistance from members of JCP JSR-166
3     * Expert Group and released to the public domain, as explained at
4     * http://creativecommons.org/licenses/publicdomain
5     */
6    
7     package java.util.concurrent;
8    
9 dl 1.14 import java.util.Random;
10 jsr166 1.1 import java.util.Collection;
11 dl 1.14 import java.util.concurrent.locks.LockSupport;
12 dl 1.34 import java.util.concurrent.RejectedExecutionException;
13 jsr166 1.1
14     /**
15     * A thread managed by a {@link ForkJoinPool}. This class is
16     * subclassable solely for the sake of adding functionality -- there
17 jsr166 1.7 * are no overridable methods dealing with scheduling or execution.
18     * However, you can override initialization and termination methods
19     * surrounding the main task processing loop. If you do create such a
20     * subclass, you will also need to supply a custom {@link
21     * ForkJoinPool.ForkJoinWorkerThreadFactory} to use it in a {@code
22     * ForkJoinPool}.
23 jsr166 1.1 *
24     * @since 1.7
25     * @author Doug Lea
26     */
27     public class ForkJoinWorkerThread extends Thread {
28     /*
29 dl 1.14 * Overview:
30 jsr166 1.1 *
31 dl 1.14 * ForkJoinWorkerThreads are managed by ForkJoinPools and perform
32     * ForkJoinTasks. This class includes bookkeeping in support of
33     * worker activation, suspension, and lifecycle control described
34     * in more detail in the internal documentation of class
35     * ForkJoinPool. And as described further below, this class also
36     * includes special-cased support for some ForkJoinTask
37     * methods. But the main mechanics involve work-stealing:
38     *
39     * Work-stealing queues are special forms of Deques that support
40     * only three of the four possible end-operations -- push, pop,
41     * and deq (aka steal), under the further constraints that push
42     * and pop are called only from the owning thread, while deq may
43     * be called from other threads. (If you are unfamiliar with
44     * them, you probably want to read Herlihy and Shavit's book "The
45     * Art of Multiprocessor Programming", chapter 16, which describes
46     * these in more detail, before proceeding.) The main work-stealing
47     * queue design is roughly similar to those in the papers "Dynamic
48     * Circular Work-Stealing Deque" by Chase and Lev, SPAA 2005
49     * (http://research.sun.com/scalable/pubs/index.html) and
50     * "Idempotent work stealing" by Michael, Saraswat, and Vechev,
51     * PPoPP 2009 (http://portal.acm.org/citation.cfm?id=1504186).
52     * The main differences ultimately stem from gc requirements that
53     * we null out taken slots as soon as we can, to maintain as small
54     * a footprint as possible even in programs generating huge
55     * numbers of tasks. To accomplish this, we shift the CAS
56     * arbitrating pop vs deq (steal) from being on the indices
57     * ("base" and "sp") to the slots themselves (mainly via method
58     * "casSlotNull()"). So, both a successful pop and deq mainly
59     * entail a CAS of a slot from non-null to null. Because we rely
60     * on CASes of references, we do not need tag bits on base or sp.
61     * They are simple ints as used in any circular array-based queue
62     * (see for example ArrayDeque). Updates to the indices must
63     * still be ordered in a way that guarantees that sp == base means
64     * the queue is empty, but otherwise may err on the side of
65     * possibly making the queue appear nonempty when a push, pop, or
66     * deq have not fully committed. Note that this means that the deq
67     * operation, considered individually, is not wait-free. One thief
68     * cannot successfully continue until another in-progress one (or,
69     * if previously empty, a push) completes. However, in the
70     * aggregate, we ensure at least probabilistic non-blockingness.
71     * If an attempted steal fails, a thief always chooses a different
72     * random victim target to try next. So, in order for one thief to
73     * progress, it suffices for any in-progress deq or new push on
74     * any empty queue to complete. One reason this works well here is
75     * that apparently-nonempty often means soon-to-be-stealable,
76     * which gives threads a chance to set activation status if
77     * necessary before stealing.
78 jsr166 1.1 *
79 jsr166 1.6 * This approach also enables support for "async mode" where local
80     * task processing is in FIFO, not LIFO order, simply by using a
81     * version of deq rather than pop when locallyFifo is true (as set
82     * by the ForkJoinPool). This allows use in message-passing
83     * frameworks in which tasks are never joined.
84     *
85 dl 1.17 * When a worker would otherwise be blocked waiting to join a
86     * task, it first tries a form of linear helping: Each worker
87 dl 1.18 * records (in field currentSteal) the most recent task it stole
88     * from some other worker. Plus, it records (in field currentJoin)
89     * the task it is currently actively joining. Method joinTask uses
90 dl 1.17 * these markers to try to find a worker to help (i.e., steal back
91     * a task from and execute it) that could hasten completion of the
92     * actively joined task. In essence, the joiner executes a task
93     * that would be on its own local deque had the to-be-joined task
94     * not been stolen. This may be seen as a conservative variant of
95     * the approach in Wagner & Calder "Leapfrogging: a portable
96     * technique for implementing efficient futures" SIGPLAN Notices,
97     * 1993 (http://portal.acm.org/citation.cfm?id=155354). It differs
98     * in that: (1) We only maintain dependency links across workers
99 dl 1.18 * upon steals, rather than using per-task bookkeeping. This may
100     * require a linear scan of the workers array to locate stealers, but
101     * usually doesn't because stealers leave hints (that may become
102     * stale/wrong) of where to locate them. This isolates cost to
103     * when it is needed, rather than adding to per-task overhead.
104     * (2) It is "shallow", ignoring nesting and potentially cyclic
105     * mutual steals. (3) It is intentionally racy: field currentJoin
106     * is updated only while actively joining, which means that we
107     * miss links in the chain during long-lived tasks, GC stalls, etc.
108     * (which is OK since blocking in such cases is usually a good
109     * idea). (4) We bound the number of attempts to find work (see
110     * MAX_HELP_DEPTH) and fall back to suspending the worker and, if
111     * necessary, replacing it with a spare (see
112 dl 1.20 * ForkJoinPool.awaitJoin).
113 dl 1.17 *
114 dl 1.18 * Efficient implementation of these algorithms currently relies
115     * on an uncomfortable amount of "Unsafe" mechanics. To maintain
116 jsr166 1.1 * correct orderings, reads and writes of variable base require
117 dl 1.14 * volatile ordering. Variable sp does not require volatile
118     * writes but still needs store-ordering, which we accomplish by
119     * pre-incrementing sp before filling the slot with an ordered
120     * store. (Pre-incrementing also enables backouts used in
121 dl 1.18 * joinTask.) Because they are protected by volatile base reads,
122     * reads of the queue array and its slots by other threads do not
123     * need volatile load semantics, but writes (in push) require
124     * store order and CASes (in pop and deq) require (volatile) CAS
125     * semantics. (Michael, Saraswat, and Vechev's algorithm has
126     * similar properties, but without support for nulling slots.)
127     * Since these combinations aren't supported using ordinary
128     * volatiles, the only way to accomplish these efficiently is to
129     * use direct Unsafe calls. (Using external AtomicIntegers and
130     * AtomicReferenceArrays for the indices and array is
131     * significantly slower because of memory locality and indirection
132     * effects.)
133 jsr166 1.9 *
134 jsr166 1.8 * Further, performance on most platforms is very sensitive to
135     * placement and sizing of the (resizable) queue array. Even
136     * though these queues don't usually become all that big, the
137     * initial size must be large enough to counteract cache
138 jsr166 1.1 * contention effects across multiple queues (especially in the
139     * presence of GC cardmarking). Also, to improve thread-locality,
140     * queues are initialized after starting. Altogether, these
141     * low-level implementation choices produce as much as a factor of
142     * 4 performance improvement compared to naive implementations,
143     * and enable the processing of billions of tasks per second,
144     * sometimes at the expense of ugliness.
145 jsr166 1.1 */
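    /*
     * Illustrative sketch (not part of this file): the slot-CAS
     * arbitration described above, modeled with AtomicReferenceArray
     * instead of Unsafe. As noted above, the real code avoids such
     * wrappers for performance; this minimal model also assumes a
     * single owner thread and ignores resizing, so the array must
     * never fill.
     *
     *   import java.util.concurrent.atomic.AtomicReferenceArray;
     *
     *   class SketchDeque<E> {
     *       final AtomicReferenceArray<E> slots =
     *           new AtomicReferenceArray<E>(1 << 13);
     *       volatile int base;        // next slot to steal from
     *       int sp;                   // next slot to push to (owner only)
     *
     *       void push(E e) {          // owner only
     *           slots.set((sp++) & (slots.length() - 1), e);
     *       }
     *
     *       E pop() {                 // owner only
     *           int s;
     *           while ((s = sp) != base) {
     *               int i = (s - 1) & (slots.length() - 1);
     *               E e = slots.get(i);
     *               if (e == null)    // lost slot to a stealer
     *                   break;
     *               if (slots.compareAndSet(i, e, null)) {
     *                   sp = s - 1;   // commit only after winning the slot
     *                   return e;
     *               }
     *           }
     *           return null;
     *       }
     *
     *       E deq() {                 // any thief; may fail if contended
     *           int b = base;
     *           int i = b & (slots.length() - 1);
     *           E e = slots.get(i);
     *           if (e != null && base == b &&
     *               slots.compareAndSet(i, e, null)) {
     *               base = b + 1;     // pop and deq both CAS non-null -> null
     *               return e;
     *           }
     *           return null;
     *       }
     *   }
     */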
146    
147     /**
148 dl 1.14 * Generator for initial random seeds for random victim
149     * selection. This is used only to create initial seeds. Random
150     * steals use a cheaper xorshift generator per steal attempt. We
151     * expect only rare contention on seedGenerator, so just use a
152     * plain Random.
153     */
154     private static final Random seedGenerator = new Random();
155    
156     /**
157 dl 1.18 * The maximum stolen->joining link depth allowed in helpJoinTask.
158     * Depths for legitimate chains are unbounded, but we use a fixed
159     * constant to avoid (otherwise unchecked) cycles and bound
160     * staleness of traversal parameters at the expense of sometimes
161     * blocking when we could be helping.
162 dl 1.14 */
163 dl 1.18 private static final int MAX_HELP_DEPTH = 8;
164    
165     /**
166 jsr166 1.1 * Capacity of work-stealing queue array upon initialization.
167 dl 1.17 * Must be a power of two. Initial size must be at least 4, but is
168 jsr166 1.1 * padded to minimize cache effects.
169     */
170     private static final int INITIAL_QUEUE_CAPACITY = 1 << 13;
171    
172     /**
173     * Maximum work-stealing queue array size. Must be less than or
174 dl 1.24 * equal to 1 << (31 - width of array entry) to ensure lack of
175     * index wraparound. The value is set in the static block
176     * at the end of this file after obtaining width.
177 jsr166 1.1 */
178 dl 1.24 private static final int MAXIMUM_QUEUE_CAPACITY;
179 jsr166 1.1
180     /**
181     * The pool this thread works in. Accessed directly by ForkJoinTask.
182     */
183     final ForkJoinPool pool;
184    
185     /**
186     * The work-stealing queue array. Size must be a power of two.
187 dl 1.14 * Initialized in onStart, to improve memory locality.
188 jsr166 1.1 */
189     private ForkJoinTask<?>[] queue;
190    
191     /**
192 dl 1.14 * Index (mod queue.length) of least valid queue slot, which is
193     * always the next position to steal from if nonempty.
194     */
195     private volatile int base;
196    
197     /**
198 jsr166 1.1 * Index (mod queue.length) of next queue slot to push to or pop
199 dl 1.14 * from. It is written only by owner thread, and accessed by other
200     * threads only after reading (volatile) base. Both sp and base
201     * are allowed to wrap around on overflow, but (sp - base) still
202     * estimates size.
203     */
204     private int sp;
205 jsr166 1.1
206     /**
207 dl 1.18 * The index of most recent stealer, used as a hint to avoid
208     * traversal in method helpJoinTask. This is only a hint because a
209     * worker might have had multiple steals and this only holds one
210     * of them (usually the most current). Declared non-volatile,
211     * relying on other prevailing sync to keep reasonably current.
212     */
213     private int stealHint;
214    
215     /**
216 dl 1.14 * Run state of this worker. In addition to the usual run levels,
217     * tracks if this worker is suspended as a spare, and if it was
218     * killed (trimmed) while suspended. However, "active" status is
219 dl 1.19 * maintained separately and modified only in conjunction with
220 dl 1.20 * CASes of the pool's runState (which are currently sadly
221     * manually inlined for performance). Accessed directly by pool
222     * to simplify checks for normal (zero) status.
223 jsr166 1.1 */
224 dl 1.20 volatile int runState;
225 dl 1.14
226     private static final int TERMINATING = 0x01;
227     private static final int TERMINATED = 0x02;
228     private static final int SUSPENDED = 0x04; // inactive spare
229     private static final int TRIMMED = 0x08; // killed while suspended
230 jsr166 1.1
231     /**
232 dl 1.21 * Number of steals. Directly accessed (and reset) by
233     * pool.tryAccumulateStealCount when idle.
234 jsr166 1.1 */
235 dl 1.14 int stealCount;
236 jsr166 1.1
237     /**
238     * Seed for random number generator for choosing steal victims.
239 dl 1.14 * Uses Marsaglia xorshift. Must be initialized as nonzero.
240 jsr166 1.1 */
241     private int seed;
242    
243     /**
244 dl 1.14 * Activity status. When true, this worker is considered active.
245     * Accessed directly by pool. Must be false upon construction.
246     */
247     boolean active;
248    
249     /**
250     * True if using local FIFO, not the default LIFO, for local polling.
251 dl 1.18 * Shadows value from ForkJoinPool.
252 jsr166 1.1 */
253 dl 1.17 private final boolean locallyFifo;
254 dl 1.18
255 jsr166 1.1 /**
256     * Index of this worker in pool array. Set once by pool before
257 dl 1.14 * running, and accessed directly by pool to locate this worker in
258     * its workers array.
259 jsr166 1.1 */
260     int poolIndex;
261    
262     /**
263 dl 1.14 * The last pool event waited for. Accessed only by pool in
264     * callback methods invoked within this thread.
265 jsr166 1.1 */
266 dl 1.14 int lastEventCount;
267 jsr166 1.1
268     /**
269 dl 1.20 * Encoded index and event count of next event waiter. Accessed
270     * only by ForkJoinPool for managing event waiters.
271 jsr166 1.1 */
272 dl 1.14 volatile long nextWaiter;
273 jsr166 1.1
274     /**
275 dl 1.20 * Number of times this thread suspended as spare. Accessed only
276     * by pool.
277 dl 1.18 */
278     int spareCount;
279    
280     /**
281 dl 1.20 * Encoded index and count of next spare waiter. Accessed only
282 dl 1.18 * by ForkJoinPool for managing spares.
283     */
284     volatile int nextSpare;
285    
286     /**
287     * The task currently being joined, set only when actively trying
288 dl 1.26 * to help other stealers in helpJoinTask. Written only by this
289 dl 1.21 * thread, but read by others.
290 dl 1.18 */
291     private volatile ForkJoinTask<?> currentJoin;
292    
293     /**
294     * The task most recently stolen from another worker (or
295 dl 1.26 * submission queue). Written only by this thread, but read by
296 dl 1.20 * others.
297 dl 1.18 */
298 dl 1.20 private volatile ForkJoinTask<?> currentSteal;
299 dl 1.18
300     /**
301 jsr166 1.1 * Creates a ForkJoinWorkerThread operating in the given pool.
302     *
303     * @param pool the pool this thread works in
304     * @throws NullPointerException if pool is null
305     */
306     protected ForkJoinWorkerThread(ForkJoinPool pool) {
307     this.pool = pool;
308 dl 1.17 this.locallyFifo = pool.locallyFifo;
309 dl 1.18 setDaemon(true);
310 dl 1.14 // To avoid exposing construction details to subclasses,
311     // remaining initialization is in start() and onStart()
312 jsr166 1.1 }
313    
314 dl 1.14 /**
315 jsr166 1.25 * Performs additional initialization and starts this thread.
316 dl 1.14 */
317 dl 1.17 final void start(int poolIndex, UncaughtExceptionHandler ueh) {
318 dl 1.14 this.poolIndex = poolIndex;
319     if (ueh != null)
320     setUncaughtExceptionHandler(ueh);
321     start();
322     }
323    
324     // Public/protected methods
325 jsr166 1.1
326     /**
327     * Returns the pool hosting this thread.
328     *
329     * @return the pool
330     */
331     public ForkJoinPool getPool() {
332     return pool;
333     }
334    
335     /**
336     * Returns the index number of this thread in its pool. The
337     * returned value ranges from zero to the maximum number of
338     * threads (minus one) that have ever been created in the pool.
339     * This method may be useful for applications that track status or
340     * collect results per-worker rather than per-task.
341     *
342     * @return the index number
343     */
344     public int getPoolIndex() {
345     return poolIndex;
346     }
347    
348     /**
349 dl 1.14 * Initializes internal state after construction but before
350     * processing any tasks. If you override this method, you must
351 dl 1.21 * invoke {@code super.onStart()} at the beginning of the method.
352 dl 1.14 * Initialization requires care: Most fields must have legal
353     * default values, to ensure that attempted accesses from other
354     * threads work correctly even before this thread starts
355     * processing tasks.
356 jsr166 1.1 */
357 dl 1.14 protected void onStart() {
358     int rs = seedGenerator.nextInt();
359     seed = (rs == 0) ? 1 : rs; // seed must be nonzero
360 jsr166 1.1
361 dl 1.17 // Allocate name string and arrays in this thread
362 dl 1.14 String pid = Integer.toString(pool.getPoolNumber());
363     String wid = Integer.toString(poolIndex);
364     setName("ForkJoinPool-" + pid + "-worker-" + wid);
365 jsr166 1.1
366 dl 1.14 queue = new ForkJoinTask<?>[INITIAL_QUEUE_CAPACITY];
367     }
368 jsr166 1.1
369     /**
370 dl 1.14 * Performs cleanup associated with termination of this worker
371     * thread. If you override this method, you must invoke
372     * {@code super.onTermination} at the end of the overridden method.
373 jsr166 1.4 *
374 dl 1.14 * @param exception the exception causing this thread to abort due
375     * to an unrecoverable error, or {@code null} if completed normally
376 jsr166 1.1 */
377 dl 1.14 protected void onTermination(Throwable exception) {
378     try {
379 dl 1.19 ForkJoinPool p = pool;
380     if (active) {
381     int a; // inline p.tryDecrementActiveCount
382     active = false;
383 jsr166 1.22 do {} while (!UNSAFE.compareAndSwapInt
384     (p, poolRunStateOffset, a = p.runState, a - 1));
385 dl 1.19 }
386 dl 1.14 cancelTasks();
387     setTerminated();
388 dl 1.19 p.workerTerminated(this);
389 dl 1.14 } catch (Throwable ex) { // Shouldn't ever happen
390     if (exception == null) // but if so, at least rethrow it
391     exception = ex;
392     } finally {
393     if (exception != null)
394     UNSAFE.throwException(exception);
395 jsr166 1.1 }
396     }
397    
398     /**
399     * This method is required to be public, but should never be
400     * called explicitly. It performs the main run loop to execute
401     * ForkJoinTasks.
402     */
403     public void run() {
404     Throwable exception = null;
405     try {
406     onStart();
407     mainLoop();
408     } catch (Throwable ex) {
409     exception = ex;
410     } finally {
411     onTermination(exception);
412     }
413     }
414    
415 dl 1.14 // helpers for run()
416    
417 jsr166 1.1 /**
418 jsr166 1.25 * Finds and executes tasks, and checks status while running.
419 jsr166 1.1 */
420     private void mainLoop() {
421 dl 1.20 boolean ran = false; // true if ran a task on last step
422 dl 1.14 ForkJoinPool p = pool;
423     for (;;) {
424 dl 1.20 p.preStep(this, ran);
425 dl 1.14 if (runState != 0)
426 dl 1.18 break;
427 dl 1.20 ran = tryExecSteal() || tryExecSubmission();
428 jsr166 1.1 }
429     }
430    
431     /**
432 jsr166 1.25 * Tries to steal a task and execute it.
433 dl 1.18 *
434     * @return true if ran a task
435 jsr166 1.1 */
436 dl 1.18 private boolean tryExecSteal() {
437     ForkJoinTask<?> t;
438 dl 1.20 if ((t = scan()) != null) {
439 dl 1.18 t.quietlyExec();
440 dl 1.20 UNSAFE.putOrderedObject(this, currentStealOffset, null);
441 dl 1.18 if (sp != base)
442     execLocalTasks();
443     return true;
444 dl 1.14 }
445 dl 1.18 return false;
446 jsr166 1.1 }
447    
448     /**
449 dl 1.21 * If a submission exists, tries to activate and run it.
450 jsr166 1.1 *
451 dl 1.18 * @return true if ran a task
452 jsr166 1.1 */
453 dl 1.18 private boolean tryExecSubmission() {
454 dl 1.14 ForkJoinPool p = pool;
455 dl 1.21 // This loop is needed in case the attempt to activate fails, in
456     // which case we only retry if there still appears to be a
457     // submission.
458 dl 1.14 while (p.hasQueuedSubmissions()) {
459 dl 1.19 ForkJoinTask<?> t; int a;
460 dl 1.20 if (active || // inline p.tryIncrementActiveCount
461 dl 1.19 (active = UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
462     a = p.runState, a + 1))) {
463 dl 1.18 if ((t = p.pollSubmission()) != null) {
464 dl 1.20 UNSAFE.putOrderedObject(this, currentStealOffset, t);
465 dl 1.18 t.quietlyExec();
466 dl 1.20 UNSAFE.putOrderedObject(this, currentStealOffset, null);
467 dl 1.18 if (sp != base)
468     execLocalTasks();
469     return true;
470     }
471 jsr166 1.1 }
472     }
473 dl 1.18 return false;
474     }
475    
476     /**
477     * Runs local tasks until the queue is empty or shut down. Call only
478     * while active.
479     */
480     private void execLocalTasks() {
481     while (runState == 0) {
482 jsr166 1.23 ForkJoinTask<?> t = locallyFifo ? locallyDeqTask() : popTask();
483 dl 1.18 if (t != null)
484     t.quietlyExec();
485     else if (sp == base)
486     break;
487     }
488 jsr166 1.1 }
489    
490 dl 1.14 /*
491     * Intrinsics-based atomic writes for queue slots. These are
492 jsr166 1.28 * basically the same as methods in AtomicReferenceArray, but
493 dl 1.14 * specialized for (1) ForkJoinTask elements (2) requirement that
494     * nullness and bounds checks have already been performed by
495     * callers and (3) effective offsets are known not to overflow
496     * from int to long (because of MAXIMUM_QUEUE_CAPACITY). We don't
497     * need corresponding version for reads: plain array reads are OK
498 jsr166 1.27 * because they are protected by other volatile reads and are
499 dl 1.14 * confirmed by CASes.
500     *
501     * Most uses don't actually call these methods, but instead contain
502     * inlined forms that enable more predictable optimization. We
503     * don't define the version of write used in pushTask at all, but
504     * instead inline there a store-fenced array slot write.
505 jsr166 1.1 */
506    
507     /**
508 dl 1.14 * CASes slot i of array q from t to null. Caller must ensure q is
509     * non-null and index is in range.
510 jsr166 1.1 */
511 dl 1.14 private static final boolean casSlotNull(ForkJoinTask<?>[] q, int i,
512     ForkJoinTask<?> t) {
513     return UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null);
514 jsr166 1.1 }
515    
516     /**
517 dl 1.14 * Performs a volatile write of the given task at given slot of
518     * array q. Caller must ensure q is non-null and index is in
519     * range. This method is used only during resets and backouts.
520 jsr166 1.1 */
521 dl 1.14 private static final void writeSlot(ForkJoinTask<?>[] q, int i,
522 jsr166 1.25 ForkJoinTask<?> t) {
523 dl 1.14 UNSAFE.putObjectVolatile(q, (i << qShift) + qBase, t);
524 jsr166 1.1 }
525    
526 dl 1.14 // queue methods
527 jsr166 1.1
528     /**
529 dl 1.14 * Pushes a task. Call only from this thread.
530 jsr166 1.1 *
531     * @param t the task. Caller must ensure non-null.
532     */
533     final void pushTask(ForkJoinTask<?> t) {
534     ForkJoinTask<?>[] q = queue;
535 dl 1.14 int mask = q.length - 1; // implicit assert q != null
536 dl 1.17 int s = sp++; // ok to increment sp before slot write
537     UNSAFE.putOrderedObject(q, ((s & mask) << qShift) + qBase, t);
538     if ((s -= base) == 0)
539     pool.signalWork(); // was empty
540     else if (s == mask)
541     growQueue(); // is full
542 jsr166 1.1 }
543    
544     /**
545     * Tries to take a task from the base of the queue, failing if
546 dl 1.14 * empty or contended. Note: Specializations of this code appear
547 dl 1.17 * in locallyDeqTask and elsewhere.
548 jsr166 1.1 *
549     * @return a task, or null if none or contended
550     */
551     final ForkJoinTask<?> deqTask() {
552     ForkJoinTask<?> t;
553     ForkJoinTask<?>[] q;
554 dl 1.14 int b, i;
555 dl 1.18 if (sp != (b = base) &&
556 jsr166 1.1 (q = queue) != null && // must read q after b
557 dl 1.17 (t = q[i = (q.length - 1) & b]) != null && base == b &&
558 dl 1.14 UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null)) {
559 jsr166 1.1 base = b + 1;
560     return t;
561     }
562     return null;
563     }
564    
565     /**
566 dl 1.14 * Tries to take a task from the base of its own queue. Assumes active
567 dl 1.26 * status. Called only by this thread.
568 jsr166 1.6 *
569     * @return a task, or null if none
570     */
571     final ForkJoinTask<?> locallyDeqTask() {
572 dl 1.14 ForkJoinTask<?>[] q = queue;
573     if (q != null) {
574     ForkJoinTask<?> t;
575     int b, i;
576     while (sp != (b = base)) {
577 dl 1.17 if ((t = q[i = (q.length - 1) & b]) != null && base == b &&
578 dl 1.14 UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase,
579     t, null)) {
580 jsr166 1.6 base = b + 1;
581     return t;
582     }
583     }
584     }
585     return null;
586     }
587    
588     /**
589 dl 1.14 * Returns a popped task, or null if empty. Assumes active status.
590 dl 1.26 * Called only by this thread.
591 jsr166 1.1 */
592 dl 1.18 private ForkJoinTask<?> popTask() {
593     ForkJoinTask<?>[] q = queue;
594     if (q != null) {
595     int s;
596     while ((s = sp) != base) {
597     int i = (q.length - 1) & --s;
598     long u = (i << qShift) + qBase; // raw offset
599     ForkJoinTask<?> t = q[i];
600     if (t == null) // lost to stealer
601     break;
602     if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
603     sp = s; // putOrderedInt may encourage more timely write
604     // UNSAFE.putOrderedInt(this, spOffset, s);
605     return t;
606     }
607 jsr166 1.1 }
608     }
609     return null;
610     }
611    
612     /**
613 dl 1.16 * Specialized version of popTask to pop only if topmost element
614 dl 1.26 * is the given task. Called only by this thread while active.
615 jsr166 1.1 *
616     * @param t the task. Caller must ensure non-null.
617     */
618     final boolean unpushTask(ForkJoinTask<?> t) {
619 dl 1.14 int s;
620 dl 1.18 ForkJoinTask<?>[] q = queue;
621     if ((s = sp) != base && q != null &&
622 dl 1.16 UNSAFE.compareAndSwapObject
623     (q, (((q.length - 1) & --s) << qShift) + qBase, t, null)) {
624 dl 1.20 sp = s; // putOrderedInt may encourage more timely write
625 dl 1.18 // UNSAFE.putOrderedInt(this, spOffset, s);
626 jsr166 1.1 return true;
627     }
628     return false;
629     }
630    
631     /**
632 jsr166 1.25 * Returns next task, or null if empty or contended.
633 jsr166 1.1 */
634     final ForkJoinTask<?> peekTask() {
635     ForkJoinTask<?>[] q = queue;
636     if (q == null)
637     return null;
638     int mask = q.length - 1;
639     int i = locallyFifo ? base : (sp - 1);
640     return q[i & mask];
641     }
642    
643     /**
644     * Doubles queue array size. Transfers elements by emulating
645     * steals (deqs) from the old array and placing them, oldest
646     * first, into the new array.
647     */
648     private void growQueue() {
649     ForkJoinTask<?>[] oldQ = queue;
650     int oldSize = oldQ.length;
651     int newSize = oldSize << 1;
652     if (newSize > MAXIMUM_QUEUE_CAPACITY)
653     throw new RejectedExecutionException("Queue capacity exceeded");
654     ForkJoinTask<?>[] newQ = queue = new ForkJoinTask<?>[newSize];
655    
656     int b = base;
657     int bf = b + oldSize;
658     int oldMask = oldSize - 1;
659     int newMask = newSize - 1;
660     do {
661     int oldIndex = b & oldMask;
662     ForkJoinTask<?> t = oldQ[oldIndex];
663     if (t != null && !casSlotNull(oldQ, oldIndex, t))
664     t = null;
665 dl 1.14 writeSlot(newQ, b & newMask, t);
666 jsr166 1.1 } while (++b != bf);
667     pool.signalWork();
668     }
669    
670     /**
671 dl 1.14 * Computes next value for random victim probe in scan(). Scans
672     * don't require a very high quality generator, but also not a
673     * crummy one. Marsaglia xor-shift is cheap and works well enough.
674 jsr166 1.25 * Note: This is manually inlined in scan().
675 dl 1.14 */
676     private static final int xorShift(int r) {
677     r ^= r << 13;
678     r ^= r >>> 17;
679     return r ^ (r << 5);
680     }
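    /*
     * Illustrative note (not part of this file): zero is a fixed
     * point of this transform (each step xors in a shift of zero),
     * so a zero seed would generate zeros forever:
     *
     *   int r = 0;
     *   r ^= r << 13; r ^= r >>> 17; r ^= r << 5;
     *   // r is still 0, which is why onStart() forces seed nonzero
     */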
681    
682     /**
683 jsr166 1.1 * Tries to steal a task from another worker. Starts at a random
684     * index of the workers array, and probes workers until finding
685     * one with a non-empty queue or finding that all are empty. The
686     * first n probes are at random indices. If these are all empty, it
687 dl 1.14 * resorts to a circular sweep, which is necessary to accurately
688     * set active status. (The circular sweep uses steps of
689     * approximately half the array size plus 1, to avoid bias
690     * stemming from leftmost packing of the array in ForkJoinPool.)
691 jsr166 1.1 *
692     * This method must be both fast and quiet -- usually avoiding
693     * memory accesses that could disrupt cache sharing, etc., other than
694 dl 1.14 * those needed to check for and take tasks (or to activate if not
695     * already active). This accounts for, among other things,
696     * updating random seed in place without storing it until exit.
697 jsr166 1.1 *
698     * @return a task, or null if none found
699     */
700     private ForkJoinTask<?> scan() {
701 dl 1.14 ForkJoinPool p = pool;
702 dl 1.16 ForkJoinWorkerThread[] ws; // worker array
703     int n; // upper bound of #workers
704     if ((ws = p.workers) != null && (n = ws.length) > 1) {
705     boolean canSteal = active; // shadow active status
706     int r = seed; // extract seed once
707     int mask = n - 1;
708     int j = -n; // loop counter
709     int k = r; // worker index, random if j < 0
710     for (;;) {
711     ForkJoinWorkerThread v = ws[k & mask];
712     r ^= r << 13; r ^= r >>> 17; r ^= r << 5; // inline xorshift
713 dl 1.20 ForkJoinTask<?>[] q; ForkJoinTask<?> t; int b, a;
714     if (v != null && (b = v.base) != v.sp &&
715     (q = v.queue) != null) {
716     int i = (q.length - 1) & b;
717     long u = (i << qShift) + qBase; // raw offset
718     int pid = poolIndex;
719     if ((t = q[i]) != null) {
720     if (!canSteal && // inline p.tryIncrementActiveCount
721     UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
722     a = p.runState, a + 1))
723     canSteal = active = true;
724     if (canSteal && v.base == b++ &&
725 dl 1.18 UNSAFE.compareAndSwapObject(q, u, t, null)) {
726 dl 1.20 v.base = b;
727 dl 1.18 v.stealHint = pid;
728 dl 1.20 UNSAFE.putOrderedObject(this,
729     currentStealOffset, t);
730 dl 1.18 seed = r;
731     ++stealCount;
732     return t;
733 dl 1.17 }
734 jsr166 1.1 }
735 dl 1.16 j = -n;
736     k = r; // restart on contention
737 jsr166 1.1 }
738 dl 1.16 else if (++j <= 0)
739     k = r;
740     else if (j <= n)
741     k += (n >>> 1) | 1;
742     else
743     break;
744 jsr166 1.1 }
745 dl 1.14 }
746     return null;
747 jsr166 1.1 }
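    /*
     * Illustrative note on the circular sweep above, assuming (as
     * the masking implies) that the workers array length n is a
     * power of two: the step (n >>> 1) | 1 is odd, hence coprime to
     * n, so repeatedly adding it mod n visits every index exactly
     * once per sweep. For n = 8 the step is 5, and starting at 0
     * the probe order is 0, 5, 2, 7, 4, 1, 6, 3.
     */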
748    
749 dl 1.14 // Run State management
750    
751     // status check methods used mainly by ForkJoinPool
752 jsr166 1.33 final boolean isRunning() { return runState == 0; }
753     final boolean isTerminated() { return (runState & TERMINATED) != 0; }
754     final boolean isSuspended() { return (runState & SUSPENDED) != 0; }
755     final boolean isTrimmed() { return (runState & TRIMMED) != 0; }
756 dl 1.14
757 jsr166 1.31 final boolean isTerminating() {
758 dl 1.30 if ((runState & TERMINATING) != 0)
759     return true;
760     if (pool.isAtLeastTerminating()) { // propagate pool state
761     shutdown();
762     return true;
763     }
764     return false;
765     }
766    
767 jsr166 1.1 /**
768 dl 1.19 * Sets state to TERMINATING. Does NOT unpark or interrupt
769 dl 1.20 * to wake up if currently blocked. Callers must do so if desired.
770 dl 1.14 */
771 dl 1.19 final void shutdown() {
772 dl 1.14 for (;;) {
773     int s = runState;
774 dl 1.18 if ((s & (TERMINATING|TERMINATED)) != 0)
775     break;
776 dl 1.14 if ((s & SUSPENDED) != 0) { // kill and wakeup if suspended
777     if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
778     (s & ~SUSPENDED) |
779 dl 1.18 (TRIMMED|TERMINATING)))
780 dl 1.14 break;
781     }
782     else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
783     s | TERMINATING))
784     break;
785     }
786     }
787    
788     /**
789 jsr166 1.25 * Sets state to TERMINATED. Called only by onTermination().
790 dl 1.14 */
791     private void setTerminated() {
792     int s;
793     do {} while (!UNSAFE.compareAndSwapInt(this, runStateOffset,
794     s = runState,
795     s | (TERMINATING|TERMINATED)));
796     }
797    
798     /**
799 dl 1.19 * If suspended, tries to set status to unsuspended.
800 dl 1.20 * Does NOT wake up if blocked.
801 jsr166 1.1 *
802 dl 1.14 * @return true if successful
803 jsr166 1.1 */
804 dl 1.14 final boolean tryUnsuspend() {
805 dl 1.18 int s;
806     while (((s = runState) & SUSPENDED) != 0) {
807     if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
808     s & ~SUSPENDED))
809     return true;
810     }
811 dl 1.17 return false;
812 jsr166 1.1 }
813    
814     /**
815 dl 1.18 * Sets suspended status and blocks as spare until resumed
816     * or shutdown.
817 jsr166 1.1 */
818 dl 1.19 final void suspendAsSpare() {
819 dl 1.18 for (;;) { // set suspended unless terminating
820 dl 1.14 int s = runState;
821     if ((s & TERMINATING) != 0) { // must kill
822     if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
823     s | (TRIMMED | TERMINATING)))
824 dl 1.19 return;
825 dl 1.14 }
826     else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
827     s | SUSPENDED))
828     break;
829     }
830 dl 1.18 ForkJoinPool p = pool;
831     p.pushSpare(this);
832 dl 1.14 while ((runState & SUSPENDED) != 0) {
833 dl 1.19 if (p.tryAccumulateStealCount(this)) {
834     interrupted(); // clear/ignore interrupts
835 dl 1.18 if ((runState & SUSPENDED) == 0)
836     break;
837 dl 1.20 LockSupport.park(this);
838 dl 1.14 }
839 jsr166 1.1 }
840 dl 1.14 }
841    
842     // Misc support methods for ForkJoinPool
843    
844     /**
845     * Returns an estimate of the number of tasks in the queue. Also
846     * used by ForkJoinTask.
847     */
848     final int getQueueSize() {
849 dl 1.18 int n; // external calls must read base first
850     return (n = -base + sp) <= 0 ? 0 : n;
851 jsr166 1.1 }
852    
853 dl 1.14 /**
854 jsr166 1.1 * Removes and cancels all tasks in queue. Can be called from any
855     * thread.
856     */
857     final void cancelTasks() {
858 dl 1.18 ForkJoinTask<?> cj = currentJoin; // try to cancel ongoing tasks
859     if (cj != null) {
860     currentJoin = null;
861     cj.cancelIgnoringExceptions();
862     try {
863     this.interrupt(); // awaken wait
864     } catch (SecurityException ignore) {
865     }
866     }
867     ForkJoinTask<?> cs = currentSteal;
868     if (cs != null) {
869     currentSteal = null;
870     cs.cancelIgnoringExceptions();
871     }
872 dl 1.14 while (base != sp) {
873     ForkJoinTask<?> t = deqTask();
874     if (t != null)
875     t.cancelIgnoringExceptions();
876     }
877 jsr166 1.1 }
878    
879     /**
880     * Drains tasks to given collection c.
881     *
882     * @return the number of tasks drained
883     */
884 jsr166 1.5 final int drainTasksTo(Collection<? super ForkJoinTask<?>> c) {
885 jsr166 1.1 int n = 0;
886 dl 1.14 while (base != sp) {
887     ForkJoinTask<?> t = deqTask();
888     if (t != null) {
889     c.add(t);
890     ++n;
891     }
892 jsr166 1.1 }
893     return n;
894     }
895    
896 dl 1.14 // Support methods for ForkJoinTask
897    
898 jsr166 1.1 /**
899 dl 1.18 * Gets and removes a local task.
900     *
901     * @return a task, if available
902     */
903     final ForkJoinTask<?> pollLocalTask() {
904 dl 1.19 ForkJoinPool p = pool;
905 dl 1.18 while (sp != base) {
906 dl 1.19 int a; // inline p.tryIncrementActiveCount
907     if (active ||
908     (active = UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
909     a = p.runState, a + 1)))
910 jsr166 1.23 return locallyFifo ? locallyDeqTask() : popTask();
911 dl 1.18 }
912     return null;
913     }
914    
915     /**
916     * Gets and removes a local or stolen task.
917     *
918     * @return a task, if available
919     */
920     final ForkJoinTask<?> pollTask() {
921     ForkJoinTask<?> t = pollLocalTask();
922     if (t == null) {
923     t = scan();
924 dl 1.20 // cannot retain/track/help steal
925     UNSAFE.putOrderedObject(this, currentStealOffset, null);
926 dl 1.18 }
927     return t;
928     }
929    
930     /**
931 dl 1.17 * Possibly runs some tasks and/or blocks, until the task is done.
932     *
933     * @param joinMe the task to join
934 dl 1.34 * @param timed true to use a timed wait
935     * @param nanos the wait time, if timed
936 dl 1.17 */
937 dl 1.34 final void joinTask(ForkJoinTask<?> joinMe, boolean timed, long nanos) {
938 dl 1.18 // currentJoin only written by this thread; only need ordered store
939     ForkJoinTask<?> prevJoin = currentJoin;
940     UNSAFE.putOrderedObject(this, currentJoinOffset, joinMe);
941 dl 1.32 if (isTerminating()) // cancel if shutting down
942     joinMe.cancelIgnoringExceptions();
943 jsr166 1.37 else
944 dl 1.35 pool.awaitJoin(joinMe, this, timed, nanos);
945 dl 1.18 UNSAFE.putOrderedObject(this, currentJoinOffset, prevJoin);
946     }
947    
948     /**
949 dl 1.32 * Tries to locate and help perform tasks for a stealer of the
950     * given task, or in turn one of its stealers. Traces
951     * currentSteal->currentJoin links looking for a thread working on
952     * a descendant of the given task and with a non-empty queue to
953     * steal back and execute tasks from.
954 dl 1.18 *
955 dl 1.20 * The implementation is very branchy to cope with potential
956 dl 1.18 * inconsistencies or loops encountering chains that are stale,
957     * unknown, or of length greater than MAX_HELP_DEPTH links. All
958     * of these cases are dealt with by just returning to the
959     * caller, who is expected to retry if other join mechanisms also
960     * don't work out.
961 dl 1.17 *
962     * @param joinMe the task to join
963     */
964 dl 1.18 final void helpJoinTask(ForkJoinTask<?> joinMe) {
965 dl 1.20 ForkJoinWorkerThread[] ws;
966     int n;
967     if (joinMe.status < 0) // already done
968     return;
969     if ((ws = pool.workers) == null || (n = ws.length) <= 1)
970     return; // need at least 2 workers
971    
972     ForkJoinTask<?> task = joinMe; // base of chain
973     ForkJoinWorkerThread thread = this; // thread with stolen task
974     for (int d = 0; d < MAX_HELP_DEPTH; ++d) { // chain length
975     // Try to find v, the stealer of task, by first using hint
976     ForkJoinWorkerThread v = ws[thread.stealHint & (n - 1)];
977     if (v == null || v.currentSteal != task) {
978     for (int j = 0; ; ++j) { // search array
979     if (j < n) {
980     ForkJoinTask<?> vs;
981 dl 1.36 if ((v = ws[j]) != null && v != this &&
982 dl 1.20 (vs = v.currentSteal) != null) {
983     if (joinMe.status < 0 || task.status < 0)
984     return; // stale or done
985     if (vs == task) {
986     thread.stealHint = j;
987     break; // save hint for next time
988 dl 1.18 }
989     }
990 dl 1.17 }
991 dl 1.20 else
992     return; // no stealer
993 dl 1.17 }
994 dl 1.20 }
995     for (;;) { // Try to help v, using specialized form of deqTask
996     if (joinMe.status < 0)
997     return;
998     int b = v.base;
999     ForkJoinTask<?>[] q = v.queue;
1000     if (b == v.sp || q == null)
1001     break;
1002     int i = (q.length - 1) & b;
1003     long u = (i << qShift) + qBase;
1004     ForkJoinTask<?> t = q[i];
1005     int pid = poolIndex;
1006     ForkJoinTask<?> ps = currentSteal;
1007     if (task.status < 0)
1008     return; // stale or done
1009     if (t != null && v.base == b++ &&
1010     UNSAFE.compareAndSwapObject(q, u, t, null)) {
1011     if (joinMe.status < 0) {
1012     UNSAFE.putObjectVolatile(q, u, t);
1013     return; // back out on cancel
1014 dl 1.17 }
1015 dl 1.20 v.base = b;
1016     v.stealHint = pid;
1017     UNSAFE.putOrderedObject(this, currentStealOffset, t);
1018     t.quietlyExec();
1019     UNSAFE.putOrderedObject(this, currentStealOffset, ps);
1020 dl 1.17 }
1021     }
1022 dl 1.20 // Try to descend to find v's stealer
1023     ForkJoinTask<?> next = v.currentJoin;
1024     if (task.status < 0 || next == null || next == task ||
1025     joinMe.status < 0)
1026     return;
1027     task = next;
1028     thread = v;
1029 dl 1.17 }
1030     }
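    /*
     * Illustrative trace (hypothetical workers A, B, C; not part of
     * this file): suppose A joins a task t that B stole
     * (B.currentSteal == t), and B is itself joining a task u stolen
     * by C. Starting from (task = t, thread = A), the loop above
     * locates B via stealHint or an array scan, steals back and runs
     * tasks from B's queue while t is undone, then descends via
     * B.currentJoin to u and repeats with C, following at most
     * MAX_HELP_DEPTH links before giving up and letting the caller
     * fall back to blocking.
     */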
1031    
1032     /**
1033 dl 1.29 * Implements ForkJoinTask.getSurplusQueuedTaskCount().
1034 dl 1.14 * Returns an estimate of the number of tasks, offset by a
1035     * function of number of idle workers.
1036     *
1037     * This method provides a cheap heuristic guide for task
1038     * partitioning when programmers, frameworks, tools, or languages
1039     * have little or no idea about task granularity. In essence, by
1040     * offering this method, we ask users only about tradeoffs in
1041     * overhead vs expected throughput and its variance, rather than
1042     * how finely to partition tasks.
1043     *
1044     * In a steady state strict (tree-structured) computation, each
1045     * thread makes available for stealing enough tasks for other
1046     * threads to remain active. Inductively, if all threads play by
1047     * the same rules, each thread should make available only a
1048     * constant number of tasks.
1049     *
1050     * The minimum useful constant is just 1. But using a value of 1
1051     * would require immediate replenishment upon each steal to
1052     * maintain enough tasks, which is infeasible. Further,
1053     * partitionings/granularities of offered tasks should minimize
1054     * steal rates, which in general means that threads nearer the top
1055     * of computation tree should generate more than those nearer the
1056     * bottom. In perfect steady state, each thread is at
1057     * approximately the same level of computation tree. However,
1058     * producing extra tasks amortizes the uncertainty of progress and
1059     * diffusion assumptions.
1060     *
1061     * So, users will want to use values larger, but not much larger
1062     * than 1, to both smooth over transient shortages and hedge
1063     * against uneven progress, as traded off against the cost of
1064     * extra task overhead. We leave the user to pick a threshold
1065     * value to compare with the results of this call to guide
1066     * decisions, but recommend values such as 3.
1067     *
1068     * When all threads are active, it is on average OK to estimate
1069     * surplus strictly locally. In steady-state, if one thread is
1070     * maintaining say 2 surplus tasks, then so are others. So we can
1071     * just use estimated queue length (although note that (sp - base)
1072     * can be an overestimate because of stealers lagging increments
1073     * of base). However, this strategy alone leads to serious
1074     * mis-estimates in some non-steady-state conditions (ramp-up,
1075     * ramp-down, other stalls). We can detect many of these by
1076     * further considering the number of "idle" threads, that are
1077     * known to have zero queued tasks, so compensate by a factor of
1078     * (#idle/#active) threads.
1079 jsr166 1.1 */
1080 dl 1.14 final int getEstimatedSurplusTaskCount() {
1081     return sp - base - pool.idlePerActive();
1082 jsr166 1.1 }
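    /*
     * Illustrative use of this heuristic via the public method
     * ForkJoinTask.getSurplusQueuedTaskCount(), with the recommended
     * threshold of 3. The recursive task below is hypothetical, not
     * part of this file:
     *
     *   class Sum extends RecursiveTask<Long> {
     *       final long[] a; final int lo, hi;
     *       Sum(long[] a, int lo, int hi) {
     *           this.a = a; this.lo = lo; this.hi = hi;
     *       }
     *       protected Long compute() {
     *           if (hi - lo < 1024 ||
     *               getSurplusQueuedTaskCount() >= 3) {
     *               long s = 0;       // enough stealable work queued;
     *               for (int i = lo; i < hi; ++i)  // just compute
     *                   s += a[i];
     *               return s;
     *           }
     *           int mid = (lo + hi) >>> 1;
     *           Sum left = new Sum(a, lo, mid);
     *           left.fork();          // make more work stealable
     *           long r = new Sum(a, mid, hi).compute();
     *           return r + left.join();
     *       }
     *   }
     */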
1083    
1084     /**
1085     * Runs tasks until {@code pool.isQuiescent()}.
1086     */
1087     final void helpQuiescePool() {
1088 dl 1.20 ForkJoinTask<?> ps = currentSteal; // to restore below
1089 jsr166 1.1 for (;;) {
1090 dl 1.14 ForkJoinTask<?> t = pollLocalTask();
1091 dl 1.20 if (t != null || (t = scan()) != null)
1092 dl 1.18 t.quietlyExec();
1093 dl 1.14 else {
1094     ForkJoinPool p = pool;
1095 dl 1.19 int a; // to inline CASes
1096 dl 1.14 if (active) {
1097 dl 1.19 if (!UNSAFE.compareAndSwapInt
1098     (p, poolRunStateOffset, a = p.runState, a - 1))
1099 dl 1.18 continue; // retry later
1100 dl 1.14 active = false; // inactivate
1101 dl 1.20 UNSAFE.putOrderedObject(this, currentStealOffset, ps);
1102 dl 1.14 }
1103     if (p.isQuiescent()) {
1104     active = true; // re-activate
1105 jsr166 1.22 do {} while (!UNSAFE.compareAndSwapInt
1106     (p, poolRunStateOffset, a = p.runState, a+1));
1107 dl 1.14 return;
1108     }
1109     }
1110 jsr166 1.1 }
1111     }
1112    
1113     // Unsafe mechanics
1114    
1115     private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
1116 dl 1.18 private static final long spOffset =
1117     objectFieldOffset("sp", ForkJoinWorkerThread.class);
1118 jsr166 1.2 private static final long runStateOffset =
1119 jsr166 1.3 objectFieldOffset("runState", ForkJoinWorkerThread.class);
1120 dl 1.18 private static final long currentJoinOffset =
1121     objectFieldOffset("currentJoin", ForkJoinWorkerThread.class);
1122     private static final long currentStealOffset =
1123     objectFieldOffset("currentSteal", ForkJoinWorkerThread.class);
1124 dl 1.14 private static final long qBase =
1125     UNSAFE.arrayBaseOffset(ForkJoinTask[].class);
1126 dl 1.19 private static final long poolRunStateOffset = // to inline CAS
1127     objectFieldOffset("runState", ForkJoinPool.class);
1128 dl 1.18
1129 jsr166 1.2 private static final int qShift;
1130 jsr166 1.1
1131     static {
1132     int s = UNSAFE.arrayIndexScale(ForkJoinTask[].class);
1133     if ((s & (s-1)) != 0)
1134     throw new Error("data type scale not a power of two");
1135     qShift = 31 - Integer.numberOfLeadingZeros(s);
1136 dl 1.24 MAXIMUM_QUEUE_CAPACITY = 1 << (31 - qShift);
1137 jsr166 1.1 }
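    /*
     * Worked example of the static initialization above: with a
     * 4-byte reference scale (32-bit or compressed references),
     * qShift = 31 - numberOfLeadingZeros(4) = 2, so
     * MAXIMUM_QUEUE_CAPACITY = 1 << 29; with 8-byte references,
     * qShift = 3 and the capacity is 1 << 28. Either way the largest
     * in-array byte offset, (capacity - 1) << qShift, still fits in
     * a nonnegative int, matching the "known not to overflow"
     * comment on the slot methods above.
     */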
1138 jsr166 1.3
1139     private static long objectFieldOffset(String field, Class<?> klazz) {
1140     try {
1141     return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
1142     } catch (NoSuchFieldException e) {
1143     // Convert Exception to corresponding Error
1144     NoSuchFieldError error = new NoSuchFieldError(field);
1145     error.initCause(e);
1146     throw error;
1147     }
1148     }
1149 jsr166 1.1 }
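/*
 * Illustrative sketch of the subclassing pattern described in the
 * class-level javadoc (the class names here are hypothetical, not
 * part of this file): a worker subclass overriding the
 * initialization and termination hooks, plus the custom
 * ForkJoinWorkerThreadFactory needed to install it in a
 * ForkJoinPool.
 *
 *   class LoggingWorkerThread extends ForkJoinWorkerThread {
 *       LoggingWorkerThread(ForkJoinPool pool) {
 *           super(pool);
 *       }
 *       protected void onStart() {
 *           super.onStart();                // required first
 *           System.out.println(getName() + " starting");
 *       }
 *       protected void onTermination(Throwable exception) {
 *           System.out.println(getName() + " terminating");
 *           super.onTermination(exception); // required last
 *       }
 *   }
 *
 *   class LoggingThreadFactory
 *           implements ForkJoinPool.ForkJoinWorkerThreadFactory {
 *       public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
 *           return new LoggingWorkerThread(pool);
 *       }
 *   }
 */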