root/jsr166/jsr166/src/jsr166y/ForkJoinWorkerThread.java
Revision: 1.56
Committed: Thu Nov 18 00:39:15 2010 UTC by jsr166
Branch: MAIN
Changes since 1.55: +1 -1 lines
Log Message:
whitespace

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/licenses/publicdomain
5 */
6
7 package jsr166y;
8
9 import java.util.Random;
10 import java.util.Collection;
11 import java.util.concurrent.locks.LockSupport;
12 import java.util.concurrent.RejectedExecutionException;
13
14 /**
15 * A thread managed by a {@link ForkJoinPool}. This class is
16 * subclassable solely for the sake of adding functionality -- there
17 * are no overridable methods dealing with scheduling or execution.
18 * However, you can override initialization and termination methods
19 * surrounding the main task processing loop. If you do create such a
20 * subclass, you will also need to supply a custom {@link
21 * ForkJoinPool.ForkJoinWorkerThreadFactory} to use it in a {@code
22 * ForkJoinPool}.
23 *
24 * @since 1.7
25 * @author Doug Lea
26 */
27 public class ForkJoinWorkerThread extends Thread {
28 /*
29 * Overview:
30 *
31 * ForkJoinWorkerThreads are managed by ForkJoinPools and perform
32 * ForkJoinTasks. This class includes bookkeeping in support of
33 * worker activation, suspension, and lifecycle control described
34 * in more detail in the internal documentation of class
35 * ForkJoinPool. And as described further below, this class also
36 * includes special-cased support for some ForkJoinTask
37 * methods. But the main mechanics involve work-stealing:
38 *
39 * Work-stealing queues are special forms of Deques that support
40 * only three of the four possible end-operations -- push, pop,
41 * and deq (aka steal), under the further constraints that push
42 * and pop are called only from the owning thread, while deq may
43 * be called from other threads. (If you are unfamiliar with
44 * them, you probably want to read Herlihy and Shavit's book "The
45 * Art of Multiprocessor Programming", chapter 16 describing these
46 * in more detail before proceeding.) The main work-stealing
47 * queue design is roughly similar to those in the papers "Dynamic
48 * Circular Work-Stealing Deque" by Chase and Lev, SPAA 2005
49 * (http://research.sun.com/scalable/pubs/index.html) and
50 * "Idempotent work stealing" by Michael, Saraswat, and Vechev,
51 * PPoPP 2009 (http://portal.acm.org/citation.cfm?id=1504186).
52 * The main differences ultimately stem from gc requirements that
53 * we null out taken slots as soon as we can, to maintain as small
54 * a footprint as possible even in programs generating huge
55 * numbers of tasks. To accomplish this, we shift the CAS
56 * arbitrating pop vs deq (steal) from being on the indices
57 * ("base" and "sp") to the slots themselves (mainly via method
58 * "casSlotNull()"). So, both a successful pop and deq mainly
59 * entail a CAS of a slot from non-null to null. Because we rely
60 * on CASes of references, we do not need tag bits on base or sp.
61 * They are simple ints as used in any circular array-based queue
62 * (see for example ArrayDeque). Updates to the indices must
63 * still be ordered in a way that guarantees that sp == base means
64 * the queue is empty, but otherwise may err on the side of
65 * possibly making the queue appear nonempty when a push, pop, or
66 * deq has not fully committed. Note that this means that the deq
67 * operation, considered individually, is not wait-free. One thief
68 * cannot successfully continue until another in-progress one (or,
69 * if previously empty, a push) completes. However, in the
70 * aggregate, we ensure at least probabilistic non-blockingness.
71 * If an attempted steal fails, a thief always chooses a different
72 * random victim target to try next. So, in order for one thief to
73 * progress, it suffices for any in-progress deq or new push on
74 * any empty queue to complete. One reason this works well here is
75 * that apparently-nonempty often means soon-to-be-stealable,
76 * which gives threads a chance to set activation status if
77 * necessary before stealing.
78 *
79 * This approach also enables support for "async mode" where local
80 * task processing is in FIFO rather than LIFO order, simply by using a
81 * version of deq rather than pop when locallyFifo is true (as set
82 * by the ForkJoinPool). This allows use in message-passing
83 * frameworks in which tasks are never joined.
84 *
85 * When a worker would otherwise be blocked waiting to join a
86 * task, it first tries a form of linear helping: Each worker
87 * records (in field currentSteal) the most recent task it stole
88 * from some other worker. Plus, it records (in field currentJoin)
89 * the task it is currently actively joining. Method joinTask uses
90 * these markers to try to find a worker to help (i.e., steal back
91 * a task from and execute it) that could hasten completion of the
92 * actively joined task. In essence, the joiner executes a task
93 * that would be on its own local deque had the to-be-joined task
94 * not been stolen. This may be seen as a conservative variant of
95 * the approach in Wagner & Calder "Leapfrogging: a portable
96 * technique for implementing efficient futures" SIGPLAN Notices,
97 * 1993 (http://portal.acm.org/citation.cfm?id=155354). It differs
98 * in that: (1) We only maintain dependency links across workers
99 * upon steals, rather than use per-task bookkeeping. This may
100 * require a linear scan of workers array to locate stealers, but
101 * usually doesn't because stealers leave hints (that may become
102 * stale/wrong) of where to locate them. This isolates cost to
103 * when it is needed, rather than adding to per-task overhead.
104 * (2) It is "shallow", ignoring nesting and potentially cyclic
105 * mutual steals. (3) It is intentionally racy: field currentJoin
106 * is updated only while actively joining, which means that we
107 * miss links in the chain during long-lived tasks, GC stalls, etc.
108 * (which is OK since blocking in such cases is usually a good
109 * idea). (4) We bound the number of attempts to find work (see
110 * MAX_HELP_DEPTH) and fall back to suspending the worker and if
111 * necessary replacing it with a spare (see
112 * ForkJoinPool.awaitJoin).
113 *
114 * Efficient implementation of these algorithms currently relies
115 * on an uncomfortable amount of "Unsafe" mechanics. To maintain
116 * correct orderings, reads and writes of variable base require
117 * volatile ordering. Variable sp does not require volatile
118 * writes but still needs store-ordering, which we accomplish by
119 * pre-incrementing sp before filling the slot with an ordered
120 * store. (Pre-incrementing also enables backouts used in
121 * joinTask.) Because they are protected by volatile base reads,
122 * reads of the queue array and its slots by other threads do not
123 * need volatile load semantics, but writes (in push) require
124 * store order and CASes (in pop and deq) require (volatile) CAS
125 * semantics. (Michael, Saraswat, and Vechev's algorithm has
126 * similar properties, but without support for nulling slots.)
127 * Since these combinations aren't supported using ordinary
128 * volatiles, the only way to accomplish these efficiently is to
129 * use direct Unsafe calls. (Using external AtomicIntegers and
130 * AtomicReferenceArrays for the indices and array is
131 * significantly slower because of memory locality and indirection
132 * effects.)
133 *
134 * Further, performance on most platforms is very sensitive to
135 * placement and sizing of the (resizable) queue array. Even
136 * though these queues don't usually become all that big, the
137 * initial size must be large enough to counteract cache
138 * contention effects across multiple queues (especially in the
139 * presence of GC cardmarking). Also, to improve thread-locality,
140 * queues are initialized after starting. Altogether, these
141 * low-level implementation choices produce as much as a factor of
142 * 4 performance improvement compared to naive implementations,
143 * and enable the processing of billions of tasks per second,
144 * sometimes at the expense of ugliness.
145 */
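    /*
     * Editorial sketch (not part of the original file): a minimal,
     * self-contained analogue of the slot-CAS discipline described in
     * the overview above, written against AtomicReferenceArray rather
     * than Unsafe. Both pop and steal commit by CASing a slot from
     * non-null to null; base and sp are plain ints that may wrap, and
     * lazySet stands in for the ordered store used in pushTask. The
     * class and method names are hypothetical, and resizing, work
     * signalling, and the FIFO "async mode" variant are omitted.
     */
    static final class SlotCasDequeSketch<T> {
        final java.util.concurrent.atomic.AtomicReferenceArray<T> slots;
        volatile int base;   // index of least valid slot; next steal target
        int sp;              // next push index; written only by owner

        SlotCasDequeSketch(int capacity) { // capacity must be a power of two
            slots = new java.util.concurrent.atomic.AtomicReferenceArray<T>(capacity);
        }

        void push(T t) {     // owner only; pre-increments sp as in pushTask
            int s = sp++;
            slots.lazySet(s & (slots.length() - 1), t); // ordered store
        }

        T pop() {            // owner only
            int s;
            while ((s = sp) != base) {
                int i = (slots.length() - 1) & --s;
                T t = slots.get(i);
                if (t == null)
                    break;   // lost the slot to a stealer
                if (slots.compareAndSet(i, t, null)) { // commit via slot CAS
                    sp = s;
                    return t;
                }
            }
            return null;
        }

        T steal() {          // any thread; may fail if contended
            int b = base;    // read volatile base before sp and slot
            if (sp != b) {
                int i = (slots.length() - 1) & b;
                T t = slots.get(i);
                if (t != null && base == b && slots.compareAndSet(i, t, null)) {
                    base = b + 1; // advance only after winning the slot
                    return t;
                }
            }
            return null;
        }
    }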
146
147 /**
148 * Generator for initial random seeds for random victim
149 * selection. This is used only to create initial seeds. Random
150 * steals use a cheaper xorshift generator per steal attempt. We
151 * expect only rare contention on seedGenerator, so just use a
152 * plain Random.
153 */
154 private static final Random seedGenerator = new Random();
155
156 /**
157 * The maximum stolen->joining link depth allowed in helpJoinTask.
158 * Depths for legitimate chains are unbounded, but we use a fixed
159 * constant to avoid (otherwise unchecked) cycles and bound
160 * staleness of traversal parameters at the expense of sometimes
161 * blocking when we could be helping.
162 */
163 private static final int MAX_HELP_DEPTH = 8;
164
165 /**
166 * Capacity of work-stealing queue array upon initialization.
167 * Must be a power of two. Initial size must be at least 4, but is
168 * padded to minimize cache effects.
169 */
170 private static final int INITIAL_QUEUE_CAPACITY = 1 << 13;
171
172 /**
173 * Maximum work-stealing queue array size. Must be less than or
174 * equal to 1 << (31 - width of array entry) to ensure lack of
175 * index wraparound. The value is set in the static block
176 * at the end of this file after obtaining width.
177 */
178 private static final int MAXIMUM_QUEUE_CAPACITY;
179
180 /**
181 * The pool this thread works in. Accessed directly by ForkJoinTask.
182 */
183 final ForkJoinPool pool;
184
185 /**
186 * The work-stealing queue array. Size must be a power of two.
187 * Initialized in onStart, to improve memory locality.
188 */
189 private ForkJoinTask<?>[] queue;
190
191 /**
192 * Index (mod queue.length) of least valid queue slot, which is
193 * always the next position to steal from if nonempty.
194 */
195 private volatile int base;
196
197 /**
198 * Index (mod queue.length) of next queue slot to push to or pop
199 * from. It is written only by owner thread, and accessed by other
200 * threads only after reading (volatile) base. Both sp and base
201 * are allowed to wrap around on overflow, but (sp - base) still
202 * estimates size.
203 */
204 private int sp;
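    // Editorial note: a worked example of the wraparound arithmetic
    // mentioned above. Since sp and base are plain ints, both may wrap
    // past Integer.MAX_VALUE, but two's-complement subtraction keeps
    // (sp - base) a valid size estimate across the wrap: with
    //   base == Integer.MAX_VALUE - 1
    // and two subsequent pushes, sp wraps to Integer.MIN_VALUE, and
    //   sp - base == Integer.MIN_VALUE - (Integer.MAX_VALUE - 1) == 2
    // which is exactly the number of queued tasks.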
205
206 /**
207 * The index of most recent stealer, used as a hint to avoid
208 * traversal in method helpJoinTask. This is only a hint because a
209 * worker might have had multiple steals and this only holds one
210 * of them (usually the most current). Declared non-volatile,
211 * relying on other prevailing sync to keep reasonably current.
212 */
213 private int stealHint;
214
215 /**
216 * Run state of this worker. In addition to the usual run levels,
217 * tracks if this worker is suspended as a spare, and if it was
218 * killed (trimmed) while suspended. However, "active" status is
219 * maintained separately and modified only in conjunction with
220 * CASes of the pool's runState (which are currently sadly
221 * manually inlined for performance). Accessed directly by pool
222 * to simplify checks for normal (zero) status.
223 */
224 volatile int runState;
225
226 private static final int TERMINATING = 0x01;
227 private static final int TERMINATED = 0x02;
228 private static final int SUSPENDED = 0x04; // inactive spare
229 private static final int TRIMMED = 0x08; // killed while suspended
230
231 /**
232 * Number of steals. Directly accessed (and reset) by
233 * pool.tryAccumulateStealCount when idle.
234 */
235 int stealCount;
236
237 /**
238 * Seed for random number generator for choosing steal victims.
239 * Uses Marsaglia xorshift. Must be initialized as nonzero.
240 */
241 private int seed;
242
243 /**
244 * Activity status. When true, this worker is considered active.
245 * Accessed directly by pool. Must be false upon construction.
246 */
247 boolean active;
248
249 /**
250 * True if using local FIFO rather than the default LIFO for local polling.
251 * Shadows value from ForkJoinPool.
252 */
253 private final boolean locallyFifo;
254
255 /**
256 * Index of this worker in pool array. Set once by pool before
257 * running, and accessed directly by pool to locate this worker in
258 * its workers array.
259 */
260 int poolIndex;
261
262 /**
263 * The last pool event waited for. Accessed only by pool in
264 * callback methods invoked within this thread.
265 */
266 int lastEventCount;
267
268 /**
269 * Encoded index and event count of next event waiter. Accessed
270 * only by ForkJoinPool for managing event waiters.
271 */
272 volatile long nextWaiter;
273
274 /**
275 * Number of times this thread suspended as spare. Accessed only
276 * by pool.
277 */
278 int spareCount;
279
280 /**
281 * Encoded index and count of next spare waiter. Accessed only
282 * by ForkJoinPool for managing spares.
283 */
284 volatile int nextSpare;
285
286 /**
287 * The task currently being joined, set only when actively trying
288 * to help other stealers in helpJoinTask. Written only by this
289 * thread, but read by others.
290 */
291 private volatile ForkJoinTask<?> currentJoin;
292
293 /**
294 * The task most recently stolen from another worker (or
295 * submission queue). Written only by this thread, but read by
296 * others.
297 */
298 private volatile ForkJoinTask<?> currentSteal;
299
300 /**
301 * Creates a ForkJoinWorkerThread operating in the given pool.
302 *
303 * @param pool the pool this thread works in
304 * @throws NullPointerException if pool is null
305 */
306 protected ForkJoinWorkerThread(ForkJoinPool pool) {
307 this.pool = pool;
308 this.locallyFifo = pool.locallyFifo;
309 setDaemon(true);
310 // To avoid exposing construction details to subclasses,
311 // remaining initialization is in start() and onStart()
312 }
313
314 /**
315 * Performs additional initialization and starts this thread.
316 */
317 final void start(int poolIndex, UncaughtExceptionHandler ueh) {
318 this.poolIndex = poolIndex;
319 if (ueh != null)
320 setUncaughtExceptionHandler(ueh);
321 start();
322 }
323
324 // Public/protected methods
325
326 /**
327 * Returns the pool hosting this thread.
328 *
329 * @return the pool
330 */
331 public ForkJoinPool getPool() {
332 return pool;
333 }
334
335 /**
336 * Returns the index number of this thread in its pool. The
337 * returned value ranges from zero to the maximum number of
338 * threads (minus one) that have ever been created in the pool.
339 * This method may be useful for applications that track status or
340 * collect results per-worker rather than per-task.
341 *
342 * @return the index number
343 */
344 public int getPoolIndex() {
345 return poolIndex;
346 }
347
348 /**
349 * Initializes internal state after construction but before
350 * processing any tasks. If you override this method, you must
351 * invoke {@code super.onStart()} at the beginning of the method.
352 * Initialization requires care: Most fields must have legal
353 * default values, to ensure that attempted accesses from other
354 * threads work correctly even before this thread starts
355 * processing tasks.
356 */
357 protected void onStart() {
358 int rs = seedGenerator.nextInt();
359 seed = rs == 0 ? 1 : rs; // seed must be nonzero
360
361 // Allocate name string and arrays in this thread
362 String pid = Integer.toString(pool.getPoolNumber());
363 String wid = Integer.toString(poolIndex);
364 setName("ForkJoinPool-" + pid + "-worker-" + wid);
365
366 queue = new ForkJoinTask<?>[INITIAL_QUEUE_CAPACITY];
367 }
368
369 /**
370 * Performs cleanup associated with termination of this worker
371 * thread. If you override this method, you must invoke
372 * {@code super.onTermination} at the end of the overridden method.
373 *
374 * @param exception the exception causing this thread to abort due
375 * to an unrecoverable error, or {@code null} if completed normally
376 */
377 protected void onTermination(Throwable exception) {
378 try {
379 ForkJoinPool p = pool;
380 if (active) {
381 int a; // inline p.tryDecrementActiveCount
382 active = false;
383 do {} while (!UNSAFE.compareAndSwapInt
384 (p, poolRunStateOffset, a = p.runState, a - 1));
385 }
386 cancelTasks();
387 setTerminated();
388 p.workerTerminated(this);
389 } catch (Throwable ex) { // Shouldn't ever happen
390 if (exception == null) // but if so, at least rethrow it
391 exception = ex;
392 } finally {
393 if (exception != null)
394 UNSAFE.throwException(exception);
395 }
396 }
397
398 /**
399 * This method is required to be public, but should never be
400 * called explicitly. It performs the main run loop to execute
401 * ForkJoinTasks.
402 */
403 public void run() {
404 Throwable exception = null;
405 try {
406 onStart();
407 mainLoop();
408 } catch (Throwable ex) {
409 exception = ex;
410 } finally {
411 onTermination(exception);
412 }
413 }
414
415 // helpers for run()
416
417 /**
418 * Finds and executes tasks, and checks status while running.
419 */
420 private void mainLoop() {
421 boolean ran = false; // true if ran a task on last step
422 ForkJoinPool p = pool;
423 for (;;) {
424 p.preStep(this, ran);
425 if (runState != 0)
426 break;
427 ran = tryExecSteal() || tryExecSubmission();
428 }
429 }
430
431 /**
432 * Tries to steal a task and execute it.
433 *
434 * @return true if ran a task
435 */
436 private boolean tryExecSteal() {
437 ForkJoinTask<?> t;
438 if ((t = scan()) != null) {
439 t.quietlyExec();
440 UNSAFE.putOrderedObject(this, currentStealOffset, null);
441 if (sp != base)
442 execLocalTasks();
443 return true;
444 }
445 return false;
446 }
447
448 /**
449 * If a submission exists, try to activate and run it.
450 *
451 * @return true if ran a task
452 */
453 private boolean tryExecSubmission() {
454 ForkJoinPool p = pool;
455 // This loop is needed in case the attempt to activate fails, in
456 // which case we only retry if there still appears to be a
457 // submission.
458 while (p.hasQueuedSubmissions()) {
459 ForkJoinTask<?> t; int a;
460 if (active || // inline p.tryIncrementActiveCount
461 (active = UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
462 a = p.runState, a + 1))) {
463 if ((t = p.pollSubmission()) != null) {
464 UNSAFE.putOrderedObject(this, currentStealOffset, t);
465 t.quietlyExec();
466 UNSAFE.putOrderedObject(this, currentStealOffset, null);
467 if (sp != base)
468 execLocalTasks();
469 return true;
470 }
471 }
472 }
473 return false;
474 }
475
476 /**
477 * Runs local tasks until queue is empty or shut down. Call only
478 * while active.
479 */
480 private void execLocalTasks() {
481 while (runState == 0) {
482 ForkJoinTask<?> t = locallyFifo ? locallyDeqTask() : popTask();
483 if (t != null)
484 t.quietlyExec();
485 else if (sp == base)
486 break;
487 }
488 }
489
490 /*
491 * Intrinsics-based atomic writes for queue slots. These are
492 * basically the same as methods in AtomicReferenceArray, but
493 * specialized for (1) ForkJoinTask elements (2) requirement that
494 * nullness and bounds checks have already been performed by
495 * callers and (3) effective offsets are known not to overflow
496 * from int to long (because of MAXIMUM_QUEUE_CAPACITY). We don't
497 * need corresponding version for reads: plain array reads are OK
498 * because they are protected by other volatile reads and are
499 * confirmed by CASes.
500 *
501 * Most uses don't actually call these methods, but instead contain
502 * inlined forms that enable more predictable optimization. We
503 * don't define the version of write used in pushTask at all, but
504 * instead inline there a store-fenced array slot write.
505 */
506
507 /**
508 * CASes slot i of array q from t to null. Caller must ensure q is
509 * non-null and index is in range.
510 */
511 private static final boolean casSlotNull(ForkJoinTask<?>[] q, int i,
512 ForkJoinTask<?> t) {
513 return UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null);
514 }
515
516 /**
517 * Performs a volatile write of the given task at given slot of
518 * array q. Caller must ensure q is non-null and index is in
519 * range. This method is used only during resets and backouts.
520 */
521 private static final void writeSlot(ForkJoinTask<?>[] q, int i,
522 ForkJoinTask<?> t) {
523 UNSAFE.putObjectVolatile(q, (i << qShift) + qBase, t);
524 }
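    // Editorial note: the raw offset used in these methods is
    //   byteOffset = qBase + (i << qShift)
    // where qBase is the byte offset of element 0 (arrayBaseOffset) and
    // 1 << qShift is the per-element size (arrayIndexScale); the shift
    // cannot overflow int because of the MAXIMUM_QUEUE_CAPACITY bound
    // noted earlier. As a worked example, on a typical 64-bit HotSpot
    // JVM with compressed oops (scale 4, base 16), slot i == 3 lives at
    //   16 + (3 << 2) == 28 bytes into the array object.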
525
526 // queue methods
527
528 /**
529 * Pushes a task. Call only from this thread.
530 *
531 * @param t the task. Caller must ensure non-null.
532 */
533 final void pushTask(ForkJoinTask<?> t) {
534 ForkJoinTask<?>[] q = queue;
535 int mask = q.length - 1; // implicit assert q != null
536 int s = sp++; // ok to increment sp before slot write
537 UNSAFE.putOrderedObject(q, ((s & mask) << qShift) + qBase, t);
538 if ((s -= base) == 0)
539 pool.signalWork(); // was empty
540 else if (s == mask)
541 growQueue(); // is full
542 }
543
544 /**
545 * Tries to take a task from the base of the queue, failing if
546 * empty or contended. Note: Specializations of this code appear
547 * in locallyDeqTask and elsewhere.
548 *
549 * @return a task, or null if none or contended
550 */
551 final ForkJoinTask<?> deqTask() {
552 ForkJoinTask<?> t;
553 ForkJoinTask<?>[] q;
554 int b, i;
555 if (sp != (b = base) &&
556 (q = queue) != null && // must read q after b
557 (t = q[i = (q.length - 1) & b]) != null && base == b &&
558 UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null)) {
559 base = b + 1;
560 return t;
561 }
562 return null;
563 }
564
565 /**
566 * Tries to take a task from the base of own queue. Assumes active
567 * status. Called only by this thread.
568 *
569 * @return a task, or null if none
570 */
571 final ForkJoinTask<?> locallyDeqTask() {
572 ForkJoinTask<?>[] q = queue;
573 if (q != null) {
574 ForkJoinTask<?> t;
575 int b, i;
576 while (sp != (b = base)) {
577 if ((t = q[i = (q.length - 1) & b]) != null && base == b &&
578 UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase,
579 t, null)) {
580 base = b + 1;
581 return t;
582 }
583 }
584 }
585 return null;
586 }
587
588 /**
589 * Returns a popped task, or null if empty. Assumes active status.
590 * Called only by this thread.
591 */
592 private ForkJoinTask<?> popTask() {
593 ForkJoinTask<?>[] q = queue;
594 if (q != null) {
595 int s;
596 while ((s = sp) != base) {
597 int i = (q.length - 1) & --s;
598 long u = (i << qShift) + qBase; // raw offset
599 ForkJoinTask<?> t = q[i];
600 if (t == null) // lost to stealer
601 break;
602 if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
603 sp = s; // putOrderedInt may encourage more timely write
604 // UNSAFE.putOrderedInt(this, spOffset, s);
605 return t;
606 }
607 }
608 }
609 return null;
610 }
611
612 /**
613 * Specialized version of popTask to pop only if topmost element
614 * is the given task. Called only by this thread while active.
615 *
616 * @param t the task. Caller must ensure non-null.
617 */
618 final boolean unpushTask(ForkJoinTask<?> t) {
619 int s;
620 ForkJoinTask<?>[] q = queue;
621 if ((s = sp) != base && q != null &&
622 UNSAFE.compareAndSwapObject
623 (q, (((q.length - 1) & --s) << qShift) + qBase, t, null)) {
624 sp = s; // putOrderedInt may encourage more timely write
625 // UNSAFE.putOrderedInt(this, spOffset, s);
626 return true;
627 }
628 return false;
629 }
630
631 /**
632 * Returns next task, or null if empty or contended.
633 */
634 final ForkJoinTask<?> peekTask() {
635 ForkJoinTask<?>[] q = queue;
636 if (q == null)
637 return null;
638 int mask = q.length - 1;
639 int i = locallyFifo ? base : (sp - 1);
640 return q[i & mask];
641 }
642
643 /**
644 * Doubles queue array size. Transfers elements by emulating
645 * steals (deqs) from old array and placing, oldest first, into
646 * new array.
647 */
648 private void growQueue() {
649 ForkJoinTask<?>[] oldQ = queue;
650 int oldSize = oldQ.length;
651 int newSize = oldSize << 1;
652 if (newSize > MAXIMUM_QUEUE_CAPACITY)
653 throw new RejectedExecutionException("Queue capacity exceeded");
654 ForkJoinTask<?>[] newQ = queue = new ForkJoinTask<?>[newSize];
655
656 int b = base;
657 int bf = b + oldSize;
658 int oldMask = oldSize - 1;
659 int newMask = newSize - 1;
660 do {
661 int oldIndex = b & oldMask;
662 ForkJoinTask<?> t = oldQ[oldIndex];
663 if (t != null && !casSlotNull(oldQ, oldIndex, t))
664 t = null;
665 writeSlot(newQ, b & newMask, t);
666 } while (++b != bf);
667 pool.signalWork();
668 }
669
670 /**
671 * Computes next value for random victim probe in scan(). Scans
672 * don't require a very high quality generator, but also not a
673 * crummy one. Marsaglia xor-shift is cheap and works well enough.
674 * Note: This is manually inlined in scan().
675 */
676 private static final int xorShift(int r) {
677 r ^= r << 13;
678 r ^= r >>> 17;
679 return r ^ (r << 5);
680 }
681
682 /**
683 * Tries to steal a task from another worker. Starts at a random
684 * index of workers array, and probes workers until finding one
685 * with non-empty queue or finding that all are empty. It
686 * randomly selects the first n probes. If these are empty, it
687 * resorts to a circular sweep, which is necessary to accurately
688 * set active status. (The circular sweep uses steps of
689 * approximately half the array size plus 1, to avoid bias
690 * stemming from leftmost packing of the array in ForkJoinPool.)
691 *
692 * This method must be both fast and quiet -- usually avoiding
693 * memory accesses that could disrupt cache sharing, etc., other than
694 * those needed to check for and take tasks (or to activate if not
695 * already active). This accounts for, among other things,
696 * updating random seed in place without storing it until exit.
697 *
698 * @return a task, or null if none found
699 */
700 private ForkJoinTask<?> scan() {
701 ForkJoinPool p = pool;
702 ForkJoinWorkerThread[] ws; // worker array
703 int n; // upper bound of #workers
704 if ((ws = p.workers) != null && (n = ws.length) > 1) {
705 boolean canSteal = active; // shadow active status
706 int r = seed; // extract seed once
707 int mask = n - 1;
708 int j = -n; // loop counter
709 int k = r; // worker index, random if j < 0
710 for (;;) {
711 ForkJoinWorkerThread v = ws[k & mask];
712 r ^= r << 13; r ^= r >>> 17; r ^= r << 5; // inline xorshift
713 ForkJoinTask<?>[] q; ForkJoinTask<?> t; int b, a;
714 if (v != null && (b = v.base) != v.sp &&
715 (q = v.queue) != null) {
716 int i = (q.length - 1) & b;
717 long u = (i << qShift) + qBase; // raw offset
718 int pid = poolIndex;
719 if ((t = q[i]) != null) {
720 if (!canSteal && // inline p.tryIncrementActiveCount
721 UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
722 a = p.runState, a + 1))
723 canSteal = active = true;
724 if (canSteal && v.base == b++ &&
725 UNSAFE.compareAndSwapObject(q, u, t, null)) {
726 v.base = b;
727 v.stealHint = pid;
728 UNSAFE.putOrderedObject(this,
729 currentStealOffset, t);
730 seed = r;
731 ++stealCount;
732 return t;
733 }
734 }
735 j = -n;
736 k = r; // restart on contention
737 }
738 else if (++j <= 0)
739 k = r;
740 else if (j <= n)
741 k += (n >>> 1) | 1;
742 else
743 break;
744 }
745 }
746 return null;
747 }
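    /*
     * Editorial sketch (hypothetical helper, not in the original):
     * checks the claim behind the circular sweep in scan(). The step
     * (n >>> 1) | 1 is odd, hence coprime with a power-of-two array
     * length n (as the k & (n - 1) masking presumes), so n successive
     * steps visit every index exactly once.
     */
    private static boolean sweepCoversAllSlots(int n, int start) {
        boolean[] seen = new boolean[n];
        int step = (n >>> 1) | 1;     // same step as scan()
        int k = start;
        for (int j = 0; j < n; ++j) {
            seen[k & (n - 1)] = true;
            k += step;
        }
        for (boolean hit : seen)
            if (!hit)
                return false;         // never happens for power-of-two n
        return true;
    }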
748
749 // Run State management
750
751 // status check methods used mainly by ForkJoinPool
752 final boolean isRunning() { return runState == 0; }
753 final boolean isTerminated() { return (runState & TERMINATED) != 0; }
754 final boolean isSuspended() { return (runState & SUSPENDED) != 0; }
755 final boolean isTrimmed() { return (runState & TRIMMED) != 0; }
756
757 final boolean isTerminating() {
758 if ((runState & TERMINATING) != 0)
759 return true;
760 if (pool.isAtLeastTerminating()) { // propagate pool state
761 shutdown();
762 return true;
763 }
764 return false;
765 }
766
767 /**
768 * Sets state to TERMINATING. Does NOT unpark or interrupt
769 * to wake up if currently blocked. Callers must do so if desired.
770 */
771 final void shutdown() {
772 for (;;) {
773 int s = runState;
774 if ((s & (TERMINATING|TERMINATED)) != 0)
775 break;
776 if ((s & SUSPENDED) != 0) { // kill and wakeup if suspended
777 if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
778 (s & ~SUSPENDED) |
779 (TRIMMED|TERMINATING)))
780 break;
781 }
782 else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
783 s | TERMINATING))
784 break;
785 }
786 }
787
788 /**
789 * Sets state to TERMINATED. Called only by onTermination().
790 */
791 private void setTerminated() {
792 int s;
793 do {} while (!UNSAFE.compareAndSwapInt(this, runStateOffset,
794 s = runState,
795 s | (TERMINATING|TERMINATED)));
796 }
797
798 /**
799 * If suspended, tries to set status to unsuspended.
800 * Does NOT wake up if blocked.
801 *
802 * @return true if successful
803 */
804 final boolean tryUnsuspend() {
805 int s;
806 while (((s = runState) & SUSPENDED) != 0) {
807 if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
808 s & ~SUSPENDED))
809 return true;
810 }
811 return false;
812 }
813
814 /**
815 * Sets suspended status and blocks as spare until resumed
816 * or shutdown.
817 */
818 final void suspendAsSpare() {
819 for (;;) { // set suspended unless terminating
820 int s = runState;
821 if ((s & TERMINATING) != 0) { // must kill
822 if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
823 s | (TRIMMED | TERMINATING)))
824 return;
825 }
826 else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
827 s | SUSPENDED))
828 break;
829 }
830 ForkJoinPool p = pool;
831 p.pushSpare(this);
832 while ((runState & SUSPENDED) != 0) {
833 if (p.tryAccumulateStealCount(this)) {
834 interrupted(); // clear/ignore interrupts
835 if ((runState & SUSPENDED) == 0)
836 break;
837 LockSupport.park(this);
838 }
839 }
840 }
841
842 // Misc support methods for ForkJoinPool
843
844 /**
845 * Returns an estimate of the number of tasks in the queue. Also
846 * used by ForkJoinTask.
847 */
848 final int getQueueSize() {
849 int n; // external calls must read base first
850 return (n = -base + sp) <= 0 ? 0 : n;
851 }
852
853 /**
854 * Removes and cancels all tasks in queue. Can be called from any
855 * thread.
856 */
857 final void cancelTasks() {
858 ForkJoinTask<?> cj = currentJoin; // try to cancel ongoing tasks
859 if (cj != null) {
860 currentJoin = null;
861 cj.cancelIgnoringExceptions();
862 try {
863 this.interrupt(); // awaken wait
864 } catch (SecurityException ignore) {
865 }
866 }
867 ForkJoinTask<?> cs = currentSteal;
868 if (cs != null) {
869 currentSteal = null;
870 cs.cancelIgnoringExceptions();
871 }
872 while (base != sp) {
873 ForkJoinTask<?> t = deqTask();
874 if (t != null)
875 t.cancelIgnoringExceptions();
876 }
877 }
878
879 /**
880 * Drains tasks to given collection c.
881 *
882 * @return the number of tasks drained
883 */
884 final int drainTasksTo(Collection<? super ForkJoinTask<?>> c) {
885 int n = 0;
886 while (base != sp) {
887 ForkJoinTask<?> t = deqTask();
888 if (t != null) {
889 c.add(t);
890 ++n;
891 }
892 }
893 return n;
894 }
895
896 // Support methods for ForkJoinTask
897
898 /**
899 * Gets and removes a local task.
900 *
901 * @return a task, if available
902 */
903 final ForkJoinTask<?> pollLocalTask() {
904 ForkJoinPool p = pool;
905 while (sp != base) {
906 int a; // inline p.tryIncrementActiveCount
907 if (active ||
908 (active = UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
909 a = p.runState, a + 1)))
910 return locallyFifo ? locallyDeqTask() : popTask();
911 }
912 return null;
913 }
914
915 /**
916 * Gets and removes a local or stolen task.
917 *
918 * @return a task, if available
919 */
920 final ForkJoinTask<?> pollTask() {
921 ForkJoinTask<?> t = pollLocalTask();
922 if (t == null) {
923 t = scan();
924 // cannot retain/track/help steal
925 UNSAFE.putOrderedObject(this, currentStealOffset, null);
926 }
927 return t;
928 }
929
930 /**
931 * Possibly runs some tasks and/or blocks, until task is done.
932 *
933 * @param joinMe the task to join
934 * @param timed true if use timed wait
935 * @param nanos wait time if timed
936 */
937 final void joinTask(ForkJoinTask<?> joinMe, boolean timed, long nanos) {
938 // currentJoin only written by this thread; only need ordered store
939 ForkJoinTask<?> prevJoin = currentJoin;
940 UNSAFE.putOrderedObject(this, currentJoinOffset, joinMe);
941 if (isTerminating()) // cancel if shutting down
942 joinMe.cancelIgnoringExceptions();
943 else
944 pool.awaitJoin(joinMe, this, timed, nanos);
945 UNSAFE.putOrderedObject(this, currentJoinOffset, prevJoin);
946 }
947
948 /**
949 * Run tasks in local queue until given task is done.
950 * Not currently used because it complicates semantics.
951 *
952 * @param joinMe the task to join
953 */
954 private void localHelpJoinTask(ForkJoinTask<?> joinMe) {
955 int s;
956 ForkJoinTask<?>[] q;
957 while (joinMe.status >= 0 && (s = sp) != base && (q = queue) != null) {
958 int i = (q.length - 1) & --s;
959 long u = (i << qShift) + qBase; // raw offset
960 ForkJoinTask<?> t = q[i];
961 if (t == null) // lost to a stealer
962 break;
963 if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
964 /*
965 * This recheck (and similarly in helpJoinTask)
966 * handles cases where joinMe is independently
967 * cancelled or forced even though there is other work
968 * available. Back out of the pop by putting t back
969 * into slot before we commit by writing sp.
970 */
971 if (joinMe.status < 0) {
972 UNSAFE.putObjectVolatile(q, u, t);
973 break;
974 }
975 sp = s;
976 // UNSAFE.putOrderedInt(this, spOffset, s);
977 t.quietlyExec();
978 }
979 }
980 }
981
982 /**
983 * Tries to locate and help perform tasks for a stealer of the
984 * given task, or in turn one of its stealers. Traces
985 * currentSteal->currentJoin links looking for a thread working on
986 * a descendant of the given task and with a non-empty queue to
987 * steal back and execute tasks from.
988 *
989 * The implementation is very branchy to cope with potential
990 * inconsistencies or loops encountering chains that are stale,
991 * unknown, or of length greater than MAX_HELP_DEPTH links. All
992 * of these cases are dealt with by just returning back to the
993 * caller, who is expected to retry if other join mechanisms also
994 * don't work out.
995 *
996 * @param joinMe the task to join
997 */
998 final void helpJoinTask(ForkJoinTask<?> joinMe) {
999 ForkJoinWorkerThread[] ws;
1000 int n;
1001 if (joinMe.status < 0) // already done
1002 return;
1003 if ((ws = pool.workers) == null || (n = ws.length) <= 1)
1004 return; // need at least 2 workers
1005
1006 ForkJoinTask<?> task = joinMe; // base of chain
1007 ForkJoinWorkerThread thread = this; // thread with stolen task
1008 for (int d = 0; d < MAX_HELP_DEPTH; ++d) { // chain length
1009 // Try to find v, the stealer of task, by first using hint
1010 ForkJoinWorkerThread v = ws[thread.stealHint & (n - 1)];
1011 if (v == null || v.currentSteal != task) {
1012 for (int j = 0; ; ++j) { // search array
1013 if (j < n) {
1014 ForkJoinTask<?> vs;
1015 if ((v = ws[j]) != null && v != this &&
1016 (vs = v.currentSteal) != null) {
1017 if (joinMe.status < 0 || task.status < 0)
1018 return; // stale or done
1019 if (vs == task) {
1020 thread.stealHint = j;
1021 break; // save hint for next time
1022 }
1023 }
1024 }
1025 else
1026 return; // no stealer
1027 }
1028 }
1029 for (;;) { // Try to help v, using specialized form of deqTask
1030 if (joinMe.status < 0)
1031 return;
1032 int b = v.base;
1033 ForkJoinTask<?>[] q = v.queue;
1034 if (b == v.sp || q == null)
1035 break;
1036 int i = (q.length - 1) & b;
1037 long u = (i << qShift) + qBase;
1038 ForkJoinTask<?> t = q[i];
1039 int pid = poolIndex;
1040 ForkJoinTask<?> ps = currentSteal;
1041 if (task.status < 0)
1042 return; // stale or done
1043 if (t != null && v.base == b++ &&
1044 UNSAFE.compareAndSwapObject(q, u, t, null)) {
1045 if (joinMe.status < 0) {
1046 UNSAFE.putObjectVolatile(q, u, t);
1047 return; // back out on cancel
1048 }
1049 v.base = b;
1050 v.stealHint = pid;
1051 UNSAFE.putOrderedObject(this, currentStealOffset, t);
1052 t.quietlyExec();
1053 UNSAFE.putOrderedObject(this, currentStealOffset, ps);
1054 }
1055 }
1056 // Try to descend to find v's stealer
1057 ForkJoinTask<?> next = v.currentJoin;
1058 if (task.status < 0 || next == null || next == task ||
1059 joinMe.status < 0)
1060 return;
1061 task = next;
1062 thread = v;
1063 }
1064 }
1065
1066 /**
1067 * Implements ForkJoinTask.getSurplusQueuedTaskCount().
1068 * Returns an estimate of the number of tasks, offset by a
1069 * function of number of idle workers.
1070 *
1071 * This method provides a cheap heuristic guide for task
1072 * partitioning when programmers, frameworks, tools, or languages
1073 * have little or no idea about task granularity. In essence by
1074 * offering this method, we ask users only about tradeoffs in
1075 * overhead vs expected throughput and its variance, rather than
1076 * how finely to partition tasks.
1077 *
1078 * In a steady state strict (tree-structured) computation, each
1079 * thread makes available for stealing enough tasks for other
1080 * threads to remain active. Inductively, if all threads play by
1081 * the same rules, each thread should make available only a
1082 * constant number of tasks.
1083 *
1084 * The minimum useful constant is just 1. But using a value of 1
1085 * would require immediate replenishment upon each steal to
1086 * maintain enough tasks, which is infeasible. Further,
1087 * partitionings/granularities of offered tasks should minimize
1088 * steal rates, which in general means that threads nearer the top
1089 * of computation tree should generate more than those nearer the
1090 * bottom. In perfect steady state, each thread is at
1091 * approximately the same level of computation tree. However,
1092 * producing extra tasks amortizes the uncertainty of progress and
1093 * diffusion assumptions.
1094 *
1095 * So, users will want to use values larger, but not much larger
1096 * than 1 to both smooth over transient shortages and hedge
1097 * against uneven progress, as traded off against the cost of
1098 * extra task overhead. We leave the user to pick a threshold
1099 * value to compare with the results of this call to guide
1100 * decisions, but recommend values such as 3.
1101 *
1102 * When all threads are active, it is on average OK to estimate
1103 * surplus strictly locally. In steady-state, if one thread is
1104 * maintaining say 2 surplus tasks, then so are others. So we can
1105 * just use estimated queue length (although note that (sp - base)
1106 * can be an overestimate because of stealers lagging increments
1107 * of base). However, this strategy alone leads to serious
1108 * mis-estimates in some non-steady-state conditions (ramp-up,
1109 * ramp-down, other stalls). We can detect many of these by
1110 * further considering the number of "idle" threads, that are
1111 * known to have zero queued tasks, so compensate by a factor of
1112 * (#idle/#active) threads.
1113 */
1114 final int getEstimatedSurplusTaskCount() {
1115 return sp - base - pool.idlePerActive();
1116 }
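    /*
     * Editorial sketch: how the surplus estimate is meant to be used,
     * via ForkJoinTask.getSurplusQueuedTaskCount(), with a threshold
     * near the recommended value of 3 -- keep splitting while surplus
     * is low, compute directly once enough tasks are already queued.
     * The task class below is hypothetical, not part of this file;
     * run it with e.g. pool.invoke(new SumSketch(array, 0, array.length)).
     */
    static final class SumSketch extends RecursiveTask<Long> {
        final long[] a; final int lo, hi;
        SumSketch(long[] a, int lo, int hi) {
            this.a = a; this.lo = lo; this.hi = hi;
        }
        protected Long compute() {
            if (hi - lo <= 1024 ||
                ForkJoinTask.getSurplusQueuedTaskCount() > 3) {
                long s = 0;              // small or saturated: run it here
                for (int i = lo; i < hi; ++i)
                    s += a[i];
                return s;
            }
            int mid = (lo + hi) >>> 1;
            SumSketch left = new SumSketch(a, lo, mid);
            left.fork();                 // offer half to possible thieves
            long right = new SumSketch(a, mid, hi).compute();
            return right + left.join();
        }
    }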
1117
1118 /**
1119 * Runs tasks until {@code pool.isQuiescent()}.
1120 */
1121 final void helpQuiescePool() {
1122 ForkJoinTask<?> ps = currentSteal; // to restore below
1123 for (;;) {
1124 ForkJoinTask<?> t = pollLocalTask();
1125 if (t != null || (t = scan()) != null)
1126 t.quietlyExec();
1127 else {
1128 ForkJoinPool p = pool;
1129 int a; // to inline CASes
1130 if (active) {
1131 if (!UNSAFE.compareAndSwapInt
1132 (p, poolRunStateOffset, a = p.runState, a - 1))
1133 continue; // retry later
1134 active = false; // inactivate
1135 UNSAFE.putOrderedObject(this, currentStealOffset, ps);
1136 }
1137 if (p.isQuiescent()) {
1138 active = true; // re-activate
1139 do {} while (!UNSAFE.compareAndSwapInt
1140 (p, poolRunStateOffset, a = p.runState, a+1));
1141 return;
1142 }
1143 }
1144 }
1145 }
1146
1147 // Unsafe mechanics
1148
1149 private static final sun.misc.Unsafe UNSAFE = getUnsafe();
1150 private static final long spOffset =
1151 objectFieldOffset("sp", ForkJoinWorkerThread.class);
1152 private static final long runStateOffset =
1153 objectFieldOffset("runState", ForkJoinWorkerThread.class);
1154 private static final long currentJoinOffset =
1155 objectFieldOffset("currentJoin", ForkJoinWorkerThread.class);
1156 private static final long currentStealOffset =
1157 objectFieldOffset("currentSteal", ForkJoinWorkerThread.class);
1158 private static final long qBase =
1159 UNSAFE.arrayBaseOffset(ForkJoinTask[].class);
1160 private static final long poolRunStateOffset = // to inline CAS
1161 objectFieldOffset("runState", ForkJoinPool.class);
1162
1163 private static final int qShift;
1164
1165 static {
1166 int s = UNSAFE.arrayIndexScale(ForkJoinTask[].class);
1167 if ((s & (s-1)) != 0)
1168 throw new Error("data type scale not a power of two");
1169 qShift = 31 - Integer.numberOfLeadingZeros(s);
1170 MAXIMUM_QUEUE_CAPACITY = 1 << (31 - qShift);
1171 }
1172
1173 private static long objectFieldOffset(String field, Class<?> klazz) {
1174 try {
1175 return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
1176 } catch (NoSuchFieldException e) {
1177 // Convert Exception to corresponding Error
1178 NoSuchFieldError error = new NoSuchFieldError(field);
1179 error.initCause(e);
1180 throw error;
1181 }
1182 }
1183
1184 /**
1185 * Returns a sun.misc.Unsafe. Suitable for use in a 3rd party package.
1186 * Replace with a simple call to Unsafe.getUnsafe when integrating
1187 * into a jdk.
1188 *
1189 * @return a sun.misc.Unsafe
1190 */
1191 private static sun.misc.Unsafe getUnsafe() {
1192 try {
1193 return sun.misc.Unsafe.getUnsafe();
1194 } catch (SecurityException se) {
1195 try {
1196 return java.security.AccessController.doPrivileged
1197 (new java.security
1198 .PrivilegedExceptionAction<sun.misc.Unsafe>() {
1199 public sun.misc.Unsafe run() throws Exception {
1200 java.lang.reflect.Field f = sun.misc
1201 .Unsafe.class.getDeclaredField("theUnsafe");
1202 f.setAccessible(true);
1203 return (sun.misc.Unsafe) f.get(null);
1204 }});
1205 } catch (java.security.PrivilegedActionException e) {
1206 throw new RuntimeException("Could not initialize intrinsics",
1207 e.getCause());
1208 }
1209 }
1210 }
1211 }
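
/*
 * Editorial sketch: a minimal subclass and factory of the kind the class
 * javadoc describes -- overriding only the initialization and termination
 * hooks and supplying a custom ForkJoinWorkerThreadFactory. Names are
 * hypothetical; pass an instance of Factory to any ForkJoinPool
 * constructor that accepts a ForkJoinWorkerThreadFactory.
 */
class LoggingWorkerThreadSketch extends ForkJoinWorkerThread {
    LoggingWorkerThreadSketch(ForkJoinPool pool) {
        super(pool);
    }
    protected void onStart() {
        super.onStart();                 // required first, per its javadoc
        System.out.println(getName() + " starting");
    }
    protected void onTermination(Throwable exception) {
        System.out.println(getName() + " terminating" +
                           (exception == null ? "" : ": " + exception));
        super.onTermination(exception);  // required last, per its javadoc
    }
    static final class Factory implements ForkJoinPool.ForkJoinWorkerThreadFactory {
        public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
            return new LoggingWorkerThreadSketch(pool);
        }
    }
}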