root/jsr166/jsr166/src/jsr166y/ForkJoinWorkerThread.java
Revision: 1.39
Committed: Sat Jul 24 20:28:18 2010 UTC by dl
Branch: MAIN
Changes since 1.38: +46 -47 lines
Log Message:
Fix and simplify joinTask

File Contents

# Content
1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/licenses/publicdomain
5 */
6
7 package jsr166y;
8
9 import java.util.concurrent.*;
10
11 import java.util.Random;
12 import java.util.Collection;
13 import java.util.concurrent.locks.LockSupport;
14
15 /**
16 * A thread managed by a {@link ForkJoinPool}. This class is
17 * subclassable solely for the sake of adding functionality -- there
18 * are no overridable methods dealing with scheduling or execution.
19 * However, you can override initialization and termination methods
20 * surrounding the main task processing loop. If you do create such a
21 * subclass, you will also need to supply a custom {@link
22 * ForkJoinPool.ForkJoinWorkerThreadFactory} to use it in a {@code
23 * ForkJoinPool}.
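*
* <p>Here is a minimal sketch of such a subclass and factory (the
* class names are illustrative only, not part of this API):
*
* <pre> {@code
* class MyWorkerThread extends ForkJoinWorkerThread {
*   MyWorkerThread(ForkJoinPool pool) { super(pool); }
*   protected void onStart() {
*     super.onStart();                 // must be invoked first
*     // ... initialize per-thread state ...
*   }
*   protected void onTermination(Throwable exception) {
*     // ... clean up per-thread state ...
*     super.onTermination(exception);  // must be invoked last
*   }
* }
* class MyWorkerThreadFactory
*     implements ForkJoinPool.ForkJoinWorkerThreadFactory {
*   public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
*     return new MyWorkerThread(pool);
*   }
* }}</pre>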
24 *
25 * @since 1.7
26 * @author Doug Lea
27 */
28 public class ForkJoinWorkerThread extends Thread {
29 /*
30 * Overview:
31 *
32 * ForkJoinWorkerThreads are managed by ForkJoinPools and perform
33 * ForkJoinTasks. This class includes bookkeeping in support of
34 * worker activation, suspension, and lifecycle control described
35 * in more detail in the internal documentation of class
36 * ForkJoinPool. And as described further below, this class also
37 * includes special-cased support for some ForkJoinTask
38 * methods. But the main mechanics involve work-stealing:
39 *
40 * Work-stealing queues are special forms of Deques that support
41 * only three of the four possible end-operations -- push, pop,
42 * and deq (aka steal), under the further constraints that push
43 * and pop are called only from the owning thread, while deq may
44 * be called from other threads. (If you are unfamiliar with
45 * them, you probably want to read Herlihy and Shavit's book "The
46 * Art of Multiprocessor Programming", chapter 16 describing these
47 * in more detail before proceeding.) The main work-stealing
48 * queue design is roughly similar to those in the papers "Dynamic
49 * Circular Work-Stealing Deque" by Chase and Lev, SPAA 2005
50 * (http://research.sun.com/scalable/pubs/index.html) and
51 * "Idempotent work stealing" by Michael, Saraswat, and Vechev,
52 * PPoPP 2009 (http://portal.acm.org/citation.cfm?id=1504186).
53 * The main differences ultimately stem from gc requirements that
54 * we null out taken slots as soon as we can, to maintain as small
55 * a footprint as possible even in programs generating huge
56 * numbers of tasks. To accomplish this, we shift the CAS
57 * arbitrating pop vs deq (steal) from being on the indices
58 * ("base" and "sp") to the slots themselves (mainly via method
59 * "casSlotNull()"). So, both a successful pop and deq mainly
60 * entail a CAS of a slot from non-null to null. Because we rely
61 * on CASes of references, we do not need tag bits on base or sp.
62 * They are simple ints as used in any circular array-based queue
63 * (see for example ArrayDeque). Updates to the indices must
64 * still be ordered in a way that guarantees that sp == base means
65 * the queue is empty, but otherwise may err on the side of
66 * possibly making the queue appear nonempty when a push, pop, or
67 * deq have not fully committed. Note that this means that the deq
68 * operation, considered individually, is not wait-free. One thief
69 * cannot successfully continue until another in-progress one (or,
70 * if previously empty, a push) completes. However, in the
71 * aggregate, we ensure at least probabilistic non-blockingness.
72 * If an attempted steal fails, a thief always chooses a different
73 * random victim target to try next. So, in order for one thief to
74 * progress, it suffices for any in-progress deq or new push on
75 * any empty queue to complete. One reason this works well here is
76 * that apparently-nonempty often means soon-to-be-stealable,
77 * which gives threads a chance to set activation status if
78 * necessary before stealing.
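*
* As an illustrative, non-normative sketch of the three operations
* (the real methods below add resizing, signalling, and Unsafe-based
* ordering; CAS(q, i, t, null) abbreviates an atomic clear of slot i
* only if it still holds t):
*
*   push(t): int s = sp++; q[s & mask] = t;      // owner only
*   pop():   int s = sp - 1; t = q[s & mask];    // owner only
*            if (t != null && CAS(q, s & mask, t, null)) sp = s;
*   deq():   int b = base; t = q[b & mask];      // any thread
*            if (t != null && base == b &&
*                CAS(q, b & mask, t, null)) base = b + 1;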
79 *
80 * This approach also enables support for "async mode" where local
81 * task processing is in FIFO, not LIFO order, simply by using a
82 * version of deq rather than pop when locallyFifo is true (as set
83 * by the ForkJoinPool). This allows use in message-passing
84 * frameworks in which tasks are never joined.
85 *
86 * When a worker would otherwise be blocked waiting to join a
87 * task, it first tries a form of linear helping: Each worker
88 * records (in field currentSteal) the most recent task it stole
89 * from some other worker. Plus, it records (in field currentJoin)
90 * the task it is currently actively joining. Method joinTask uses
91 * these markers to try to find a worker to help (i.e., steal back
92 * a task from and execute it) that could hasten completion of the
93 * actively joined task. In essence, the joiner executes a task
94 * that would be on its own local deque had the to-be-joined task
95 * not been stolen. This may be seen as a conservative variant of
96 * the approach in Wagner & Calder "Leapfrogging: a portable
97 * technique for implementing efficient futures" SIGPLAN Notices,
98 * 1993 (http://portal.acm.org/citation.cfm?id=155354). It differs
99 * in that: (1) We only maintain dependency links across workers
100 * upon steals, rather than use per-task bookkeeping. This may
101 * require a linear scan of workers array to locate stealers, but
102 * usually doesn't because stealers leave hints (that may become
103 * stale/wrong) of where to locate them. This isolates cost to
104 * when it is needed, rather than adding to per-task overhead.
105 * (2) It is "shallow", ignoring nesting and potentially cyclic
106 * mutual steals. (3) It is intentionally racy: field currentJoin
107 * is updated only while actively joining, which means that we
108 * miss links in the chain during long-lived tasks, GC stalls, etc.
109 * (which is OK since blocking in such cases is usually a good
110 * idea). (4) We bound the number of attempts to find work (see
111 * MAX_HELP_DEPTH) and fall back to suspending the worker and if
112 * necessary replacing it with a spare (see
113 * ForkJoinPool.tryAwaitJoin).
114 *
115 * Efficient implementation of these algorithms currently relies
116 * on an uncomfortable amount of "Unsafe" mechanics. To maintain
117 * correct orderings, reads and writes of variable base require
118 * volatile ordering. Variable sp does not require volatile
119 * writes but still needs store-ordering, which we accomplish by
120 * pre-incrementing sp before filling the slot with an ordered
121 * store. (Pre-incrementing also enables backouts used in
122 * joinTask.) Because they are protected by volatile base reads,
123 * reads of the queue array and its slots by other threads do not
124 * need volatile load semantics, but writes (in push) require
125 * store order and CASes (in pop and deq) require (volatile) CAS
126 * semantics. (Michael, Saraswat, and Vechev's algorithm has
127 * similar properties, but without support for nulling slots.)
128 * Since these combinations aren't supported using ordinary
129 * volatiles, the only way to accomplish these efficiently is to
130 * use direct Unsafe calls. (Using external AtomicIntegers and
131 * AtomicReferenceArrays for the indices and array is
132 * significantly slower because of memory locality and indirection
133 * effects.)
134 *
135 * Further, performance on most platforms is very sensitive to
136 * placement and sizing of the (resizable) queue array. Even
137 * though these queues don't usually become all that big, the
138 * initial size must be large enough to counteract cache
139 * contention effects across multiple queues (especially in the
140 * presence of GC cardmarking). Also, to improve thread-locality,
141 * queues are initialized after starting. All together, these
142 * low-level implementation choices produce as much as a factor of
143 * 4 performance improvement compared to naive implementations,
144 * and enable the processing of billions of tasks per second,
145 * sometimes at the expense of ugliness.
146 */
147
148 /**
149 * Generator for initial random seeds for random victim
150 * selection. This is used only to create initial seeds. Random
151 * steals use a cheaper xorshift generator per steal attempt. We
152 * expect only rare contention on seedGenerator, so just use a
153 * plain Random.
154 */
155 private static final Random seedGenerator = new Random();
156
157 /**
158 * The timeout value for suspending spares. Spare workers that
159 * remain unsignalled for more than this time may be trimmed
160 * (killed and removed from pool). Since our goal is to avoid
161 * long-term thread buildup, the exact value of timeout does not
162 * matter too much so long as it avoids most false-alarm timeouts
163 * under GC stalls or momentarily high system load.
164 */
165 private static final long SPARE_KEEPALIVE_NANOS =
166 5L * 1000L * 1000L * 1000L; // 5 secs
167
168 /**
169 * The maximum stolen->joining link depth allowed in helpJoinTask.
170 * Depths for legitimate chains are unbounded, but we use a fixed
171 * constant to avoid (otherwise unchecked) cycles and bound
172 * staleness of traversal parameters at the expense of sometimes
173 * blocking when we could be helping.
174 */
175 private static final int MAX_HELP_DEPTH = 8;
176
177 /**
178 * Capacity of work-stealing queue array upon initialization.
179 * Must be a power of two. Initial size must be at least 4, but is
180 * padded to minimize cache effects.
181 */
182 private static final int INITIAL_QUEUE_CAPACITY = 1 << 13;
183
184 /**
185 * Maximum work-stealing queue array size. Must be less than or
186 * equal to 1 << 28 to ensure lack of index wraparound. (This
187 * is less than usual bounds, because we need leftshift by 3
188 * to be in int range).
189 */
190 private static final int MAXIMUM_QUEUE_CAPACITY = 1 << 28;
191
192 /**
193 * The pool this thread works in. Accessed directly by ForkJoinTask.
194 */
195 final ForkJoinPool pool;
196
197 /**
198 * The work-stealing queue array. Size must be a power of two.
199 * Initialized in onStart, to improve memory locality.
200 */
201 private ForkJoinTask<?>[] queue;
202
203 /**
204 * Index (mod queue.length) of least valid queue slot, which is
205 * always the next position to steal from if nonempty.
206 */
207 private volatile int base;
208
209 /**
210 * Index (mod queue.length) of next queue slot to push to or pop
211 * from. It is written only by owner thread, and accessed by other
212 * threads only after reading (volatile) base. Both sp and base
213 * are allowed to wrap around on overflow, but (sp - base) still
214 * estimates size.
215 */
216 private int sp;
217
218 /**
219 * The index of most recent stealer, used as a hint to avoid
220 * traversal in method helpJoinTask. This is only a hint because a
221 * worker might have had multiple steals and this only holds one
222 * of them (usually the most current). Declared non-volatile,
223 * relying on other prevailing sync to keep reasonably current.
224 */
225 private int stealHint;
226
227 /**
228 * Run state of this worker. In addition to the usual run levels,
229 * tracks if this worker is suspended as a spare, and if it was
230 * killed (trimmed) while suspended. However, "active" status is
231 * maintained separately.
232 */
233 private volatile int runState;
234
235 private static final int TERMINATING = 0x01;
236 private static final int TERMINATED = 0x02;
237 private static final int SUSPENDED = 0x04; // inactive spare
238 private static final int TRIMMED = 0x08; // killed while suspended
239
240 /**
241 * Number of LockSupport.park calls to block this thread for
242 * suspension or event waits. Used for internal instrumentation;
243 * currently not exported but included because volatile write upon
244 * park also provides a workaround for a JVM bug.
245 */
246 volatile int parkCount;
247
248 /**
249 * Number of steals, transferred and reset in pool callbacks
250 * when idle. Accessed directly by pool.
251 */
252 int stealCount;
253
254 /**
255 * Seed for random number generator for choosing steal victims.
256 * Uses Marsaglia xorshift. Must be initialized as nonzero.
257 */
258 private int seed;
259
260 /**
261 * Activity status. When true, this worker is considered active.
262 * Accessed directly by pool. Must be false upon construction.
263 */
264 boolean active;
265
266 /**
267 * True if using local FIFO, not the default LIFO, for local polling.
268 * Shadows value from ForkJoinPool.
269 */
270 private final boolean locallyFifo;
271
272 /**
273 * Index of this worker in pool array. Set once by pool before
274 * running, and accessed directly by pool to locate this worker in
275 * its workers array.
276 */
277 int poolIndex;
278
279 /**
280 * The last pool event waited for. Accessed only by pool in
281 * callback methods invoked within this thread.
282 */
283 int lastEventCount;
284
285 /**
286 * Encoded index and event count of next event waiter. Used only
287 * by ForkJoinPool for managing event waiters.
288 */
289 volatile long nextWaiter;
290
291 /**
292 * The task currently being joined, set only when actively trying
293 * to helpStealer. Written only by current thread, but read by
294 * others.
295 */
296 private volatile ForkJoinTask<?> currentJoin;
297
298 /**
299 * The task most recently stolen from another worker (or
300 * submission queue). Not volatile because always read/written in
301 * presence of related volatiles in those cases where it matters.
302 */
303 private ForkJoinTask<?> currentSteal;
304
305 /**
306 * Creates a ForkJoinWorkerThread operating in the given pool.
307 *
308 * @param pool the pool this thread works in
309 * @throws NullPointerException if pool is null
310 */
311 protected ForkJoinWorkerThread(ForkJoinPool pool) {
312 this.pool = pool;
313 this.locallyFifo = pool.locallyFifo;
314 // To avoid exposing construction details to subclasses,
315 // remaining initialization is in start() and onStart()
316 }
317
318 /**
319 * Performs additional initialization and starts this thread.
320 */
321 final void start(int poolIndex, UncaughtExceptionHandler ueh) {
322 this.poolIndex = poolIndex;
323 setDaemon(true);
324 if (ueh != null)
325 setUncaughtExceptionHandler(ueh);
326 start();
327 }
328
329 // Public/protected methods
330
331 /**
332 * Returns the pool hosting this thread.
333 *
334 * @return the pool
335 */
336 public ForkJoinPool getPool() {
337 return pool;
338 }
339
340 /**
341 * Returns the index number of this thread in its pool. The
342 * returned value ranges from zero to the maximum number of
343 * threads (minus one) that have ever been created in the pool.
344 * This method may be useful for applications that track status or
345 * collect results per-worker rather than per-task.
346 *
347 * @return the index number
348 */
349 public int getPoolIndex() {
350 return poolIndex;
351 }
352
353 /**
354 * Initializes internal state after construction but before
355 * processing any tasks. If you override this method, you must
356 * invoke super.onStart() at the beginning of the method.
357 * Initialization requires care: Most fields must have legal
358 * default values, to ensure that attempted accesses from other
359 * threads work correctly even before this thread starts
360 * processing tasks.
361 */
362 protected void onStart() {
363 int rs = seedGenerator.nextInt();
364 seed = rs == 0? 1 : rs; // seed must be nonzero
365
366 // Allocate name string and arrays in this thread
367 String pid = Integer.toString(pool.getPoolNumber());
368 String wid = Integer.toString(poolIndex);
369 setName("ForkJoinPool-" + pid + "-worker-" + wid);
370
371 queue = new ForkJoinTask<?>[INITIAL_QUEUE_CAPACITY];
372 }
373
374 /**
375 * Performs cleanup associated with termination of this worker
376 * thread. If you override this method, you must invoke
377 * {@code super.onTermination} at the end of the overridden method.
378 *
379 * @param exception the exception causing this thread to abort due
380 * to an unrecoverable error, or {@code null} if completed normally
381 */
382 protected void onTermination(Throwable exception) {
383 try {
384 cancelTasks();
385 setTerminated();
386 pool.workerTerminated(this);
387 } catch (Throwable ex) { // Shouldn't ever happen
388 if (exception == null) // but if so, at least rethrown
389 exception = ex;
390 } finally {
391 if (exception != null)
392 UNSAFE.throwException(exception);
393 }
394 }
395
396 /**
397 * This method is required to be public, but should never be
398 * called explicitly. It performs the main run loop to execute
399 * ForkJoinTasks.
400 */
401 public void run() {
402 Throwable exception = null;
403 try {
404 onStart();
405 mainLoop();
406 } catch (Throwable ex) {
407 exception = ex;
408 } finally {
409 onTermination(exception);
410 }
411 }
412
413 // helpers for run()
414
415 /**
416 * Finds and executes tasks and checks status while running.
417 */
418 private void mainLoop() {
419 int emptyScans = 0; // consecutive times failed to find work
420 ForkJoinPool p = pool;
421 for (;;) {
422 p.preStep(this, emptyScans);
423 if (runState != 0)
424 return;
425 ForkJoinTask<?> t; // try to get and run stolen or submitted task
426 if ((t = scan()) != null || (t = pollSubmission()) != null) {
427 t.tryExec();
428 if (base != sp)
429 runLocalTasks();
430 currentSteal = null;
431 emptyScans = 0;
432 }
433 else
434 ++emptyScans;
435 }
436 }
437
438 /**
439 * Runs local tasks until queue is empty or shut down. Call only
440 * while active.
441 */
442 private void runLocalTasks() {
443 while (runState == 0) {
444 ForkJoinTask<?> t = locallyFifo? locallyDeqTask() : popTask();
445 if (t != null)
446 t.tryExec();
447 else if (base == sp)
448 break;
449 }
450 }
451
452 /**
453 * If a submission exists, tries to activate and take it.
454 *
455 * @return a task, if available
456 */
457 private ForkJoinTask<?> pollSubmission() {
458 ForkJoinPool p = pool;
459 while (p.hasQueuedSubmissions()) {
460 if (active || (active = p.tryIncrementActiveCount())) {
461 ForkJoinTask<?> t = p.pollSubmission();
462 if (t != null) {
463 currentSteal = t;
464 return t;
465 }
466 return scan(); // if missed, rescan
467 }
468 }
469 return null;
470 }
471
472 /*
473 * Intrinsics-based atomic writes for queue slots. These are
474 * basically the same as methods in AtomicReferenceArray, but
475 * specialized for (1) ForkJoinTask elements (2) requirement that
476 * nullness and bounds checks have already been performed by
477 * callers and (3) effective offsets are known not to overflow
478 * from int to long (because of MAXIMUM_QUEUE_CAPACITY). We don't
479 * need a corresponding version for reads: plain array reads are OK
480 * because they are protected by other volatile reads and are
481 * confirmed by CASes.
482 *
483 * Most uses don't actually call these methods, but instead contain
484 * inlined forms that enable more predictable optimization. We
485 * don't define the version of write used in pushTask at all, but
486 * instead inline there a store-fenced array slot write.
487 */
488
489 /**
490 * CASes slot i of array q from t to null. Caller must ensure q is
491 * non-null and index is in range.
492 */
493 private static final boolean casSlotNull(ForkJoinTask<?>[] q, int i,
494 ForkJoinTask<?> t) {
495 return UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null);
496 }
497
498 /**
499 * Performs a volatile write of the given task at given slot of
500 * array q. Caller must ensure q is non-null and index is in
501 * range. This method is used only during resets and backouts.
502 */
503 private static final void writeSlot(ForkJoinTask<?>[] q, int i,
504 ForkJoinTask<?> t) {
505 UNSAFE.putObjectVolatile(q, (i << qShift) + qBase, t);
506 }
507
508 // queue methods
509
510 /**
511 * Pushes a task. Call only from this thread.
512 *
513 * @param t the task. Caller must ensure non-null.
514 */
515 final void pushTask(ForkJoinTask<?> t) {
516 ForkJoinTask<?>[] q = queue;
517 int mask = q.length - 1; // implicit assert q != null
518 int s = sp++; // ok to increment sp before slot write
519 UNSAFE.putOrderedObject(q, ((s & mask) << qShift) + qBase, t);
520 if ((s -= base) == 0)
521 pool.signalWork(); // was empty
522 else if (s == mask)
523 growQueue(); // is full
524 }
525
526 /**
527 * Tries to take a task from the base of the queue, failing if
528 * empty or contended. Note: Specializations of this code appear
529 * in locallyDeqTask and elsewhere.
530 *
531 * @return a task, or null if none or contended
532 */
533 final ForkJoinTask<?> deqTask() {
534 ForkJoinTask<?> t;
535 ForkJoinTask<?>[] q;
536 int b, i;
537 if ((b = base) != sp &&
538 (q = queue) != null && // must read q after b
539 (t = q[i = (q.length - 1) & b]) != null && base == b &&
540 UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null)) {
541 base = b + 1;
542 return t;
543 }
544 return null;
545 }
546
547 /**
548 * Tries to take a task from the base of its own queue. Assumes active
549 * status. Called only by current thread.
550 *
551 * @return a task, or null if none
552 */
553 final ForkJoinTask<?> locallyDeqTask() {
554 ForkJoinTask<?>[] q = queue;
555 if (q != null) {
556 ForkJoinTask<?> t;
557 int b, i;
558 while (sp != (b = base)) {
559 if ((t = q[i = (q.length - 1) & b]) != null && base == b &&
560 UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase,
561 t, null)) {
562 base = b + 1;
563 return t;
564 }
565 }
566 }
567 return null;
568 }
569
570 /**
571 * Returns a popped task, or null if empty. Assumes active status.
572 * Called only by current thread.
573 */
574 final ForkJoinTask<?> popTask() {
575 int s;
576 ForkJoinTask<?>[] q;
577 if (base != (s = sp) && (q = queue) != null) {
578 int i = (q.length - 1) & --s;
579 ForkJoinTask<?> t = q[i];
580 if (t != null && UNSAFE.compareAndSwapObject
581 (q, (i << qShift) + qBase, t, null)) {
582 sp = s;
583 return t;
584 }
585 }
586 return null;
587 }
588
589 /**
590 * Specialized version of popTask to pop only if topmost element
591 * is the given task. Called only by current thread while
592 * active.
593 *
594 * @param t the task. Caller must ensure non-null.
595 */
596 final boolean unpushTask(ForkJoinTask<?> t) {
597 int s;
598 ForkJoinTask<?>[] q;
599 if (base != (s = sp) && (q = queue) != null &&
600 UNSAFE.compareAndSwapObject
601 (q, (((q.length - 1) & --s) << qShift) + qBase, t, null)) {
602 sp = s;
603 return true;
604 }
605 return false;
606 }
607
608 /**
609 * Returns next task, or null if empty or contended.
610 */
611 final ForkJoinTask<?> peekTask() {
612 ForkJoinTask<?>[] q = queue;
613 if (q == null)
614 return null;
615 int mask = q.length - 1;
616 int i = locallyFifo ? base : (sp - 1);
617 return q[i & mask];
618 }
619
620 /**
621 * Doubles queue array size. Transfers elements by emulating
622 * steals (deqs) from old array and placing, oldest first, into
623 * new array.
624 */
625 private void growQueue() {
626 ForkJoinTask<?>[] oldQ = queue;
627 int oldSize = oldQ.length;
628 int newSize = oldSize << 1;
629 if (newSize > MAXIMUM_QUEUE_CAPACITY)
630 throw new RejectedExecutionException("Queue capacity exceeded");
631 ForkJoinTask<?>[] newQ = queue = new ForkJoinTask<?>[newSize];
632
633 int b = base;
634 int bf = b + oldSize;
635 int oldMask = oldSize - 1;
636 int newMask = newSize - 1;
637 do {
638 int oldIndex = b & oldMask;
639 ForkJoinTask<?> t = oldQ[oldIndex];
640 if (t != null && !casSlotNull(oldQ, oldIndex, t))
641 t = null;
642 writeSlot(newQ, b & newMask, t);
643 } while (++b != bf);
644 pool.signalWork();
645 }
646
647 /**
648 * Computes next value for random victim probe in scan(). Scans
649 * don't require a very high quality generator, but also not a
650 * crummy one. Marsaglia xor-shift is cheap and works well enough.
651 * Note: This is manually inlined in scan()
652 */
653 private static final int xorShift(int r) {
654 r ^= r << 13;
655 r ^= r >>> 17;
656 return r ^ (r << 5);
657 }
658
659 /**
660 * Tries to steal a task from another worker. Starts at a random
661 * index of workers array, and probes workers until finding one
662 * with non-empty queue or finding that all are empty. It
663 * randomly selects the first n probes. If these are empty, it
664 * resorts to a circular sweep, which is necessary to accurately
665 * set active status. (The circular sweep uses steps of
666 * approximately half the array size plus 1, to avoid bias
667 * stemming from leftmost packing of the array in ForkJoinPool.)
668 *
669 * This method must be both fast and quiet -- usually avoiding
670 * memory accesses that could disrupt cache sharing, etc., other than
671 * those needed to check for and take tasks (or to activate if not
672 * already active). This accounts for, among other things,
673 * updating random seed in place without storing it until exit.
674 *
675 * @return a task, or null if none found
676 */
677 private ForkJoinTask<?> scan() {
678 ForkJoinPool p = pool;
679 ForkJoinWorkerThread[] ws; // worker array
680 int n; // upper bound of #workers
681 if ((ws = p.workers) != null && (n = ws.length) > 1) {
682 boolean canSteal = active; // shadow active status
683 int r = seed; // extract seed once
684 int mask = n - 1;
685 int j = -n; // loop counter
686 int k = r; // worker index, random if j < 0
687 for (;;) {
688 ForkJoinWorkerThread v = ws[k & mask];
689 r ^= r << 13; r ^= r >>> 17; r ^= r << 5; // inline xorshift
690 if (v != null && v.base != v.sp) {
691 if (canSteal || // ensure active status
692 (canSteal = active = p.tryIncrementActiveCount())) {
693 int b = v.base; // inline specialized deqTask
694 ForkJoinTask<?>[] q;
695 if (b != v.sp && (q = v.queue) != null) {
696 ForkJoinTask<?> t;
697 int i = (q.length - 1) & b;
698 long u = (i << qShift) + qBase; // raw offset
699 if ((t = q[i]) != null && v.base == b &&
700 UNSAFE.compareAndSwapObject(q, u, t, null)) {
701 currentSteal = t;
702 v.stealHint = poolIndex;
703 v.base = b + 1;
704 seed = r;
705 ++stealCount;
706 return t;
707 }
708 }
709 }
710 j = -n;
711 k = r; // restart on contention
712 }
713 else if (++j <= 0)
714 k = r;
715 else if (j <= n)
716 k += (n >>> 1) | 1;
717 else
718 break;
719 }
720 }
721 return null;
722 }
723
724 // Run State management
725
726 // status check methods used mainly by ForkJoinPool
727 final boolean isTerminating() { return (runState & TERMINATING) != 0; }
728 final boolean isTerminated() { return (runState & TERMINATED) != 0; }
729 final boolean isSuspended() { return (runState & SUSPENDED) != 0; }
730 final boolean isTrimmed() { return (runState & TRIMMED) != 0; }
731
732 /**
733 * Sets state to TERMINATING, also resuming if suspended.
734 */
735 final void shutdown() {
736 for (;;) {
737 int s = runState;
738 if ((s & SUSPENDED) != 0) { // kill and wakeup if suspended
739 if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
740 (s & ~SUSPENDED) |
741 (TRIMMED|TERMINATING))) {
742 LockSupport.unpark(this);
743 break;
744 }
745 }
746 else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
747 s | TERMINATING))
748 break;
749 }
750 }
751
752 /**
753 * Sets state to TERMINATED. Called only by this thread.
754 */
755 private void setTerminated() {
756 int s;
757 do {} while (!UNSAFE.compareAndSwapInt(this, runStateOffset,
758 s = runState,
759 s | (TERMINATING|TERMINATED)));
760 }
761
762 /**
763 * Instrumented version of park used by ForkJoinPool.eventSync.
764 */
765 final void doPark() {
766 ++parkCount;
767 LockSupport.park(this);
768 }
769
770 /**
771 * If suspended, tries to set status to unsuspended and unparks.
772 *
773 * @return true if successful
774 */
775 final boolean tryResumeSpare() {
776 int s = runState;
777 if ((s & SUSPENDED) != 0 &&
778 UNSAFE.compareAndSwapInt(this, runStateOffset, s,
779 s & ~SUSPENDED)) {
780 LockSupport.unpark(this);
781 return true;
782 }
783 return false;
784 }
785
786 /**
787 * Sets suspended status and blocks as spare until resumed,
788 * shutdown, or timed out.
789 *
790 * @return false if trimmed
791 */
792 final boolean suspendAsSpare() {
793 for (;;) { // set suspended unless terminating
794 int s = runState;
795 if ((s & TERMINATING) != 0) { // must kill
796 if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
797 s | (TRIMMED | TERMINATING)))
798 return false;
799 }
800 else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
801 s | SUSPENDED))
802 break;
803 }
804 int pc = pool.parallelism;
805 pool.accumulateStealCount(this);
806 boolean timed;
807 long nanos;
808 long startTime;
809 if (poolIndex < pc) { // untimed wait for core threads
810 timed = false;
811 nanos = 0L;
812 startTime = 0L;
813 }
814 else { // timed wait for added threads
815 timed = true;
816 nanos = SPARE_KEEPALIVE_NANOS;
817 startTime = System.nanoTime();
818 }
819 lastEventCount = 0; // reset upon resume
820 interrupted(); // clear/ignore interrupts
821 while ((runState & SUSPENDED) != 0) {
822 ++parkCount;
823 if (!timed)
824 LockSupport.park(this);
825 else if ((nanos -= (System.nanoTime() - startTime)) > 0)
826 LockSupport.parkNanos(this, nanos);
827 else { // try to trim on timeout
828 int s = runState;
829 if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
830 (s & ~SUSPENDED) |
831 (TRIMMED|TERMINATING)))
832 return false;
833 }
834 }
835 return true;
836 }
837
838 // Misc support methods for ForkJoinPool
839
840 /**
841 * Returns an estimate of the number of tasks in the queue. Also
842 * used by ForkJoinTask.
843 */
844 final int getQueueSize() {
845 return -base + sp;
846 }
847
848 /**
849 * Removes and cancels all tasks in queue. Can be called from any
850 * thread.
851 */
852 final void cancelTasks() {
853 ForkJoinTask<?> cj = currentJoin; // try to kill live tasks
854 if (cj != null) {
855 currentJoin = null;
856 cj.cancelIgnoringExceptions();
857 }
858 ForkJoinTask<?> cs = currentSteal;
859 if (cs != null) {
860 currentSteal = null;
861 cs.cancelIgnoringExceptions();
862 }
863 while (base != sp) {
864 ForkJoinTask<?> t = deqTask();
865 if (t != null)
866 t.cancelIgnoringExceptions();
867 }
868 }
869
870 /**
871 * Drains tasks to given collection c.
872 *
873 * @return the number of tasks drained
874 */
875 final int drainTasksTo(Collection<? super ForkJoinTask<?>> c) {
876 int n = 0;
877 while (base != sp) {
878 ForkJoinTask<?> t = deqTask();
879 if (t != null) {
880 c.add(t);
881 ++n;
882 }
883 }
884 return n;
885 }
886
887 // Support methods for ForkJoinTask
888
889 /**
890 * Gets and removes a local task.
891 *
892 * @return a task, if available
893 */
894 final ForkJoinTask<?> pollLocalTask() {
895 while (sp != base) {
896 if (active || (active = pool.tryIncrementActiveCount()))
897 return locallyFifo? locallyDeqTask() : popTask();
898 }
899 return null;
900 }
901
902 /**
903 * Gets and removes a local or stolen task.
904 *
905 * @return a task, if available
906 */
907 final ForkJoinTask<?> pollTask() {
908 ForkJoinTask<?> t = pollLocalTask();
909 if (t == null) {
910 t = scan();
911 currentSteal = null; // cannot retain/track
912 }
913 return t;
914 }
915
916 /**
917 * Possibly runs some tasks and/or blocks, until task is done.
918 * The main body is basically a big spinloop, alternating between
919 * calls to helpJoinTask and pool.tryAwaitJoin with increased
920 * patience parameters until either the task is done without
921 * waiting, or we have, if necessary, created or resumed a
922 * replacement for this thread while it blocks.
923 *
924 * @param joinMe the task to join
925 * @return task status on exit
926 */
927 final int joinTask(ForkJoinTask<?> joinMe) {
928 int stat;
929 ForkJoinTask<?> prevJoin = currentJoin;
930 // Only written by this thread; only need ordered store
931 UNSAFE.putOrderedObject(this, currentJoinOffset, joinMe);
932 if ((stat = joinMe.status) >= 0 &&
933 (sp == base || (stat = localHelpJoinTask(joinMe)) >= 0)) {
934 for (int retries = 0; ; ++retries) {
935 helpJoinTask(joinMe, retries);
936 if ((stat = joinMe.status) < 0)
937 break;
938 pool.tryAwaitJoin(joinMe, retries);
939 if ((stat = joinMe.status) < 0)
940 break;
941 Thread.yield(); // tame unbounded loop
942 }
943 }
944 UNSAFE.putOrderedObject(this, currentJoinOffset, prevJoin);
945 return stat;
946 }
947
948 /**
949 * Runs tasks in local queue until the given task is done.
950 *
951 * @param joinMe the task to join
952 * @return task status on exit
953 */
954 private int localHelpJoinTask(ForkJoinTask<?> joinMe) {
955 int stat, s;
956 ForkJoinTask<?>[] q;
957 while ((stat = joinMe.status) >= 0 &&
958 base != (s = sp) && (q = queue) != null) {
959 ForkJoinTask<?> t;
960 int i = (q.length - 1) & --s;
961 long u = (i << qShift) + qBase; // raw offset
962 if ((t = q[i]) != null &&
963 UNSAFE.compareAndSwapObject(q, u, t, null)) {
964 /*
965 * This recheck (and similarly in helpJoinTask)
966 * handles cases where joinMe is independently
967 * cancelled or forced even though there is other work
968 * available. Back out of the pop by putting t back
969 * into slot before we commit by writing sp.
970 */
971 if ((stat = joinMe.status) < 0) {
972 UNSAFE.putObjectVolatile(q, u, t);
973 break;
974 }
975 sp = s;
976 t.tryExec();
977 }
978 }
979 return stat;
980 }
981
982 /**
983 * Tries to locate and help perform tasks for a stealer of the
984 * given task, or in turn one of its stealers. Traces
985 * currentSteal->currentJoin links looking for a thread working on
986 * a descendant of the given task and with a non-empty queue to
987 * steal back and execute tasks from. Restarts search upon
988 * encountering chains that are stale, unknown, or of length
989 * greater than MAX_HELP_DEPTH links, to avoid unbounded cycles.
990 *
991 * The implementation is very branchy to cope with the restart
992 * cases. Returns void, not task status (which must be reread by
993 * caller anyway) to slightly simplify control paths.
994 *
995 * @param joinMe the task to join
996 * @param rescans the number of times to recheck for work
997 */
998 private void helpJoinTask(ForkJoinTask<?> joinMe, int rescans) {
999 ForkJoinWorkerThread[] ws = pool.workers;
1000 int n;
1001 if (ws == null || (n = ws.length) <= 1)
1002 return; // need at least 2 workers
1003 restart:while (rescans-- >= 0 && joinMe.status >= 0) {
1004 ForkJoinTask<?> task = joinMe; // base of chain
1005 ForkJoinWorkerThread thread = this; // thread with stolen task
1006 for (int depth = 0; depth < MAX_HELP_DEPTH; ++depth) {
1007 // Try to find v, the stealer of task, by first using hint
1008 ForkJoinWorkerThread v = ws[thread.stealHint & (n - 1)];
1009 if (v == null || v.currentSteal != task) {
1010 for (int j = 0; ; ++j) { // search array
1011 if (task.status < 0 || j == n)
1012 continue restart; // stale or no stealer
1013 if ((v = ws[j]) != null && v.currentSteal == task) {
1014 thread.stealHint = j; // save for next time
1015 break;
1016 }
1017 }
1018 }
1019 // Try to help v, using specialized form of deqTask
1020 int b;
1021 ForkJoinTask<?>[] q;
1022 while ((b = v.base) != v.sp && (q = v.queue) != null) {
1023 int i = (q.length - 1) & b;
1024 long u = (i << qShift) + qBase;
1025 ForkJoinTask<?> t = q[i];
1026 if (task.status < 0) // stale
1027 continue restart;
1028 if (t != null) {
1029 if (v.base == b &&
1030 UNSAFE.compareAndSwapObject(q, u, t, null)) {
1031 if (joinMe.status < 0) {
1032 UNSAFE.putObjectVolatile(q, u, t);
1033 return; // back out on cancel
1034 }
1035 ForkJoinTask<?> prevSteal = currentSteal;
1036 currentSteal = t;
1037 v.stealHint = poolIndex;
1038 v.base = b + 1;
1039 t.tryExec();
1040 currentSteal = prevSteal;
1041 }
1042 }
1043 else if (v.base == b) // producer stalled
1044 continue restart; // retry via restart
1045 if (joinMe.status < 0)
1046 return;
1047 }
1048 // Try to descend to find v's stealer
1049 ForkJoinTask<?> next = v.currentJoin;
1050 if (next == null || next == task || task.status < 0)
1051 continue restart; // no descendant or stale
1052 if (joinMe.status < 0)
1053 return;
1054 task = next;
1055 thread = v;
1056 }
1057 }
1058 }
1059
1060 /**
1061 * Returns an estimate of the number of tasks, offset by a
1062 * function of the number of idle workers.
1063 *
1064 * This method provides a cheap heuristic guide for task
1065 * partitioning when programmers, frameworks, tools, or languages
1066 * have little or no idea about task granularity. In essence by
1067 * offering this method, we ask users only about tradeoffs in
1068 * overhead vs expected throughput and its variance, rather than
1069 * how finely to partition tasks.
1070 *
1071 * In a steady state strict (tree-structured) computation, each
1072 * thread makes available for stealing enough tasks for other
1073 * threads to remain active. Inductively, if all threads play by
1074 * the same rules, each thread should make available only a
1075 * constant number of tasks.
1076 *
1077 * The minimum useful constant is just 1. But using a value of 1
1078 * would require immediate replenishment upon each steal to
1079 * maintain enough tasks, which is infeasible. Further,
1080 * partitionings/granularities of offered tasks should minimize
1081 * steal rates, which in general means that threads nearer the top
1082 * of computation tree should generate more than those nearer the
1083 * bottom. In perfect steady state, each thread is at
1084 * approximately the same level of computation tree. However,
1085 * producing extra tasks amortizes the uncertainty of progress and
1086 * diffusion assumptions.
1087 *
1088 * So, users will want to use values larger, but not much larger
1089 * than 1 to both smooth over transient shortages and hedge
1090 * against uneven progress; as traded off against the cost of
1091 * extra task overhead. We leave the user to pick a threshold
1092 * value to compare with the results of this call to guide
1093 * decisions, but recommend values such as 3.
1094 *
1095 * When all threads are active, it is on average OK to estimate
1096 * surplus strictly locally. In steady-state, if one thread is
1097 * maintaining say 2 surplus tasks, then so are others. So we can
1098 * just use estimated queue length (although note that (sp - base)
1099 * can be an overestimate because of stealers lagging increments
1100 * of base). However, this strategy alone leads to serious
1101 * mis-estimates in some non-steady-state conditions (ramp-up,
1102 * ramp-down, other stalls). We can detect many of these by
1103 * further considering the number of "idle" threads that are
1104 * known to have zero queued tasks, so we compensate by a factor of
1105 * (#idle/#active) threads.
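*
* As an illustrative (hypothetical) use from within a recursive
* task, with the recommended threshold of 3 -- client code would
* reach this value via the public
* ForkJoinTask.getSurplusQueuedTaskCount rather than calling this
* package-private method directly:
*
*   if (problemIsSmall() ||
*       ForkJoinTask.getSurplusQueuedTaskCount() > 3)
*     solveSequentially();
*   else
*     forkSubtasksAndJoin();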
1106 */
1107 final int getEstimatedSurplusTaskCount() {
1108 return sp - base - pool.idlePerActive();
1109 }
1110
1111 /**
1112 * Runs tasks until {@code pool.isQuiescent()}.
1113 */
1114 final void helpQuiescePool() {
1115 for (;;) {
1116 ForkJoinTask<?> t = pollLocalTask();
1117 if (t != null || (t = scan()) != null) {
1118 t.tryExec();
1119 currentSteal = null;
1120 }
1121 else {
1122 ForkJoinPool p = pool;
1123 if (active) {
1124 active = false; // inactivate
1125 do {} while (!p.tryDecrementActiveCount());
1126 }
1127 if (p.isQuiescent()) {
1128 active = true; // re-activate
1129 do {} while (!p.tryIncrementActiveCount());
1130 return;
1131 }
1132 }
1133 }
1134 }
1135
1136 // Unsafe mechanics
1137
1138 private static final sun.misc.Unsafe UNSAFE = getUnsafe();
1139 private static final long runStateOffset =
1140 objectFieldOffset("runState", ForkJoinWorkerThread.class);
1141 private static final long currentJoinOffset =
1142 objectFieldOffset("currentJoin", ForkJoinWorkerThread.class);
1143 private static final long qBase =
1144 UNSAFE.arrayBaseOffset(ForkJoinTask[].class);
1145 private static final int qShift;
1146
1147 static {
1148 int s = UNSAFE.arrayIndexScale(ForkJoinTask[].class);
1149 if ((s & (s-1)) != 0)
1150 throw new Error("data type scale not a power of two");
1151 qShift = 31 - Integer.numberOfLeadingZeros(s);
1152 }
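// Example (illustrative): with a 4-byte reference scale, as under
// compressed oops, the static block above yields qShift == 2, so
// slot i of a queue array lives at raw byte offset qBase + (i << 2),
// matching the (i << qShift) + qBase computations used throughout.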
1153
1154 private static long objectFieldOffset(String field, Class<?> klazz) {
1155 try {
1156 return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
1157 } catch (NoSuchFieldException e) {
1158 // Convert Exception to corresponding Error
1159 NoSuchFieldError error = new NoSuchFieldError(field);
1160 error.initCause(e);
1161 throw error;
1162 }
1163 }
1164
1165 /**
1166 * Returns a sun.misc.Unsafe. Suitable for use in a 3rd party package.
1167 * Replace with a simple call to Unsafe.getUnsafe when integrating
1168 * into a jdk.
1169 *
1170 * @return a sun.misc.Unsafe
1171 */
1172 private static sun.misc.Unsafe getUnsafe() {
1173 try {
1174 return sun.misc.Unsafe.getUnsafe();
1175 } catch (SecurityException se) {
1176 try {
1177 return java.security.AccessController.doPrivileged
1178 (new java.security
1179 .PrivilegedExceptionAction<sun.misc.Unsafe>() {
1180 public sun.misc.Unsafe run() throws Exception {
1181 java.lang.reflect.Field f = sun.misc
1182 .Unsafe.class.getDeclaredField("theUnsafe");
1183 f.setAccessible(true);
1184 return (sun.misc.Unsafe) f.get(null);
1185 }});
1186 } catch (java.security.PrivilegedActionException e) {
1187 throw new RuntimeException("Could not initialize intrinsics",
1188 e.getCause());
1189 }
1190 }
1191 }
1192 }