root/jsr166/jsr166/src/main/java/util/concurrent/ForkJoinWorkerThread.java
Revision: 1.8
Committed: Tue Aug 4 01:23:41 2009 UTC by jsr166
Branch: MAIN
Changes since 1.7: +34 -29 lines
Log Message:
sync with jsr166 package

File Contents

/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/licenses/publicdomain
 */

package java.util.concurrent;

import java.util.Collection;

/**
 * A thread managed by a {@link ForkJoinPool}. This class is
 * subclassable solely for the sake of adding functionality -- there
 * are no overridable methods dealing with scheduling or execution.
 * However, you can override initialization and termination methods
 * surrounding the main task processing loop. If you do create such a
 * subclass, you will also need to supply a custom {@link
 * ForkJoinPool.ForkJoinWorkerThreadFactory} to use it in a {@code
 * ForkJoinPool}.
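 *
 * <p>As an illustrative sketch (the subclass and factory names here
 * are hypothetical, not part of this API), such a pairing might look
 * like:
 *
 * <pre> {@code
 * class MyWorkerThread extends ForkJoinWorkerThread {
 *     MyWorkerThread(ForkJoinPool pool) { super(pool); }
 *     protected void onStart() {
 *         super.onStart();
 *         // per-worker initialization goes here
 *     }
 * }
 *
 * class MyWorkerThreadFactory
 *     implements ForkJoinPool.ForkJoinWorkerThreadFactory {
 *     public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
 *         return new MyWorkerThread(pool);
 *     }
 * }}</pre>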
 *
 * @since 1.7
 * @author Doug Lea
 */
public class ForkJoinWorkerThread extends Thread {
    /*
     * Algorithm overview:
     *
     * 1. Work-Stealing: Work-stealing queues are special forms of
     * Deques that support only three of the four possible
     * end-operations -- push, pop, and deq (aka steal), and only do
     * so under the constraints that push and pop are called only from
     * the owning thread, while deq may be called from other threads.
     * (If you are unfamiliar with them, you probably want to read
     * Herlihy and Shavit's book "The Art of Multiprocessor
     * Programming", chapter 16 describing these in more detail before
     * proceeding.) The main work-stealing queue design is roughly
     * similar to "Dynamic Circular Work-Stealing Deque" by David
     * Chase and Yossi Lev, SPAA 2005
     * (http://research.sun.com/scalable/pubs/index.html). The main
     * difference ultimately stems from gc requirements that we null
     * out taken slots as soon as we can, to maintain as small a
     * footprint as possible even in programs generating huge numbers
     * of tasks. To accomplish this, we shift the CAS arbitrating pop
     * vs deq (steal) from being on the indices ("base" and "sp") to
     * the slots themselves (mainly via method "casSlotNull()"). So,
     * both a successful pop and deq mainly entail CAS'ing a non-null
     * slot to null. Because we rely on CASes of references, we do
     * not need tag bits on base or sp. They are simple ints as used
     * in any circular array-based queue (see for example ArrayDeque).
     * Updates to the indices must still be ordered in a way that
     * guarantees that sp == base means the queue is empty, but
     * otherwise may err on the side of possibly making the queue
     * appear nonempty when a push, pop, or deq have not fully
     * committed. Note that this means that the deq operation,
     * considered individually, is not wait-free. One thief cannot
     * successfully continue until another in-progress one (or, if
     * previously empty, a push) completes. However, in the
     * aggregate, we ensure at least probabilistic
     * non-blockingness. If an attempted steal fails, a thief always
     * chooses a different random victim target to try next. So, in
     * order for one thief to progress, it suffices for any
     * in-progress deq or new push on any empty queue to complete. One
     * reason this works well here is that apparently-nonempty often
     * means soon-to-be-stealable, which gives threads a chance to
     * activate if necessary before stealing (see below).
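     *
     * As an illustrative sketch (added annotation, not part of the
     * original overview): with an eight-slot array, base == 2, and
     * sp == 5, the queue holds tasks t2..t4:
     *
     *   index:  0    1    2    3    4    5    6    7
     *   slot:   -    -    t2   t3   t4   -    -    -
     *                     ^base           ^sp
     *
     * A pop CASes slot (sp-1) & mask (here t4) to null and then
     * stores sp-1; a deq (steal) CASes slot base & mask (here t2) to
     * null and then writes base+1.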
     *
     * This approach also enables support for "async mode" where local
     * task processing is in FIFO, not LIFO order, simply by using a
     * version of deq rather than pop when locallyFifo is true (as set
     * by the ForkJoinPool). This allows use in message-passing
     * frameworks in which tasks are never joined.
     *
     * Efficient implementation of this approach currently relies on
     * an uncomfortable amount of "Unsafe" mechanics. To maintain
     * correct orderings, reads and writes of variable base require
     * volatile ordering. Variable sp does not require volatile write
     * but needs cheaper store-ordering on writes. Because they are
     * protected by volatile base reads, reads of the queue array and
     * its slots do not need volatile load semantics, but writes (in
     * push) require store order and CASes (in pop and deq) require
     * (volatile) CAS semantics. (See "Idempotent work stealing" by
     * Michael, Saraswat, and Vechev, PPoPP 2009
     * http://portal.acm.org/citation.cfm?id=1504186 for an algorithm
     * with similar properties, but without support for nulling
     * slots.) Since these combinations aren't supported using
     * ordinary volatiles, the only way to accomplish these
     * efficiently is to use direct Unsafe calls. (Using external
     * AtomicIntegers and AtomicReferenceArrays for the indices and
     * array is significantly slower because of memory locality and
     * indirection effects.)
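     *
     * (Summary annotation of the scheme just described: base --
     * volatile reads and writes; sp -- plain reads, ordered stores;
     * queue slots -- plain reads after a volatile read of base,
     * ordered stores in push, reference CASes in pop and deq.)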
     *
     * Further, performance on most platforms is very sensitive to
     * placement and sizing of the (resizable) queue array. Even
     * though these queues don't usually become all that big, the
     * initial size must be large enough to counteract cache
     * contention effects across multiple queues (especially in the
     * presence of GC cardmarking). Also, to improve thread-locality,
     * queues are currently initialized immediately after the thread
     * gets the initial signal to start processing tasks. However,
     * all queue-related methods except pushTask are written in a way
     * that allows the queue to instead be lazily allocated and/or
     * disposed of when empty. All together, these low-level
     * implementation choices produce as much as a factor of 4
     * performance improvement compared to naive implementations, and
     * enable the processing of billions of tasks per second,
     * sometimes at the expense of ugliness.
     *
     * 2. Run control: The primary run control is based on a global
     * counter (activeCount) held by the pool. It uses an algorithm
     * similar to that in Herlihy and Shavit section 17.6 to cause
     * threads to eventually block when all threads declare they are
     * inactive. For this to work, threads must be declared active
     * when executing tasks, and before stealing a task. They must be
     * inactive before blocking on the Pool Barrier (awaiting a new
     * submission or other Pool event). In between, there is some free
     * play which we take advantage of to avoid contention and rapid
     * flickering of the global activeCount: If inactive, we activate
     * only if a victim queue appears to be nonempty (see above).
     * Similarly, a thread tries to inactivate only after a full scan
     * of other threads. The net effect is that contention on
     * activeCount is rarely a measurable performance issue. (There
     * are also a few other cases where we scan for work rather than
     * retry/block upon contention.)
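     *
     * (Annotation: in this class, the activate/inactivate transitions
     * described above are implemented by tryActivate and
     * tryInactivate below.)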
     *
     * 3. Selection control. We maintain the policy of always choosing
     * to run local tasks rather than stealing, and always trying to
     * steal tasks before trying to run a new submission. All steals
     * are currently performed in randomly-chosen deq-order. It may be
     * worthwhile to bias these with locality / anti-locality
     * information, but doing this well probably requires more
     * lower-level information from JVMs than currently provided.
     */

    /**
     * Capacity of work-stealing queue array upon initialization.
     * Must be a power of two. Initial size must be at least 2, but is
     * padded to minimize cache effects.
     */
    private static final int INITIAL_QUEUE_CAPACITY = 1 << 13;

    /**
     * Maximum work-stealing queue array size. Must be less than or
     * equal to 1 << 28 to ensure lack of index wraparound. (This
     * is less than usual bounds, because we need left shift by 3
     * to be in int range).
     */
    private static final int MAXIMUM_QUEUE_CAPACITY = 1 << 28;

    /**
     * The pool this thread works in. Accessed directly by ForkJoinTask.
     */
    final ForkJoinPool pool;

    /**
     * The work-stealing queue array. Size must be a power of two.
     * Initialized when thread starts, to improve memory locality.
     */
    private ForkJoinTask<?>[] queue;

    /**
     * Index (mod queue.length) of next queue slot to push to or pop
     * from. It is written only by owner thread, via ordered store.
     * Both sp and base are allowed to wrap around on overflow, but
     * (sp - base) still estimates size.
     */
    private volatile int sp;

    /**
     * Index (mod queue.length) of least valid queue slot, which is
     * always the next position to steal from if nonempty.
     */
    private volatile int base;

    /**
     * Activity status. When true, this worker is considered active.
     * Must be false upon construction. It must be true when executing
     * tasks, and BEFORE stealing a task. It must be false before
     * calling pool.sync.
     */
    private boolean active;

    /**
     * Run state of this worker. Supports simple versions of the usual
     * shutdown/shutdownNow control.
     */
    private volatile int runState;

    /**
     * Seed for random number generator for choosing steal victims.
     * Uses Marsaglia xorshift. Must be nonzero upon initialization.
     */
    private int seed;

    /**
     * Number of steals, transferred to pool when idle.
     */
    private int stealCount;

    /**
     * Index of this worker in pool array. Set once by pool before
     * running, and accessed directly by pool during cleanup etc.
     */
    int poolIndex;

    /**
     * The last barrier event waited for. Accessed in pool callback
     * methods, but only by current thread.
     */
    long lastEventCount;

    /**
     * True if using local FIFO, not default LIFO, for local polling.
     */
    private boolean locallyFifo;

    /**
     * Creates a ForkJoinWorkerThread operating in the given pool.
     *
     * @param pool the pool this thread works in
     * @throws NullPointerException if pool is null
     */
    protected ForkJoinWorkerThread(ForkJoinPool pool) {
        if (pool == null) throw new NullPointerException();
        this.pool = pool;
        // Note: poolIndex is set by pool during construction
        // Remaining initialization is deferred to onStart
    }

    // Public access methods

    /**
     * Returns the pool hosting this thread.
     *
     * @return the pool
     */
    public ForkJoinPool getPool() {
        return pool;
    }

    /**
     * Returns the index number of this thread in its pool. The
     * returned value ranges from zero to the maximum number of
     * threads (minus one) that have ever been created in the pool.
     * This method may be useful for applications that track status or
     * collect results per-worker rather than per-task.
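     *
     * <p>For example (an illustrative sketch; {@code results} is a
     * hypothetical per-worker array sized using the pool's thread
     * counts):
     *
     * <pre> {@code
     * int idx = ((ForkJoinWorkerThread) Thread.currentThread())
     *     .getPoolIndex();
     * results[idx] += partialResult;}</pre>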
     *
     * @return the index number
     */
    public int getPoolIndex() {
        return poolIndex;
    }

    /**
     * Establishes local first-in-first-out scheduling mode for forked
     * tasks that are never joined.
     *
     * @param async if true, use locally FIFO scheduling
     */
    void setAsyncMode(boolean async) {
        locallyFifo = async;
    }

    // Runstate management

    // Runstate values. Order matters
    private static final int RUNNING     = 0;
    private static final int SHUTDOWN    = 1;
    private static final int TERMINATING = 2;
    private static final int TERMINATED  = 3;

    final boolean isShutdown()    { return runState >= SHUTDOWN; }
    final boolean isTerminating() { return runState >= TERMINATING; }
    final boolean isTerminated()  { return runState == TERMINATED; }
    final boolean shutdown()      { return transitionRunStateTo(SHUTDOWN); }
    final boolean shutdownNow()   { return transitionRunStateTo(TERMINATING); }

    /**
     * Transitions to at least the given state.
     *
     * @return {@code true} if not already at least at given state
     */
    private boolean transitionRunStateTo(int state) {
        for (;;) {
            int s = runState;
            if (s >= state)
                return false;
            if (UNSAFE.compareAndSwapInt(this, runStateOffset, s, state))
                return true;
        }
    }

    /**
     * Tries to set status to active; fails on contention.
     */
    private boolean tryActivate() {
        if (!active) {
            if (!pool.tryIncrementActiveCount())
                return false;
            active = true;
        }
        return true;
    }

    /**
     * Tries to set status to inactive; fails on contention.
     */
    private boolean tryInactivate() {
        if (active) {
            if (!pool.tryDecrementActiveCount())
                return false;
            active = false;
        }
        return true;
    }

    /**
     * Computes next value for random victim probe. Scans don't
     * require a very high quality generator, but also not a crummy
     * one. Marsaglia xor-shift is cheap and works well.
     */
    private static int xorShift(int r) {
        r ^= (r << 13);
        r ^= (r >>> 17);
        return r ^ (r << 5);
    }

    // Lifecycle methods

    /**
     * This method is required to be public, but should never be
     * called explicitly. It performs the main run loop to execute
     * ForkJoinTasks.
     */
    public void run() {
        Throwable exception = null;
        try {
            onStart();
            pool.sync(this); // await first pool event
            mainLoop();
        } catch (Throwable ex) {
            exception = ex;
        } finally {
            onTermination(exception);
        }
    }

    /**
     * Executes tasks until shut down.
     */
    private void mainLoop() {
        while (!isShutdown()) {
            ForkJoinTask<?> t = pollTask();
            if (t != null || (t = pollSubmission()) != null)
                t.quietlyExec();
            else if (tryInactivate())
                pool.sync(this);
        }
    }

    /**
     * Initializes internal state after construction but before
     * processing any tasks. If you override this method, you must
     * invoke super.onStart() at the beginning of the method.
     * Initialization requires care: Most fields must have legal
     * default values, to ensure that attempted accesses from other
     * threads work correctly even before this thread starts
     * processing tasks.
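     *
     * <p>An illustrative override (hypothetical subclass code):
     *
     * <pre> {@code
     * protected void onStart() {
     *     super.onStart(); // must come first
     *     // ... further per-worker initialization ...
     * }}</pre>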
     */
    protected void onStart() {
        // Allocate while starting to improve chances of thread-local
        // isolation
        queue = new ForkJoinTask<?>[INITIAL_QUEUE_CAPACITY];
        // Initial value of seed need not be especially random but
        // should differ across workers and must be nonzero
        int p = poolIndex + 1;
        seed = p + (p << 8) + (p << 16) + (p << 24); // spread bits
    }

    /**
     * Performs cleanup associated with termination of this worker
     * thread. If you override this method, you must invoke
     * {@code super.onTermination} at the end of the overridden method.
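     *
     * <p>For example (an illustrative sketch):
     *
     * <pre> {@code
     * protected void onTermination(Throwable exception) {
     *     try {
     *         // ... per-worker cleanup ...
     *     } finally {
     *         super.onTermination(exception); // must come last
     *     }
     * }}</pre>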
     *
     * @param exception the exception causing this thread to abort due
     * to an unrecoverable error, or {@code null} if completed normally
     */
    protected void onTermination(Throwable exception) {
        // Execute remaining local tasks unless aborting or terminating
        while (exception == null && pool.isProcessingTasks() && base != sp) {
            try {
                ForkJoinTask<?> t = popTask();
                if (t != null)
                    t.quietlyExec();
            } catch (Throwable ex) {
                exception = ex;
            }
        }
        // Cancel other tasks, transition status, notify pool, and
        // propagate exception to uncaught exception handler
        try {
            do {} while (!tryInactivate()); // ensure inactive
            cancelTasks();
            runState = TERMINATED;
            pool.workerTerminated(this);
        } catch (Throwable ex) {       // Shouldn't ever happen
            if (exception == null)     // but if so, at least rethrow it
                exception = ex;
        } finally {
            if (exception != null)
                ForkJoinTask.rethrowException(exception);
        }
    }

    // Intrinsics-based support for queue operations.

    /**
     * Writes in store-order the given task into the given slot of q.
     * Caller must ensure q is non-null and index is in range.
     */
    private static void setSlot(ForkJoinTask<?>[] q, int i,
                                ForkJoinTask<?> t) {
        UNSAFE.putOrderedObject(q, (i << qShift) + qBase, t);
    }

    /**
     * CAS given slot of q to null. Caller must ensure q is non-null
     * and index is in range.
     */
    private static boolean casSlotNull(ForkJoinTask<?>[] q, int i,
                                       ForkJoinTask<?> t) {
        return UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null);
    }

    /**
     * Sets sp in store-order.
     */
    private void storeSp(int s) {
        UNSAFE.putOrderedInt(this, spOffset, s);
    }

    // Main queue methods

    /**
     * Pushes a task. Called only by current thread.
     *
     * @param t the task. Caller must ensure non-null.
     */
    final void pushTask(ForkJoinTask<?> t) {
        ForkJoinTask<?>[] q = queue;
        int mask = q.length - 1;
        int s = sp;
        setSlot(q, s & mask, t);
        storeSp(++s);
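        // After the store, s - base is the new queue size: exactly 1
        // means the queue was empty before this push, so idle workers
        // may need waking; a size reaching mask means the array is
        // nearly full.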
        if ((s -= base) == 1)
            pool.signalWork();
        else if (s >= mask)
            growQueue();
    }

    /**
     * Tries to take a task from the base of the queue, failing if
     * either empty or contended.
     *
     * @return a task, or null if none or contended
     */
    final ForkJoinTask<?> deqTask() {
        ForkJoinTask<?> t;
        ForkJoinTask<?>[] q;
        int i;
        int b;
        if (sp != (b = base) &&
            (q = queue) != null && // must read q after b
            (t = q[i = (q.length - 1) & b]) != null &&
            casSlotNull(q, i, t)) {
            base = b + 1;
            return t;
        }
        return null;
    }

    /**
     * Tries to take a task from the base of its own queue, activating
     * if necessary, failing only if empty. Called only by current
     * thread.
     *
     * @return a task, or null if none
     */
    final ForkJoinTask<?> locallyDeqTask() {
        int b;
        while (sp != (b = base)) {
            if (tryActivate()) {
                ForkJoinTask<?>[] q = queue;
                int i = (q.length - 1) & b;
                ForkJoinTask<?> t = q[i];
                if (t != null && casSlotNull(q, i, t)) {
                    base = b + 1;
                    return t;
                }
            }
        }
        return null;
    }

    /**
     * Returns a popped task, or null if empty. Ensures active status
     * if non-null. Called only by current thread.
     */
    final ForkJoinTask<?> popTask() {
        int s = sp;
        while (s != base) {
            if (tryActivate()) {
                ForkJoinTask<?>[] q = queue;
                int mask = q.length - 1;
                int i = (s - 1) & mask;
                ForkJoinTask<?> t = q[i];
                if (t == null || !casSlotNull(q, i, t))
                    break;
                storeSp(s - 1);
                return t;
            }
        }
        return null;
    }

    /**
     * Specialized version of popTask to pop only if
     * topmost element is the given task. Called only
     * by current thread while active.
     *
     * @param t the task. Caller must ensure non-null.
     */
    final boolean unpushTask(ForkJoinTask<?> t) {
        ForkJoinTask<?>[] q = queue;
        int mask = q.length - 1;
        int s = sp - 1;
        if (casSlotNull(q, s & mask, t)) {
            storeSp(s);
            return true;
        }
        return false;
    }

    /**
     * Returns next task, or null if empty or contended.
     */
    final ForkJoinTask<?> peekTask() {
        ForkJoinTask<?>[] q = queue;
        if (q == null)
            return null;
        int mask = q.length - 1;
        int i = locallyFifo ? base : (sp - 1);
        return q[i & mask];
    }

    /**
     * Doubles queue array size. Transfers elements by emulating
     * steals (deqs) from old array and placing, oldest first, into
     * new array.
     */
    private void growQueue() {
        ForkJoinTask<?>[] oldQ = queue;
        int oldSize = oldQ.length;
        int newSize = oldSize << 1;
        if (newSize > MAXIMUM_QUEUE_CAPACITY)
            throw new RejectedExecutionException("Queue capacity exceeded");
        ForkJoinTask<?>[] newQ = queue = new ForkJoinTask<?>[newSize];

        int b = base;
        int bf = b + oldSize;
        int oldMask = oldSize - 1;
        int newMask = newSize - 1;
        do {
            int oldIndex = b & oldMask;
            ForkJoinTask<?> t = oldQ[oldIndex];
            if (t != null && !casSlotNull(oldQ, oldIndex, t))
                t = null;
            setSlot(newQ, b & newMask, t);
        } while (++b != bf);
        pool.signalWork();
    }

    /**
     * Tries to steal a task from another worker. Starts at a random
     * index of workers array, and probes workers until finding one
     * with non-empty queue or finding that all are empty. It
     * randomly selects the first n probes. If these are empty, it
     * resorts to a full circular traversal, which is necessary to
     * accurately set active status by caller. Also restarts if pool
     * events occurred since last scan, which forces refresh of
     * workers array, in case barrier was associated with resize.
     *
     * This method must be both fast and quiet -- usually avoiding
     * memory accesses that could disrupt cache sharing, etc., other
     * than those needed to check for and take tasks. This accounts
     * for, among other things, updating random seed in place without
     * storing it until exit.
     *
     * @return a task, or null if none found
     */
    private ForkJoinTask<?> scan() {
        ForkJoinTask<?> t = null;
        int r = seed;              // extract once to keep scan quiet
        ForkJoinWorkerThread[] ws; // refreshed on outer loop
        int mask;                  // must be power 2 minus 1 and > 0
        outer: do {
            if ((ws = pool.workers) != null && (mask = ws.length - 1) > 0) {
                int idx = r;
                int probes = ~mask; // use random index while negative
                for (;;) {
                    r = xorShift(r); // update random seed
                    ForkJoinWorkerThread v = ws[mask & idx];
                    if (v == null || v.sp == v.base) {
                        if (probes <= mask)
                            idx = (probes++ < 0) ? r : (idx + 1);
                        else
                            break;
                    }
                    else if (!tryActivate() || (t = v.deqTask()) == null)
                        continue outer; // restart on contention
                    else
                        break outer;
                }
            }
        } while (pool.hasNewSyncEvent(this)); // retry on pool events
        seed = r;
        return t;
    }

    /**
     * Gets and removes a local or stolen task.
     *
     * @return a task, if available
     */
    final ForkJoinTask<?> pollTask() {
        ForkJoinTask<?> t = locallyFifo ? locallyDeqTask() : popTask();
        if (t == null && (t = scan()) != null)
            ++stealCount;
        return t;
    }

    /**
     * Gets a local task.
     *
     * @return a task, if available
     */
    final ForkJoinTask<?> pollLocalTask() {
        return locallyFifo ? locallyDeqTask() : popTask();
    }

    /**
     * Returns a pool submission, if one exists, activating first.
     *
     * @return a submission, if available
     */
    private ForkJoinTask<?> pollSubmission() {
        ForkJoinPool p = pool;
        while (p.hasQueuedSubmissions()) {
            ForkJoinTask<?> t;
            if (tryActivate() && (t = p.pollSubmission()) != null)
                return t;
        }
        return null;
    }

    // Methods accessed only by Pool

    /**
     * Removes and cancels all tasks in queue. Can be called from any
     * thread.
     */
    final void cancelTasks() {
        ForkJoinTask<?> t;
        while (base != sp && (t = deqTask()) != null)
            t.cancelIgnoringExceptions();
    }

    /**
     * Drains tasks to given collection c.
     *
     * @return the number of tasks drained
     */
    final int drainTasksTo(Collection<? super ForkJoinTask<?>> c) {
        int n = 0;
        ForkJoinTask<?> t;
        while (base != sp && (t = deqTask()) != null) {
            c.add(t);
            ++n;
        }
        return n;
    }

    /**
     * Gets and clears steal count for accumulation by pool. Called
     * only when known to be idle (in pool.sync and termination).
     */
    final int getAndClearStealCount() {
        int sc = stealCount;
        stealCount = 0;
        return sc;
    }

    /**
     * Returns {@code true} if at least one worker in the given array
     * appears to have at least one queued task.
     *
     * @param ws array of workers
     */
    static boolean hasQueuedTasks(ForkJoinWorkerThread[] ws) {
        if (ws != null) {
            int len = ws.length;
            for (int j = 0; j < 2; ++j) { // need two passes for clean sweep
                for (int i = 0; i < len; ++i) {
                    ForkJoinWorkerThread w = ws[i];
                    if (w != null && w.sp != w.base)
                        return true;
                }
            }
        }
        return false;
    }

    // Support methods for ForkJoinTask

    /**
     * Returns an estimate of the number of tasks in the queue.
     */
    final int getQueueSize() {
        // suppress momentarily negative values
        return Math.max(0, sp - base);
    }

    /**
     * Returns an estimate of the number of tasks, offset by a
     * function of number of idle workers.
     */
    final int getEstimatedSurplusTaskCount() {
        // The halving approximates weighting idle vs non-idle workers
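        // (illustrative reading: 3 queued tasks with 4 idle workers
        // yields 3 - (4 >>> 1) = 1)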
        return (sp - base) - (pool.getIdleThreadCount() >>> 1);
    }

    /**
     * Scans, returning early if joinMe done.
     */
    final ForkJoinTask<?> scanWhileJoining(ForkJoinTask<?> joinMe) {
        ForkJoinTask<?> t = pollTask();
        if (t != null && joinMe.status < 0 && sp == base) {
            pushTask(t); // unsteal if done and this task would be stealable
            t = null;
        }
        return t;
    }

    /**
     * Runs tasks until {@code pool.isQuiescent()}.
     */
    final void helpQuiescePool() {
        for (;;) {
            ForkJoinTask<?> t = pollTask();
            if (t != null)
                t.quietlyExec();
            else if (tryInactivate() && pool.isQuiescent())
                break;
        }
        do {} while (!tryActivate()); // re-activate on exit
    }

    // Unsafe mechanics

    private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
    private static final long spOffset =
        objectFieldOffset("sp", ForkJoinWorkerThread.class);
    private static final long runStateOffset =
        objectFieldOffset("runState", ForkJoinWorkerThread.class);
    private static final long qBase;
    private static final int qShift;

    static {
        qBase = UNSAFE.arrayBaseOffset(ForkJoinTask[].class);
        int s = UNSAFE.arrayIndexScale(ForkJoinTask[].class);
        if ((s & (s-1)) != 0)
            throw new Error("data type scale not a power of two");
        qShift = 31 - Integer.numberOfLeadingZeros(s);
    }
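    // (Annotation: qShift is log2 of the per-element scale, so slot i
    // of a queue array lives at byte offset qBase + (i << qShift);
    // e.g., with a 4-byte reference scale, qShift == 2.)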

    private static long objectFieldOffset(String field, Class<?> klazz) {
        try {
            return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
        } catch (NoSuchFieldException e) {
            // Convert Exception to corresponding Error
            NoSuchFieldError error = new NoSuchFieldError(field);
            error.initCause(e);
            throw error;
        }
    }
}