root/jsr166/jsr166/src/jsr166y/ForkJoinWorkerThread.java

Comparing jsr166/src/jsr166y/ForkJoinWorkerThread.java (file contents):
Revision 1.40 by dl, Wed Aug 11 18:45:12 2010 UTC vs.
Revision 1.68 by dl, Thu Jan 26 00:08:13 2012 UTC

# Line 1 | Line 1
1   /*
2   * Written by Doug Lea with assistance from members of JCP JSR-166
3   * Expert Group and released to the public domain, as explained at
4 < * http://creativecommons.org/licenses/publicdomain
4 > * http://creativecommons.org/publicdomain/zero/1.0/
5   */
6  
7   package jsr166y;
8  
9 import java.util.concurrent.*;
10
11 import java.util.Random;
12 import java.util.Collection;
13 import java.util.concurrent.locks.LockSupport;
14
9   /**
10 < * A thread managed by a {@link ForkJoinPool}.  This class is
11 < * subclassable solely for the sake of adding functionality -- there
12 < * are no overridable methods dealing with scheduling or execution.
13 < * However, you can override initialization and termination methods
14 < * surrounding the main task processing loop.  If you do create such a
15 < * subclass, you will also need to supply a custom {@link
16 < * ForkJoinPool.ForkJoinWorkerThreadFactory} to use it in a {@code
17 < * ForkJoinPool}.
10 > * A thread managed by a {@link ForkJoinPool}, which executes
11 > * {@link ForkJoinTask}s.
12 > * This class is subclassable solely for the sake of adding
13 > * functionality -- there are no overridable methods dealing with
14 > * scheduling or execution.  However, you can override initialization
15 > * and termination methods surrounding the main task processing loop.
16 > * If you do create such a subclass, you will also need to supply a
17 > * custom {@link ForkJoinPool.ForkJoinWorkerThreadFactory} to use it
18 > * in a {@code ForkJoinPool}.
19   *
20   * @since 1.7
21   * @author Doug Lea
22   */
23   public class ForkJoinWorkerThread extends Thread {
24      /*
30     * Overview:
31     *
25       * ForkJoinWorkerThreads are managed by ForkJoinPools and perform
26 <     * ForkJoinTasks. This class includes bookkeeping in support of
27 <     * worker activation, suspension, and lifecycle control described
35 <     * in more detail in the internal documentation of class
36 <     * ForkJoinPool. And as described further below, this class also
37 <     * includes special-cased support for some ForkJoinTask
38 <     * methods. But the main mechanics involve work-stealing:
39 <     *
40 <     * Work-stealing queues are special forms of Deques that support
41 <     * only three of the four possible end-operations -- push, pop,
42 <     * and deq (aka steal), under the further constraints that push
43 <     * and pop are called only from the owning thread, while deq may
44 <     * be called from other threads.  (If you are unfamiliar with
45 <     * them, you probably want to read Herlihy and Shavit's book "The
46 <     * Art of Multiprocessor programming", chapter 16 describing these
47 <     * in more detail before proceeding.)  The main work-stealing
48 <     * queue design is roughly similar to those in the papers "Dynamic
49 <     * Circular Work-Stealing Deque" by Chase and Lev, SPAA 2005
50 <     * (http://research.sun.com/scalable/pubs/index.html) and
51 <     * "Idempotent work stealing" by Michael, Saraswat, and Vechev,
52 <     * PPoPP 2009 (http://portal.acm.org/citation.cfm?id=1504186).
53 <     * The main differences ultimately stem from gc requirements that
54 <     * we null out taken slots as soon as we can, to maintain as small
55 <     * a footprint as possible even in programs generating huge
56 <     * numbers of tasks. To accomplish this, we shift the CAS
57 <     * arbitrating pop vs deq (steal) from being on the indices
58 <     * ("base" and "sp") to the slots themselves (mainly via method
59 <     * "casSlotNull()"). So, both a successful pop and deq mainly
60 <     * entail a CAS of a slot from non-null to null.  Because we rely
61 <     * on CASes of references, we do not need tag bits on base or sp.
62 <     * They are simple ints as used in any circular array-based queue
63 <     * (see for example ArrayDeque).  Updates to the indices must
64 <     * still be ordered in a way that guarantees that sp == base means
65 <     * the queue is empty, but otherwise may err on the side of
66 <     * possibly making the queue appear nonempty when a push, pop, or
67 <     * deq have not fully committed. Note that this means that the deq
68 <     * operation, considered individually, is not wait-free. One thief
69 <     * cannot successfully continue until another in-progress one (or,
70 <     * if previously empty, a push) completes.  However, in the
71 <     * aggregate, we ensure at least probabilistic non-blockingness.
72 <     * If an attempted steal fails, a thief always chooses a different
73 <     * random victim target to try next. So, in order for one thief to
74 <     * progress, it suffices for any in-progress deq or new push on
75 <     * any empty queue to complete. One reason this works well here is
76 <     * that apparently-nonempty often means soon-to-be-stealable,
77 <     * which gives threads a chance to set activation status if
78 <     * necessary before stealing.
79 <     *
80 <     * This approach also enables support for "async mode" where local
81 <     * task processing is in FIFO, not LIFO order; simply by using a
82 <     * version of deq rather than pop when locallyFifo is true (as set
83 <     * by the ForkJoinPool).  This allows use in message-passing
84 <     * frameworks in which tasks are never joined.
85 <     *
86 <     * When a worker would otherwise be blocked waiting to join a
87 <     * task, it first tries a form of linear helping: Each worker
88 <     * records (in field currentSteal) the most recent task it stole
89 <     * from some other worker. Plus, it records (in field currentJoin)
90 <     * the task it is currently actively joining. Method joinTask uses
91 <     * these markers to try to find a worker to help (i.e., steal back
92 <     * a task from and execute it) that could hasten completion of the
93 <     * actively joined task. In essence, the joiner executes a task
94 <     * that would be on its own local deque had the to-be-joined task
95 <     * not been stolen. This may be seen as a conservative variant of
96 <     * the approach in Wagner & Calder "Leapfrogging: a portable
97 <     * technique for implementing efficient futures" SIGPLAN Notices,
98 <     * 1993 (http://portal.acm.org/citation.cfm?id=155354). It differs
99 <     * in that: (1) We only maintain dependency links across workers
100 <     * upon steals, rather than use per-task bookkeeping.  This may
101 <     * require a linear scan of workers array to locate stealers, but
102 <     * usually doesn't because stealers leave hints (that may become
103 <     * stale/wrong) of where to locate them. This isolates cost to
104 <     * when it is needed, rather than adding to per-task overhead.
105 <     * (2) It is "shallow", ignoring nesting and potentially cyclic
106 <     * mutual steals.  (3) It is intentionally racy: field currentJoin
107 <     * is updated only while actively joining, which means that we
108 <     * miss links in the chain during long-lived tasks, GC stalls etc
109 <     * (which is OK since blocking in such cases is usually a good
110 <     * idea).  (4) We bound the number of attempts to find work (see
111 <     * MAX_HELP_DEPTH) and fall back to suspending the worker and if
112 <     * necessary replacing it with a spare (see
113 <     * ForkJoinPool.tryAwaitJoin).
114 <     *
115 <     * Efficient implementation of these algorithms currently relies
116 <     * on an uncomfortable amount of "Unsafe" mechanics. To maintain
117 <     * correct orderings, reads and writes of variable base require
118 <     * volatile ordering.  Variable sp does not require volatile
119 <     * writes but still needs store-ordering, which we accomplish by
120 <     * pre-incrementing sp before filling the slot with an ordered
121 <     * store.  (Pre-incrementing also enables backouts used in
122 <     * joinTask.)  Because they are protected by volatile base reads,
123 <     * reads of the queue array and its slots by other threads do not
124 <     * need volatile load semantics, but writes (in push) require
125 <     * store order and CASes (in pop and deq) require (volatile) CAS
126 <     * semantics.  (Michael, Saraswat, and Vechev's algorithm has
127 <     * similar properties, but without support for nulling slots.)
128 <     * Since these combinations aren't supported using ordinary
129 <     * volatiles, the only way to accomplish these efficiently is to
130 <     * use direct Unsafe calls. (Using external AtomicIntegers and
131 <     * AtomicReferenceArrays for the indices and array is
132 <     * significantly slower because of memory locality and indirection
133 <     * effects.)
134 <     *
135 <     * Further, performance on most platforms is very sensitive to
136 <     * placement and sizing of the (resizable) queue array.  Even
137 <     * though these queues don't usually become all that big, the
138 <     * initial size must be large enough to counteract cache
139 <     * contention effects across multiple queues (especially in the
140 <     * presence of GC cardmarking). Also, to improve thread-locality,
141 <     * queues are initialized after starting.  All together, these
142 <     * low-level implementation choices produce as much as a factor of
143 <     * 4 performance improvement compared to naive implementations,
144 <     * and enable the processing of billions of tasks per second,
145 <     * sometimes at the expense of ugliness.
146 <     */
147 <
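A minimal sketch of the slot-CAS scheme the overview describes, using a
plain AtomicReferenceArray in place of the Unsafe mechanics. The class
name and fixed capacity are illustrative; array growth, signalling, and
the cheaper ordered index stores of the real code are all omitted, and
plain volatile indices stand in for them:

    import java.util.concurrent.atomic.AtomicReferenceArray;

    final class SlotCasDeque<T> {
        private static final int CAP = 1 << 13;   // power of two, as in the real queue
        private final AtomicReferenceArray<T> slots =
            new AtomicReferenceArray<T>(CAP);
        private volatile int base;                // next slot to steal from
        private volatile int sp;                  // next slot to push to

        /** Owner thread only. */
        void push(T t) {
            slots.set(sp & (CAP - 1), t);
            sp = sp + 1;                          // index write after slot write
        }

        /** Owner thread only: LIFO end. */
        T pop() {
            int s = sp;
            if (s != base) {
                int i = (s - 1) & (CAP - 1);
                T t = slots.get(i);
                if (t != null && slots.compareAndSet(i, t, null)) {
                    sp = s - 1;                   // commit only after winning the CAS
                    return t;
                }
            }
            return null;                          // empty, or lost last slot to a thief
        }

        /** Any thread: FIFO end (steal). Not wait-free, as noted above. */
        T steal() {
            int b = base;
            if (b != sp) {
                int i = b & (CAP - 1);
                T t = slots.get(i);
                if (t != null && base == b && slots.compareAndSet(i, t, null)) {
                    base = b + 1;
                    return t;
                }
            }
            return null;                          // empty or contended; caller retries
        }
    }

Both pop and steal commit by CASing the slot from non-null to null, so no
tag bits are needed on base or sp, exactly as the overview argues.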
148 <    /**
149 <     * Generator for initial random seeds for random victim
150 <     * selection. This is used only to create initial seeds. Random
151 <     * steals use a cheaper xorshift generator per steal attempt. We
152 <     * expect only rare contention on seedGenerator, so just use a
153 <     * plain Random.
154 <     */
155 <    private static final Random seedGenerator = new Random();
156 <
157 <    /**
158 <     * The maximum stolen->joining link depth allowed in helpJoinTask.
159 <     * Depths for legitimate chains are unbounded, but we use a fixed
160 <     * constant to avoid (otherwise unchecked) cycles and bound
161 <     * staleness of traversal parameters at the expense of sometimes
162 <     * blocking when we could be helping.
163 <     */
164 <    private static final int MAX_HELP_DEPTH = 8;
165 <
166 <    /**
167 <     * The wakeup interval (in nanoseconds) for the first worker
168 <     * suspended as spare.  On each wakeup not signalled by a
169 <     * resumption, it may ask the pool to reduce the number of spares.
170 <     */
171 <    private static final long TRIM_RATE_NANOS = 200L * 1000L * 1000L;
172 <
173 <    /**
174 <     * Capacity of work-stealing queue array upon initialization.
175 <     * Must be a power of two. Initial size must be at least 4, but is
176 <     * padded to minimize cache effects.
177 <     */
178 <    private static final int INITIAL_QUEUE_CAPACITY = 1 << 13;
179 <
180 <    /**
181 <     * Maximum work-stealing queue array size.  Must be less than or
182 <     * equal to 1 << 28 to ensure lack of index wraparound. (This
183 <     * is less than usual bounds, because we need leftshift by 3
184 <     * to be in int range).
185 <     */
186 <    private static final int MAXIMUM_QUEUE_CAPACITY = 1 << 28;
187 <
188 <    /**
189 <     * The pool this thread works in. Accessed directly by ForkJoinTask.
190 <     */
191 <    final ForkJoinPool pool;
192 <
193 <    /**
194 <     * The work-stealing queue array. Size must be a power of two.
195 <     * Initialized in onStart, to improve memory locality.
196 <     */
197 <    private ForkJoinTask<?>[] queue;
198 <
199 <    /**
200 <     * Index (mod queue.length) of least valid queue slot, which is
201 <     * always the next position to steal from if nonempty.
202 <     */
203 <    private volatile int base;
204 <
205 <    /**
206 <     * Index (mod queue.length) of next queue slot to push to or pop
207 <     * from. It is written only by owner thread, and accessed by other
208 <     * threads only after reading (volatile) base.  Both sp and base
209 <     * are allowed to wrap around on overflow, but (sp - base) still
210 <     * estimates size.
211 <     */
212 <    private int sp;
213 <
214 <    /**
215 <     * The index of most recent stealer, used as a hint to avoid
216 <     * traversal in method helpJoinTask. This is only a hint because a
217 <     * worker might have had multiple steals and this only holds one
218 <     * of them (usually the most current). Declared non-volatile,
219 <     * relying on other prevailing sync to keep reasonably current.
220 <     */
221 <    private int stealHint;
222 <
223 <    /**
224 <     * Run state of this worker. In addition to the usual run levels,
225 <     * tracks if this worker is suspended as a spare, and if it was
226 <     * killed (trimmed) while suspended. However, "active" status is
227 <     * maintained separately.
228 <     */
229 <    private volatile int runState;
230 <
231 <    private static final int TERMINATING = 0x01;
232 <    private static final int TERMINATED  = 0x02;
233 <    private static final int SUSPENDED   = 0x04; // inactive spare
234 <    private static final int TRIMMED     = 0x08; // killed while suspended
235 <
236 <    /**
237 <     * Number of steals, transferred and reset in pool callbacks pool
238 <     * when idle Accessed directly by pool.
239 <     */
240 <    int stealCount;
241 <
242 <    /**
243 <     * Seed for random number generator for choosing steal victims.
244 <     * Uses Marsaglia xorshift. Must be initialized as nonzero.
245 <     */
246 <    private int seed;
247 <
248 <    /**
249 <     * Activity status. When true, this worker is considered active.
250 <     * Accessed directly by pool.  Must be false upon construction.
251 <     */
252 <    boolean active;
253 <
254 <    /**
255 <     * True if use local fifo, not default lifo, for local polling.
256 <     * Shadows value from ForkJoinPool.
257 <     */
258 <    private final boolean locallyFifo;
259 <
260 <    /**
261 <     * Index of this worker in pool array. Set once by pool before
262 <     * running, and accessed directly by pool to locate this worker in
263 <     * its workers array.
264 <     */
265 <    int poolIndex;
266 <
267 <    /**
268 <     * The last pool event waited for. Accessed only by pool in
269 <     * callback methods invoked within this thread.
270 <     */
271 <    int lastEventCount;
272 <
273 <    /**
274 <     * Encoded index and event count of next event waiter. Used only
275 <     * by ForkJoinPool for managing event waiters.
276 <     */
277 <    volatile long nextWaiter;
278 <
279 <    /**
280 <     * Number of times this thread has suspended as spare.
281 <     */
282 <    int spareCount;
283 <
284 <    /**
285 <     * Encoded index and count of next spare waiter. Used only
286 <     * by ForkJoinPool for managing spares.
287 <     */
288 <    volatile int nextSpare;
289 <
290 <    /**
291 <     * The task currently being joined, set only when actively trying
292 <     * to helpStealer. Written only by current thread, but read by
293 <     * others.
26 >     * ForkJoinTasks. For explanation, see the internal documentation
27 >     * of class ForkJoinPool.
28       */
295    private volatile ForkJoinTask<?> currentJoin;
29  
30 <    /**
31 <     * The task most recently stolen from another worker (or
299 <     * submission queue).  Not volatile because always read/written in
300 <     * presence of related volatiles in those cases where it matters.
301 <     */
302 <    private ForkJoinTask<?> currentSteal;
30 >    final ForkJoinPool.WorkQueue workQueue; // Work-stealing mechanics
31 >    final ForkJoinPool pool;                // the pool this thread works in
32  
33      /**
34       * Creates a ForkJoinWorkerThread operating in the given pool.
# Line 308 | Line 37 | public class ForkJoinWorkerThread extend
37       * @throws NullPointerException if pool is null
38       */
39      protected ForkJoinWorkerThread(ForkJoinPool pool) {
40 <        this.pool = pool;
312 <        this.locallyFifo = pool.locallyFifo;
40 >        super(pool.nextWorkerName());
41          setDaemon(true);
42 <        // To avoid exposing construction details to subclasses,
315 <        // remaining initialization is in start() and onStart()
316 <    }
317 <
318 <    /**
319 <     * Performs additional initialization and starts this thread
320 <     */
321 <    final void start(int poolIndex, UncaughtExceptionHandler ueh) {
322 <        this.poolIndex = poolIndex;
42 >        Thread.UncaughtExceptionHandler ueh = pool.ueh;
43          if (ueh != null)
44              setUncaughtExceptionHandler(ueh);
45 <        start();
45 >        this.pool = pool;
46 >        this.workQueue = new ForkJoinPool.WorkQueue(this, pool.localMode);
47 >        pool.registerWorker(this);
48      }
49  
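As the class javadoc notes, a subclass is only usable through a matching
factory. A sketch of that pattern with hypothetical names (the onStart /
onTermination hook contracts are the same in both revisions):

    import jsr166y.ForkJoinPool;
    import jsr166y.ForkJoinWorkerThread;

    class AuditingWorkerThread extends ForkJoinWorkerThread {
        AuditingWorkerThread(ForkJoinPool pool) {
            super(pool);
        }
        @Override protected void onStart() {
            super.onStart();                  // required at the beginning
            // hypothetical per-thread setup goes here
        }
        @Override protected void onTermination(Throwable exception) {
            try {
                // hypothetical per-thread cleanup goes here
            } finally {
                super.onTermination(exception);
            }
        }
    }

    class AuditingFactory implements ForkJoinPool.ForkJoinWorkerThreadFactory {
        public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
            return new AuditingWorkerThread(pool);
        }
    }

An AuditingFactory instance is then handed to a ForkJoinPool constructor
that accepts a ForkJoinWorkerThreadFactory.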
328    // Public/protected methods
329
50      /**
51       * Returns the pool hosting this thread.
52       *
# Line 346 | Line 66 | public class ForkJoinWorkerThread extend
66       * @return the index number
67       */
68      public int getPoolIndex() {
69 <        return poolIndex;
69 >        return workQueue.poolIndex;
70      }
71  
72      /**
73       * Initializes internal state after construction but before
74       * processing any tasks. If you override this method, you must
75 <     * invoke super.onStart() at the beginning of the method.
75 >     * invoke {@code super.onStart()} at the beginning of the method.
76       * Initialization requires care: Most fields must have legal
77       * default values, to ensure that attempted accesses from other
78       * threads work correctly even before this thread starts
79       * processing tasks.
80       */
81      protected void onStart() {
362        int rs = seedGenerator.nextInt();
363        seed = rs == 0 ? 1 : rs; // seed must be nonzero
364
365        // Allocate name string and arrays in this thread
366        String pid = Integer.toString(pool.getPoolNumber());
367        String wid = Integer.toString(poolIndex);
368        setName("ForkJoinPool-" + pid + "-worker-" + wid);
369
370        queue = new ForkJoinTask<?>[INITIAL_QUEUE_CAPACITY];
82      }
83  
84      /**
# Line 379 | Line 90 | public class ForkJoinWorkerThread extend
90       * to an unrecoverable error, or {@code null} if completed normally
91       */
92      protected void onTermination(Throwable exception) {
382        try {
383            cancelTasks();
384            while (active)              // force inactive
385                active = !pool.tryDecrementActiveCount();
386            setTerminated();
387            pool.workerTerminated(this);
388        } catch (Throwable ex) {        // Shouldn't ever happen
389            if (exception == null)      // but if so, at least rethrow it
390                exception = ex;
391        } finally {
392            if (exception != null)
393                UNSAFE.throwException(exception);
394        }
93      }
94  
95      /**
96       * This method is required to be public, but should never be
97       * called explicitly. It performs the main run loop to execute
98 <     * ForkJoinTasks.
98 >     * {@link ForkJoinTask}s.
99       */
100      public void run() {
101          Throwable exception = null;
102          try {
103              onStart();
104 <            mainLoop();
104 >            pool.runWorker(this);
105          } catch (Throwable ex) {
106              exception = ex;
107          } finally {
410            onTermination(exception);
411        }
412    }
413
414    // helpers for run()
415
416    /**
417     * Finds and executes tasks and checks status while running.
418     */
419    private void mainLoop() {
420        int misses = 0; // track consecutive times failed to find work; max 2
421        ForkJoinPool p = pool;
422        for (;;) {
423            p.preStep(this, misses);
424            if (runState != 0)
425                break;
426            misses = ((tryExecSteal() || tryExecSubmission()) ? 0 :
427                      (misses < 2 ? misses + 1 : 2));
428        }
429    }
430
431    /**
432     * Tries to steal a task and execute it.
433     *
434     * @return true if ran a task
435     */
436    private boolean tryExecSteal() {
437        ForkJoinTask<?> t;
438        if ((t = scan()) != null) {
439            t.quietlyExec();
440            currentSteal = null;
441            if (sp != base)
442                execLocalTasks();
443            return true;
444        }
445        return false;
446    }
447
448    /**
449     * If a submission exists, tries to activate and run it.
450     *
451     * @return true if ran a task
452     */
453    private boolean tryExecSubmission() {
454        ForkJoinPool p = pool;
455        while (p.hasQueuedSubmissions()) {
456            ForkJoinTask<?> t;
457            if (active || (active = p.tryIncrementActiveCount())) {
458                if ((t = p.pollSubmission()) != null) {
459                    currentSteal = t;
460                    t.quietlyExec();
461                    currentSteal = null;
462                    if (sp != base)
463                        execLocalTasks();
464                    return true;
465                }
466            }
467        }
468        return false;
469    }
470
471    /**
472     * Runs local tasks until queue is empty or shut down.  Call only
473     * while active.
474     */
475    private void execLocalTasks() {
476        while (runState == 0) {
477            ForkJoinTask<?> t = locallyFifo ? locallyDeqTask() : popTask();
478            if (t != null)
479                t.quietlyExec();
480            else if (sp == base)
481                break;
482        }
483    }
484
485    /*
486     * Intrinsics-based atomic writes for queue slots. These are
487     * basically the same as methods in AtomicReferenceArray, but
488     * specialized for (1) ForkJoinTask elements (2) requirement that
489     * nullness and bounds checks have already been performed by
490     * callers and (3) effective offsets are known not to overflow
491     * from int to long (because of MAXIMUM_QUEUE_CAPACITY). We don't
492     * need corresponding version for reads: plain array reads are OK
493     * because they are protected by other volatile reads and are
494     * confirmed by CASes.
495     *
496     * Most uses don't actually call these methods, but instead contain
497     * inlined forms that enable more predictable optimization.  We
498     * don't define the version of write used in pushTask at all, but
499     * instead inline there a store-fenced array slot write.
500     */
501
502    /**
503     * CASes slot i of array q from t to null. Caller must ensure q is
504     * non-null and index is in range.
505     */
506    private static final boolean casSlotNull(ForkJoinTask<?>[] q, int i,
507                                             ForkJoinTask<?> t) {
508        return UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null);
509    }
510
511    /**
512     * Performs a volatile write of the given task at given slot of
513     * array q.  Caller must ensure q is non-null and index is in
514     * range. This method is used only during resets and backouts.
515     */
516    private static final void writeSlot(ForkJoinTask<?>[] q, int i,
517                                              ForkJoinTask<?> t) {
518        UNSAFE.putObjectVolatile(q, (i << qShift) + qBase, t);
519    }
520
521    // queue methods
522
523    /**
524     * Pushes a task. Call only from this thread.
525     *
526     * @param t the task. Caller must ensure non-null.
527     */
528    final void pushTask(ForkJoinTask<?> t) {
529        ForkJoinTask<?>[] q = queue;
530        int mask = q.length - 1; // implicit assert q != null
531        int s = sp++;            // ok to increment sp before slot write
532        UNSAFE.putOrderedObject(q, ((s & mask) << qShift) + qBase, t);
533        if ((s -= base) == 0)
534            pool.signalWork();   // was empty
535        else if (s == mask)
536            growQueue();         // is full
537    }
538
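pushTask's ordered slot store (putOrderedObject) is a release store: it
cannot be reordered with earlier writes, yet is cheaper than a full
volatile write. A sketch of the same idiom using the VarHandle API of
JDK 9+, which is not available to this jsr166y code (hence the Unsafe
calls):

    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.VarHandle;

    class OrderedSlotWrite {
        private static final VarHandle SLOTS =
            MethodHandles.arrayElementVarHandle(Object[].class);

        // Release-store t into q[i]: writes made before this call become
        // visible to any thread that later reads the slot with acquire
        // semantics -- the guarantee putOrderedObject provides here.
        static void writeSlot(Object[] q, int i, Object t) {
            SLOTS.setRelease(q, i, t);
        }
    }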
539    /**
540     * Tries to take a task from the base of the queue, failing if
541     * empty or contended. Note: Specializations of this code appear
542     * in locallyDeqTask and elsewhere.
543     *
544     * @return a task, or null if none or contended
545     */
546    final ForkJoinTask<?> deqTask() {
547        ForkJoinTask<?> t;
548        ForkJoinTask<?>[] q;
549        int b, i;
550        if (sp != (b = base) &&
551            (q = queue) != null && // must read q after b
552            (t = q[i = (q.length - 1) & b]) != null && base == b &&
553            UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null)) {
554            base = b + 1;
555            return t;
556        }
557        return null;
558    }
559
560    /**
561     * Tries to take a task from the base of own queue. Assumes active
562     * status.  Called only by current thread.
563     *
564     * @return a task, or null if none
565     */
566    final ForkJoinTask<?> locallyDeqTask() {
567        ForkJoinTask<?>[] q = queue;
568        if (q != null) {
569            ForkJoinTask<?> t;
570            int b, i;
571            while (sp != (b = base)) {
572                if ((t = q[i = (q.length - 1) & b]) != null && base == b &&
573                    UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase,
574                                                t, null)) {
575                    base = b + 1;
576                    return t;
577                }
578            }
579        }
580        return null;
581    }
582
583    /**
584     * Returns a popped task, or null if empty. Assumes active status.
585     * Called only by current thread.
586     */
587    private ForkJoinTask<?> popTask() {
588        ForkJoinTask<?>[] q = queue;
589        if (q != null) {
590            int s;
591            while ((s = sp) != base) {
592                int i = (q.length - 1) & --s;
593                long u = (i << qShift) + qBase; // raw offset
594                ForkJoinTask<?> t = q[i];
595                if (t == null)   // lost to stealer
596                    break;
597                if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
598                    sp = s; // putOrderedInt may encourage more timely write
599                    // UNSAFE.putOrderedInt(this, spOffset, s);
600                    return t;
601                }
602            }
603        }
604        return null;
605    }
606
607    /**
608     * Specialized version of popTask to pop only if topmost element
609     * is the given task. Called only by current thread while
610     * active.
611     *
612     * @param t the task. Caller must ensure non-null.
613     */
614    final boolean unpushTask(ForkJoinTask<?> t) {
615        int s;
616        ForkJoinTask<?>[] q = queue;
617        if ((s = sp) != base && q != null &&
618            UNSAFE.compareAndSwapObject
619            (q, (((q.length - 1) & --s) << qShift) + qBase, t, null)) {
620            sp = s;
621            // UNSAFE.putOrderedInt(this, spOffset, s);
622            return true;
623        }
624        return false;
625    }
626
627    /**
628     * Returns next task, or null if empty or contended.
629     */
630    final ForkJoinTask<?> peekTask() {
631        ForkJoinTask<?>[] q = queue;
632        if (q == null)
633            return null;
634        int mask = q.length - 1;
635        int i = locallyFifo ? base : (sp - 1);
636        return q[i & mask];
637    }
638
639    /**
640     * Doubles queue array size. Transfers elements by emulating
641     * steals (deqs) from old array and placing, oldest first, into
642     * new array.
643     */
644    private void growQueue() {
645        ForkJoinTask<?>[] oldQ = queue;
646        int oldSize = oldQ.length;
647        int newSize = oldSize << 1;
648        if (newSize > MAXIMUM_QUEUE_CAPACITY)
649            throw new RejectedExecutionException("Queue capacity exceeded");
650        ForkJoinTask<?>[] newQ = queue = new ForkJoinTask<?>[newSize];
651
652        int b = base;
653        int bf = b + oldSize;
654        int oldMask = oldSize - 1;
655        int newMask = newSize - 1;
656        do {
657            int oldIndex = b & oldMask;
658            ForkJoinTask<?> t = oldQ[oldIndex];
659            if (t != null && !casSlotNull(oldQ, oldIndex, t))
660                t = null;
661            writeSlot(newQ, b & newMask, t);
662        } while (++b != bf);
663        pool.signalWork();
664    }
665
666    /**
667     * Computes next value for random victim probe in scan().  Scans
668     * don't require a very high quality generator, but also not a
669     * crummy one.  Marsaglia xor-shift is cheap and works well enough.
670     * Note: This is manually inlined in scan()
671     */
672    private static final int xorShift(int r) {
673        r ^= r << 13;
674        r ^= r >>> 17;
675        return r ^ (r << 5);
676    }
677
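The seed comment above ("must be initialized as nonzero") follows from
zero being a fixed point of every xor-shift step; the (13, 17, 5) triple
otherwise cycles through all 2^32 - 1 nonzero ints. A quick check:

    public class XorShiftZero {
        static int xorShift(int r) {
            r ^= r << 13;
            r ^= r >>> 17;
            return r ^ (r << 5);
        }
        public static void main(String[] args) {
            System.out.println(xorShift(0));  // 0: a zero seed stays stuck
            System.out.println(xorShift(1));  // nonzero seeds never reach 0
        }
    }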
678    /**
679     * Tries to steal a task from another worker. Starts at a random
680     * index of workers array, and probes workers until finding one
681     * with non-empty queue or finding that all are empty.  It
682     * randomly selects the first n probes. If these are empty, it
683     * resorts to a circular sweep, which is necessary to accurately
684     * set active status. (The circular sweep uses steps of
685     * approximately half the array size plus 1, to avoid bias
686     * stemming from leftmost packing of the array in ForkJoinPool.)
687     *
688     * This method must be both fast and quiet -- usually avoiding
689     * memory accesses that could disrupt cache sharing etc other than
690     * those needed to check for and take tasks (or to activate if not
691     * already active). This accounts for, among other things,
692     * updating random seed in place without storing it until exit.
693     *
694     * @return a task, or null if none found
695     */
696    private ForkJoinTask<?> scan() {
697        ForkJoinPool p = pool;
698        ForkJoinWorkerThread[] ws;        // worker array
699        int n;                            // upper bound of #workers
700        if ((ws = p.workers) != null && (n = ws.length) > 1) {
701            boolean canSteal = active;    // shadow active status
702            int r = seed;                 // extract seed once
703            int mask = n - 1;
704            int j = -n;                   // loop counter
705            int k = r;                    // worker index, random if j < 0
706            for (;;) {
707                ForkJoinWorkerThread v = ws[k & mask];
708                r ^= r << 13; r ^= r >>> 17; r ^= r << 5; // inline xorshift
709                if (v != null && v.base != v.sp) {
710                    ForkJoinTask<?>[] q; int b;
711                    if ((canSteal ||       // ensure active status
712                         (canSteal = active = p.tryIncrementActiveCount())) &&
713                        (q = v.queue) != null && (b = v.base) != v.sp) {
714                        int i = (q.length - 1) & b;
715                        long u = (i << qShift) + qBase; // raw offset
716                        ForkJoinTask<?> t = q[i];
717                        if (v.base == b && t != null &&
718                            UNSAFE.compareAndSwapObject(q, u, t, null)) {
719                            int pid = poolIndex;
720                            currentSteal = t;
721                            v.stealHint = pid;
722                            v.base = b + 1;
723                            seed = r;
724                            ++stealCount;
725                            return t;
726                        }
727                    }
728                    j = -n;
729                    k = r;                // restart on contention
730                }
731                else if (++j <= 0)
732                    k = r;
733                else if (j <= n)
734                    k += (n >>> 1) | 1;
735                else
736                    break;
737            }
738        }
739        return null;
740    }
741
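The circular-sweep step in scan(), (n >>> 1) | 1, is odd and hence
coprime to the power-of-two worker array length, so repeatedly adding it
modulo n visits every index exactly once before repeating. A quick
check, assuming an arbitrary power-of-two n and start index:

    public class SweepCheck {
        public static void main(String[] args) {
            int n = 16;                     // any power of two
            int step = (n >>> 1) | 1;       // 9: odd, so coprime to 16
            boolean[] seen = new boolean[n];
            int k = 5;                      // arbitrary start index
            for (int j = 0; j < n; ++j) {
                seen[k & (n - 1)] = true;
                k += step;
            }
            for (boolean b : seen)
                if (!b) throw new AssertionError("index missed");
            System.out.println("all " + n + " indices visited");
        }
    }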
742    // Run State management
743
744    // status check methods used mainly by ForkJoinPool
745    final boolean isRunning()     { return runState == 0; }
746    final boolean isTerminating() { return (runState & TERMINATING) != 0; }
747    final boolean isTerminated()  { return (runState & TERMINATED) != 0; }
748    final boolean isSuspended()   { return (runState & SUSPENDED) != 0; }
749    final boolean isTrimmed()     { return (runState & TRIMMED) != 0; }
750
751    /**
752     * Sets state to TERMINATING; also, unless "quiet", unparks
753     * if not already terminated.
754     *
755     * @param quiet don't unpark (used for faster status updates on
756     * pool termination)
757     */
758    final void shutdown(boolean quiet) {
759        for (;;) {
760            int s = runState;
761            if ((s & (TERMINATING|TERMINATED)) != 0)
762                break;
763            if ((s & SUSPENDED) != 0) { // kill and wakeup if suspended
764                if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
765                                             (s & ~SUSPENDED) |
766                                             (TRIMMED|TERMINATING)))
767                    break;
768            }
769            else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
770                                              s | TERMINATING))
771                break;
772        }
773        if (!quiet && (runState & TERMINATED) == 0)
774            LockSupport.unpark(this);
775    }
776
777    /**
778     * Sets state to TERMINATED. Called only by onTermination()
779     */
780    private void setTerminated() {
781        int s;
782        do {} while (!UNSAFE.compareAndSwapInt(this, runStateOffset,
783                                               s = runState,
784                                               s | (TERMINATING|TERMINATED)));
785    }
786
787    /**
788     * If suspended, tries to set status to unsuspended and unparks.
789     *
790     * @return true if successful
791     */
792    final boolean tryUnsuspend() {
793        int s;
794        while (((s = runState) & SUSPENDED) != 0) {
795            if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
796                                         s & ~SUSPENDED))
797                return true;
798        }
799        return false;
800    }
801
802    /**
803     * Sets suspended status and blocks as spare until resumed
804     * or shutdown.
805     * @return true if still running on exit
806     */
807    final boolean suspendAsSpare() {
808        lastEventCount = 0;         // reset upon resume
809        for (;;) {                  // set suspended unless terminating
810            int s = runState;
811            if ((s & TERMINATING) != 0) { // must kill
812                if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
813                                             s | (TRIMMED | TERMINATING)))
814                    return false;
815            }
816            else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
817                                              s | SUSPENDED))
818                break;
819        }
820        ForkJoinPool p = pool;
821        p.pushSpare(this);
822        while ((runState & SUSPENDED) != 0) {
823            if (!p.tryAccumulateStealCount(this))
824                continue;
825            interrupted();          // clear/ignore interrupts
826            if ((runState & SUSPENDED) == 0)
827                break;
828            if (nextSpare != 0)     // untimed
829                LockSupport.park(this);
830            else {
831                long startTime = System.nanoTime();
832                LockSupport.parkNanos(this, TRIM_RATE_NANOS);
833                if ((runState & SUSPENDED) == 0)
834                    break;
835                long now = System.nanoTime();
836                if (now - startTime >= TRIM_RATE_NANOS)
837                    pool.tryTrimSpare(now);
838            }
839        }
840        return runState == 0;
841    }
842
843    // Misc support methods for ForkJoinPool
844
845    /**
846     * Returns an estimate of the number of tasks in the queue.  Also
847     * used by ForkJoinTask.
848     */
849    final int getQueueSize() {
850        int n; // external calls must read base first
851        return (n = -base + sp) <= 0 ? 0 : n;
852    }
853
854    /**
855     * Removes and cancels all tasks in queue.  Can be called from any
856     * thread.
857     */
858    final void cancelTasks() {
859        ForkJoinTask<?> cj = currentJoin; // try to cancel ongoing tasks
860        if (cj != null) {
861            currentJoin = null;
862            cj.cancelIgnoringExceptions();
863            try {
864                this.interrupt(); // awaken wait
865            } catch (SecurityException ignore) {
866            }
867        }
868        ForkJoinTask<?> cs = currentSteal;
869        if (cs != null) {
870            currentSteal = null;
871            cs.cancelIgnoringExceptions();
872        }
873        while (base != sp) {
874            ForkJoinTask<?> t = deqTask();
875            if (t != null)
876                t.cancelIgnoringExceptions();
877        }
878    }
879
880    /**
881     * Drains tasks to given collection c.
882     *
883     * @return the number of tasks drained
884     */
885    final int drainTasksTo(Collection<? super ForkJoinTask<?>> c) {
886        int n = 0;
887        while (base != sp) {
888            ForkJoinTask<?> t = deqTask();
889            if (t != null) {
890                c.add(t);
891                ++n;
892            }
893        }
894        return n;
895    }
896
897    // Support methods for ForkJoinTask
898
899    /**
900     * Gets and removes a local task.
901     *
902     * @return a task, if available
903     */
904    final ForkJoinTask<?> pollLocalTask() {
905        while (sp != base) {
906            if (active || (active = pool.tryIncrementActiveCount()))
907                return locallyFifo ? locallyDeqTask() : popTask();
908        }
909        return null;
910    }
911
912    /**
913     * Gets and removes a local or stolen task.
914     *
915     * @return a task, if available
916     */
917    final ForkJoinTask<?> pollTask() {
918        ForkJoinTask<?> t = pollLocalTask();
919        if (t == null) {
920            t = scan();
921            currentSteal = null; // cannot retain/track/help
922        }
923        return t;
924    }
925
926    /**
927     * Possibly runs some tasks and/or blocks, until task is done.
928     *
929     * @param joinMe the task to join
930     */
931    final void joinTask(ForkJoinTask<?> joinMe) {
932        // currentJoin only written by this thread; only need ordered store
933        ForkJoinTask<?> prevJoin = currentJoin;
934        UNSAFE.putOrderedObject(this, currentJoinOffset, joinMe);
935        if (sp != base)
936            localHelpJoinTask(joinMe);
937        if (joinMe.status >= 0)
938            pool.awaitJoin(joinMe, this);
939        UNSAFE.putOrderedObject(this, currentJoinOffset, prevJoin);
940    }
941
942    /**
943     * Runs tasks in local queue until given task is done.
944     *
945     * @param joinMe the task to join
946     */
947    private void localHelpJoinTask(ForkJoinTask<?> joinMe) {
948        int s;
949        ForkJoinTask<?>[] q;
950        while (joinMe.status >= 0 && (s = sp) != base && (q = queue) != null) {
951            int i = (q.length - 1) & --s;
952            long u = (i << qShift) + qBase; // raw offset
953            ForkJoinTask<?> t = q[i];
954            if (t == null)  // lost to a stealer
955                break;
956            if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
957                /*
958                 * This recheck (and similarly in helpJoinTask)
959                 * handles cases where joinMe is independently
960                 * cancelled or forced even though there is other work
961                 * available. Back out of the pop by putting t back
962                 * into slot before we commit by writing sp.
963                 */
964                if (joinMe.status < 0) {
965                    UNSAFE.putObjectVolatile(q, u, t);
966                    break;
967                }
968                sp = s;
969                // UNSAFE.putOrderedInt(this, spOffset, s);
970                t.quietlyExec();
971            }
972        }
973    }
974
975    /**
976     * Tries to locate and help perform tasks for a stealer of the
977     * given task, or in turn one of its stealers.  Traces
978     * currentSteal->currentJoin links looking for a thread working on
979     * a descendant of the given task and with a non-empty queue to
980     * steal back and execute tasks from.
981     *
982     * The implementation is very branchy to cope with the potential
983     * inconsistencies or loops encountering chains that are stale,
984     * unknown, or of length greater than MAX_HELP_DEPTH links.  All
985     * of these cases are dealt with by just returning back to the
986     * caller, who is expected to retry if other join mechanisms also
987     * don't work out.
988     *
989     * @param joinMe the task to join
990     */
991    final void helpJoinTask(ForkJoinTask<?> joinMe) {
992        ForkJoinWorkerThread[] ws = pool.workers;
993        int n; // need at least 2 workers
994        if (ws != null && (n = ws.length) > 1 && joinMe.status >= 0) {
995            ForkJoinTask<?> task = joinMe;        // base of chain
996            ForkJoinWorkerThread thread = this;   // thread with stolen task
997            for (int d = 0; d < MAX_HELP_DEPTH; ++d) { // chain length
998                // Try to find v, the stealer of task, by first using hint
999                ForkJoinWorkerThread v = ws[thread.stealHint & (n - 1)];
1000                if (v == null || v.currentSteal != task) {
1001                    for (int j = 0; ; ++j) {      // search array
1002                        if (j < n) {
1003                            if ((v = ws[j]) != null) {
1004                                if (task.status < 0)
1005                                    return;       // stale or done
1006                                if (v.currentSteal == task) {
1007                                    thread.stealHint = j;
1008                                    break;        // save hint for next time
1009                                }
1010                            }
1011                        }
1012                        else
1013                            return;               // no stealer
1014                    }
1015                }
1016                // Try to help v, using specialized form of deqTask
1017                int b;
1018                ForkJoinTask<?>[] q;
1019                while ((b = v.base) != v.sp && (q = v.queue) != null) {
1020                    int i = (q.length - 1) & b;
1021                    long u = (i << qShift) + qBase;
1022                    ForkJoinTask<?> t = q[i];
1023                    if (task.status < 0)
1024                        return;                   // stale or done
1025                    if (v.base == b) {
1026                        if (t == null)
1027                            return;               // producer stalled
1028                        if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
1029                            if (joinMe.status < 0) {
1030                                UNSAFE.putObjectVolatile(q, u, t);
1031                                return;           // back out on cancel
1032                            }
1033                            int pid = poolIndex;
1034                            ForkJoinTask<?> prevSteal = currentSteal;
1035                            currentSteal = t;
1036                            v.stealHint = pid;
1037                            v.base = b + 1;
1038                            t.quietlyExec();
1039                            currentSteal = prevSteal;
1040                        }
1041                    }
1042                    if (joinMe.status < 0)
1043                        return;
1044                }
1045                // Try to descend to find v's stealer
1046                ForkJoinTask<?> next = v.currentJoin;
1047                if (task.status < 0 || next == null || next == task ||
1048                    joinMe.status < 0)
1049                    return;
1050                task = next;
1051                thread = v;
1052            }
1053        }
1054    }
1055
1056    /**
1057     * Returns an estimate of the number of tasks, offset by a
1058     * function of number of idle workers.
1059     *
1060     * This method provides a cheap heuristic guide for task
1061     * partitioning when programmers, frameworks, tools, or languages
1062     * have little or no idea about task granularity.  In essence by
1063     * offering this method, we ask users only about tradeoffs in
1064     * overhead vs expected throughput and its variance, rather than
1065     * how finely to partition tasks.
1066     *
1067     * In a steady state strict (tree-structured) computation, each
1068     * thread makes available for stealing enough tasks for other
1069     * threads to remain active. Inductively, if all threads play by
1070     * the same rules, each thread should make available only a
1071     * constant number of tasks.
1072     *
1073     * The minimum useful constant is just 1. But using a value of 1
1074     * would require immediate replenishment upon each steal to
1075     * maintain enough tasks, which is infeasible.  Further,
1076     * partitionings/granularities of offered tasks should minimize
1077     * steal rates, which in general means that threads nearer the top
1078     * of computation tree should generate more than those nearer the
1079     * bottom. In perfect steady state, each thread is at
1080     * approximately the same level of computation tree. However,
1081     * producing extra tasks amortizes the uncertainty of progress and
1082     * diffusion assumptions.
1083     *
1084     * So, users will want to use values larger, but not much larger
1085     * than 1 to both smooth over transient shortages and hedge
1086     * against uneven progress; as traded off against the cost of
1087     * extra task overhead. We leave the user to pick a threshold
1088     * value to compare with the results of this call to guide
1089     * decisions, but recommend values such as 3.
1090     *
1091     * When all threads are active, it is on average OK to estimate
1092     * surplus strictly locally. In steady-state, if one thread is
1093     * maintaining say 2 surplus tasks, then so are others. So we can
1094     * just use estimated queue length (although note that (sp - base)
1095     * can be an overestimate because of stealers lagging increments
1096     * of base).  However, this strategy alone leads to serious
1097     * mis-estimates in some non-steady-state conditions (ramp-up,
1098     * ramp-down, other stalls). We can detect many of these by
1099     * further considering the number of "idle" threads, that are
1100     * known to have zero queued tasks, so compensate by a factor of
1101     * (#idle/#active) threads.
1102     */
1103    final int getEstimatedSurplusTaskCount() {
1104        return sp - base - pool.idlePerActive();
1105    }
1106
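The same estimate is exposed to applications as
ForkJoinTask.getSurplusQueuedTaskCount(). A sketch of the recommended
threshold pattern; the SumTask class is hypothetical, and the cutoff of
3 follows the comment above:

    import jsr166y.ForkJoinTask;
    import jsr166y.RecursiveTask;

    class SumTask extends RecursiveTask<Long> {
        final long[] a; final int lo, hi;
        SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

        protected Long compute() {
            // Keep forking only while the local surplus is small.
            if (hi - lo > 1 && ForkJoinTask.getSurplusQueuedTaskCount() <= 3) {
                int mid = (lo + hi) >>> 1;
                SumTask right = new SumTask(a, mid, hi);
                right.fork();                     // make half stealable
                long leftSum = new SumTask(a, lo, mid).compute();
                return leftSum + right.join();
            }
            long s = 0;                           // enough surplus: compute directly
            for (int i = lo; i < hi; ++i) s += a[i];
            return s;
        }
    }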
1107    /**
1108     * Runs tasks until {@code pool.isQuiescent()}.
1109     */
1110    final void helpQuiescePool() {
1111        for (;;) {
1112            ForkJoinTask<?> t = pollLocalTask();
1113            if (t != null || (t = scan()) != null) {
1114                t.quietlyExec();
1115                currentSteal = null;
1116            }
1117            else {
1118                ForkJoinPool p = pool;
1119                if (active) {
1120                    if (!p.tryDecrementActiveCount())
1121                        continue;   // retry later
1122                    active = false; // inactivate
1123                }
1124                if (p.isQuiescent()) {
1125                    active = true; // re-activate
1126                    do {} while (!p.tryIncrementActiveCount());
1127                    return;
1128                }
1129            }
1130        }
1131    }
1132
1133    // Unsafe mechanics
1134
1135    private static final sun.misc.Unsafe UNSAFE = getUnsafe();
1136    private static final long spOffset =
1137        objectFieldOffset("sp", ForkJoinWorkerThread.class);
1138    private static final long runStateOffset =
1139        objectFieldOffset("runState", ForkJoinWorkerThread.class);
1140    private static final long currentJoinOffset =
1141        objectFieldOffset("currentJoin", ForkJoinWorkerThread.class);
1142    private static final long currentStealOffset =
1143        objectFieldOffset("currentSteal", ForkJoinWorkerThread.class);
1144    private static final long qBase =
1145        UNSAFE.arrayBaseOffset(ForkJoinTask[].class);
1146
1147    private static final int qShift;
1148
1149    static {
1150        int s = UNSAFE.arrayIndexScale(ForkJoinTask[].class);
1151        if ((s & (s-1)) != 0)
1152            throw new Error("data type scale not a power of two");
1153        qShift = 31 - Integer.numberOfLeadingZeros(s);
1154    }
1155
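For a power-of-two scale s, 31 - Integer.numberOfLeadingZeros(s) is just
log2(s), so (i << qShift) + qBase yields the byte offset of slot i, as
used throughout the queue code. A quick check with the usual 4- and
8-byte reference scales:

    public class QShiftCheck {
        public static void main(String[] args) {
            for (int scale : new int[] { 4, 8 }) {
                int qShift = 31 - Integer.numberOfLeadingZeros(scale);
                System.out.println(scale + " -> shift " + qShift
                    + ", slot 5 at base + " + (5 << qShift));
            }
            // prints: 4 -> shift 2, slot 5 at base + 20
            //         8 -> shift 3, slot 5 at base + 40
        }
    }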
1156    private static long objectFieldOffset(String field, Class<?> klazz) {
1157        try {
1158            return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
1159        } catch (NoSuchFieldException e) {
1160            // Convert Exception to corresponding Error
1161            NoSuchFieldError error = new NoSuchFieldError(field);
1162            error.initCause(e);
1163            throw error;
1164        }
1165    }
1166
1167    /**
1168     * Returns a sun.misc.Unsafe.  Suitable for use in a 3rd party package.
1169     * Replace with a simple call to Unsafe.getUnsafe when integrating
1170     * into a jdk.
1171     *
1172     * @return a sun.misc.Unsafe
1173     */
1174    private static sun.misc.Unsafe getUnsafe() {
1175        try {
1176            return sun.misc.Unsafe.getUnsafe();
1177        } catch (SecurityException se) {
108              try {
109 <                return java.security.AccessController.doPrivileged
110 <                    (new java.security
111 <                     .PrivilegedExceptionAction<sun.misc.Unsafe>() {
112 <                        public sun.misc.Unsafe run() throws Exception {
113 <                            java.lang.reflect.Field f = sun.misc
114 <                                .Unsafe.class.getDeclaredField("theUnsafe");
1185 <                            f.setAccessible(true);
1186 <                            return (sun.misc.Unsafe) f.get(null);
1187 <                        }});
1188 <            } catch (java.security.PrivilegedActionException e) {
1189 <                throw new RuntimeException("Could not initialize intrinsics",
1190 <                                           e.getCause());
109 >                onTermination(exception);
110 >            } catch (Throwable ex) {
111 >                if (exception == null)
112 >                    exception = ex;
113 >            } finally {
114 >                pool.deregisterWorker(this, exception);
115              }
116          }
117      }
118   }
119 +

Diff Legend

  Removed lines (old line number, no marker)
+ Added lines
< Changed lines (old revision)
> Changed lines (new revision)