
Comparing jsr166/src/jsr166y/ForkJoinWorkerThread.java (file contents):
Revision 1.41 by dl, Tue Aug 17 18:30:33 2010 UTC vs.
Revision 1.73 by dl, Wed Nov 21 19:54:39 2012 UTC

# Line 1 | Line 1
1   /*
2   * Written by Doug Lea with assistance from members of JCP JSR-166
3   * Expert Group and released to the public domain, as explained at
4 < * http://creativecommons.org/licenses/publicdomain
4 > * http://creativecommons.org/publicdomain/zero/1.0/
5   */
6  
7   package jsr166y;
8  
9 import java.util.concurrent.*;
10
11 import java.util.Random;
12 import java.util.Collection;
13 import java.util.concurrent.locks.LockSupport;
14
9   /**
10 < * A thread managed by a {@link ForkJoinPool}.  This class is
11 < * subclassable solely for the sake of adding functionality -- there
12 < * are no overridable methods dealing with scheduling or execution.
13 < * However, you can override initialization and termination methods
14 < * surrounding the main task processing loop.  If you do create such a
15 < * subclass, you will also need to supply a custom {@link
16 < * ForkJoinPool.ForkJoinWorkerThreadFactory} to use it in a {@code
17 < * ForkJoinPool}.
10 > * A thread managed by a {@link ForkJoinPool}, which executes
11 > * {@link ForkJoinTask}s.
12 > * This class is subclassable solely for the sake of adding
13 > * functionality -- there are no overridable methods dealing with
14 > * scheduling or execution.  However, you can override initialization
15 > * and termination methods surrounding the main task processing loop.
16 > * If you do create such a subclass, you will also need to supply a
17 > * custom {@link ForkJoinPool.ForkJoinWorkerThreadFactory} to use it
18 > * in a {@code ForkJoinPool}.
19   *
20   * @since 1.7
21   * @author Doug Lea
22   */
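[Editorial aside, not part of either revision: a minimal sketch of the subclassing pattern the javadoc above describes. The class and factory names are illustrative, and the four-argument ForkJoinPool constructor shown in the usage comment is the JDK 7 era form of this API; exact constructor signatures may vary between jsr166y revisions.]

    import jsr166y.ForkJoinPool;
    import jsr166y.ForkJoinWorkerThread;

    // A subclass that only adds per-thread state; nothing about scheduling or
    // execution is overridable, as the javadoc notes.
    class MyWorkerThread extends ForkJoinWorkerThread {
        final StringBuilder scratch = new StringBuilder(); // illustrative extra state
        MyWorkerThread(ForkJoinPool pool) {
            super(pool);
        }
    }

    // The custom factory needed so a pool creates the subclass.
    class MyWorkerFactory implements ForkJoinPool.ForkJoinWorkerThreadFactory {
        public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
            return new MyWorkerThread(pool);
        }
    }

    // Usage: supply the factory when constructing the pool, e.g.
    //   new ForkJoinPool(Runtime.getRuntime().availableProcessors(),
    //                    new MyWorkerFactory(), null, false);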
23   public class ForkJoinWorkerThread extends Thread {
24      /*
30     * Overview:
31     *
25       * ForkJoinWorkerThreads are managed by ForkJoinPools and perform
26 <     * ForkJoinTasks. This class includes bookkeeping in support of
27 <     * worker activation, suspension, and lifecycle control described
35 <     * in more detail in the internal documentation of class
36 <     * ForkJoinPool. And as described further below, this class also
37 <     * includes special-cased support for some ForkJoinTask
38 <     * methods. But the main mechanics involve work-stealing:
39 <     *
40 <     * Work-stealing queues are special forms of Deques that support
41 <     * only three of the four possible end-operations -- push, pop,
42 <     * and deq (aka steal), under the further constraints that push
43 <     * and pop are called only from the owning thread, while deq may
44 <     * be called from other threads.  (If you are unfamiliar with
45 <     * them, you probably want to read Herlihy and Shavit's book "The
46 <     * Art of Multiprocessor programming", chapter 16 describing these
47 <     * in more detail before proceeding.)  The main work-stealing
48 <     * queue design is roughly similar to those in the papers "Dynamic
49 <     * Circular Work-Stealing Deque" by Chase and Lev, SPAA 2005
50 <     * (http://research.sun.com/scalable/pubs/index.html) and
51 <     * "Idempotent work stealing" by Michael, Saraswat, and Vechev,
52 <     * PPoPP 2009 (http://portal.acm.org/citation.cfm?id=1504186).
53 <     * The main differences ultimately stem from gc requirements that
54 <     * we null out taken slots as soon as we can, to maintain as small
55 <     * a footprint as possible even in programs generating huge
56 <     * numbers of tasks. To accomplish this, we shift the CAS
57 <     * arbitrating pop vs deq (steal) from being on the indices
58 <     * ("base" and "sp") to the slots themselves (mainly via method
59 <     * "casSlotNull()"). So, both a successful pop and deq mainly
60 <     * entail a CAS of a slot from non-null to null.  Because we rely
61 <     * on CASes of references, we do not need tag bits on base or sp.
62 <     * They are simple ints as used in any circular array-based queue
63 <     * (see for example ArrayDeque).  Updates to the indices must
64 <     * still be ordered in a way that guarantees that sp == base means
65 <     * the queue is empty, but otherwise may err on the side of
66 <     * possibly making the queue appear nonempty when a push, pop, or
67 <     * deq have not fully committed. Note that this means that the deq
68 <     * operation, considered individually, is not wait-free. One thief
69 <     * cannot successfully continue until another in-progress one (or,
70 <     * if previously empty, a push) completes.  However, in the
71 <     * aggregate, we ensure at least probabilistic non-blockingness.
72 <     * If an attempted steal fails, a thief always chooses a different
73 <     * random victim target to try next. So, in order for one thief to
74 <     * progress, it suffices for any in-progress deq or new push on
75 <     * any empty queue to complete. One reason this works well here is
76 <     * that apparently-nonempty often means soon-to-be-stealable,
77 <     * which gives threads a chance to set activation status if
78 <     * necessary before stealing.
79 <     *
80 <     * This approach also enables support for "async mode" where local
81 <     * task processing is in FIFO, not LIFO order; simply by using a
82 <     * version of deq rather than pop when locallyFifo is true (as set
83 <     * by the ForkJoinPool).  This allows use in message-passing
84 <     * frameworks in which tasks are never joined.
26 >     * ForkJoinTasks. For explanation, see the internal documentation
27 >     * of class ForkJoinPool.
28       *
29 <     * When a worker would otherwise be blocked waiting to join a
30 <     * task, it first tries a form of linear helping: Each worker
31 <     * records (in field currentSteal) the most recent task it stole
32 <     * from some other worker. Plus, it records (in field currentJoin)
33 <     * the task it is currently actively joining. Method joinTask uses
34 <     * these markers to try to find a worker to help (i.e., steal back
92 <     * a task from and execute it) that could hasten completion of the
93 <     * actively joined task. In essence, the joiner executes a task
94 <     * that would be on its own local deque had the to-be-joined task
95 <     * not been stolen. This may be seen as a conservative variant of
96 <     * the approach in Wagner & Calder "Leapfrogging: a portable
97 <     * technique for implementing efficient futures" SIGPLAN Notices,
98 <     * 1993 (http://portal.acm.org/citation.cfm?id=155354). It differs
99 <     * in that: (1) We only maintain dependency links across workers
100 <     * upon steals, rather than use per-task bookkeeping.  This may
101 <     * require a linear scan of workers array to locate stealers, but
102 <     * usually doesn't because stealers leave hints (that may become
103 <     * stale/wrong) of where to locate them. This isolates cost to
104 <     * when it is needed, rather than adding to per-task overhead.
105 <     * (2) It is "shallow", ignoring nesting and potentially cyclic
106 <     * mutual steals.  (3) It is intentionally racy: field currentJoin
107 <     * is updated only while actively joining, which means that we
108 <     * miss links in the chain during long-lived tasks, GC stalls etc
109 <     * (which is OK since blocking in such cases is usually a good
110 <     * idea).  (4) We bound the number of attempts to find work (see
111 <     * MAX_HELP_DEPTH) and fall back to suspending the worker and if
112 <     * necessary replacing it with a spare (see
113 <     * ForkJoinPool.tryAwaitJoin).
114 <     *
115 <     * Efficient implementation of these algorithms currently relies
116 <     * on an uncomfortable amount of "Unsafe" mechanics. To maintain
117 <     * correct orderings, reads and writes of variable base require
118 <     * volatile ordering.  Variable sp does not require volatile
119 <     * writes but still needs store-ordering, which we accomplish by
120 <     * pre-incrementing sp before filling the slot with an ordered
121 <     * store.  (Pre-incrementing also enables backouts used in
122 <     * joinTask.)  Because they are protected by volatile base reads,
123 <     * reads of the queue array and its slots by other threads do not
124 <     * need volatile load semantics, but writes (in push) require
125 <     * store order and CASes (in pop and deq) require (volatile) CAS
126 <     * semantics.  (Michael, Saraswat, and Vechev's algorithm has
127 <     * similar properties, but without support for nulling slots.)
128 <     * Since these combinations aren't supported using ordinary
129 <     * volatiles, the only way to accomplish these efficiently is to
130 <     * use direct Unsafe calls. (Using external AtomicIntegers and
131 <     * AtomicReferenceArrays for the indices and array is
132 <     * significantly slower because of memory locality and indirection
133 <     * effects.)
134 <     *
135 <     * Further, performance on most platforms is very sensitive to
136 <     * placement and sizing of the (resizable) queue array.  Even
137 <     * though these queues don't usually become all that big, the
138 <     * initial size must be large enough to counteract cache
139 <     * contention effects across multiple queues (especially in the
140 <     * presence of GC cardmarking). Also, to improve thread-locality,
141 <     * queues are initialized after starting.  All together, these
142 <     * low-level implementation choices produce as much as a factor of
143 <     * 4 performance improvement compared to naive implementations,
144 <     * and enable the processing of billions of tasks per second,
145 <     * sometimes at the expense of ugliness.
146 <     */
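[Editorial aside, not part of either revision: a much-simplified sketch of the push/pop/deq (steal) protocol the removed overview above describes, with the pop-vs-steal race arbitrated by CASing a slot from non-null to null. It uses AtomicReferenceArray and a volatile sp purely for readability, ignoring the Unsafe ordered stores, resizing, and padding discussed above; all names are illustrative.]

    import java.util.concurrent.atomic.AtomicReferenceArray;

    class SimpleWorkQueue<T> {
        final AtomicReferenceArray<T> slots = new AtomicReferenceArray<T>(1 << 13);
        volatile int base;   // next slot to steal from (advanced by thieves)
        volatile int sp;     // next slot to push to (written only by the owner)

        void push(T t) {                       // owner only; resizing omitted
            slots.set(sp & (slots.length() - 1), t);
            sp = sp + 1;                       // publish only after the slot is written
        }

        T pop() {                              // owner only, LIFO
            int s;
            while ((s = sp) != base) {
                int i = (s - 1) & (slots.length() - 1);
                T t = slots.get(i);
                if (t == null)                 // lost the last element to a stealer
                    return null;
                if (slots.compareAndSet(i, t, null)) {
                    sp = s - 1;                // commit the pop
                    return t;
                }
            }
            return null;
        }

        T steal() {                            // called by other threads, FIFO
            int b = base;
            if (b != sp) {
                int i = b & (slots.length() - 1);
                T t = slots.get(i);
                if (t != null && base == b && slots.compareAndSet(i, t, null)) {
                    base = b + 1;
                    return t;
                }
            }
            return null;                       // empty or contended; caller retries elsewhere
        }
    }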
147 <
148 <    /**
149 <     * Generator for initial random seeds for random victim
150 <     * selection. This is used only to create initial seeds. Random
151 <     * steals use a cheaper xorshift generator per steal attempt. We
152 <     * expect only rare contention on seedGenerator, so just use a
153 <     * plain Random.
154 <     */
155 <    private static final Random seedGenerator = new Random();
156 <
157 <    /**
158 <     * The maximum stolen->joining link depth allowed in helpJoinTask.
159 <     * Depths for legitimate chains are unbounded, but we use a fixed
160 <     * constant to avoid (otherwise unchecked) cycles and bound
161 <     * staleness of traversal parameters at the expense of sometimes
162 <     * blocking when we could be helping.
163 <     */
164 <    private static final int MAX_HELP_DEPTH = 8;
165 <
166 <    /**
167 <     * The wakeup interval (in nanoseconds) for the oldest worker
168 <     * suspended as spare.  On each wakeup not signalled by a
169 <     * resumption, it may ask the pool to reduce the number of spares.
170 <     */
171 <    private static final long TRIM_RATE_NANOS =
172 <        5L * 1000L * 1000L * 1000L; // 5sec
173 <
174 <    /**
175 <     * Capacity of work-stealing queue array upon initialization.
176 <     * Must be a power of two. Initial size must be at least 4, but is
177 <     * padded to minimize cache effects.
178 <     */
179 <    private static final int INITIAL_QUEUE_CAPACITY = 1 << 13;
180 <
181 <    /**
182 <     * Maximum work-stealing queue array size.  Must be less than or
183 <     * equal to 1 << 28 to ensure lack of index wraparound. (This
184 <     * is less than usual bounds, because we need leftshift by 3
185 <     * to be in int range).
186 <     */
187 <    private static final int MAXIMUM_QUEUE_CAPACITY = 1 << 28;
188 <
189 <    /**
190 <     * The pool this thread works in. Accessed directly by ForkJoinTask.
191 <     */
192 <    final ForkJoinPool pool;
193 <
194 <    /**
195 <     * The work-stealing queue array. Size must be a power of two.
196 <     * Initialized in onStart, to improve memory locality.
197 <     */
198 <    private ForkJoinTask<?>[] queue;
199 <
200 <    /**
201 <     * Index (mod queue.length) of least valid queue slot, which is
202 <     * always the next position to steal from if nonempty.
203 <     */
204 <    private volatile int base;
205 <
206 <    /**
207 <     * Index (mod queue.length) of next queue slot to push to or pop
208 <     * from. It is written only by owner thread, and accessed by other
209 <     * threads only after reading (volatile) base.  Both sp and base
210 <     * are allowed to wrap around on overflow, but (sp - base) still
211 <     * estimates size.
212 <     */
213 <    private int sp;
214 <
215 <    /**
216 <     * The index of most recent stealer, used as a hint to avoid
217 <     * traversal in method helpJoinTask. This is only a hint because a
218 <     * worker might have had multiple steals and this only holds one
219 <     * of them (usually the most current). Declared non-volatile,
220 <     * relying on other prevailing sync to keep reasonably current.
221 <     */
222 <    private int stealHint;
223 <
224 <    /**
225 <     * Run state of this worker. In addition to the usual run levels,
226 <     * tracks if this worker is suspended as a spare, and if it was
227 <     * killed (trimmed) while suspended. However, "active" status is
228 <     * maintained separately and modified only in conjunction with
229 <     * CASes of the pool's runState (which are currently sadly manually
230 <     * inlined for performance.)
231 <     */
232 <    private volatile int runState;
233 <
234 <    private static final int TERMINATING = 0x01;
235 <    private static final int TERMINATED  = 0x02;
236 <    private static final int SUSPENDED   = 0x04; // inactive spare
237 <    private static final int TRIMMED     = 0x08; // killed while suspended
238 <
239 <    /**
240 <     * Number of steals, transferred and reset in pool callbacks
241 <     * when idle. Accessed directly by pool.
242 <     */
243 <    int stealCount;
244 <
245 <    /**
246 <     * Seed for random number generator for choosing steal victims.
247 <     * Uses Marsaglia xorshift. Must be initialized as nonzero.
248 <     */
249 <    private int seed;
250 <
251 <    /**
252 <     * Activity status. When true, this worker is considered active.
253 <     * Accessed directly by pool.  Must be false upon construction.
254 <     */
255 <    boolean active;
256 <
257 <    /**
258 <     * True if use local fifo, not default lifo, for local polling.
259 <     * Shadows value from ForkJoinPool.
260 <     */
261 <    private final boolean locallyFifo;
262 <
263 <    /**
264 <     * Index of this worker in pool array. Set once by pool before
265 <     * running, and accessed directly by pool to locate this worker in
266 <     * its workers array.
267 <     */
268 <    int poolIndex;
269 <
270 <    /**
271 <     * The last pool event waited for. Accessed only by pool in
272 <     * callback methods invoked within this thread.
273 <     */
274 <    int lastEventCount;
275 <
276 <    /**
277 <     * Encoded index and event count of next event waiter. Used only
278 <     * by ForkJoinPool for managing event waiters.
279 <     */
280 <    volatile long nextWaiter;
281 <
282 <    /**
283 <     * Number of times this thread suspended as spare
284 <     */
285 <    int spareCount;
286 <
287 <    /**
288 <     * Encoded index and count of next spare waiter. Used only
289 <     * by ForkJoinPool for managing spares.
290 <     */
291 <    volatile int nextSpare;
292 <
293 <    /**
294 <     * The task currently being joined, set only when actively trying
295 <     * to helpStealer. Written only by current thread, but read by
296 <     * others.
29 >     * This class just maintains links to its pool and WorkQueue.  The
30 >     * pool field is set immediately upon construction, but the
31 >     * workQueue field is not set until a call to registerWorker
32 >     * completes. This leads to a visibility race, that is tolerated
33 >     * by requiring that the workQueue field is only accessed by the
34 >     * owning thread.
35       */
298    private volatile ForkJoinTask<?> currentJoin;
36  
37 <    /**
38 <     * The task most recently stolen from another worker (or
302 <     * submission queue).  Not volatile because always read/written in
303 <     * presence of related volatiles in those cases where it matters.
304 <     */
305 <    private ForkJoinTask<?> currentSteal;
37 >    final ForkJoinPool pool;                // the pool this thread works in
38 >    final ForkJoinPool.WorkQueue workQueue; // work-stealing mechanics
39  
40      /**
41       * Creates a ForkJoinWorkerThread operating in the given pool.
# Line 311 | Line 44 | public class ForkJoinWorkerThread extend
44       * @throws NullPointerException if pool is null
45       */
46      protected ForkJoinWorkerThread(ForkJoinPool pool) {
47 +        // Use a placeholder until a useful name can be set in registerWorker
48 +        super("aForkJoinWorkerThread");
49          this.pool = pool;
50 <        this.locallyFifo = pool.locallyFifo;
316 <        setDaemon(true);
317 <        // To avoid exposing construction details to subclasses,
318 <        // remaining initialization is in start() and onStart()
50 >        this.workQueue = pool.registerWorker(this);
51      }
52  
53      /**
322     * Performs additional initialization and starts this thread
323     */
324    final void start(int poolIndex, UncaughtExceptionHandler ueh) {
325        this.poolIndex = poolIndex;
326        if (ueh != null)
327            setUncaughtExceptionHandler(ueh);
328        start();
329    }
330
331    // Public/protected methods
332
333    /**
54       * Returns the pool hosting this thread.
55       *
56       * @return the pool
# Line 349 | Line 69 | public class ForkJoinWorkerThread extend
69       * @return the index number
70       */
71      public int getPoolIndex() {
72 <        return poolIndex;
72 >        return workQueue.poolIndex;
73      }
74  
75      /**
76       * Initializes internal state after construction but before
77       * processing any tasks. If you override this method, you must
78 <     * invoke super.onStart() at the beginning of the method.
78 >     * invoke {@code super.onStart()} at the beginning of the method.
79       * Initialization requires care: Most fields must have legal
80       * default values, to ensure that attempted accesses from other
81       * threads work correctly even before this thread starts
82       * processing tasks.
83       */
84      protected void onStart() {
365        int rs = seedGenerator.nextInt();
366        seed = rs == 0? 1 : rs; // seed must be nonzero
367
368        // Allocate name string and arrays in this thread
369        String pid = Integer.toString(pool.getPoolNumber());
370        String wid = Integer.toString(poolIndex);
371        setName("ForkJoinPool-" + pid + "-worker-" + wid);
372
373        queue = new ForkJoinTask<?>[INITIAL_QUEUE_CAPACITY];
85      }
86  
87      /**
# Line 382 | Line 93 | public class ForkJoinWorkerThread extend
93       * to an unrecoverable error, or {@code null} if completed normally
94       */
95      protected void onTermination(Throwable exception) {
385        try {
386            ForkJoinPool p = pool;
387            if (active) {
388                int a; // inline p.tryDecrementActiveCount
389                active = false;
390                do {} while(!UNSAFE.compareAndSwapInt
391                            (p, poolRunStateOffset, a = p.runState, a - 1));
392            }
393            cancelTasks();
394            setTerminated();
395            p.workerTerminated(this);
396        } catch (Throwable ex) {        // Shouldn't ever happen
397            if (exception == null)      // but if so, at least rethrown
398                exception = ex;
399        } finally {
400            if (exception != null)
401                UNSAFE.throwException(exception);
402        }
96      }
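[Editorial aside, not part of either revision: a sketch of the lifecycle hooks an application subclass can override. Per this class's contract an override calls super.onStart() first and super.onTermination(exception) last; the timing field and output are illustrative.]

    class TimedWorkerThread extends ForkJoinWorkerThread {
        long startNanos;                               // illustrative bookkeeping
        TimedWorkerThread(ForkJoinPool pool) { super(pool); }

        protected void onStart() {
            super.onStart();                           // required: set up internal state first
            startNanos = System.nanoTime();
        }

        protected void onTermination(Throwable exception) {
            try {
                long elapsed = System.nanoTime() - startNanos;
                System.err.println(getName() + " terminating after " + elapsed + "ns" +
                                   (exception != null ? " due to " + exception : ""));
            } finally {
                super.onTermination(exception);        // required: complete cleanup last
            }
        }
    }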
97  
98      /**
99       * This method is required to be public, but should never be
100       * called explicitly. It performs the main run loop to execute
101 <     * ForkJoinTasks.
101 >     * {@link ForkJoinTask}s.
102       */
103      public void run() {
104          Throwable exception = null;
105          try {
106              onStart();
107 <            mainLoop();
107 >            pool.runWorker(workQueue);
108          } catch (Throwable ex) {
109              exception = ex;
110          } finally {
418            onTermination(exception);
419        }
420    }
421
422    // helpers for run()
423
424    /**
425     * Find and execute tasks and check status while running
426     */
427    private void mainLoop() {
428        int misses = 0; // track consecutive times failed to find work; max 2
429        ForkJoinPool p = pool;
430        for (;;) {
431            p.preStep(this, misses);
432            if (runState != 0)
433                break;
434            misses = ((tryExecSteal() || tryExecSubmission()) ? 0 :
435                      (misses < 2 ? misses + 1 : 2));
436        }
437    }
438
439    /**
440     * Try to steal a task and execute it
441     *
442     * @return true if ran a task
443     */
444    private boolean tryExecSteal() {
445        ForkJoinTask<?> t;
446        if ((t  = scan()) != null) {
447            t.quietlyExec();
448            currentSteal = null;
449            if (sp != base)
450                execLocalTasks();
451            return true;
452        }
453        return false;
454    }
455
456    /**
457     * If a submission exists, try to activate and run it;
458     *
459     * @return true if ran a task
460     */
461    private boolean tryExecSubmission() {
462        ForkJoinPool p = pool;
463        while (p.hasQueuedSubmissions()) {
464            ForkJoinTask<?> t; int a;
465            if (active || // ugly/hacky: inline p.tryIncrementActiveCount
466                (active = UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
467                                                   a = p.runState, a + 1))) {
468                if ((t = p.pollSubmission()) != null) {
469                    currentSteal = t;
470                    t.quietlyExec();
471                    currentSteal = null;
472                    if (sp != base)
473                        execLocalTasks();
474                    return true;
475                }
476            }
477        }
478        return false;
479    }
480
481    /**
482     * Runs local tasks until queue is empty or shut down.  Call only
483     * while active.
484     */
485    private void execLocalTasks() {
486        while (runState == 0) {
487            ForkJoinTask<?> t = locallyFifo? locallyDeqTask() : popTask();
488            if (t != null)
489                t.quietlyExec();
490            else if (sp == base)
491                break;
492        }
493    }
494
495    /*
496     * Intrinsics-based atomic writes for queue slots. These are
497     * basically the same as methods in AtomicReferenceArray, but
498     * specialized for (1) ForkJoinTask elements (2) requirement that
499     * nullness and bounds checks have already been performed by
500     * callers and (3) effective offsets are known not to overflow
501     * from int to long (because of MAXIMUM_QUEUE_CAPACITY). We don't
502     * need corresponding version for reads: plain array reads are OK
503     * because they are protected by other volatile reads and are
504     * confirmed by CASes.
505     *
506     * Most uses don't actually call these methods, but instead contain
507     * inlined forms that enable more predictable optimization.  We
508     * don't define the version of write used in pushTask at all, but
509     * instead inline there a store-fenced array slot write.
510     */
511
512    /**
513     * CASes slot i of array q from t to null. Caller must ensure q is
514     * non-null and index is in range.
515     */
516    private static final boolean casSlotNull(ForkJoinTask<?>[] q, int i,
517                                             ForkJoinTask<?> t) {
518        return UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null);
519    }
520
521    /**
522     * Performs a volatile write of the given task at given slot of
523     * array q.  Caller must ensure q is non-null and index is in
524     * range. This method is used only during resets and backouts.
525     */
526    private static final void writeSlot(ForkJoinTask<?>[] q, int i,
527                                              ForkJoinTask<?> t) {
528        UNSAFE.putObjectVolatile(q, (i << qShift) + qBase, t);
529    }
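[Editorial aside, not part of either revision: the AtomicReferenceArray forms that the comment above says these intrinsics are basically equivalent to (functionally the same, just slower due to indirection and checks). The parameter q is a hypothetical AtomicReferenceArray<ForkJoinTask<?>> counterpart of the plain queue array.]

    import java.util.concurrent.atomic.AtomicReferenceArray;
    import jsr166y.ForkJoinTask;

    class SlotOps {
        static boolean casSlotNull(AtomicReferenceArray<ForkJoinTask<?>> q,
                                   int i, ForkJoinTask<?> t) {
            return q.compareAndSet(i, t, null);  // CAS slot i from t to null
        }
        static void writeSlot(AtomicReferenceArray<ForkJoinTask<?>> q,
                              int i, ForkJoinTask<?> t) {
            q.set(i, t);                         // volatile write of slot i
        }
    }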
530
531    // queue methods
532
533    /**
534     * Pushes a task. Call only from this thread.
535     *
536     * @param t the task. Caller must ensure non-null.
537     */
538    final void pushTask(ForkJoinTask<?> t) {
539        ForkJoinTask<?>[] q = queue;
540        int mask = q.length - 1; // implicit assert q != null
541        int s = sp++;            // ok to increment sp before slot write
542        UNSAFE.putOrderedObject(q, ((s & mask) << qShift) + qBase, t);
543        if ((s -= base) == 0)
544            pool.signalWork();   // was empty
545        else if (s == mask)
546            growQueue();         // is full
547    }
548
549    /**
550     * Tries to take a task from the base of the queue, failing if
551     * empty or contended. Note: Specializations of this code appear
552     * in locallyDeqTask and elsewhere.
553     *
554     * @return a task, or null if none or contended
555     */
556    final ForkJoinTask<?> deqTask() {
557        ForkJoinTask<?> t;
558        ForkJoinTask<?>[] q;
559        int b, i;
560        if (sp != (b = base) &&
561            (q = queue) != null && // must read q after b
562            (t = q[i = (q.length - 1) & b]) != null && base == b &&
563            UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase, t, null)) {
564            base = b + 1;
565            return t;
566        }
567        return null;
568    }
569
570    /**
571     * Tries to take a task from the base of own queue. Assumes active
572     * status.  Called only by current thread.
573     *
574     * @return a task, or null if none
575     */
576    final ForkJoinTask<?> locallyDeqTask() {
577        ForkJoinTask<?>[] q = queue;
578        if (q != null) {
579            ForkJoinTask<?> t;
580            int b, i;
581            while (sp != (b = base)) {
582                if ((t = q[i = (q.length - 1) & b]) != null && base == b &&
583                    UNSAFE.compareAndSwapObject(q, (i << qShift) + qBase,
584                                                t, null)) {
585                    base = b + 1;
586                    return t;
587                }
588            }
589        }
590        return null;
591    }
592
593    /**
594     * Returns a popped task, or null if empty. Assumes active status.
595     * Called only by current thread.
596     */
597    private ForkJoinTask<?> popTask() {
598        ForkJoinTask<?>[] q = queue;
599        if (q != null) {
600            int s;
601            while ((s = sp) != base) {
602                int i = (q.length - 1) & --s;
603                long u = (i << qShift) + qBase; // raw offset
604                ForkJoinTask<?> t = q[i];
605                if (t == null)   // lost to stealer
606                    break;
607                if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
608                    sp = s; // putOrderedInt may encourage more timely write
609                    // UNSAFE.putOrderedInt(this, spOffset, s);
610                    return t;
611                }
612            }
613        }
614        return null;
615    }
616
617    /**
618     * Specialized version of popTask to pop only if topmost element
619     * is the given task. Called only by current thread while
620     * active.
621     *
622     * @param t the task. Caller must ensure non-null.
623     */
624    final boolean unpushTask(ForkJoinTask<?> t) {
625        int s;
626        ForkJoinTask<?>[] q = queue;
627        if ((s = sp) != base && q != null &&
628            UNSAFE.compareAndSwapObject
629            (q, (((q.length - 1) & --s) << qShift) + qBase, t, null)) {
630            sp = s;
631            // UNSAFE.putOrderedInt(this, spOffset, s);
632            return true;
633        }
634        return false;
635    }
636
637    /**
638     * Returns next task or null if empty or contended
639     */
640    final ForkJoinTask<?> peekTask() {
641        ForkJoinTask<?>[] q = queue;
642        if (q == null)
643            return null;
644        int mask = q.length - 1;
645        int i = locallyFifo ? base : (sp - 1);
646        return q[i & mask];
647    }
648
649    /**
650     * Doubles queue array size. Transfers elements by emulating
651     * steals (deqs) from old array and placing, oldest first, into
652     * new array.
653     */
654    private void growQueue() {
655        ForkJoinTask<?>[] oldQ = queue;
656        int oldSize = oldQ.length;
657        int newSize = oldSize << 1;
658        if (newSize > MAXIMUM_QUEUE_CAPACITY)
659            throw new RejectedExecutionException("Queue capacity exceeded");
660        ForkJoinTask<?>[] newQ = queue = new ForkJoinTask<?>[newSize];
661
662        int b = base;
663        int bf = b + oldSize;
664        int oldMask = oldSize - 1;
665        int newMask = newSize - 1;
666        do {
667            int oldIndex = b & oldMask;
668            ForkJoinTask<?> t = oldQ[oldIndex];
669            if (t != null && !casSlotNull(oldQ, oldIndex, t))
670                t = null;
671            writeSlot(newQ, b & newMask, t);
672        } while (++b != bf);
673        pool.signalWork();
674    }
675
676    /**
677     * Computes next value for random victim probe in scan().  Scans
678     * don't require a very high quality generator, but also not a
679     * crummy one.  Marsaglia xor-shift is cheap and works well enough.
680     * Note: This is manually inlined in scan()
681     */
682    private static final int xorShift(int r) {
683        r ^= r << 13;
684        r ^= r >>> 17;
685        return r ^ (r << 5);
686    }
687
688    /**
689     * Tries to steal a task from another worker. Starts at a random
690     * index of workers array, and probes workers until finding one
691     * with non-empty queue or finding that all are empty.  It
692     * randomly selects the first n probes. If these are empty, it
693     * resorts to a circular sweep, which is necessary to accurately
694     * set active status. (The circular sweep uses steps of
695     * approximately half the array size plus 1, to avoid bias
696     * stemming from leftmost packing of the array in ForkJoinPool.)
697     *
698     * This method must be both fast and quiet -- usually avoiding
699     * memory accesses that could disrupt cache sharing etc other than
700     * those needed to check for and take tasks (or to activate if not
701     * already active). This accounts for, among other things,
702     * updating random seed in place without storing it until exit.
703     *
704     * @return a task, or null if none found
705     */
706    private ForkJoinTask<?> scan() {
707        ForkJoinPool p = pool;
708        ForkJoinWorkerThread[] ws;        // worker array
709        int n;                            // upper bound of #workers
710        if ((ws = p.workers) != null && (n = ws.length) > 1) {
711            boolean canSteal = active;    // shadow active status
712            int r = seed;                 // extract seed once
713            int mask = n - 1;
714            int j = -n;                   // loop counter
715            int k = r;                    // worker index, random if j < 0
716            for (;;) {
717                ForkJoinWorkerThread v = ws[k & mask];
718                r ^= r << 13; r ^= r >>> 17; r ^= r << 5; // inline xorshift
719                if (v != null && v.base != v.sp) {
720                    ForkJoinTask<?>[] q; int b, a;
721                    if ((canSteal ||      // Ugly/hacky: inline
722                         (canSteal = active =  // p.tryIncrementActiveCount
723                          UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
724                                                   a = p.runState, a + 1))) &&
725                        (q = v.queue) != null && (b = v.base) != v.sp) {
726                        int i = (q.length - 1) & b;
727                        long u = (i << qShift) + qBase; // raw offset
728                        ForkJoinTask<?> t = q[i];
729                        if (v.base == b && t != null &&
730                            UNSAFE.compareAndSwapObject(q, u, t, null)) {
731                            int pid = poolIndex;
732                            currentSteal = t;
733                            v.stealHint = pid;
734                            v.base = b + 1;
735                            seed = r;
736                            ++stealCount;
737                            return t;
738                        }
739                    }
740                    j = -n;
741                    k = r;                // restart on contention
742                }
743                else if (++j <= 0)
744                    k = r;
745                else if (j <= n)
746                    k += (n >>> 1) | 1;
747                else
748                    break;
749            }
750        }
751        return null;
752    }
753
754    // Run State management
755
756    // status check methods used mainly by ForkJoinPool
757    final boolean isRunning()     { return runState == 0; }
758    final boolean isTerminating() { return (runState & TERMINATING) != 0; }
759    final boolean isTerminated()  { return (runState & TERMINATED) != 0; }
760    final boolean isSuspended()   { return (runState & SUSPENDED) != 0; }
761    final boolean isTrimmed()     { return (runState & TRIMMED) != 0; }
762
763    /**
764     * Sets state to TERMINATING. Does NOT unpark or interrupt
765     * to wake up if currently blocked.
766     */
767    final void shutdown() {
768        for (;;) {
769            int s = runState;
770            if ((s & (TERMINATING|TERMINATED)) != 0)
771                break;
772            if ((s & SUSPENDED) != 0) { // kill and wakeup if suspended
773                if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
774                                             (s & ~SUSPENDED) |
775                                             (TRIMMED|TERMINATING)))
776                    break;
777            }
778            else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
779                                              s | TERMINATING))
780                break;
781        }
782    }
783
784    /**
785     * Sets state to TERMINATED. Called only by onTermination()
786     */
787    private void setTerminated() {
788        int s;
789        do {} while (!UNSAFE.compareAndSwapInt(this, runStateOffset,
790                                               s = runState,
791                                               s | (TERMINATING|TERMINATED)));
792    }
793
794    /**
795     * If suspended, tries to set status to unsuspended.
796     *
797     * @return true if successful
798     */
799    final boolean tryUnsuspend() {
800        int s;
801        while (((s = runState) & SUSPENDED) != 0) {
802            if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
803                                         s & ~SUSPENDED))
804                return true;
805        }
806        return false;
807    }
808
809    /**
810     * Sets suspended status and blocks as spare until resumed
811     * or shutdown.
812     */
813    final void suspendAsSpare() {
814        for (;;) {                  // set suspended unless terminating
815            int s = runState;
816            if ((s & TERMINATING) != 0) { // must kill
817                if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
818                                             s | (TRIMMED | TERMINATING)))
819                    return;
820            }
821            else if (UNSAFE.compareAndSwapInt(this, runStateOffset, s,
822                                              s | SUSPENDED))
823                break;
824        }
825        ForkJoinPool p = pool;
826        p.pushSpare(this);
827        lastEventCount = 0;         // reset upon resume
828        while ((runState & SUSPENDED) != 0) {
829            if (p.tryAccumulateStealCount(this)) {
830                boolean untimed = nextSpare != 0;
831                long startTime = untimed? 0 : System.nanoTime();
832                interrupted();          // clear/ignore interrupts
833                if ((runState & SUSPENDED) == 0)
834                    break;
835                if (untimed)     // untimed
836                    LockSupport.park(this);
837                else {
838                    LockSupport.parkNanos(this, TRIM_RATE_NANOS);
839                    if ((runState & SUSPENDED) == 0)
840                        break;
841                    if (System.nanoTime() - startTime >= TRIM_RATE_NANOS)
842                        p.tryShutdownSpare();
843                }
844            }
845        }
846    }
847
848    // Misc support methods for ForkJoinPool
849
850    /**
851     * Returns an estimate of the number of tasks in the queue.  Also
852     * used by ForkJoinTask.
853     */
854    final int getQueueSize() {
855        int n; // external calls must read base first
856        return (n = -base + sp) <= 0 ? 0 : n;
857    }
858
859    /**
860     * Removes and cancels all tasks in queue.  Can be called from any
861     * thread.
862     */
863    final void cancelTasks() {
864        ForkJoinTask<?> cj = currentJoin; // try to cancel ongoing tasks
865        if (cj != null) {
866            currentJoin = null;
867            cj.cancelIgnoringExceptions();
868            try {
869                this.interrupt(); // awaken wait
870            } catch (SecurityException ignore) {
871            }
872        }
873        ForkJoinTask<?> cs = currentSteal;
874        if (cs != null) {
875            currentSteal = null;
876            cs.cancelIgnoringExceptions();
877        }
878        while (base != sp) {
879            ForkJoinTask<?> t = deqTask();
880            if (t != null)
881                t.cancelIgnoringExceptions();
882        }
883    }
884
885    /**
886     * Drains tasks to given collection c.
887     *
888     * @return the number of tasks drained
889     */
890    final int drainTasksTo(Collection<? super ForkJoinTask<?>> c) {
891        int n = 0;
892        while (base != sp) {
893            ForkJoinTask<?> t = deqTask();
894            if (t != null) {
895                c.add(t);
896                ++n;
897            }
898        }
899        return n;
900    }
901
902    // Support methods for ForkJoinTask
903
904    /**
905     * Gets and removes a local task.
906     *
907     * @return a task, if available
908     */
909    final ForkJoinTask<?> pollLocalTask() {
910        ForkJoinPool p = pool;
911        while (sp != base) {
912            int a; // inline p.tryIncrementActiveCount
913            if (active ||
914                (active = UNSAFE.compareAndSwapInt(p, poolRunStateOffset,
915                                                   a = p.runState, a + 1)))
916                return locallyFifo? locallyDeqTask() : popTask();
917        }
918        return null;
919    }
920
921    /**
922     * Gets and removes a local or stolen task.
923     *
924     * @return a task, if available
925     */
926    final ForkJoinTask<?> pollTask() {
927        ForkJoinTask<?> t = pollLocalTask();
928        if (t == null) {
929            t = scan();
930            currentSteal = null; // cannot retain/track/help
931        }
932        return t;
933    }
934
935    /**
936     * Possibly runs some tasks and/or blocks, until task is done.
937     *
938     * @param joinMe the task to join
939     */
940    final void joinTask(ForkJoinTask<?> joinMe) {
941        // currentJoin only written by this thread; only need ordered store
942        ForkJoinTask<?> prevJoin = currentJoin;
943        UNSAFE.putOrderedObject(this, currentJoinOffset, joinMe);
944        if (sp != base)
945            localHelpJoinTask(joinMe);
946        if (joinMe.status >= 0)
947            pool.awaitJoin(joinMe, this);
948        UNSAFE.putOrderedObject(this, currentJoinOffset, prevJoin);
949    }
950
951    /**
952     * Run tasks in local queue until given task is done.
953     *
954     * @param joinMe the task to join
955     */
956    private void localHelpJoinTask(ForkJoinTask<?> joinMe) {
957        int s;
958        ForkJoinTask<?>[] q;
959        while (joinMe.status >= 0 && (s = sp) != base && (q = queue) != null) {
960            int i = (q.length - 1) & --s;
961            long u = (i << qShift) + qBase; // raw offset
962            ForkJoinTask<?> t = q[i];
963            if (t == null)  // lost to a stealer
964                break;
965            if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
966                /*
967                 * This recheck (and similarly in helpJoinTask)
968                 * handles cases where joinMe is independently
969                 * cancelled or forced even though there is other work
970                 * available. Back out of the pop by putting t back
971                 * into slot before we commit by writing sp.
972                 */
973                if (joinMe.status < 0) {
974                    UNSAFE.putObjectVolatile(q, u, t);
975                    break;
976                }
977                sp = s;
978                // UNSAFE.putOrderedInt(this, spOffset, s);
979                t.quietlyExec();
980            }
981        }
982    }
983
984    /**
985     * Tries to locate and help perform tasks for a stealer of the
986     * given task, or in turn one of its stealers.  Traces
987     * currentSteal->currentJoin links looking for a thread working on
988     * a descendant of the given task and with a non-empty queue to
989     * steal back and execute tasks from.
990     *
991     * The implementation is very branchy to cope with the potential
992     * inconsistencies or loops encountering chains that are stale,
993     * unknown, or of length greater than MAX_HELP_DEPTH links.  All
994     * of these cases are dealt with by just returning back to the
995     * caller, who is expected to retry if other join mechanisms also
996     * don't work out.
997     *
998     * @param joinMe the task to join
999     */
1000    final void helpJoinTask(ForkJoinTask<?> joinMe) {
1001        ForkJoinWorkerThread[] ws = pool.workers;
1002        int n; // need at least 2 workers
1003        if (ws != null && (n = ws.length) > 1 && joinMe.status >= 0) {
1004            ForkJoinTask<?> task = joinMe;        // base of chain
1005            ForkJoinWorkerThread thread = this;   // thread with stolen task
1006            for (int d = 0; d < MAX_HELP_DEPTH; ++d) { // chain length
1007                // Try to find v, the stealer of task, by first using hint
1008                ForkJoinWorkerThread v = ws[thread.stealHint & (n - 1)];
1009                if (v == null || v.currentSteal != task) {
1010                    for (int j = 0; ; ++j) {      // search array
1011                        if (j < n) {
1012                            if ((v = ws[j]) != null) {
1013                                if (task.status < 0)
1014                                    return;       // stale or done
1015                                if (v.currentSteal == task) {
1016                                    thread.stealHint = j;
1017                                    break;        // save hint for next time
1018                                }
1019                            }
1020                        }
1021                        else
1022                            return;               // no stealer
1023                    }
1024                }
1025                // Try to help v, using specialized form of deqTask
1026                int b;
1027                ForkJoinTask<?>[] q;
1028                while ((b = v.base) != v.sp && (q = v.queue) != null) {
1029                    int i = (q.length - 1) & b;
1030                    long u = (i << qShift) + qBase;
1031                    ForkJoinTask<?> t = q[i];
1032                    if (task.status < 0)
1033                        return;                   // stale or done
1034                    if (v.base == b) {
1035                        if (t == null)
1036                            return;               // producer stalled
1037                        if (UNSAFE.compareAndSwapObject(q, u, t, null)) {
1038                            if (joinMe.status < 0) {
1039                                UNSAFE.putObjectVolatile(q, u, t);
1040                                return;           // back out on cancel
1041                            }
1042                            int pid = poolIndex;
1043                            ForkJoinTask<?> prevSteal = currentSteal;
1044                            currentSteal = t;
1045                            v.stealHint = pid;
1046                            v.base = b + 1;
1047                            t.quietlyExec();
1048                            currentSteal = prevSteal;
1049                        }
1050                    }
1051                    if (joinMe.status < 0)
1052                        return;
1053                }
1054                // Try to descend to find v's stealer
1055                ForkJoinTask<?> next = v.currentJoin;
1056                if (task.status < 0 || next == null || next == task ||
1057                    joinMe.status < 0)
1058                    return;
1059                task = next;
1060                thread = v;
1061            }
1062        }
1063    }
1064
1065    /**
1066     * Returns an estimate of the number of tasks, offset by a
1067     * function of number of idle workers.
1068     *
1069     * This method provides a cheap heuristic guide for task
1070     * partitioning when programmers, frameworks, tools, or languages
1071     * have little or no idea about task granularity.  In essence by
1072     * offering this method, we ask users only about tradeoffs in
1073     * overhead vs expected throughput and its variance, rather than
1074     * how finely to partition tasks.
1075     *
1076     * In a steady state strict (tree-structured) computation, each
1077     * thread makes available for stealing enough tasks for other
1078     * threads to remain active. Inductively, if all threads play by
1079     * the same rules, each thread should make available only a
1080     * constant number of tasks.
1081     *
1082     * The minimum useful constant is just 1. But using a value of 1
1083     * would require immediate replenishment upon each steal to
1084     * maintain enough tasks, which is infeasible.  Further,
1085     * partitionings/granularities of offered tasks should minimize
1086     * steal rates, which in general means that threads nearer the top
1087     * of computation tree should generate more than those nearer the
1088     * bottom. In perfect steady state, each thread is at
1089     * approximately the same level of computation tree. However,
1090     * producing extra tasks amortizes the uncertainty of progress and
1091     * diffusion assumptions.
1092     *
1093     * So, users will want to use values larger, but not much larger
1094     * than 1 to both smooth over transient shortages and hedge
1095     * against uneven progress; as traded off against the cost of
1096     * extra task overhead. We leave the user to pick a threshold
1097     * value to compare with the results of this call to guide
1098     * decisions, but recommend values such as 3.
1099     *
1100     * When all threads are active, it is on average OK to estimate
1101     * surplus strictly locally. In steady-state, if one thread is
1102     * maintaining say 2 surplus tasks, then so are others. So we can
1103     * just use estimated queue length (although note that (sp - base)
1104     * can be an overestimate because of stealers lagging increments
1105     * of base).  However, this strategy alone leads to serious
1106     * mis-estimates in some non-steady-state conditions (ramp-up,
1107     * ramp-down, other stalls). We can detect many of these by
1108     * further considering the number of "idle" threads, that are
1109     * known to have zero queued tasks, so compensate by a factor of
1110     * (#idle/#active) threads.
1111     */
1112    final int getEstimatedSurplusTaskCount() {
1113        return sp - base - pool.idlePerActive();
1114    }
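[Editorial aside, not part of either revision: this heuristic is surfaced to applications as ForkJoinTask.getSurplusQueuedTaskCount(). Below is a sketch of the usage the comment recommends, comparing against a small threshold such as 3 when deciding whether to keep forking; the array-summing task and its cutoff constant are illustrative.]

    import jsr166y.ForkJoinTask;
    import jsr166y.RecursiveTask;

    class SumTask extends RecursiveTask<Long> {
        static final int SEQUENTIAL_CUTOFF = 1 << 12;  // illustrative
        final long[] a; final int lo, hi;
        SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

        protected Long compute() {
            // Stop splitting when the piece is small, or when this worker already
            // has a few surplus tasks queued that nobody is stealing.
            if (hi - lo <= SEQUENTIAL_CUTOFF ||
                ForkJoinTask.getSurplusQueuedTaskCount() > 3) {
                long s = 0;
                for (int i = lo; i < hi; ++i) s += a[i];
                return s;
            }
            int mid = (lo + hi) >>> 1;
            SumTask left = new SumTask(a, lo, mid);
            SumTask right = new SumTask(a, mid, hi);
            left.fork();                 // make the left half stealable
            long r = right.compute();    // keep working on the right half locally
            return r + left.join();
        }
    }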
1115
1116    /**
1117     * Runs tasks until {@code pool.isQuiescent()}.
1118     */
1119    final void helpQuiescePool() {
1120        for (;;) {
1121            ForkJoinTask<?> t = pollLocalTask();
1122            if (t != null || (t = scan()) != null) {
1123                t.quietlyExec();
1124                currentSteal = null;
1125            }
1126            else {
1127                ForkJoinPool p = pool;
1128                int a; // to inline CASes
1129                if (active) {
1130                    if (!UNSAFE.compareAndSwapInt
1131                        (p, poolRunStateOffset, a = p.runState, a - 1))
1132                        continue;   // retry later
1133                    active = false; // inactivate
1134                }
1135                if (p.isQuiescent()) {
1136                    active = true; // re-activate
1137                    do {} while(!UNSAFE.compareAndSwapInt
1138                                (p, poolRunStateOffset, a = p.runState, a+1));
1139                    return;
1140                }
1141            }
1142        }
1143    }
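[Editorial aside, not part of either revision: the public counterpart of this hook is ForkJoinTask.helpQuiesce(), useful in tasks that fork many children without joining each one individually. The fan-out action below is illustrative.]

    import jsr166y.RecursiveAction;

    class FanOutAction extends RecursiveAction {
        final Runnable[] jobs;                 // illustrative payload
        FanOutAction(Runnable[] jobs) { this.jobs = jobs; }

        protected void compute() {
            for (final Runnable job : jobs) {
                new RecursiveAction() {
                    protected void compute() { job.run(); }
                }.fork();                      // fire-and-forget subtasks
            }
            helpQuiesce();                     // run own and stolen tasks until all are done
        }
    }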
1144
1145    // Unsafe mechanics
1146
1147    private static final sun.misc.Unsafe UNSAFE = getUnsafe();
1148    private static final long spOffset =
1149        objectFieldOffset("sp", ForkJoinWorkerThread.class);
1150    private static final long runStateOffset =
1151        objectFieldOffset("runState", ForkJoinWorkerThread.class);
1152    private static final long currentJoinOffset =
1153        objectFieldOffset("currentJoin", ForkJoinWorkerThread.class);
1154    private static final long currentStealOffset =
1155        objectFieldOffset("currentSteal", ForkJoinWorkerThread.class);
1156    private static final long qBase =
1157        UNSAFE.arrayBaseOffset(ForkJoinTask[].class);
1158    private static final long poolRunStateOffset = // to inline CAS
1159        objectFieldOffset("runState", ForkJoinPool.class);
1160
1161    private static final int qShift;
1162
1163    static {
1164        int s = UNSAFE.arrayIndexScale(ForkJoinTask[].class);
1165        if ((s & (s-1)) != 0)
1166            throw new Error("data type scale not a power of two");
1167        qShift = 31 - Integer.numberOfLeadingZeros(s);
1168    }
1169
1170    private static long objectFieldOffset(String field, Class<?> klazz) {
1171        try {
1172            return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
1173        } catch (NoSuchFieldException e) {
1174            // Convert Exception to corresponding Error
1175            NoSuchFieldError error = new NoSuchFieldError(field);
1176            error.initCause(e);
1177            throw error;
1178        }
1179    }
1180
1181    /**
1182     * Returns a sun.misc.Unsafe.  Suitable for use in a 3rd party package.
1183     * Replace with a simple call to Unsafe.getUnsafe when integrating
1184     * into a jdk.
1185     *
1186     * @return a sun.misc.Unsafe
1187     */
1188    private static sun.misc.Unsafe getUnsafe() {
1189        try {
1190            return sun.misc.Unsafe.getUnsafe();
1191        } catch (SecurityException se) {
111              try {
112 <                return java.security.AccessController.doPrivileged
113 <                    (new java.security
114 <                     .PrivilegedExceptionAction<sun.misc.Unsafe>() {
115 <                        public sun.misc.Unsafe run() throws Exception {
116 <                            java.lang.reflect.Field f = sun.misc
117 <                                .Unsafe.class.getDeclaredField("theUnsafe");
1199 <                            f.setAccessible(true);
1200 <                            return (sun.misc.Unsafe) f.get(null);
1201 <                        }});
1202 <            } catch (java.security.PrivilegedActionException e) {
1203 <                throw new RuntimeException("Could not initialize intrinsics",
1204 <                                           e.getCause());
112 >                onTermination(exception);
113 >            } catch (Throwable ex) {
114 >                if (exception == null)
115 >                    exception = ex;
116 >            } finally {
117 >                pool.deregisterWorker(this, exception);
118              }
119          }
120      }

Diff Legend

(unmarked)  Removed lines
+           Added lines
<           Changed lines (old revision, 1.41)
>           Changed lines (new revision, 1.73)