
Comparing jsr166/src/jsr166y/ForkJoinPool.java (file contents):
Revision 1.113 by jsr166, Thu Jan 26 19:00:15 2012 UTC vs.
Revision 1.118 by jsr166, Sat Jan 28 04:34:54 2012 UTC

# Line 60 | Line 60 | import java.util.concurrent.locks.Condit
60   * convenient form for informal monitoring.
61   *
62   * <p> As is the case with other ExecutorServices, there are three
63 < * main task execution methods summarized in the following
64 < * table. These are designed to be used primarily by clients not
65 < * already engaged in fork/join computations in the current pool.  The
66 < * main forms of these methods accept instances of {@code
67 < * ForkJoinTask}, but overloaded forms also allow mixed execution of
68 < * plain {@code Runnable}- or {@code Callable}- based activities as
69 < * well.  However, tasks that are already executing in a pool should
70 < * normally instead use the within-computation forms listed in the
71 < * table unless using async event-style tasks that are not usually
72 < * joined, in which case there is little difference among choice of
73 < * methods.
63 > * main task execution methods summarized in the following table.
64 > * These are designed to be used primarily by clients not already
65 > * engaged in fork/join computations in the current pool.  The main
66 > * forms of these methods accept instances of {@code ForkJoinTask},
67 > * but overloaded forms also allow mixed execution of plain {@code
68 > * Runnable}- or {@code Callable}- based activities as well.  However,
69 > * tasks that are already executing in a pool should normally instead
70 > * use the within-computation forms listed in the table unless using
71 > * async event-style tasks that are not usually joined, in which case
72 > * there is little difference among choice of methods.
73   *
74   * <table BORDER CELLPADDING=3 CELLSPACING=1>
75   *  <tr>
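For illustration, a minimal sketch of the three client-side forms summarized in the table (the SumTask class and its trivial body are hypothetical, used only to show the calls):

    import jsr166y.ForkJoinPool;
    import jsr166y.ForkJoinTask;
    import jsr166y.RecursiveTask;

    public class SubmitDemo {
        // Hypothetical task, present only to illustrate the three forms.
        static class SumTask extends RecursiveTask<Long> {
            final long n;
            SumTask(long n) { this.n = n; }
            protected Long compute() { return n; } // trivial body for the demo
        }

        public static void main(String[] args) {
            ForkJoinPool pool = new ForkJoinPool();
            pool.execute(new SumTask(1));                       // arrange async execution
            long r1 = pool.invoke(new SumTask(2));              // await and obtain result
            ForkJoinTask<Long> f = pool.submit(new SumTask(3)); // arrange exec, obtain Future
            long r2 = f.join();
            System.out.println(r1 + ", " + r2);
        }
    }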
# Line 131 | Line 130 | public class ForkJoinPool extends Abstra
130       *
131       * This class and its nested classes provide the main
132       * functionality and control for a set of worker threads:
133 <     * Submissions from non-FJ threads enter into submission
134 <     * queues. Workers take these tasks and typically split them into
135 <     * subtasks that may be stolen by other workers.  Preference rules
136 <     * give first priority to processing tasks from their own queues
137 <     * (LIFO or FIFO, depending on mode), then to randomized FIFO
138 <     * steals of tasks in other queues.
133 >     * Submissions from non-FJ threads enter into submission queues.
134 >     * Workers take these tasks and typically split them into subtasks
135 >     * that may be stolen by other workers.  Preference rules give
136 >     * first priority to processing tasks from their own queues (LIFO
137 >     * or FIFO, depending on mode), then to randomized FIFO steals of
138 >     * tasks in other queues.
139       *
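As a concrete picture of the splitting described above, a hedged sketch of a divide-and-conquer task (the array-summing class is illustrative, not part of this file): each fork() pushes a subtask onto the owner's queue, where an idle worker may steal it.

    import jsr166y.RecursiveTask;

    // Illustrative divide-and-conquer task producing stealable subtasks.
    class ArraySum extends RecursiveTask<Long> {
        static final int THRESHOLD = 1000; // illustrative sequential cutoff
        final long[] a; final int lo, hi;
        ArraySum(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }
        protected Long compute() {
            if (hi - lo <= THRESHOLD) {
                long s = 0;
                for (int i = lo; i < hi; ++i) s += a[i];
                return s;
            }
            int mid = (lo + hi) >>> 1;
            ArraySum left = new ArraySum(a, lo, mid);
            left.fork();                                 // becomes stealable by other workers
            long r = new ArraySum(a, mid, hi).compute(); // owner runs its own half
            return r + left.join();
        }
    }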
140 <     * WorkQueues.
140 >     * WorkQueues
141       * ==========
142       *
143       * Most operations occur within work-stealing queues (in nested
# Line 156 | Line 155 | public class ForkJoinPool extends Abstra
155       * (http://research.sun.com/scalable/pubs/index.html) and
156       * "Idempotent work stealing" by Michael, Saraswat, and Vechev,
157       * PPoPP 2009 (http://portal.acm.org/citation.cfm?id=1504186).
158 <     * The main differences ultimately stem from gc requirements that
158 >     * The main differences ultimately stem from GC requirements that
159       * we null out taken slots as soon as we can, to maintain as small
160       * a footprint as possible even in programs generating huge
161       * numbers of tasks. To accomplish this, we shift the CAS
# Line 188 | Line 187 | public class ForkJoinPool extends Abstra
187       * rarely provide the best possible performance on a given
188       * machine, but portably provide good throughput by averaging over
189       * these factors.  (Further, even if we did try to use such
190 <     * information, we do not usually have a basis for exploiting
191 <     * it. For example, some sets of tasks profit from cache
192 <     * affinities, but others are harmed by cache pollution effects.)
190 >     * information, we do not usually have a basis for exploiting it.
191 >     * For example, some sets of tasks profit from cache affinities,
192 >     * but others are harmed by cache pollution effects.)
193       *
194       * WorkQueues are also used in a similar way for tasks submitted
195       * to the pool. We cannot mix these tasks in the same queues used
196       * for work-stealing (this would contaminate lifo/fifo
197 <     * processing). Instead, we loosely associate (via hashing)
198 <     * submission queues with submitting threads, and randomly scan
199 <     * these queues as well when looking for work. In essence,
200 <     * submitters act like workers except that they never take tasks,
201 <     * and they are multiplexed on to a finite number of shared work
202 <     * queues. However, classes are set up so that future extensions
203 <     * could allow submitters to optionally help perform tasks as
204 <     * well. Pool submissions from internal workers are also allowed,
205 <     * but use randomized rather than thread-hashed queue indices to
206 <     * avoid imbalance.  Insertion of tasks in shared mode requires a
207 <     * lock (mainly to protect in the case of resizing) but we use
208 <     * only a simple spinlock (using bits in field runState), because
209 <     * submitters encountering a busy queue try or create others so
210 <     * never block.
197 >     * processing). Instead, we loosely associate submission queues
198 >     * with submitting threads, using a form of hashing.  The
199 >     * ThreadLocal Submitter class contains a value initially used as
200 >     * a hash code for choosing existing queues, but may be randomly
201 >     * repositioned upon contention with other submitters.  In
202 >     * essence, submitters act like workers except that they never
203 >     * take tasks, and they are multiplexed on to a finite number of
204 >     * shared work queues. However, classes are set up so that future
205 >     * extensions could allow submitters to optionally help perform
206 >     * tasks as well. Pool submissions from internal workers are also
207 >     * allowed, but use randomized rather than thread-hashed queue
208 >     * indices to avoid imbalance.  Insertion of tasks in shared mode
209 >     * requires a lock (mainly to protect in the case of resizing) but
210 >     * we use only a simple spinlock (using bits in field runState),
211 >     * because submitters encountering a busy queue try or create
 212 >     * others, and so never block.
213       *
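In isolation, the thread-hashed selection with random repositioning might be sketched as follows (names and structure here are hypothetical; the real logic lives in the Submitter class and doSubmit further down):

    // Hedged sketch: per-thread seed chooses an even-numbered slot;
    // on contention the seed is xorshifted to a new position.
    final class SubmitterHashSketch {
        static final ThreadLocal<int[]> seed = new ThreadLocal<int[]>() {
            protected int[] initialValue() {
                int h = System.identityHashCode(Thread.currentThread());
                return new int[] { h | (1 << 31) };     // forced negative, as in Submitter
            }
        };

        static int chooseIndex(int mask) {
            return (seed.get()[0] << 1) & mask;         // even slots hold submission queues
        }

        static void repositionOnContention() {
            int[] s = seed.get();
            int r = s[0];
            r ^= r << 13; r ^= r >>> 17; r ^= r << 5;   // xorshift, as in nextSeed
            s[0] = r;
        }
    }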
214 <     * Management.
214 >     * Management
215       * ==========
216       *
217       * The main throughput advantages of work-stealing stem from
# Line 220 | Line 221 | public class ForkJoinPool extends Abstra
221       * tactic for avoiding bottlenecks is packing nearly all
222       * essentially atomic control state into two volatile variables
223       * that are by far most often read (not written) as status and
224 <     * consistency checks
224 >     * consistency checks.
225       *
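The packing tactic can be shown with a simplified two-count version (a sketch with made-up field widths; the real ctl layout below is denser):

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch: two 16-bit counts packed into one long, so both can be
    // read consistently or updated with a single CAS.
    final class PackedCounts {
        static final int  SHIFT = 16;
        static final long MASK  = 0xffffL;
        final AtomicLong ctl = new AtomicLong();

        int activeCount() { return (int) (ctl.get() & MASK); }
        int totalCount()  { return (int) ((ctl.get() >>> SHIFT) & MASK); }

        void incrementActive() {           // analogous to incrementActiveCount
            long c;
            do { c = ctl.get(); } while (!ctl.compareAndSet(c, c + 1));
        }
    }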
226       * Field "ctl" contains 64 bits holding all the information needed
227       * to atomically decide to add, inactivate, enqueue (on an event
# Line 299 | Line 300 | public class ForkJoinPool extends Abstra
300       * some other queued worker rather than itself, which has the same
301       * net effect. Because enqueued workers may actually be rescanning
302       * rather than waiting, we set and clear the "parker" field of
303 <     * Workqueues to reduce unnecessary calls to unpark.  (This
303 >     * WorkQueues to reduce unnecessary calls to unpark.  (This
304       * requires a secondary recheck to avoid missed signals.)  Note
305       * the unusual conventions about Thread.interrupts surrounding
306       * parking and other blocking: Because interrupts are used solely
# Line 327 | Line 328 | public class ForkJoinPool extends Abstra
328       * terminating all workers after long periods of non-use.
329       *
330       * Shutdown and Termination. A call to shutdownNow atomically sets
331 <     * a runState bit and then (non-atomically) sets each workers
331 >     * a runState bit and then (non-atomically) sets each worker's
332       * runState status, cancels all unprocessed tasks, and wakes up
333       * all waiting workers.  Detecting whether termination should
334       * commence after a non-abrupt shutdown() call requires more work
# Line 336 | Line 337 | public class ForkJoinPool extends Abstra
337       * indication but non-abrupt shutdown still requires a rechecking
338       * scan for any workers that are inactive but not queued.
339       *
340 <     * Joining Tasks.
341 <     * ==============
340 >     * Joining Tasks
341 >     * =============
342       *
343       * Any of several actions may be taken when one worker is waiting
344 <     * to join a task stolen (or always held by) another.  Because we
344 >     * to join a task stolen (or always held) by another.  Because we
345       * are multiplexing many tasks on to a pool of workers, we can't
346       * just let them block (as in Thread.join).  We also cannot just
347       * reassign the joiner's run-time stack with another and replace
348       * it later, which would be a form of "continuation", that even if
349       * possible is not necessarily a good idea since we sometimes need
350 <     * both an unblocked task and its continuation to
351 <     * progress. Instead we combine two tactics:
350 >     * both an unblocked task and its continuation to progress.
351 >     * Instead we combine two tactics:
352       *
353       *   Helping: Arranging for the joiner to execute some task that it
354       *      would be running if the steal had not occurred.
# Line 423 | Line 424 | public class ForkJoinPool extends Abstra
424       * enable the processing of billions of tasks per second, at the
425       * expense of some ugliness.
426       *
427 <     * Methods signalWork() and scan() are the main bottlenecks so are
427 >     * Methods signalWork() and scan() are the main bottlenecks, so are
428       * especially heavily micro-optimized/mangled.  There are lots of
429       * inline assignments (of form "while ((local = field) != 0)")
430       * which are usually the simplest way to ensure the required read
# Line 435 | Line 436 | public class ForkJoinPool extends Abstra
436       * coding oddities that help some methods perform reasonably even
437       * when interpreted (not compiled).
438       *
439 <     * The order of declarations in this file is: (1) declarations of
440 <     * statics (2) fields (along with constants used when unpacking
441 <     * some of them), listed in an order that tends to reduce
442 <     * contention among them a bit under most JVMs; (3) nested
443 <     * classes; (4) internal control methods; (5) callbacks and other
444 <     * support for ForkJoinTask methods; (6) exported methods (plus a
445 <     * few little helpers); (7) static block initializing all statics
446 <     * in a minimally dependent order.
439 >     * The order of declarations in this file is:
440 >     * (1) statics
441 >     * (2) fields (along with constants used when unpacking some of
442 >     *     them), listed in an order that tends to reduce contention
443 >     *     among them a bit under most JVMs;
444 >     * (3) nested classes
445 >     * (4) internal control methods
446 >     * (5) callbacks and other support for ForkJoinTask methods
447 >     * (6) exported methods (plus a few little helpers)
448 >     * (7) static block initializing all statics in a minimally
449 >     *     dependent order.
450       */
451  
452      /**
# Line 537 | Line 541 | public class ForkJoinPool extends Abstra
541       * consistency checks: Staleness of read-only operations on the
542       * workers and queues arrays can be checked by comparing runState
543       * before vs after the reads. The low 16 bits (i.e, anding with
544 <     * SMASK) hold (the smallest power of two covering all worker
544 >     * SMASK) hold the smallest power of two covering all worker
545       * indices, minus one.  The mask for queues (vs workers) is twice
546       * this value plus 1.
547       */
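The mask arithmetic can be checked with a small sketch (constants here are illustrative, not the actual field encoding):

    // Sketch: "smallest power of two covering all worker indices,
    // minus one", and the corresponding queue mask.
    final class MaskSketch {
        static int workerMask(int parallelism) {
            int n = parallelism - 1;
            n |= n >>> 1; n |= n >>> 2; n |= n >>> 4;
            n |= n >>> 8; n |= n >>> 16;   // round up to 2^k - 1
            return n;
        }
        static int queueMask(int parallelism) {
            return 2 * workerMask(parallelism) + 1;  // twice the value, plus 1
        }
        public static void main(String[] args) {
            System.out.println(workerMask(5));  // 7
            System.out.println(queueMask(5));   // 15
        }
    }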
# Line 649 | Line 653 | public class ForkJoinPool extends Abstra
653       * for push, or under lock for trySharedPush, and accessed by
654       * other threads only after reading (volatile) base.  Both top and
655       * base are allowed to wrap around on overflow, but (top - base)
656 <     * (or more comonly -(base - top) to force volatile read of base
656 >     * (or more commonly -(base - top) to force volatile read of base
657       * before top) still estimates size.
658       *
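The wraparound claim is easy to verify: two's-complement int subtraction keeps (top - base) correct even across overflow, as in this sketch:

    // Sketch: the size estimate survives index wraparound.
    final class WrapDemo {
        public static void main(String[] args) {
            int base = Integer.MAX_VALUE - 1;
            int top  = base + 5;             // wraps past Integer.MAX_VALUE
            System.out.println(top - base);  // still prints 5
        }
    }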
659       * The array slots are read and written using the emulation of
# Line 730 | Line 734 | public class ForkJoinPool extends Abstra
734          }
735  
736          /**
737 <         * Returns number of tasks in the queue
737 >         * Returns number of tasks in the queue.
738           */
739          final int queueSize() {
740              int n = base - top; // non-owner callers must read base first
# Line 741 | Line 745 | public class ForkJoinPool extends Abstra
745           * Pushes a task. Call only by owner in unshared queues.
746           *
747           * @param task the task. Caller must ensure non-null.
748 <         * @param p, if non-null, pool to signal if necessary
749 <         * @throw RejectedExecutionException if array cannot
746 <         * be resized
748 >         * @param p if non-null, pool to signal if necessary
 749 >         * @throws RejectedExecutionException if array cannot be resized
750           */
751          final void push(ForkJoinTask<?> task, ForkJoinPool p) {
752              ForkJoinTask<?>[] a;
# Line 978 | Line 981 | public class ForkJoinPool extends Abstra
981          }
982  
983          /**
984 <         * Removes and cancels all known tasks, ignoring any exceptions
984 >         * Removes and cancels all known tasks, ignoring any exceptions.
985           */
986          final void cancelAll() {
987              ForkJoinTask.cancelIgnoringExceptions(currentJoin);
# Line 1021 | Line 1024 | public class ForkJoinPool extends Abstra
1024          }
1025  
1026          /**
1027 <         * Executes a non-top-level (stolen) task
1027 >         * Executes a non-top-level (stolen) task.
1028           */
1029          final void runSubtask(ForkJoinTask<?> t) {
1030              if (t != null) {
# Line 1085 | Line 1088 | public class ForkJoinPool extends Abstra
1088      }
1089  
1090      /**
1091 <     * Computes a hash code for the given thread. This method is
1092 <     * expected to provide higher-quality hash codes than those using
1093 <     * method hashCode().
1094 <     */
1095 <    static final int hashThread(Thread t) {
1096 <        long id = (t == null) ? 0L : t.getId(); // Use MurmurHash of thread id
1097 <        int h = (int)id ^ (int)(id >>> 32);
1098 <        h ^= h >>> 16;
1099 <        h *= 0x85ebca6b;
1100 <        h ^= h >>> 13;
1101 <        h *= 0xc2b2ae35;
1102 <        return h ^ (h >>> 16);
1091 >     * Per-thread records for (typically non-FJ) threads that submit
1092 >     * to pools. Currently holds only a pseudo-random seed / index that
1093 >     * is used to choose submission queues in method doSubmit. In the
1094 >     * future, this may incorporate a means to implement different
1095 >     * task rejection and resubmission policies.
1096 >     */
1097 >    static final class Submitter {
1098 >        int seed; // seed for random submission queue selection
1099 >
1100 >        // Heuristic padding to ameliorate unfortunate memory placements
1101 >        int p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, pa, pb, pc, pd, pe;
1102 >
1103 >        Submitter() {
1104 >            // Use identityHashCode, forced negative, for seed
1105 >            seed = System.identityHashCode(Thread.currentThread()) | (1 << 31);
1106 >        }
1107 >
1108 >        /**
1109 >         * Computes next value for random probes.  Like method
1110 >         * WorkQueue.nextSeed, this is manually inlined in several
1111 >         * usages to avoid writes inside busy loops.
1112 >         */
1113 >        final int nextSeed() {
1114 >            int r = seed;
1115 >            r ^= r << 13;
1116 >            r ^= r >>> 17;
1117 >            return seed = r ^= r << 5;
1118 >        }
1119 >    }
1120 >
1121 >    /** ThreadLocal class for Submitters */
1122 >    static final class ThreadSubmitter extends ThreadLocal<Submitter> {
1123 >        public Submitter initialValue() { return new Submitter(); }
1124      }
1125  
1126      /**
1127 +     * Per-thread submission bookkeeping. Shared across all pools
1128 +     * to reduce ThreadLocal pollution and because random motion
1129 +     * to avoid contention in one pool is likely to hold for others.
1130 +     */
1131 +    static final ThreadSubmitter submitters = new ThreadSubmitter();
1132 +
1133 +    /**
1134       * Top-level runloop for workers
1135       */
1136      final void runWorker(ForkJoinWorkerThread wt) {
1137 +        // Initialize queue array and seed in this thread
1138          WorkQueue w = wt.workQueue;
1139 <        w.growArray(false);     // Initialize queue array and seed in this thread
1140 <        w.seed = hashThread(Thread.currentThread()) | (1 << 31); // force < 0
1139 >        w.growArray(false);
1140 >        // Same initial hash as Submitters
1141 >        w.seed = System.identityHashCode(Thread.currentThread()) | (1 << 31);
1142  
1143          do {} while (w.runTask(scan(w)));
1144      }
# Line 1142 | Line 1175 | public class ForkJoinPool extends Abstra
1175  
1176      /**
1177       * Callback from ForkJoinWorkerThread constructor to establish and
1178 <     * record its WorkQueue
1178 >     * record its WorkQueue.
1179       *
1180       * @param wt the worker thread
1181       */
# Line 1218 | Line 1251 | public class ForkJoinPool extends Abstra
1251              U.throwException(ex);
1252      }
1253  
1254 +    /**
1255 +     * Tries to add and register a new queue at the given index.
1256 +     *
1257 +     * @param idx the workQueues array index to register the queue
1258 +     * @return the queue, or null if it could not be added because
1259 +     * the lock could not be acquired or idx is unusable
1260 +     */
1261 +    private WorkQueue tryAddSharedQueue(int idx) {
1262 +        WorkQueue q = null;
1263 +        ReentrantLock lock = this.lock;
1264 +        if (idx >= 0 && (idx & 1) == 0 && !lock.isLocked()) {
1265 +            // create queue outside of lock but only if apparently free
1266 +            WorkQueue nq = new WorkQueue(null, SHARED_QUEUE);
1267 +            if (lock.tryLock()) {
1268 +                try {
1269 +                    WorkQueue[] ws = workQueues;
1270 +                    if (ws != null && idx < ws.length) {
1271 +                        if ((q = ws[idx]) == null) {
1272 +                            int rs;         // update runState seq
1273 +                            ws[idx] = q = nq;
1274 +                            runState = (((rs = runState) & SHUTDOWN) |
1275 +                                        ((rs + RS_SEQ) & ~SHUTDOWN));
1276 +                        }
1277 +                    }
1278 +                } finally {
1279 +                    lock.unlock();
1280 +                }
1281 +            }
1282 +        }
1283 +        return q;
1284 +    }
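The "create queue outside of lock" comment reflects a general pattern: allocate speculatively before locking, publish under the lock only if the slot is still empty, and let the loser's allocation be discarded. A hedged, generic sketch (not this class's fields):

    import java.util.concurrent.locks.ReentrantLock;

    // Sketch: speculative allocation outside the lock, publish under it.
    final class PublishUnderLock<T> {
        final ReentrantLock lock = new ReentrantLock();
        T slot;                              // guarded by lock

        T getOrCreate(T created) {           // caller pre-constructs 'created'
            if (lock.tryLock()) {
                try {
                    if (slot == null)
                        slot = created;      // publish only if still empty
                    return slot;
                } finally {
                    lock.unlock();
                }
            }
            return null;                     // lock busy; caller tries elsewhere
        }
    }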
1285  
1286      // Maintaining ctl counts
1287  
1288      /**
1289 <     * Increments active count; mainly called upon return from blocking
1289 >     * Increments active count; mainly called upon return from blocking.
1290       */
1291      final void incrementActiveCount() {
1292          long c;
# Line 1230 | Line 1294 | public class ForkJoinPool extends Abstra
1294      }
1295  
1296      /**
1297 <     * Activates or creates a worker
1297 >     * Activates or creates a worker.
1298       */
1299      final void signalWork() {
1300          /*
# Line 1320 | Line 1384 | public class ForkJoinPool extends Abstra
1384      // Submissions
1385  
1386      /**
1387 <     * Unless shutting down, adds the given task to some submission
1388 <     * queue; using a randomly chosen queue index if the caller is a
1389 <     * ForkJoinWorkerThread, else one based on caller thread's hash
1390 <     * code. If no queue exists at the index, one is created.  If the
1327 <     * queue is busy, another is chosen by sweeping through the queues
1328 <     * array.
1387 >     * Unless shutting down, adds the given task to a submission queue
1388 >     * at submitter's current queue index. If no queue exists at the
1389 >     * index, one is created unless pool lock is busy.  If the queue
1390 >     * and/or lock are busy, another index is randomly chosen.
1391       */
1392      private void doSubmit(ForkJoinTask<?> task) {
1393          if (task == null)
1394              throw new NullPointerException();
1395 <        Thread t = Thread.currentThread();
1396 <        int r = ((t instanceof ForkJoinWorkerThread) ?
1397 <                 ((ForkJoinWorkerThread)t).workQueue.nextSeed() : hashThread(t));
1336 <        for (;;) {
1395 >        Submitter s = submitters.get();
1396 >        for (int r = s.seed;;) {
1397 >            WorkQueue q; int k;
1398              int rs = runState, m = rs & SMASK;
1338            int j = r &= (m & ~1);                      // even numbered queues
1399              WorkQueue[] ws = workQueues;
1400 <            if (rs < 0 || ws == null)
1401 <                throw new RejectedExecutionException(); // shutting down
1402 <            if (ws.length > m) {                        // consistency check
1403 <                for (WorkQueue q;;) {                   // circular sweep
1404 <                    if (((q = ws[j]) != null ||
1405 <                         (q = tryAddSharedQueue(j)) != null) &&
1406 <                        q.trySharedPush(task)) {
1407 <                        signalWork();
1348 <                        return;
1349 <                    }
1350 <                    if ((j = (j + 2) & m) == r) {
1351 <                        Thread.yield();                 // all queues busy
1352 <                        break;
1353 <                    }
1354 <                }
1400 >            if (rs < 0 || ws == null)   // shutting down
1401 >                throw new RejectedExecutionException();
1402 >            if (ws.length > m &&        // k must be at index
1403 >                ((q = ws[k = (r << 1) & m]) != null ||
1404 >                 (q = tryAddSharedQueue(k)) != null) &&
1405 >                q.trySharedPush(task)) {
1406 >                signalWork();
1407 >                return;
1408              }
1409 +            r ^= r << 13;               // xorshift seed to new position
1410 +            r ^= r >>> 17;
1411 +            if (((s.seed = r ^= r << 5) & m) == 0)
1412 +                Thread.yield();         // occasionally yield if busy
1413          }
1414      }
1415  
1359    /**
1360     * Tries to add and register a new queue at the given index.
1361     *
1362     * @param idx the workQueues array index to register the queue
1363     * @return the queue, or null if could not add because could
1364     * not acquire lock or idx is unusable
1365     */
1366    private WorkQueue tryAddSharedQueue(int idx) {
1367        WorkQueue q = null;
1368        ReentrantLock lock = this.lock;
1369        if (idx >= 0 && (idx & 1) == 0 && !lock.isLocked()) {
1370            // create queue outside of lock but only if apparently free
1371            WorkQueue nq = new WorkQueue(null, SHARED_QUEUE);
1372            if (lock.tryLock()) {
1373                try {
1374                    WorkQueue[] ws = workQueues;
1375                    if (ws != null && idx < ws.length) {
1376                        if ((q = ws[idx]) == null) {
1377                            int rs;         // update runState seq
1378                            ws[idx] = q = nq;
1379                            runState = (((rs = runState) & SHUTDOWN) |
1380                                        ((rs + RS_SEQ) & ~SHUTDOWN));
1381                        }
1382                    }
1383                } finally {
1384                    lock.unlock();
1385                }
1386            }
1387        }
1388        return q;
1389    }
1416  
1417      // Scanning for tasks
1418  
# Line 1415 | Line 1441 | public class ForkJoinPool extends Abstra
1441       * another worker, but with same net effect. Releasing in other
1442       * cases as well ensures that we have enough workers running.
1443       *
1444 <     * * If the caller has run a task since the the last empty scan,
1444 >     * * If the caller has run a task since the last empty scan,
1445       * return (to allow rescan) if other workers are not also yet
1446       * enqueued.  Field WorkQueue.rescans counts down on each scan to
1447       * ensure eventual inactivation, and occasional calls to
1448       * Thread.yield to help avoid interference with more useful
1449       * activities on the system.
1450       *
1451 <     * * If pool is terminating, terminate the worker
1451 >     * * If pool is terminating, terminate the worker.
1452       *
1453       * * If not already enqueued, try to inactivate and enqueue the
1454       * worker on wait queue.
# Line 1444 | Line 1470 | public class ForkJoinPool extends Abstra
1470              ForkJoinTask<?> task = null;
1471              for (int k = 0, j = -2 - m; ; ++j) {
1472                  WorkQueue q; int b;
1473 <                if (j < 0) {                    // random probes while j negative
1473 >                if (j < 0) {                   // random probes while j negative
1474                      r ^= r << 13; r ^= r >>> 17; k = (r ^= r << 5) | (j & 1);
1475 <                }                               // worker (not submit) for odd j
1476 <                else                            // cyclic scan when j >= 0
1477 <                    k += (m >>> 1) | 1;         // step by half to reduce bias
1475 >                }                              // worker (not submit) for odd j
1476 >                else                           // cyclic scan when j >= 0
1477 >                    k += (m >>> 1) | 1;        // step by half to reduce bias
1478  
1479                  if ((q = ws[k & m]) != null && (b = q.base) - q.top < 0) {
1480                      if (ec >= 0)
1481 <                        task = q.pollAt(b);     // steal
1481 >                        task = q.pollAt(b);    // steal
1482                      break;
1483                  }
1484                  else if (j > m) {
# Line 1520 | Line 1546 | public class ForkJoinPool extends Abstra
1546  
1547      /**
1548       * If inactivating worker w has caused pool to become quiescent,
1549 <     * check for pool termination, and, so long as this is not the
1550 <     * only worker, wait for event for up to SHRINK_RATE nanosecs On
1551 <     * timeout, if ctl has not changed, terminate the worker, which
1552 <     * will in turn wake up another worker to possibly repeat this
1553 <     * process.
1549 >     * checks for pool termination, and, so long as this is not the
1550 >     * only worker, waits for event for up to SHRINK_RATE nanosecs.
1551 >     * On timeout, if ctl has not changed, terminates the worker,
1552 >     * which will in turn wake up another worker to possibly repeat
1553 >     * this process.
1554       *
1555       * @param w the calling worker
1556       */
# Line 1720 | Line 1746 | public class ForkJoinPool extends Abstra
1746      }
1747  
1748      /**
1749 <     * Gets and removes a local or stolen task for the given worker
1749 >     * Gets and removes a local or stolen task for the given worker.
1750       *
1751       * @return a task, if available
1752       */
# Line 2553 | Line 2579 | public class ForkJoinPool extends Abstra
2579       *
2580       * <p>If the caller is not a {@link ForkJoinTask}, this method is
2581       * behaviorally equivalent to
2582 < a     *  <pre> {@code
2582 >     *  <pre> {@code
2583       * while (!blocker.isReleasable())
2584       *   if (blocker.block())
2585       *     return;
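A hedged example of implementing the blocker protocol sketched above (the QueueTaker class is illustrative):

    import java.util.concurrent.BlockingQueue;
    import jsr166y.ForkJoinPool;

    // Illustrative ManagedBlocker: takes from a queue while allowing
    // the pool to compensate with a spare thread if necessary.
    class QueueTaker<E> implements ForkJoinPool.ManagedBlocker {
        final BlockingQueue<E> queue;
        volatile E item = null;
        QueueTaker(BlockingQueue<E> q) { this.queue = q; }
        public boolean block() throws InterruptedException {
            if (item == null)
                item = queue.take();
            return true;
        }
        public boolean isReleasable() {
            return item != null || (item = queue.poll()) != null;
        }
        public E getItem() {                 // call after pool.managedBlock completes
            return item;
        }
    }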
