52 |
|
* convenient form for informal monitoring. |
53 |
|
* |
54 |
|
* <p> As is the case with other ExecutorServices, there are three |
55 |
< |
* main task execution methods summarized in the follwoing |
55 |
> |
* main task execution methods summarized in the following |
56 |
|
* table. These are designed to be used by clients not already engaged |
57 |
|
* in fork/join computations in the current pool. The main forms of |
58 |
|
* these methods accept instances of {@code ForkJoinTask}, but |
110 |
|
* |
111 |
|
* <p>This implementation rejects submitted tasks (that is, by throwing |
112 |
|
* {@link RejectedExecutionException}) only when the pool is shut down |
113 |
< |
* or internal resources have been exhuasted. |
113 |
> |
* or internal resources have been exhausted. |
114 |
|
* |
115 |
|
* @since 1.7 |
116 |
|
* @author Doug Lea |
138 |
|
* cache pollution effects.) |
139 |
|
* |
140 |
|
* Beyond work-stealing support and essential bookkeeping, the |
141 |
< |
* main responsibility of this framework is to arrange tactics for |
142 |
< |
* when one worker is waiting to join a task stolen (or always |
143 |
< |
* held by) another. Becauae we are multiplexing many tasks on to |
144 |
< |
* a pool of workers, we can't just let them block (as in |
145 |
< |
* Thread.join). We also cannot just reassign the joiner's |
146 |
< |
* run-time stack with another and replace it later, which would |
147 |
< |
* be a form of "continuation", that even if possible is not |
148 |
< |
* necessarily a good idea. Given that the creation costs of most |
149 |
< |
* threads on most systems mainly surrounds setting up runtime |
150 |
< |
* stacks, thread creation and switching is usually not much more |
151 |
< |
* expensive than stack creation and switching, and is more |
152 |
< |
* flexible). Instead we combine two tactics: |
141 |
> |
* main responsibility of this framework is to take actions when |
142 |
> |
* one worker is waiting to join a task stolen (or always held by) |
143 |
> |
* another. Because we are multiplexing many tasks onto a pool |
144 |
> |
* of workers, we can't just let them block (as in Thread.join). |
145 |
> |
* We also cannot just reassign the joiner's run-time stack with |
146 |
> |
* another and replace it later, which would be a form of |
147 |
> |
* "continuation", that even if possible is not necessarily a good |
148 |
> |
* idea. Given that the creation costs of most threads on most |
149 |
> |
* systems mainly surround setting up runtime stacks, thread |
150 |
> |
* creation and switching is usually not much more expensive than |
151 |
> |
* stack creation and switching, and is more flexible. Instead we |
152 |
> |
* combine two tactics: |
153 |
|
* |
154 |
< |
* 1. Arranging for the joiner to execute some task that it |
154 |
> |
* Helping: Arranging for the joiner to execute some task that it |
155 |
|
* would be running if the steal had not occurred. Method |
156 |
|
* ForkJoinWorkerThread.helpJoinTask tracks joining->stealing |
157 |
|
* links to try to find such a task. |
158 |
|
* |
159 |
< |
* 2. Unless there are already enough live threads, creating or |
160 |
< |
* or re-activating a spare thread to compensate for the |
161 |
< |
* (blocked) joiner until it unblocks. Spares then suspend |
162 |
< |
* at their next opportunity or eventually die if unused for |
163 |
< |
* too long. See below and the internal documentation |
164 |
< |
* for tryAwaitJoin for more details about compensation |
165 |
< |
* rules. |
159 |
> |
* Compensating: Unless there are already enough live threads, |
160 |
> |
* method helpMaintainParallelism() may create or |
161 |
> |
* re-activate a spare thread to compensate for blocked |
162 |
> |
* joiners until they unblock. |
163 |
|
* |
164 |
|
* Because determining the existence of conservatively safe |
165 |
|
* helping targets, the availability of already-created spares, |
166 |
|
* and the apparent need to create new spares are all racy and |
167 |
< |
* require heuristic guidance, joins (in |
168 |
< |
* ForkJoinWorkerThread.joinTask) interleave these options until |
169 |
< |
* successful. Creating a new spare always succeeds, but also |
170 |
< |
* increases application footprint, so we try to avoid it, within |
171 |
< |
* reason. |
167 |
> |
* require heuristic guidance, we rely on multiple retries of |
168 |
> |
* each. Further, because it is impossible to keep exactly the |
169 |
> |
* target (parallelism) number of threads running at any given |
170 |
> |
* time, we allow compensation during joins to fail, and enlist |
171 |
> |
* all other threads to help out whenever they are not otherwise |
172 |
> |
* occupied (i.e., mainly in method preStep). |
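As an aside, the compensation half can be sketched in isolation. The following is a minimal illustration only, with invented names (PARALLELISM, runningCount, blockingJoin); the pool's real bookkeeping uses the packed workerCounts field and the spare stack described below:

    import java.util.concurrent.atomic.AtomicInteger;

    class CompensationSketch {
        static final int PARALLELISM =
            Runtime.getRuntime().availableProcessors();
        static final AtomicInteger runningCount =
            new AtomicInteger(PARALLELISM);

        // A worker about to block on a join first gives up its "running"
        // slot, so the count keeps tracking threads that make progress.
        static void blockingJoin(Runnable waitForJoin) {
            runningCount.decrementAndGet();
            try {
                maybeCompensate();
                waitForJoin.run();   // park until the joined task completes
            } finally {
                runningCount.incrementAndGet();
            }
        }

        // If running threads fell below target, enlist a compensating
        // thread (the real pool prefers resuming a suspended spare).
        static void maybeCompensate() {
            if (runningCount.get() < PARALLELISM) {
                runningCount.incrementAndGet();  // spare counts as running
                new Thread(() -> { /* scan for and run queued tasks */ }).start();
            }
        }
    }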
173 |
|
* |
174 |
< |
* The ManagedBlocker extension API can't use option (1) so uses a |
175 |
< |
* special version of (2) in method awaitBlocker. |
174 |
> |
* The ManagedBlocker extension API can't use helping so relies |
175 |
> |
* only on compensation in method awaitBlocker. |
176 |
|
* |
177 |
|
* The main throughput advantages of work-stealing stem from |
178 |
|
* decentralized control -- workers mostly steal tasks from each |
205 |
|
* blocked workers. However, all other support code is set up to |
206 |
|
* work with other policies. |
207 |
|
* |
208 |
+ |
* To ensure that we do not hold on to worker references that |
209 |
+ |
* would prevent GC, ALL accesses to workers are via indices into |
210 |
+ |
* the workers array (which is one source of some of the unusual |
211 |
+ |
* code constructions here). In essence, the workers array serves |
212 |
+ |
* as a WeakReference mechanism. Thus for example the event queue |
213 |
+ |
* stores worker indices, not worker references. Access to the |
214 |
+ |
* workers in associated methods (for example releaseEventWaiters) |
215 |
+ |
* must both index-check and null-check the IDs. All such accesses |
216 |
+ |
* ignore bad IDs by returning out early from what they are doing, |
217 |
+ |
* since this can only be associated with shutdown, in which case |
218 |
+ |
* it is OK to give up. On termination, we just clobber these |
219 |
+ |
* data structures without trying to use them. |
220 |
+ |
* |
221 |
|
* 2. Bookkeeping for dynamically adding and removing workers. We |
222 |
|
* aim to approximately maintain the given level of parallelism. |
223 |
|
* When some workers are known to be blocked (on joins or via |
259 |
|
* workers that previously could not find a task to now find one: |
260 |
|
* Submission of a new task to the pool, or another worker pushing |
261 |
|
* a task onto a previously empty queue. (We also use this |
262 |
< |
* mechanism for termination and reconfiguration actions that |
263 |
< |
* require wakeups of idle workers). Each worker maintains its |
264 |
< |
* last known event count, and blocks when a scan for work did not |
265 |
< |
* find a task AND its lastEventCount matches the current |
266 |
< |
* eventCount. Waiting idle workers are recorded in a variant of |
267 |
< |
* Treiber stack headed by field eventWaiters which, when nonzero, |
268 |
< |
* encodes the thread index and count awaited for by the worker |
269 |
< |
* thread most recently calling eventSync. This thread in turn has |
270 |
< |
* a record (field nextEventWaiter) for the next waiting worker. |
271 |
< |
* In addition to allowing simpler decisions about need for |
272 |
< |
* wakeup, the event count bits in eventWaiters serve the role of |
273 |
< |
* tags to avoid ABA errors in Treiber stacks. To reduce delays |
274 |
< |
* in task diffusion, workers not otherwise occupied may invoke |
275 |
< |
* method releaseWaiters, that removes and signals (unparks) |
276 |
< |
* workers not waiting on current count. To minimize task |
277 |
< |
* production stalls associate with signalling, any worker pushing |
278 |
< |
* a task on an empty queue invokes the weaker method signalWork, |
279 |
< |
* that only releases idle workers until it detects interference |
280 |
< |
* by other threads trying to release, and lets them take |
281 |
< |
* over. The net effect is a tree-like diffusion of signals, where |
282 |
< |
* released threads (and possibly others) help with unparks. To |
283 |
< |
* further reduce contention effects a bit, failed CASes to |
262 |
> |
* mechanism for termination actions that require wakeups of idle |
263 |
> |
* workers). Each worker maintains its last known event count, |
264 |
> |
* and blocks when a scan for work did not find a task AND its |
265 |
> |
* lastEventCount matches the current eventCount. Waiting idle |
266 |
> |
* workers are recorded in a variant of Treiber stack headed by |
267 |
> |
* field eventWaiters which, when nonzero, encodes the thread |
268 |
> |
* index and count awaited for by the worker thread most recently |
269 |
> |
* calling eventSync. This thread in turn has a record (field |
270 |
> |
* nextEventWaiter) for the next waiting worker. In addition to |
271 |
> |
* allowing simpler decisions about need for wakeup, the event |
272 |
> |
* count bits in eventWaiters serve the role of tags to avoid ABA |
273 |
> |
* errors in Treiber stacks. To reduce delays in task diffusion, |
274 |
> |
* workers not otherwise occupied may invoke method |
275 |
> |
* releaseEventWaiters, that removes and signals (unparks) workers |
276 |
> |
* not waiting on current count. To minimize |
277 |
> |
* task production stalls associated with signalling, any worker |
278 |
> |
* pushing a task on an empty queue invokes the weaker method |
279 |
> |
* signalWork, that only releases idle workers until it detects |
280 |
> |
* interference by other threads trying to release, and lets them |
281 |
> |
* take over. The net effect is a tree-like diffusion of signals, |
282 |
> |
* where released threads (and possibly others) help with unparks. |
283 |
> |
* To further reduce contention effects a bit, failed CASes to |
284 |
|
* increment field eventCount are tolerated without retries. |
285 |
|
* Conceptually they are merged into the same event, which is OK |
286 |
|
* when their only purpose is to enable workers to scan for work. |
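To make the waiter-stack representation concrete, here is a stand-alone sketch of a Treiber stack whose head word packs an ABA-avoiding count with a one-based index, mirroring the layout described above (illustrative only; the pool stores the nextWaiter links in the workers themselves and uses Unsafe rather than AtomicLong):

    import java.util.concurrent.atomic.AtomicLong;

    class TaggedTreiberStack {
        static final long WAITER_ID_MASK = (1L << 16) - 1L; // low bits: index + 1
        static final int EVENT_COUNT_SHIFT = 32;            // high bits: count tag

        final AtomicLong head = new AtomicLong();           // 0L means empty
        final long[] next;                                  // successor per index

        TaggedTreiberStack(int capacity) { next = new long[capacity]; }

        void push(int index, int count) {
            long nh = ((long) count << EVENT_COUNT_SHIFT) | (index + 1);
            long h;
            do {
                h = head.get();
                next[index] = h;        // record successor before publishing
            } while (!head.compareAndSet(h, nh));
        }

        int pop() {                     // returns a waiter index, or -1 if empty
            for (;;) {
                long h = head.get();
                int id = (int) (h & WAITER_ID_MASK) - 1;
                if (id < 0)
                    return -1;
                if (head.compareAndSet(h, next[id]))
                    return id;
            }
        }
    }

Because every pushed head word carries the pusher's event count in its upper bits, a head that was popped and re-pushed between a reader's load and its CAS will almost always compare unequal, which is the ABA protection the comment above refers to.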
296 |
|
* spare threads from normal "core" threads: On each call to |
297 |
|
* preStep (the only point at which we can do this) a worker |
298 |
|
* checks to see if there are now too many running workers, and if |
299 |
< |
* so, suspends itself. Methods tryAwaitJoin and awaitBlocker |
300 |
< |
* look for suspended threads to resume before considering |
301 |
< |
* creating a new replacement. We don't need a special data |
302 |
< |
* structure to maintain spares; simply scanning the workers array |
303 |
< |
* looking for worker.isSuspended() is fine because the calling |
304 |
< |
* thread is otherwise not doing anything useful anyway; we are at |
305 |
< |
* least as happy if after locating a spare, the caller doesn't |
306 |
< |
* actually block because the join is ready before we try to |
307 |
< |
* adjust and compensate. Note that this is intrinsically racy. |
308 |
< |
* One thread may become a spare at about the same time as another |
309 |
< |
* is needlessly being created. We counteract this and related |
310 |
< |
* slop in part by requiring resumed spares to immediately recheck |
311 |
< |
* (in preStep) to see whether they they should re-suspend. The |
312 |
< |
* only effective difference between "extra" and "core" threads is |
313 |
< |
* that we allow the "extra" ones to time out and die if they are |
303 |
< |
* not resumed within a keep-alive interval of a few seconds. This |
304 |
< |
* is implemented mainly within ForkJoinWorkerThread, but requires |
305 |
< |
* some coordination (isTrimmed() -- meaning killed while |
306 |
< |
* suspended) to correctly maintain pool counts. |
299 |
> |
* so, suspends itself. Method helpMaintainParallelism looks for |
300 |
> |
* suspended threads to resume before considering creating a new |
301 |
> |
* replacement. The spares themselves are encoded on another |
302 |
> |
* variant of a Treiber Stack, headed at field "spareWaiters". |
303 |
> |
* Note that the use of spares is intrinsically racy. One thread |
304 |
> |
* may become a spare at about the same time as another is |
305 |
> |
* needlessly being created. We counteract this and related slop |
306 |
> |
* in part by requiring resumed spares to immediately recheck (in |
307 |
> |
* preStep) to see whether they should re-suspend. To avoid |
308 |
> |
* long-term build-up of spares, the oldest spare (see |
309 |
> |
* ForkJoinWorkerThread.suspendAsSpare) occasionally wakes up if |
310 |
> |
* not signalled and calls tryTrimSpare, which uses two different |
311 |
> |
* thresholds: Always killing if the number of spares is greater |
312 |
> |
* than 25% of total, and killing others only at a slower rate |
313 |
> |
* (UNUSED_SPARE_TRIM_RATE_NANOS). |
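* For example, at parallelism 8 the fast path in tryTrimSpare kills
* a spare whenever (totalCount - 8) > (8 >>> 2) + 1 = 3, i.e., once
* there are more than 11 workers in total; smaller overages are
* trimmed only at the UNUSED_SPARE_TRIM_RATE_NANOS pace.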
314 |
|
* |
315 |
|
* 6. Deciding when to create new workers. The main dynamic |
316 |
< |
* control in this class is deciding when to create extra threads, |
317 |
< |
* in methods awaitJoin and awaitBlocker. We always need to create |
318 |
< |
* one when the number of running threads would become zero and |
319 |
< |
* all workers are busy. However, this is not easy to detect |
320 |
< |
* reliably in the presence of transients so we use retries and |
321 |
< |
* allow slack (in tryAwaitJoin) to reduce false alarms. These |
322 |
< |
* effectively reduce churn at the price of systematically |
323 |
< |
* undershooting target parallelism when many threads are blocked. |
324 |
< |
* However, biasing toward undeshooting partially compensates for |
325 |
< |
* the above mechanics to suspend extra threads, that normally |
326 |
< |
* lead to overshoot because we can only suspend workers |
327 |
< |
* in-between top-level actions. It also better copes with the |
328 |
< |
* fact that some of the methods in this class tend to never |
329 |
< |
* become compiled (but are interpreted), so some components of |
330 |
< |
* the entire set of controls might execute many times faster than |
331 |
< |
* others. And similarly for cases where the apparent lack of work |
332 |
< |
* is just due to GC stalls and other transient system activity. |
316 |
> |
* control in this class is deciding when to create extra threads |
317 |
> |
* in method helpMaintainParallelism. We would like to keep |
318 |
> |
* exactly #parallelism threads running, which is an impossible |
319 |
> |
* task. We always need to create one when the number of running |
320 |
> |
* threads would become zero and all workers are busy. Beyond |
321 |
> |
* this, we must rely on heuristics that work well in the |
322 |
> |
* presence of transient phenomena such as GC stalls, dynamic |
323 |
> |
* compilation, and wake-up lags. These transients are extremely |
324 |
> |
* common -- we are normally trying to fully saturate the CPUs on |
325 |
> |
* a machine, so almost any activity other than running tasks |
326 |
> |
* impedes accuracy. Our main defense is to allow some slack in |
327 |
> |
* creation thresholds, using rules that reflect the fact that the |
328 |
> |
* more threads we have running, the more likely that we are |
329 |
> |
* underestimating the number of running threads. The rules also |
330 |
> |
* better cope with the fact that some of the methods in this |
331 |
> |
* class tend to never become compiled (but are interpreted), so |
332 |
> |
* some components of the entire set of controls might execute 100 |
333 |
> |
* times faster than others. And similarly for cases where the |
334 |
> |
* apparent lack of work is just due to GC stalls and other |
335 |
> |
* transient system activity. |
336 |
|
* |
337 |
|
* Beware that there is a lot of representation-level coupling |
338 |
|
* among classes ForkJoinPool, ForkJoinWorkerThread, and |
345 |
|
* |
346 |
|
* Style notes: There are lots of inline assignments (of form |
347 |
|
* "while ((local = field) != 0)") which are usually the simplest |
348 |
< |
* way to ensure read orderings. Also several occurrences of the |
349 |
< |
* unusual "do {} while(!cas...)" which is the simplest way to |
350 |
< |
* force an update of a CAS'ed variable. There are also other |
351 |
< |
* coding oddities that help some methods perform reasonably even |
352 |
< |
* when interpreted (not compiled), at the expense of messiness. |
348 |
> |
* way to ensure the required read orderings (which are sometimes |
349 |
> |
* critical). Also several occurrences of the unusual "do {} |
350 |
> |
* while(!cas...)" which is the simplest way to force an update of |
351 |
> |
* a CAS'ed variable. There are also other coding oddities that |
352 |
> |
* help some methods perform reasonably even when interpreted (not |
353 |
> |
* compiled), at the expense of some messy constructions that |
354 |
> |
* reduce byte code counts. |
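As a stand-alone illustration of the forced-update idiom (using AtomicInteger here instead of Unsafe):

    import java.util.concurrent.atomic.AtomicInteger;

    class CasIdiom {
        final AtomicInteger count = new AtomicInteger();

        // The empty loop body makes the retry explicit: the increment is
        // pushed through no matter how many racing updates intervene, and
        // the inline assignment re-reads the current value on each pass.
        void forceIncrement() {
            int c;
            do {} while (!count.compareAndSet(c = count.get(), c + 1));
        }
    }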
355 |
|
* |
356 |
|
* The order of declarations in this file is: (1) statics (2) |
357 |
|
* fields (along with constants used when unpacking some of them) |
419 |
|
new AtomicInteger(); |
420 |
|
|
421 |
|
/** |
422 |
< |
* Absolute bound for parallelism level. Twice this number must |
423 |
< |
* fit into a 16bit field to enable word-packing for some counts. |
422 |
> |
* Absolute bound for parallelism level. Twice this number plus |
423 |
> |
* one (i.e., 0xffff) must fit into a 16bit field to enable |
424 |
> |
* word-packing for some counts and indices. |
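* (Check: twice 0x7fff plus one is 0xffff, the largest value that
* fits in 16 bits.)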
425 |
|
*/ |
426 |
< |
private static final int MAX_THREADS = 0x7fff; |
426 |
> |
private static final int MAX_WORKERS = 0x7fff; |
427 |
|
|
428 |
|
/** |
429 |
|
* Array holding all worker threads in the pool. Array size must |
463 |
|
private volatile long stealCount; |
464 |
|
|
465 |
|
/** |
466 |
+ |
* The last nanoTime that a spare thread was trimmed |
467 |
+ |
*/ |
468 |
+ |
private volatile long trimTime; |
469 |
+ |
|
470 |
+ |
/** |
471 |
+ |
* The rate at which to trim unused spares |
472 |
+ |
*/ |
473 |
+ |
static final long UNUSED_SPARE_TRIM_RATE_NANOS = |
474 |
+ |
1000L * 1000L * 1000L; // 1 sec |
475 |
+ |
|
476 |
+ |
/** |
477 |
|
* Encoded record of top of Treiber stack of threads waiting for |
478 |
|
* events. The top 32 bits contain the count being waited for. The |
479 |
< |
* bottom word contains one plus the pool index of waiting worker |
480 |
< |
* thread. |
479 |
> |
* bottom 16 bits contain one plus the pool index of the waiting |
480 |
> |
* worker thread. (Bits 16-31 are unused.) |
481 |
|
*/ |
482 |
|
private volatile long eventWaiters; |
483 |
|
|
484 |
|
private static final int EVENT_COUNT_SHIFT = 32; |
485 |
< |
private static final long WAITER_ID_MASK = (1L << EVENT_COUNT_SHIFT)-1L; |
485 |
> |
private static final long WAITER_ID_MASK = (1L << 16) - 1L; |
486 |
|
|
487 |
|
/** |
488 |
|
* A counter for events that may wake up worker threads: |
489 |
|
* - Submission of a new task to the pool |
490 |
|
* - A worker pushing a task on an empty queue |
491 |
< |
* - termination and reconfiguration |
491 |
> |
* - termination |
492 |
|
*/ |
493 |
|
private volatile int eventCount; |
494 |
|
|
495 |
|
/** |
496 |
+ |
* Encoded record of top of Treiber stack of spare threads waiting |
497 |
+ |
* for resumption. The top 16 bits contain an arbitrary count to |
498 |
+ |
* avoid ABA effects. The bottom 16 bits contain one plus the pool |
499 |
+ |
* index of waiting worker thread. |
500 |
+ |
*/ |
501 |
+ |
private volatile int spareWaiters; |
502 |
+ |
|
503 |
+ |
private static final int SPARE_COUNT_SHIFT = 16; |
504 |
+ |
private static final int SPARE_ID_MASK = (1 << 16) - 1; |
505 |
+ |
|
506 |
+ |
/** |
507 |
|
* Lifecycle control. The low word contains the number of workers |
508 |
|
* that are (probably) executing tasks. This value is atomically |
509 |
|
* incremented before a worker gets a task to run, and decremented |
532 |
|
* making decisions about creating and suspending spare |
533 |
|
* threads. Updated only by CAS. Note that adding a new worker |
534 |
|
* requires incrementing both counts, since workers start off in |
535 |
< |
* running state. This field is also used for memory-fencing |
501 |
< |
* configuration parameters. |
535 |
> |
* running state. |
536 |
|
*/ |
537 |
|
private volatile int workerCounts; |
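As a stand-alone sketch of this word-packing scheme (the shift and mask values are assumed from the 16-bit layout described above; the real field is updated through Unsafe CAS rather than AtomicInteger):

    import java.util.concurrent.atomic.AtomicInteger;

    class PackedCounts {
        static final int TOTAL_COUNT_SHIFT = 16;
        static final int RUNNING_COUNT_MASK = (1 << 16) - 1;
        static final int ONE_RUNNING = 1;
        static final int ONE_TOTAL = 1 << TOTAL_COUNT_SHIFT;

        final AtomicInteger workerCounts = new AtomicInteger();

        // Adding a worker bumps both halves under a single CAS, since
        // workers start off in running state.
        void addWorkerCounts() {
            int wc;
            do {} while (!workerCounts.compareAndSet(
                wc = workerCounts.get(), wc + (ONE_RUNNING | ONE_TOTAL)));
        }

        int running() { return workerCounts.get() & RUNNING_COUNT_MASK; }
        int total()   { return workerCounts.get() >>> TOTAL_COUNT_SHIFT; }
    }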
538 |
|
|
564 |
|
*/ |
565 |
|
private final int poolNumber; |
566 |
|
|
567 |
+ |
|
568 |
|
// Utilities for CASing fields. Note that several of these |
569 |
|
// are manually inlined by callers |
570 |
|
|
571 |
|
/** |
572 |
< |
* Increments running count. Also used by ForkJoinTask. |
572 |
> |
* Increments running count part of workerCounts |
573 |
|
*/ |
574 |
|
final void incrementRunningCount() { |
575 |
|
int c; |
590 |
|
} |
591 |
|
|
592 |
|
/** |
593 |
< |
* Tries to increment running count |
593 |
> |
* Forces decrement of encoded workerCounts, awaiting nonzero if |
594 |
> |
* (rarely) necessary when other count updates lag. |
595 |
> |
* |
596 |
> |
* @param dr -- either zero or ONE_RUNNING |
597 |
> |
* @param dt -- either zero or ONE_TOTAL |
598 |
|
*/ |
599 |
< |
final boolean tryIncrementRunningCount() { |
600 |
< |
int wc; |
601 |
< |
return UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
602 |
< |
wc = workerCounts, wc + ONE_RUNNING); |
599 |
> |
private void decrementWorkerCounts(int dr, int dt) { |
600 |
> |
for (;;) { |
601 |
> |
int wc = workerCounts; |
602 |
> |
if (wc == 0 && (runState & TERMINATED) != 0) |
603 |
> |
return; // lagging termination on a backout |
604 |
> |
if ((wc & RUNNING_COUNT_MASK) - dr < 0 || |
605 |
> |
(wc >>> TOTAL_COUNT_SHIFT) - dt < 0) |
606 |
> |
Thread.yield(); |
607 |
> |
if (UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
608 |
> |
wc, wc - (dr + dt))) |
609 |
> |
return; |
610 |
> |
} |
611 |
> |
} |
612 |
> |
|
613 |
> |
/** |
614 |
> |
* Increments event count |
615 |
> |
*/ |
616 |
> |
private void advanceEventCount() { |
617 |
> |
int c; |
618 |
> |
do {} while (!UNSAFE.compareAndSwapInt(this, eventCountOffset, |
619 |
> |
c = eventCount, c+1)); |
620 |
|
} |
621 |
|
|
622 |
|
/** |
667 |
|
lock.lock(); |
668 |
|
try { |
669 |
|
ForkJoinWorkerThread[] ws = workers; |
670 |
< |
int nws = ws.length; |
671 |
< |
if (k < 0 || k >= nws || ws[k] != null) { |
672 |
< |
for (k = 0; k < nws && ws[k] != null; ++k) |
670 |
> |
int n = ws.length; |
671 |
> |
if (k < 0 || k >= n || ws[k] != null) { |
672 |
> |
for (k = 0; k < n && ws[k] != null; ++k) |
673 |
|
; |
674 |
< |
if (k == nws) |
675 |
< |
ws = Arrays.copyOf(ws, nws << 1); |
674 |
> |
if (k == n) |
675 |
> |
ws = Arrays.copyOf(ws, n << 1); |
676 |
|
} |
677 |
|
ws[k] = w; |
678 |
|
workers = ws; // volatile array write ensures slot visibility |
705 |
|
* Tries to create and add new worker. Assumes that worker counts |
706 |
|
* are already updated to accommodate the worker, so adjusts on |
707 |
|
* failure. |
652 |
– |
* |
653 |
– |
* @return new worker or null if creation failed |
708 |
|
*/ |
709 |
< |
private ForkJoinWorkerThread addWorker() { |
709 |
> |
private void addWorker() { |
710 |
|
ForkJoinWorkerThread w = null; |
711 |
|
try { |
712 |
|
w = factory.newThread(this); |
713 |
|
} finally { // Adjust on either null or exceptional factory return |
714 |
|
if (w == null) { |
715 |
< |
onWorkerCreationFailure(); |
716 |
< |
return null; |
715 |
> |
decrementWorkerCounts(ONE_RUNNING, ONE_TOTAL); |
716 |
> |
tryTerminate(false); // in case of failure during shutdown |
717 |
|
} |
718 |
|
} |
719 |
< |
w.start(recordWorker(w), ueh); |
720 |
< |
return w; |
667 |
< |
} |
668 |
< |
|
669 |
< |
/** |
670 |
< |
* Adjusts counts upon failure to create worker |
671 |
< |
*/ |
672 |
< |
private void onWorkerCreationFailure() { |
673 |
< |
for (;;) { |
674 |
< |
int wc = workerCounts; |
675 |
< |
if ((wc >>> TOTAL_COUNT_SHIFT) == 0) |
676 |
< |
Thread.yield(); // wait for other counts to settle |
677 |
< |
else if (UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
678 |
< |
wc - (ONE_RUNNING|ONE_TOTAL))) |
679 |
< |
break; |
680 |
< |
} |
681 |
< |
tryTerminate(false); // in case of failure during shutdown |
682 |
< |
} |
683 |
< |
|
684 |
< |
/** |
685 |
< |
* Creates and/or resumes enough workers to establish target |
686 |
< |
* parallelism, giving up if terminating or addWorker fails |
687 |
< |
* |
688 |
< |
* TODO: recast this to support lazier creation and automated |
689 |
< |
* parallelism maintenance |
690 |
< |
*/ |
691 |
< |
private void ensureEnoughWorkers() { |
692 |
< |
while ((runState & TERMINATING) == 0) { |
693 |
< |
int pc = parallelism; |
694 |
< |
int wc = workerCounts; |
695 |
< |
int rc = wc & RUNNING_COUNT_MASK; |
696 |
< |
int tc = wc >>> TOTAL_COUNT_SHIFT; |
697 |
< |
if (tc < pc) { |
698 |
< |
if (UNSAFE.compareAndSwapInt |
699 |
< |
(this, workerCountsOffset, |
700 |
< |
wc, wc + (ONE_RUNNING|ONE_TOTAL)) && |
701 |
< |
addWorker() == null) |
702 |
< |
break; |
703 |
< |
} |
704 |
< |
else if (tc > pc && rc < pc && |
705 |
< |
tc > (runState & ACTIVE_COUNT_MASK)) { |
706 |
< |
ForkJoinWorkerThread spare = null; |
707 |
< |
ForkJoinWorkerThread[] ws = workers; |
708 |
< |
int nws = ws.length; |
709 |
< |
for (int i = 0; i < nws; ++i) { |
710 |
< |
ForkJoinWorkerThread w = ws[i]; |
711 |
< |
if (w != null && w.isSuspended()) { |
712 |
< |
if ((workerCounts & RUNNING_COUNT_MASK) > pc) |
713 |
< |
return; |
714 |
< |
if (w.tryResumeSpare()) |
715 |
< |
incrementRunningCount(); |
716 |
< |
break; |
717 |
< |
} |
718 |
< |
} |
719 |
< |
} |
720 |
< |
else |
721 |
< |
break; |
722 |
< |
} |
719 |
> |
if (w != null) |
720 |
> |
w.start(recordWorker(w), ueh); |
721 |
|
} |
722 |
|
|
723 |
|
/** |
724 |
|
* Final callback from terminating worker. Removes record of |
725 |
|
* worker from array, and adjusts counts. If pool is shutting |
726 |
< |
* down, tries to complete terminatation, else possibly replaces |
729 |
< |
* the worker. |
726 |
> |
* down, tries to complete termination. |
727 |
|
* |
728 |
|
* @param w the worker |
729 |
|
*/ |
730 |
|
final void workerTerminated(ForkJoinWorkerThread w) { |
734 |
– |
if (w.active) { // force inactive |
735 |
– |
w.active = false; |
736 |
– |
do {} while (!tryDecrementActiveCount()); |
737 |
– |
} |
731 |
|
forgetWorker(w); |
732 |
< |
|
733 |
< |
// Decrement total count, and if was running, running count |
734 |
< |
// Spin (waiting for other updates) if either would be negative |
735 |
< |
int nr = w.isTrimmed() ? 0 : ONE_RUNNING; |
743 |
< |
int unit = ONE_TOTAL + nr; |
744 |
< |
for (;;) { |
745 |
< |
int wc = workerCounts; |
746 |
< |
int rc = wc & RUNNING_COUNT_MASK; |
747 |
< |
if (rc - nr < 0 || (wc >>> TOTAL_COUNT_SHIFT) == 0) |
748 |
< |
Thread.yield(); // back off if waiting for other updates |
749 |
< |
else if (UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
750 |
< |
wc, wc - unit)) |
751 |
< |
break; |
752 |
< |
} |
753 |
< |
|
754 |
< |
accumulateStealCount(w); // collect final count |
755 |
< |
if (!tryTerminate(false)) |
756 |
< |
ensureEnoughWorkers(); |
732 |
> |
decrementWorkerCounts(w.isTrimmed() ? 0 : ONE_RUNNING, ONE_TOTAL); |
733 |
> |
while (w.stealCount != 0) // collect final count |
734 |
> |
tryAccumulateStealCount(w); |
735 |
> |
tryTerminate(false); |
736 |
|
} |
737 |
|
|
738 |
|
// Waiting for and signalling events |
739 |
|
|
740 |
|
/** |
741 |
|
* Releases workers blocked on a count not equal to current count. |
742 |
< |
* @return true if any released |
742 |
> |
* Normally called after precheck that eventWaiters isn't zero to |
743 |
> |
* avoid wasted array checks. |
744 |
> |
* |
745 |
> |
* @param signalling true if caller is a signalling worker so can |
746 |
> |
* exit upon (conservatively) detected contention by other threads |
747 |
> |
* who will continue to release |
748 |
|
*/ |
749 |
< |
private void releaseWaiters() { |
750 |
< |
long top; |
751 |
< |
while ((top = eventWaiters) != 0L) { |
752 |
< |
ForkJoinWorkerThread[] ws = workers; |
753 |
< |
int n = ws.length; |
754 |
< |
for (;;) { |
755 |
< |
int i = ((int)(top & WAITER_ID_MASK)) - 1; |
756 |
< |
if (i < 0 || (int)(top >>> EVENT_COUNT_SHIFT) == eventCount) |
757 |
< |
return; |
758 |
< |
ForkJoinWorkerThread w; |
759 |
< |
if (i < n && (w = ws[i]) != null && |
760 |
< |
UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
761 |
< |
top, w.nextWaiter)) { |
778 |
< |
LockSupport.unpark(w); |
779 |
< |
top = eventWaiters; |
780 |
< |
} |
781 |
< |
else |
782 |
< |
break; // possibly stale; reread |
783 |
< |
} |
749 |
> |
private void releaseEventWaiters(boolean signalling) { |
750 |
> |
ForkJoinWorkerThread[] ws = workers; |
751 |
> |
int n = ws.length; |
752 |
> |
long h; // head of stack |
753 |
> |
ForkJoinWorkerThread w; int id, ec; |
754 |
> |
while ((id = ((int)((h = eventWaiters) & WAITER_ID_MASK)) - 1) >= 0 && |
755 |
> |
(int)(h >>> EVENT_COUNT_SHIFT) != (ec = eventCount) && |
756 |
> |
id < n && (w = ws[id]) != null) { |
757 |
> |
if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
758 |
> |
h, h = w.nextWaiter)) |
759 |
> |
LockSupport.unpark(w); |
760 |
> |
if (signalling && (eventCount != ec || eventWaiters != h)) |
761 |
> |
break; |
762 |
|
} |
763 |
|
} |
764 |
|
|
765 |
|
/** |
766 |
< |
* Ensures eventCount on exit is different (mod 2^32) than on |
767 |
< |
* entry and wakes up all waiters |
766 |
> |
* Tries to advance eventCount and releases waiters. Called only |
767 |
> |
* from workers. |
768 |
|
*/ |
769 |
< |
private void signalEvent() { |
770 |
< |
int c; |
771 |
< |
do {} while (!UNSAFE.compareAndSwapInt(this, eventCountOffset, |
772 |
< |
c = eventCount, c+1)); |
773 |
< |
releaseWaiters(); |
769 |
> |
final void signalWork() { |
770 |
> |
int c; // try to increment event count -- CAS failure OK |
771 |
> |
UNSAFE.compareAndSwapInt(this, eventCountOffset, c = eventCount, c+1); |
772 |
> |
if (eventWaiters != 0L) |
773 |
> |
releaseEventWaiters(true); |
774 |
|
} |
775 |
|
|
776 |
|
/** |
777 |
< |
* Advances eventCount and releases waiters until interference by |
778 |
< |
* other releasing threads is detected. |
777 |
> |
* Blocks worker until terminating or event count |
778 |
> |
* advances from last value held by worker |
779 |
> |
* |
780 |
> |
* @param w the calling worker thread |
781 |
|
*/ |
782 |
< |
final void signalWork() { |
783 |
< |
int c; |
784 |
< |
UNSAFE.compareAndSwapInt(this, eventCountOffset, c=eventCount, c+1); |
785 |
< |
long top; |
786 |
< |
while ((top = eventWaiters) != 0L) { |
787 |
< |
int ec = eventCount; |
788 |
< |
ForkJoinWorkerThread[] ws = workers; |
789 |
< |
int n = ws.length; |
790 |
< |
for (;;) { |
791 |
< |
int i = ((int)(top & WAITER_ID_MASK)) - 1; |
792 |
< |
if (i < 0 || (int)(top >>> EVENT_COUNT_SHIFT) == ec) |
793 |
< |
return; |
794 |
< |
ForkJoinWorkerThread w; |
795 |
< |
if (i < n && (w = ws[i]) != null && |
796 |
< |
UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
797 |
< |
top, top = w.nextWaiter)) { |
798 |
< |
LockSupport.unpark(w); |
819 |
< |
if (top != eventWaiters) // let someone else take over |
820 |
< |
return; |
782 |
> |
private void eventSync(ForkJoinWorkerThread w) { |
783 |
> |
int wec = w.lastEventCount; |
784 |
> |
long nh = (((long)wec) << EVENT_COUNT_SHIFT) | ((long)(w.poolIndex+1)); |
785 |
> |
long h; |
786 |
> |
while ((runState < SHUTDOWN || !tryTerminate(false)) && |
787 |
> |
((h = eventWaiters) == 0L || |
788 |
> |
(int)(h >>> EVENT_COUNT_SHIFT) == wec) && |
789 |
> |
eventCount == wec) { |
790 |
> |
if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
791 |
> |
w.nextWaiter = h, nh)) { |
792 |
> |
while (runState < TERMINATING && eventCount == wec) { |
793 |
> |
if (!tryAccumulateStealCount(w)) // transfer while idle |
794 |
> |
continue; |
795 |
> |
Thread.interrupted(); // clear/ignore interrupt |
796 |
> |
if (eventCount != wec) |
797 |
> |
break; |
798 |
> |
LockSupport.park(w); |
799 |
|
} |
800 |
< |
else |
823 |
< |
break; // possibly stale; reread |
800 |
> |
break; |
801 |
|
} |
802 |
|
} |
803 |
+ |
w.lastEventCount = eventCount; |
804 |
|
} |
805 |
|
|
806 |
+ |
// Maintaining spares |
807 |
+ |
|
808 |
|
/** |
809 |
< |
* If worker is inactive, blocks until terminating or event count |
830 |
< |
* advances from last value held by worker; in any case helps |
831 |
< |
* release others. |
832 |
< |
* |
833 |
< |
* @param w the calling worker thread |
834 |
< |
* @param retries the number of scans by caller failing to find work |
835 |
< |
* @return false if now too many threads running |
809 |
> |
* Pushes worker onto the spare stack |
810 |
|
*/ |
811 |
< |
private boolean eventSync(ForkJoinWorkerThread w, int retries) { |
812 |
< |
int wec = w.lastEventCount; |
813 |
< |
if (retries > 1) { // can only block after 2nd miss |
814 |
< |
long nextTop = (((long)wec << EVENT_COUNT_SHIFT) | |
815 |
< |
((long)(w.poolIndex + 1))); |
816 |
< |
long top; |
817 |
< |
while ((runState < SHUTDOWN || !tryTerminate(false)) && |
818 |
< |
(((int)(top = eventWaiters) & WAITER_ID_MASK) == 0 || |
819 |
< |
(int)(top >>> EVENT_COUNT_SHIFT) == wec) && |
820 |
< |
eventCount == wec) { |
821 |
< |
if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
822 |
< |
w.nextWaiter = top, nextTop)) { |
823 |
< |
accumulateStealCount(w); // transfer steals while idle |
824 |
< |
Thread.interrupted(); // clear/ignore interrupt |
825 |
< |
while (eventCount == wec) |
826 |
< |
w.doPark(); |
827 |
< |
break; |
828 |
< |
} |
811 |
> |
final void pushSpare(ForkJoinWorkerThread w) { |
812 |
> |
int ns = (++w.spareCount << SPARE_COUNT_SHIFT) | (w.poolIndex+1); |
813 |
> |
do {} while (!UNSAFE.compareAndSwapInt(this, spareWaitersOffset, |
814 |
> |
w.nextSpare = spareWaiters, ns)); |
815 |
> |
} |
816 |
> |
|
817 |
> |
/** |
818 |
> |
* Tries (once) to resume a spare if running count is less than |
819 |
> |
* target parallelism. Fails on contention or stale workers. |
820 |
> |
*/ |
821 |
> |
private void tryResumeSpare() { |
822 |
> |
int sw, id; |
823 |
> |
ForkJoinWorkerThread w; |
824 |
> |
ForkJoinWorkerThread[] ws; |
825 |
> |
if ((id = ((sw = spareWaiters) & SPARE_ID_MASK) - 1) >= 0 && |
826 |
> |
id < (ws = workers).length && (w = ws[id]) != null && |
827 |
> |
(workerCounts & RUNNING_COUNT_MASK) < parallelism && |
828 |
> |
eventWaiters == 0L && |
829 |
> |
spareWaiters == sw && |
830 |
> |
UNSAFE.compareAndSwapInt(this, spareWaitersOffset, |
831 |
> |
sw, w.nextSpare) && |
832 |
> |
w.tryUnsuspend()) { |
833 |
> |
int c; // try increment; if contended, finish after unpark |
834 |
> |
boolean inc = UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
835 |
> |
c = workerCounts, |
836 |
> |
c + ONE_RUNNING); |
837 |
> |
LockSupport.unpark(w); |
838 |
> |
if (!inc) { |
839 |
> |
do {} while (!UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
840 |
> |
c = workerCounts, |
841 |
> |
c + ONE_RUNNING)); |
842 |
|
} |
856 |
– |
wec = eventCount; |
843 |
|
} |
844 |
< |
releaseWaiters(); |
844 |
> |
} |
845 |
> |
|
846 |
> |
/** |
847 |
> |
* Callback from oldest spare occasionally waking up. Tries |
848 |
> |
* (once) to shut down a spare if more than 25% spare overage, or |
849 |
> |
* if UNUSED_SPARE_TRIM_RATE_NANOS has elapsed and there are at |
850 |
> |
* least #parallelism running threads. Note that we don't need CAS |
851 |
> |
* or locks here because the method is called only from the oldest |
852 |
> |
* suspended spare occasionally waking (and even misfires are OK). |
853 |
> |
* |
854 |
> |
* @param now the wake up nanoTime of caller |
855 |
> |
*/ |
856 |
> |
final void tryTrimSpare(long now) { |
857 |
> |
long lastTrim = trimTime; |
858 |
> |
trimTime = now; |
859 |
> |
helpMaintainParallelism(); // first, help wake up any needed spares |
860 |
> |
int sw, id; |
861 |
> |
ForkJoinWorkerThread w; |
862 |
> |
ForkJoinWorkerThread[] ws; |
863 |
> |
int pc = parallelism; |
864 |
|
int wc = workerCounts; |
865 |
< |
if ((wc & RUNNING_COUNT_MASK) <= parallelism) { |
866 |
< |
w.lastEventCount = wec; |
867 |
< |
return true; |
865 |
> |
if ((wc & RUNNING_COUNT_MASK) >= pc && |
866 |
> |
(((wc >>> TOTAL_COUNT_SHIFT) - pc) > (pc >>> 2) + 1 || // approx 25% |
867 |
> |
now - lastTrim >= UNUSED_SPARE_TRIM_RATE_NANOS) && |
868 |
> |
(id = ((sw = spareWaiters) & SPARE_ID_MASK) - 1) >= 0 && |
869 |
> |
id < (ws = workers).length && (w = ws[id]) != null && |
870 |
> |
UNSAFE.compareAndSwapInt(this, spareWaitersOffset, |
871 |
> |
sw, w.nextSpare)) |
872 |
> |
w.shutdown(false); |
873 |
> |
} |
874 |
> |
|
875 |
> |
/** |
876 |
> |
* Does at most one of: |
877 |
> |
* |
878 |
> |
* 1. Help wake up existing workers waiting for work via |
879 |
> |
* releaseEventWaiters. (If any exist, then it probably doesn't |
880 |
> |
* matter right now if under target parallelism level.) |
881 |
> |
* |
882 |
> |
* 2. If below parallelism level and a spare exists, try (once) |
883 |
> |
* to resume it via tryResumeSpare. |
884 |
> |
* |
885 |
> |
* 3. If neither of the above, tries (once) to add a new |
886 |
> |
* worker if either there are not enough total, or if all |
887 |
> |
* existing workers are busy, there are either no running |
888 |
> |
* workers or the deficit is at least twice the surplus. |
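* For example, with pc = 8 and tc = 10 (surplus 2), case 3 adds a
* worker only while rc < 8 - ((10 - 8) << 1) = 4; by tc = 12 the
* bound reaches 0, and only a total stall (rc == 0) forces creation.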
889 |
> |
*/ |
890 |
> |
private void helpMaintainParallelism() { |
891 |
> |
// uglified to work better when not compiled |
892 |
> |
int pc, wc, rc, tc, rs; long h; |
893 |
> |
if ((h = eventWaiters) != 0L) { |
894 |
> |
if ((int)(h >>> EVENT_COUNT_SHIFT) != eventCount) |
895 |
> |
releaseEventWaiters(false); // avoid useless call |
896 |
> |
} |
897 |
> |
else if ((pc = parallelism) > |
898 |
> |
(rc = ((wc = workerCounts) & RUNNING_COUNT_MASK))) { |
899 |
> |
if (spareWaiters != 0) |
900 |
> |
tryResumeSpare(); |
901 |
> |
else if ((rs = runState) < TERMINATING && |
902 |
> |
((tc = wc >>> TOTAL_COUNT_SHIFT) < pc || |
903 |
> |
(tc == (rs & ACTIVE_COUNT_MASK) && // all busy |
904 |
> |
(rc == 0 || // must add |
905 |
> |
rc < pc - ((tc - pc) << 1)) && // within slack |
906 |
> |
tc < MAX_WORKERS && runState == rs)) && // recheck busy |
907 |
> |
workerCounts == wc && |
908 |
> |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
909 |
> |
wc + (ONE_RUNNING|ONE_TOTAL))) |
910 |
> |
addWorker(); |
911 |
|
} |
864 |
– |
if (wec != w.lastEventCount) // back up if may re-wait |
865 |
– |
w.lastEventCount = wec - (wc >>> TOTAL_COUNT_SHIFT); |
866 |
– |
return false; |
912 |
|
} |
913 |
|
|
914 |
|
/** |
915 |
|
* Callback from workers invoked upon each top-level action (i.e., |
916 |
|
* stealing a task or taking a submission and running |
917 |
< |
* it). Performs one or both of the following: |
917 |
> |
* it). Performs one or more of the following: |
918 |
> |
* |
919 |
> |
* 1. If the worker cannot find work (misses > 0), updates its |
920 |
> |
* active status to inactive and updates activeCount unless |
921 |
> |
* this is the first miss and there is contention, in which |
922 |
> |
* case it may try again (either in this or a subsequent |
923 |
> |
* call). |
924 |
> |
* |
925 |
> |
* 2. If there are at least 2 misses, awaits the next task event |
926 |
> |
* via eventSync |
927 |
> |
* |
928 |
> |
* 3. If there are too many running threads, suspends this worker |
929 |
> |
* (first forcing inactivation if necessary). If it is not |
930 |
> |
* needed, it may be killed while suspended via |
931 |
> |
* tryTrimSpare. Otherwise, upon resume it rechecks to make |
932 |
> |
* sure that it is still needed. |
933 |
|
* |
934 |
< |
* * If the worker cannot find work, updates its active status to |
935 |
< |
* inactive and updates activeCount unless there is contention, in |
876 |
< |
* which case it may try again (either in this or a subsequent |
877 |
< |
* call). Additionally, awaits the next task event and/or helps |
878 |
< |
* wake up other releasable waiters. |
879 |
< |
* |
880 |
< |
* * If there are too many running threads, suspends this worker |
881 |
< |
* (first forcing inactivation if necessary). If it is not |
882 |
< |
* resumed before a keepAlive elapses, the worker may be "trimmed" |
883 |
< |
* -- killed while suspended within suspendAsSpare. Otherwise, |
884 |
< |
* upon resume it rechecks to make sure that it is still needed. |
934 |
> |
* 4. Helps release and/or reactivate other workers via |
935 |
> |
* helpMaintainParallelism |
936 |
|
* |
937 |
|
* @param w the worker |
938 |
< |
* @param retries the number of scans by caller failing to find work |
939 |
< |
* find any (in which case it may block waiting for work). |
938 |
> |
* @param misses the number of scans by caller failing to find work |
939 |
> |
* (saturating at 2 just to avoid wraparound) |
940 |
|
*/ |
941 |
< |
final void preStep(ForkJoinWorkerThread w, int retries) { |
941 |
> |
final void preStep(ForkJoinWorkerThread w, int misses) { |
942 |
|
boolean active = w.active; |
943 |
< |
boolean inactivate = active && retries != 0; |
943 |
> |
int pc = parallelism; |
944 |
|
for (;;) { |
945 |
< |
int rs, wc; |
946 |
< |
if (inactivate && |
947 |
< |
UNSAFE.compareAndSwapInt(this, runStateOffset, |
948 |
< |
rs = runState, rs - ONE_ACTIVE)) |
949 |
< |
inactivate = active = w.active = false; |
950 |
< |
if (((wc = workerCounts) & RUNNING_COUNT_MASK) <= parallelism) { |
951 |
< |
if (active || eventSync(w, retries)) |
945 |
> |
int wc = workerCounts; |
946 |
> |
int rc = wc & RUNNING_COUNT_MASK; |
947 |
> |
if (active && (misses > 0 || rc > pc)) { |
948 |
> |
int rs; // try inactivate |
949 |
> |
if (UNSAFE.compareAndSwapInt(this, runStateOffset, |
950 |
> |
rs = runState, rs - ONE_ACTIVE)) |
951 |
> |
active = w.active = false; |
952 |
> |
else if (misses > 1 || rc > pc || |
953 |
> |
(rs & ACTIVE_COUNT_MASK) >= pc) |
954 |
> |
continue; // force inactivate |
955 |
> |
} |
956 |
> |
if (misses > 1) { |
957 |
> |
misses = 0; // don't re-sync |
958 |
> |
eventSync(w); // continue loop to recheck rc |
959 |
> |
} |
960 |
> |
else if (rc > pc) { |
961 |
> |
if (workerCounts == wc && // try to suspend as spare |
962 |
> |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
963 |
> |
wc, wc - ONE_RUNNING) && |
964 |
> |
!w.suspendAsSpare()) // false if killed |
965 |
|
break; |
966 |
|
} |
967 |
< |
else if (!(inactivate |= active) && // must inactivate to suspend |
968 |
< |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
969 |
< |
wc, wc - ONE_RUNNING) && |
906 |
< |
!w.suspendAsSpare()) // false if trimmed |
967 |
> |
else { |
968 |
> |
if (rc < pc || eventWaiters != 0L) |
969 |
> |
helpMaintainParallelism(); |
970 |
|
break; |
971 |
+ |
} |
972 |
|
} |
973 |
|
} |
974 |
|
|
975 |
|
/** |
976 |
< |
* Awaits join of the given task if enough threads, or can resume |
977 |
< |
* or create a spare. Fails (in which case the given task might |
978 |
< |
* not be done) upon contention or lack of decision about |
979 |
< |
* blocking. Returns void because caller must check |
980 |
< |
* task status on return anyway. |
917 |
< |
* |
918 |
< |
* We allow blocking if: |
919 |
< |
* |
920 |
< |
* 1. There would still be at least as many running threads as |
921 |
< |
* parallelism level if this thread blocks. |
922 |
< |
* |
923 |
< |
* 2. A spare is resumed to replace this worker. We tolerate |
924 |
< |
* slop in the decision to replace if a spare is found without |
925 |
< |
* first decrementing run count. This may release too many, |
926 |
< |
* but if so, the superfluous ones will re-suspend via |
927 |
< |
* preStep(). |
928 |
< |
* |
929 |
< |
* 3. After #spares repeated checks, there are no fewer than #spare |
930 |
< |
* threads not running. We allow this slack to avoid hysteresis |
931 |
< |
* and as a hedge against lag/uncertainty of running count |
932 |
< |
* estimates when signalling or unblocking stalls. |
933 |
< |
* |
934 |
< |
* 4. All existing workers are busy (as rechecked via repeated |
935 |
< |
* retries by caller) and a new spare is created. |
936 |
< |
* |
937 |
< |
* If none of the above hold, we try to escape out by |
938 |
< |
* re-incrementing count and returning to caller, which can retry |
939 |
< |
* later. |
976 |
> |
* Helps and/or blocks awaiting join of the given task. |
977 |
> |
* Alternates between helpJoinTask() and helpMaintainParallelism() |
978 |
> |
* as many times as there is a deficit in running count (or longer |
979 |
> |
* if running count would become zero), then blocks if task still |
980 |
> |
* not done. |
981 |
|
* |
982 |
|
* @param joinMe the task to join |
942 |
– |
* @param retries if negative, then serve only as a precheck |
943 |
– |
* that the thread can be replaced by a spare. Otherwise, |
944 |
– |
* the number of repeated calls to this method returning busy |
945 |
– |
* @return true if the call must be retried because there |
946 |
– |
* none of the blocking checks hold |
983 |
|
*/ |
984 |
< |
final boolean tryAwaitJoin(ForkJoinTask<?> joinMe, int retries) { |
985 |
< |
if (joinMe.status < 0) // precheck for cancellation |
986 |
< |
return false; |
987 |
< |
if ((runState & TERMINATING) != 0) { // shutting down |
988 |
< |
joinMe.cancelIgnoringExceptions(); |
989 |
< |
return false; |
990 |
< |
} |
991 |
< |
|
992 |
< |
int pc = parallelism; |
993 |
< |
boolean running = true; // false when running count decremented |
994 |
< |
outer:for (;;) { |
995 |
< |
int wc = workerCounts; |
996 |
< |
int rc = wc & RUNNING_COUNT_MASK; |
961 |
< |
int tc = wc >>> TOTAL_COUNT_SHIFT; |
962 |
< |
if (running) { // replace with spare or decrement count |
963 |
< |
if (rc <= pc && tc > pc && |
964 |
< |
(retries > 0 || tc > (runState & ACTIVE_COUNT_MASK))) { |
965 |
< |
ForkJoinWorkerThread[] ws = workers; |
966 |
< |
int nws = ws.length; |
967 |
< |
for (int i = 0; i < nws; ++i) { // search for spare |
968 |
< |
ForkJoinWorkerThread w = ws[i]; |
969 |
< |
if (w != null) { |
970 |
< |
if (joinMe.status < 0) |
971 |
< |
return false; |
972 |
< |
if (w.isSuspended()) { |
973 |
< |
if ((workerCounts & RUNNING_COUNT_MASK)>=pc && |
974 |
< |
w.tryResumeSpare()) { |
975 |
< |
running = false; |
976 |
< |
break outer; |
977 |
< |
} |
978 |
< |
continue outer; // rescan |
979 |
< |
} |
980 |
< |
} |
981 |
< |
} |
982 |
< |
} |
983 |
< |
if (retries < 0 || // < 0 means replacement check only |
984 |
< |
rc == 0 || joinMe.status < 0 || workerCounts != wc || |
985 |
< |
!UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
986 |
< |
wc, wc - ONE_RUNNING)) |
987 |
< |
return false; // done or inconsistent or contended |
988 |
< |
running = false; |
989 |
< |
if (rc > pc) |
990 |
< |
break; |
984 |
> |
final void awaitJoin(ForkJoinTask<?> joinMe, ForkJoinWorkerThread worker) { |
985 |
> |
int threshold = parallelism; // descend blocking thresholds |
986 |
> |
while (joinMe.status >= 0) { |
987 |
> |
boolean block; int wc; |
988 |
> |
worker.helpJoinTask(joinMe); |
989 |
> |
if (joinMe.status < 0) |
990 |
> |
break; |
991 |
> |
if (((wc = workerCounts) & RUNNING_COUNT_MASK) <= threshold) { |
992 |
> |
if (threshold > 0) |
993 |
> |
--threshold; |
994 |
> |
else |
995 |
> |
advanceEventCount(); // force release |
996 |
> |
block = false; |
997 |
|
} |
998 |
< |
else { // allow blocking if enough threads |
999 |
< |
if (rc >= pc || joinMe.status < 0) |
1000 |
< |
break; |
1001 |
< |
int sc = tc - pc + 1; // = spare threads, plus the one to add |
1002 |
< |
if (retries > sc) { |
1003 |
< |
if (rc > 0 && rc >= pc - sc) // allow slack |
1004 |
< |
break; |
1005 |
< |
if (tc < MAX_THREADS && |
1006 |
< |
tc == (runState & ACTIVE_COUNT_MASK) && |
1007 |
< |
workerCounts == wc && |
1008 |
< |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
1003 |
< |
wc+(ONE_RUNNING|ONE_TOTAL))) { |
1004 |
< |
addWorker(); |
1005 |
< |
break; |
1006 |
< |
} |
1007 |
< |
} |
1008 |
< |
if (workerCounts == wc && // back out to allow rescan |
1009 |
< |
UNSAFE.compareAndSwapInt (this, workerCountsOffset, |
1010 |
< |
wc, wc + ONE_RUNNING)) { |
1011 |
< |
releaseWaiters(); // help others progress |
1012 |
< |
return true; // let caller retry |
1013 |
< |
} |
998 |
> |
else |
999 |
> |
block = UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1000 |
> |
wc, wc - ONE_RUNNING); |
1001 |
> |
helpMaintainParallelism(); |
1002 |
> |
if (block) { |
1003 |
> |
int c; |
1004 |
> |
joinMe.internalAwaitDone(); |
1005 |
> |
do {} while (!UNSAFE.compareAndSwapInt |
1006 |
> |
(this, workerCountsOffset, |
1007 |
> |
c = workerCounts, c + ONE_RUNNING)); |
1008 |
> |
break; |
1009 |
|
} |
1010 |
|
} |
1016 |
– |
// arrive here if can block |
1017 |
– |
joinMe.internalAwaitDone(); |
1018 |
– |
int c; // to inline incrementRunningCount |
1019 |
– |
do {} while (!UNSAFE.compareAndSwapInt |
1020 |
– |
(this, workerCountsOffset, |
1021 |
– |
c = workerCounts, c + ONE_RUNNING)); |
1022 |
– |
return false; |
1011 |
|
} |
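// Worked example of the descending-threshold policy above: with
// parallelism 4 and the running count stuck at 2, awaitJoin helps while
// threshold falls 4, 3, 2; at threshold 1 the running count exceeds it,
// so the worker gives up its running slot and blocks. If the running
// count were 0 it would never block: threshold bottoms out at 0 and each
// pass then forces releases via advanceEventCount while helping.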
1012 |
|
|
1013 |
|
/** |
1014 |
< |
* Same idea as (and shares many code snippets with) tryAwaitJoin, |
1027 |
< |
* but self-contained because there are no caller retries. |
1028 |
< |
* TODO: Rework to use simpler API. |
1014 |
> |
* Same idea as awaitJoin, but no helping |
1015 |
|
*/ |
1016 |
|
final void awaitBlocker(ManagedBlocker blocker) |
1017 |
|
throws InterruptedException { |
1018 |
< |
boolean done; |
1019 |
< |
if (done = blocker.isReleasable()) |
1020 |
< |
return; |
1021 |
< |
int pc = parallelism; |
1022 |
< |
int retries = 0; |
1023 |
< |
boolean running = true; // false when running count decremented |
1024 |
< |
outer:for (;;) { |
1025 |
< |
int wc = workerCounts; |
1026 |
< |
int rc = wc & RUNNING_COUNT_MASK; |
1041 |
< |
int tc = wc >>> TOTAL_COUNT_SHIFT; |
1042 |
< |
if (running) { |
1043 |
< |
if (rc <= pc && tc > pc && |
1044 |
< |
(retries > 0 || tc > (runState & ACTIVE_COUNT_MASK))) { |
1045 |
< |
ForkJoinWorkerThread[] ws = workers; |
1046 |
< |
int nws = ws.length; |
1047 |
< |
for (int i = 0; i < nws; ++i) { |
1048 |
< |
ForkJoinWorkerThread w = ws[i]; |
1049 |
< |
if (w != null) { |
1050 |
< |
if (done = blocker.isReleasable()) |
1051 |
< |
return; |
1052 |
< |
if (w.isSuspended()) { |
1053 |
< |
if ((workerCounts & RUNNING_COUNT_MASK)>=pc && |
1054 |
< |
w.tryResumeSpare()) { |
1055 |
< |
running = false; |
1056 |
< |
break outer; |
1057 |
< |
} |
1058 |
< |
continue outer; // rescan |
1059 |
< |
} |
1060 |
< |
} |
1061 |
< |
} |
1062 |
< |
} |
1063 |
< |
if (done = blocker.isReleasable()) |
1064 |
< |
return; |
1065 |
< |
if (rc == 0 || workerCounts != wc || |
1066 |
< |
!UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1067 |
< |
wc, wc - ONE_RUNNING)) |
1068 |
< |
continue; |
1069 |
< |
running = false; |
1070 |
< |
if (rc > pc) |
1071 |
< |
break; |
1018 |
> |
int threshold = parallelism; |
1019 |
> |
while (!blocker.isReleasable()) { |
1020 |
> |
boolean block; int wc; |
1021 |
> |
if (((wc = workerCounts) & RUNNING_COUNT_MASK) <= threshold) { |
1022 |
> |
if (threshold > 0) |
1023 |
> |
--threshold; |
1024 |
> |
else |
1025 |
> |
advanceEventCount(); |
1026 |
> |
block = false; |
1027 |
|
} |
1028 |
< |
else { |
1029 |
< |
if (rc >= pc || (done = blocker.isReleasable())) |
1030 |
< |
break; |
1031 |
< |
int sc = tc - pc + 1; |
1032 |
< |
if (retries++ > sc) { |
1033 |
< |
if (rc > 0 && rc >= pc - sc) |
1034 |
< |
break; |
1035 |
< |
if (tc < MAX_THREADS && |
1036 |
< |
tc == (runState & ACTIVE_COUNT_MASK) && |
1037 |
< |
workerCounts == wc && |
1038 |
< |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
1039 |
< |
wc+(ONE_RUNNING|ONE_TOTAL))) { |
1085 |
< |
addWorker(); |
1086 |
< |
break; |
1087 |
< |
} |
1028 |
> |
else |
1029 |
> |
block = UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1030 |
> |
wc, wc - ONE_RUNNING); |
1031 |
> |
helpMaintainParallelism(); |
1032 |
> |
if (block) { |
1033 |
> |
try { |
1034 |
> |
do {} while (!blocker.isReleasable() && !blocker.block()); |
1035 |
> |
} finally { |
1036 |
> |
int c; |
1037 |
> |
do {} while (!UNSAFE.compareAndSwapInt |
1038 |
> |
(this, workerCountsOffset, |
1039 |
> |
c = workerCounts, c + ONE_RUNNING)); |
1040 |
|
} |
1041 |
< |
Thread.yield(); |
1090 |
< |
} |
1091 |
< |
} |
1092 |
< |
|
1093 |
< |
try { |
1094 |
< |
if (!done) |
1095 |
< |
do {} while (!blocker.isReleasable() && !blocker.block()); |
1096 |
< |
} finally { |
1097 |
< |
if (!running) { |
1098 |
< |
int c; |
1099 |
< |
do {} while (!UNSAFE.compareAndSwapInt |
1100 |
< |
(this, workerCountsOffset, |
1101 |
< |
c = workerCounts, c + ONE_RUNNING)); |
1041 |
> |
break; |
1042 |
|
} |
1043 |
|
} |
1044 |
|
} |
1071 |
|
|
1072 |
|
/** |
1073 |
|
* Actions on transition to TERMINATING |
1074 |
+ |
* |
1075 |
+ |
* Runs up to four passes through workers: (0) shutting down each |
1076 |
+ |
* quietly (without waking up if parked) to quickly spread |
1077 |
+ |
* notifications without unnecessary bouncing around event queues |
1078 |
+ |
* etc; (1) wake up and help cancel tasks; (2) interrupt; (3) mop up |
1079 |
+ |
* races with interrupted workers |
1080 |
|
*/ |
1081 |
|
private void startTerminating() { |
1082 |
< |
for (int i = 0; i < 2; ++i) { // twice to mop up newly created workers |
1083 |
< |
cancelSubmissions(); |
1084 |
< |
shutdownWorkers(); |
1085 |
< |
cancelWorkerTasks(); |
1086 |
< |
signalEvent(); |
1087 |
< |
interruptWorkers(); |
1082 |
> |
cancelSubmissions(); |
1083 |
> |
for (int passes = 0; passes < 4 && workerCounts != 0; ++passes) { |
1084 |
> |
advanceEventCount(); |
1085 |
> |
eventWaiters = 0L; // clobber lists |
1086 |
> |
spareWaiters = 0; |
1087 |
> |
ForkJoinWorkerThread[] ws = workers; |
1088 |
> |
int n = ws.length; |
1089 |
> |
for (int i = 0; i < n; ++i) { |
1090 |
> |
ForkJoinWorkerThread w = ws[i]; |
1091 |
> |
if (w != null) { |
1092 |
> |
w.shutdown(true); |
1093 |
> |
if (passes > 0 && !w.isTerminated()) { |
1094 |
> |
w.cancelTasks(); |
1095 |
> |
LockSupport.unpark(w); |
1096 |
> |
if (passes > 1) { |
1097 |
> |
try { |
1098 |
> |
w.interrupt(); |
1099 |
> |
} catch (SecurityException ignore) { |
1100 |
> |
} |
1101 |
> |
} |
1102 |
> |
} |
1103 |
> |
} |
1104 |
> |
} |
1105 |
|
} |
1106 |
|
} |
1107 |
|
|
1118 |
|
} |
1119 |
|
} |
1120 |
|
|
1158 |
– |
/** |
1159 |
– |
* Sets all worker run states to at least shutdown, |
1160 |
– |
* also resuming suspended workers |
1161 |
– |
*/ |
1162 |
– |
private void shutdownWorkers() { |
1163 |
– |
ForkJoinWorkerThread[] ws = workers; |
1164 |
– |
int nws = ws.length; |
1165 |
– |
for (int i = 0; i < nws; ++i) { |
1166 |
– |
ForkJoinWorkerThread w = ws[i]; |
1167 |
– |
if (w != null) |
1168 |
– |
w.shutdown(); |
1169 |
– |
} |
1170 |
– |
} |
1171 |
– |
|
1172 |
– |
/** |
1173 |
– |
* Clears out and cancels all locally queued tasks |
1174 |
– |
*/ |
1175 |
– |
private void cancelWorkerTasks() { |
1176 |
– |
ForkJoinWorkerThread[] ws = workers; |
1177 |
– |
int nws = ws.length; |
1178 |
– |
for (int i = 0; i < nws; ++i) { |
1179 |
– |
ForkJoinWorkerThread w = ws[i]; |
1180 |
– |
if (w != null) |
1181 |
– |
w.cancelTasks(); |
1182 |
– |
} |
1183 |
– |
} |
1184 |
– |
|
1185 |
– |
/** |
1186 |
– |
* Unsticks all workers blocked on joins etc |
1187 |
– |
*/ |
1188 |
– |
private void interruptWorkers() { |
1189 |
– |
ForkJoinWorkerThread[] ws = workers; |
1190 |
– |
int nws = ws.length; |
1191 |
– |
for (int i = 0; i < nws; ++i) { |
1192 |
– |
ForkJoinWorkerThread w = ws[i]; |
1193 |
– |
if (w != null && !w.isTerminated()) { |
1194 |
– |
try { |
1195 |
– |
w.interrupt(); |
1196 |
– |
} catch (SecurityException ignore) { |
1197 |
– |
} |
1198 |
– |
} |
1199 |
– |
} |
1200 |
– |
} |
1201 |
– |
|
1121 |
|
// misc support for ForkJoinWorkerThread |
1122 |
|
|
1123 |
|
/** |
1128 |
|
} |
1129 |
|
|
1130 |
|
/** |
1131 |
< |
* Accumulates steal count from a worker, clearing |
1132 |
< |
* the worker's value |
1131 |
> |
* Tries to accumulate steal count from a worker, clearing |
1132 |
> |
* the worker's value. |
1133 |
> |
* |
1134 |
> |
* @return true if worker steal count now zero |
1135 |
|
*/ |
1136 |
< |
final void accumulateStealCount(ForkJoinWorkerThread w) { |
1136 |
> |
final boolean tryAccumulateStealCount(ForkJoinWorkerThread w) { |
1137 |
|
int sc = w.stealCount; |
1138 |
< |
if (sc != 0) { |
1139 |
< |
long c; |
1140 |
< |
w.stealCount = 0; |
1141 |
< |
do {} while (!UNSAFE.compareAndSwapLong(this, stealCountOffset, |
1142 |
< |
c = stealCount, c + sc)); |
1138 |
> |
long c = stealCount; |
1139 |
> |
// CAS even if zero, for fence effects |
1140 |
> |
if (UNSAFE.compareAndSwapLong(this, stealCountOffset, c, c + sc)) { |
1141 |
> |
if (sc != 0) |
1142 |
> |
w.stealCount = 0; |
1143 |
> |
return true; |
1144 |
|
} |
1145 |
+ |
return sc == 0; |
1146 |
|
} |
1147 |
|
|
1148 |
|
/** |
1225 |
|
checkPermission(); |
1226 |
|
if (factory == null) |
1227 |
|
throw new NullPointerException(); |
1228 |
< |
if (parallelism <= 0 || parallelism > MAX_THREADS) |
1228 |
> |
if (parallelism <= 0 || parallelism > MAX_WORKERS) |
1229 |
|
throw new IllegalArgumentException(); |
1230 |
|
this.parallelism = parallelism; |
1231 |
|
this.factory = factory; |
1237 |
|
this.workerLock = new ReentrantLock(); |
1238 |
|
this.termination = new Phaser(1); |
1239 |
|
this.poolNumber = poolNumberGenerator.incrementAndGet(); |
1240 |
+ |
this.trimTime = System.nanoTime(); |
1241 |
|
} |
1242 |
|
|
1243 |
|
/** |
1245 |
|
* @param pc the initial parallelism level |
1246 |
|
*/ |
1247 |
|
private static int initialArraySizeFor(int pc) { |
1248 |
< |
// See Hackers Delight, sec 3.2. We know MAX_THREADS < (1 >>> 16) |
1249 |
< |
int size = pc < MAX_THREADS ? pc + 1 : MAX_THREADS; |
1248 |
> |
// See Hackers Delight, sec 3.2. We know MAX_WORKERS < (1 << 16) |
1249 |
> |
int size = pc < MAX_WORKERS ? pc + 1 : MAX_WORKERS; |
1250 |
|
size |= size >>> 1; |
1251 |
|
size |= size >>> 2; |
1252 |
|
size |= size >>> 4; |
1265 |
|
if (runState >= SHUTDOWN) |
1266 |
|
throw new RejectedExecutionException(); |
1267 |
|
submissionQueue.offer(task); |
1268 |
< |
signalEvent(); |
1269 |
< |
ensureEnoughWorkers(); |
1268 |
> |
advanceEventCount(); |
1269 |
> |
helpMaintainParallelism(); // start or wake up workers |
1270 |
|
} |
1271 |
|
|
1272 |
|
/** |
1513 |
|
public long getQueuedTaskCount() { |
1514 |
|
long count = 0; |
1515 |
|
ForkJoinWorkerThread[] ws = workers; |
1516 |
< |
int nws = ws.length; |
1517 |
< |
for (int i = 0; i < nws; ++i) { |
1516 |
> |
int n = ws.length; |
1517 |
> |
for (int i = 0; i < n; ++i) { |
1518 |
|
ForkJoinWorkerThread w = ws[i]; |
1519 |
|
if (w != null) |
1520 |
|
count += w.getQueueSize(); |
1572 |
|
* @return the number of elements transferred |
1573 |
|
*/ |
1574 |
|
protected int drainTasksTo(Collection<? super ForkJoinTask<?>> c) { |
1575 |
< |
int n = submissionQueue.drainTo(c); |
1652 |
< |
ForkJoinWorkerThread[] ws = workers; |
1653 |
< |
int nws = ws.length; |
1654 |
< |
for (int i = 0; i < nws; ++i) { |
1655 |
< |
ForkJoinWorkerThread w = ws[i]; |
1656 |
< |
if (w != null) |
1657 |
< |
n += w.drainTasksTo(c); |
1658 |
< |
} |
1659 |
< |
return n; |
1660 |
< |
} |
1661 |
< |
|
1662 |
< |
/** |
1663 |
< |
* Returns count of total parks by existing workers. |
1664 |
< |
* Used during development only since not meaningful to users. |
1665 |
< |
*/ |
1666 |
< |
private int collectParkCount() { |
1667 |
< |
int count = 0; |
1575 |
> |
int count = submissionQueue.drainTo(c); |
1576 |
|
ForkJoinWorkerThread[] ws = workers; |
1577 |
< |
int nws = ws.length; |
1578 |
< |
for (int i = 0; i < nws; ++i) { |
1577 |
> |
int n = ws.length; |
1578 |
> |
for (int i = 0; i < n; ++i) { |
1579 |
|
ForkJoinWorkerThread w = ws[i]; |
1580 |
|
if (w != null) |
1581 |
< |
count += w.parkCount; |
1581 |
> |
count += w.drainTasksTo(c); |
1582 |
|
} |
1583 |
|
return count; |
1584 |
|
} |
1600 |
|
int pc = parallelism; |
1601 |
|
int rs = runState; |
1602 |
|
int ac = rs & ACTIVE_COUNT_MASK; |
1695 |
– |
// int pk = collectParkCount(); |
1603 |
|
return super.toString() + |
1604 |
|
"[" + runLevelToString(rs) + |
1605 |
|
", parallelism = " + pc + |
1609 |
|
", steals = " + st + |
1610 |
|
", tasks = " + qt + |
1611 |
|
", submissions = " + qs + |
1705 |
– |
// ", parks = " + pk + |
1612 |
|
"]"; |
1613 |
|
} |
1614 |
|
|
1715 |
|
* Interface for extending managed parallelism for tasks running |
1716 |
|
* in {@link ForkJoinPool}s. |
1717 |
|
* |
1718 |
< |
* <p>A {@code ManagedBlocker} provides two methods. |
1719 |
< |
* Method {@code isReleasable} must return {@code true} if |
1720 |
< |
* blocking is not necessary. Method {@code block} blocks the |
1721 |
< |
* current thread if necessary (perhaps internally invoking |
1722 |
< |
* {@code isReleasable} before actually blocking). |
1718 |
> |
* <p>A {@code ManagedBlocker} provides two methods. Method |
1719 |
> |
* {@code isReleasable} must return {@code true} if blocking is |
1720 |
> |
* not necessary. Method {@code block} blocks the current thread |
1721 |
> |
* if necessary (perhaps internally invoking {@code isReleasable} |
1722 |
> |
* before actually blocking). The unusual methods in this API |
1723 |
> |
* accommodate synchronizers that may, but don't usually, block |
1724 |
> |
* for long periods. Similarly, they allow more efficient internal |
1725 |
> |
* handling of cases in which additional workers may be, but |
1726 |
> |
* usually are not, needed to ensure sufficient parallelism. |
1727 |
> |
* Toward this end, implementations of method {@code isReleasable} |
1728 |
> |
* must be amenable to repeated invocation. |
1729 |
|
* |
1730 |
|
* <p>For example, here is a ManagedBlocker based on a |
1731 |
|
* ReentrantLock: |
1743 |
|
* return hasLock || (hasLock = lock.tryLock()); |
1744 |
|
* } |
1745 |
|
* }}</pre> |
1746 |
+ |
* |
1747 |
+ |
* <p>Here is a class that possibly blocks waiting for an |
1748 |
+ |
* item on a given queue: |
1749 |
+ |
* <pre> {@code |
1750 |
+ |
* class QueueTaker<E> implements ManagedBlocker { |
1751 |
+ |
* final BlockingQueue<E> queue; |
1752 |
+ |
* volatile E item = null; |
1753 |
+ |
* QueueTaker(BlockingQueue<E> q) { this.queue = q; } |
1754 |
+ |
* public boolean block() throws InterruptedException { |
1755 |
+ |
* if (item == null) |
1756 |
+ |
* item = queue.take(); |
1757 |
+ |
* return true; |
1758 |
+ |
* } |
1759 |
+ |
* public boolean isReleasable() { |
1760 |
+ |
* return item != null || (item = queue.poll()) != null; |
1761 |
+ |
* } |
1762 |
+ |
* public E getItem() { // call after pool.managedBlock completes |
1763 |
+ |
* return item; |
1764 |
+ |
* } |
1765 |
+ |
* }}</pre> |
1766 |
|
*/ |
1767 |
|
public static interface ManagedBlocker { |
1768 |
|
/** |
1805 |
|
public static void managedBlock(ManagedBlocker blocker) |
1806 |
|
throws InterruptedException { |
1807 |
|
Thread t = Thread.currentThread(); |
1808 |
< |
if (t instanceof ForkJoinWorkerThread) |
1809 |
< |
((ForkJoinWorkerThread) t).pool.awaitBlocker(blocker); |
1808 |
> |
if (t instanceof ForkJoinWorkerThread) { |
1809 |
> |
ForkJoinWorkerThread w = (ForkJoinWorkerThread) t; |
1810 |
> |
w.pool.awaitBlocker(blocker); |
1811 |
> |
} |
1812 |
|
else { |
1813 |
|
do {} while (!blocker.isReleasable() && !blocker.block()); |
1814 |
|
} |
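For illustration, the QueueTaker example from the javadoc above could be driven as follows (hypothetical caller code, not part of this file; assumes java.util.concurrent imports, and managedBlock can throw InterruptedException):

    BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
    QueueTaker<String> taker = new QueueTaker<String>(queue);
    ForkJoinPool.managedBlock(taker);  // blocks via taker.block() only if no item is ready
    String item = taker.getItem();     // non-null once managedBlock returns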
1839 |
|
objectFieldOffset("eventWaiters",ForkJoinPool.class); |
1840 |
|
private static final long stealCountOffset = |
1841 |
|
objectFieldOffset("stealCount",ForkJoinPool.class); |
1842 |
+ |
private static final long spareWaitersOffset = |
1843 |
+ |
objectFieldOffset("spareWaiters",ForkJoinPool.class); |
1844 |
|
|
1845 |
|
private static long objectFieldOffset(String field, Class<?> klazz) { |
1846 |
|
try { |