69 |
|
* <td ALIGN=CENTER> <b>Call from within fork/join computations</b></td> |
70 |
|
* </tr> |
71 |
|
* <tr> |
72 |
< |
* <td> <b>Arange async execution</td> |
72 |
> |
* <td> <b>Arrange async execution</b></td> |
73 |
|
* <td> {@link #execute(ForkJoinTask)}</td> |
74 |
|
* <td> {@link ForkJoinTask#fork}</td> |
75 |
|
* </tr> |
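For orientation, a minimal sketch contrasting the two entry points in the table above, using a hypothetical divide-and-conquer task (the class and its splitting threshold are illustrative, not part of this file):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;

    class SampleTask extends RecursiveAction { // hypothetical example task
        final int n;
        SampleTask(int n) { this.n = n; }
        protected void compute() {
            if (n <= 1)
                return;                          // base case: nothing to split
            SampleTask sub = new SampleTask(n / 2);
            sub.fork();                          // within a computation: fork
            new SampleTask(n - n / 2).compute(); // work on the other half here
            sub.join();                          // then wait for the forked half
        }
    }

    // From a non-fork/join client thread:
    //   new ForkJoinPool().execute(new SampleTask(1000)); // arrange async execution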
110 |
|
* |
111 |
|
* <p>This implementation rejects submitted tasks (that is, by throwing |
112 |
|
* {@link RejectedExecutionException}) only when the pool is shut down |
113 |
< |
* or internal resources have been exhuasted. |
113 |
> |
* or internal resources have been exhausted. |
114 |
|
* |
115 |
|
* @since 1.7 |
116 |
|
* @author Doug Lea |
140 |
|
* Beyond work-stealing support and essential bookkeeping, the |
141 |
|
* main responsibility of this framework is to take actions when |
142 |
|
* one worker is waiting to join a task stolen (or always held by) |
143 |
< |
* another. Becauae we are multiplexing many tasks on to a pool |
143 |
> |
* another. Because we are multiplexing many tasks on to a pool |
144 |
|
* of workers, we can't just let them block (as in Thread.join). |
145 |
|
* We also cannot just reassign the joiner's run-time stack with |
146 |
|
* another and replace it later, which would be a form of |
156 |
|
* ForkJoinWorkerThread.helpJoinTask tracks joining->stealing |
157 |
|
* links to try to find such a task. |
158 |
|
* |
159 |
< |
* Compensating: Unless there are already enough live threads, |
160 |
< |
* creating or or re-activating a spare thread to compensate |
161 |
< |
* for the (blocked) joiner until it unblocks. Spares then |
162 |
< |
* suspend at their next opportunity or eventually die if |
163 |
< |
* unused for too long. See below and the internal |
164 |
< |
* documentation for tryAwaitJoin for more details about |
165 |
< |
* compensation rules. |
166 |
< |
* |
167 |
< |
* Because the determining existence of conservatively safe |
168 |
< |
* helping targets, the availability of already-created spares, |
169 |
< |
* and the apparent need to create new spares are all racy and |
170 |
< |
* require heuristic guidance, joins (in |
171 |
< |
* ForkJoinWorkerThread.joinTask) interleave these options until |
172 |
< |
* successful. Creating a new spare always succeeds, but also |
173 |
< |
* increases application footprint, so we try to avoid it, within |
174 |
< |
* reason. |
159 |
> |
* Compensating: Unless there are already enough live threads, |
160 |
> |
* method helpMaintainParallelism() may create or |
161 |
> |
* re-activate a spare thread to compensate for blocked |
162 |
> |
* joiners until they unblock. |
163 |
> |
* |
164 |
> |
* It is impossible to keep exactly the target (parallelism) |
165 |
> |
* number of threads running at any given time. Determining |
166 |
> |
* existence of conservatively safe helping targets, the |
167 |
> |
* availability of already-created spares, and the apparent need |
168 |
> |
* to create new spares are all racy and require heuristic |
169 |
> |
* guidance, so we rely on multiple retries of each. Compensation |
170 |
> |
* occurs in slow-motion. It is triggered only upon timeouts of |
171 |
> |
* Object.wait used for joins. This reduces poor decisions that |
172 |
> |
* would otherwise be made when threads are waiting for others |
173 |
> |
* that are stalled because of unrelated activities such as |
174 |
> |
* garbage collection. |
175 |
|
* |
176 |
< |
* The ManagedBlocker extension API can't use helping so uses a |
177 |
< |
* special version of compensation in method awaitBlocker. |
176 |
> |
* The ManagedBlocker extension API can't use helping so relies |
177 |
> |
* only on compensation in method awaitBlocker. |
178 |
|
* |
179 |
|
* The main throughput advantages of work-stealing stem from |
180 |
|
* decentralized control -- workers mostly steal tasks from each |
207 |
|
* blocked workers. However, all other support code is set up to |
208 |
|
* work with other policies. |
209 |
|
* |
210 |
+ |
* To ensure that we do not hold on to worker references that |
211 |
+ |
* would prevent GC, ALL accesses to workers are via indices into |
212 |
+ |
* the workers array (which is one source of some of the unusual |
213 |
+ |
* code constructions here). In essence, the workers array serves |
214 |
+ |
* as a WeakReference mechanism. Thus for example the event queue |
215 |
+ |
* stores worker indices, not worker references. Access to the |
216 |
+ |
* workers in associated methods (for example releaseEventWaiters) |
217 |
+ |
* must both index-check and null-check the IDs. All such accesses |
218 |
+ |
* ignore bad IDs by returning out early from what they are doing, |
219 |
+ |
* since this can only be associated with shutdown, in which case |
220 |
+ |
* it is OK to give up. On termination, we just clobber these |
221 |
+ |
* data structures without trying to use them. |
222 |
+ |
* |
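As a schematic of the index-based, WeakReference-like access pattern just described, recast over a generic slot array (the pool's own fields are package-private, so all names here are illustrative):

    // Sketch only: ids are stored as (index + 1); a stale or
    // out-of-range id resolves to null, meaning "give up".
    final class SlotTable<T> {
        volatile T[] slots;              // analogous to the workers array
        SlotTable(T[] initial) { slots = initial; }
        T resolve(int storedId) {
            T[] ws = slots;              // single volatile read of the array
            int id = storedId - 1;
            return (id < 0 || id >= ws.length) ? null : ws[id];
        }
    }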
223 |
|
* 2. Bookkeeping for dynamically adding and removing workers. We |
224 |
|
* aim to approximately maintain the given level of parallelism. |
225 |
|
* When some workers are known to be blocked (on joins or via |
226 |
|
* ManagedBlocker), we may create or resume others to take their |
227 |
|
* place until they unblock (see below). Implementing this |
228 |
|
* requires counts of the number of "running" threads (i.e., those |
229 |
< |
* that are neither blocked nor artifically suspended) as well as |
229 |
> |
* that are neither blocked nor artificially suspended) as well as |
230 |
|
* the total number. These two values are packed into one field, |
231 |
|
* "workerCounts" because we need accurate snapshots when deciding |
232 |
|
* to create, resume or suspend. Note however that the |
233 |
< |
* correspondance of these counts to reality is not guaranteed. In |
233 |
> |
* correspondence of these counts to reality is not guaranteed. In |
234 |
|
* particular updates for unblocked threads may lag until they |
235 |
|
* actually wake up. |
236 |
|
* |
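To illustrate the packing, a stand-alone sketch using AtomicInteger in place of the Unsafe-based CASes; the shift and mask mirror the RUNNING_COUNT_MASK / TOTAL_COUNT_SHIFT constants used throughout this file, with values assumed consistent with the 16-bit packing described:

    import java.util.concurrent.atomic.AtomicInteger;

    final class PackedCounts {
        static final int TOTAL_COUNT_SHIFT  = 16;             // total in high half
        static final int RUNNING_COUNT_MASK = (1 << 16) - 1;  // running in low half
        static final int ONE_RUNNING = 1;
        static final int ONE_TOTAL   = 1 << TOTAL_COUNT_SHIFT;

        final AtomicInteger workerCounts = new AtomicInteger();

        void addWorker() {        // a newly created worker counts as both
            workerCounts.addAndGet(ONE_RUNNING | ONE_TOTAL);
        }
        int running() {           // one read yields a consistent snapshot pair
            return workerCounts.get() & RUNNING_COUNT_MASK;
        }
        int total() {
            return workerCounts.get() >>> TOTAL_COUNT_SHIFT;
        }
    }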
261 |
|
* workers that previously could not find a task to now find one: |
262 |
|
* Submission of a new task to the pool, or another worker pushing |
263 |
|
* a task onto a previously empty queue. (We also use this |
264 |
< |
* mechanism for termination and reconfiguration actions that |
264 |
> |
* mechanism for configuration and termination actions that |
265 |
|
* require wakeups of idle workers). Each worker maintains its |
266 |
|
* last known event count, and blocks when a scan for work did not |
267 |
|
* find a task AND its lastEventCount matches the current |
272 |
|
* a record (field nextEventWaiter) for the next waiting worker. |
273 |
|
* In addition to allowing simpler decisions about need for |
274 |
|
* wakeup, the event count bits in eventWaiters serve the role of |
275 |
< |
* tags to avoid ABA errors in Treiber stacks. To reduce delays |
276 |
< |
* in task diffusion, workers not otherwise occupied may invoke |
277 |
< |
* method releaseWaiters, that removes and signals (unparks) |
278 |
< |
* workers not waiting on current count. To minimize task |
279 |
< |
* production stalls associate with signalling, any worker pushing |
280 |
< |
* a task on an empty queue invokes the weaker method signalWork, |
268 |
< |
* that only releases idle workers until it detects interference |
269 |
< |
* by other threads trying to release, and lets them take |
270 |
< |
* over. The net effect is a tree-like diffusion of signals, where |
271 |
< |
* released threads (and possibly others) help with unparks. To |
272 |
< |
* further reduce contention effects a bit, failed CASes to |
273 |
< |
* increment field eventCount are tolerated without retries. |
275 |
> |
* tags to avoid ABA errors in Treiber stacks. Upon any wakeup, |
276 |
> |
* released threads also try to release at most two others. The |
277 |
> |
* net effect is a tree-like diffusion of signals, where released |
278 |
> |
* threads (and possibly others) help with unparks. To further |
279 |
> |
* reduce contention effects a bit, failed CASes to increment |
280 |
> |
* field eventCount are tolerated without retries in signalWork. |
281 |
|
* Conceptually they are merged into the same event, which is OK |
282 |
|
* when their only purpose is to enable workers to scan for work. |
283 |
|
* |
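The eventWaiters scheme above amounts to a Treiber stack whose head word carries the event count as an ABA tag. A generic sketch of that shape (widths and names are illustrative; the real code packs a 32-bit count over a 16-bit index):

    import java.util.concurrent.atomic.AtomicLong;

    final class TaggedTreiberStack {
        // head: high 32 bits = tag (an event count), low bits = id + 1 (0 = empty)
        final AtomicLong head = new AtomicLong();
        final long[] next;    // next[i] = encoded successor of element i
        TaggedTreiberStack(int capacity) { next = new long[capacity]; }

        void push(int id, int tag) {
            long nh = (((long) tag) << 32) | (long) (id + 1);
            long h;
            do {
                h = head.get();
                next[id] = h;  // link to current top before publishing
            } while (!head.compareAndSet(h, nh));
        }

        int pop() {            // returns -1 if empty
            for (;;) {
                long h = head.get();
                int id = ((int) (h & 0xffffL)) - 1;
                if (id < 0)
                    return -1;
                if (head.compareAndSet(h, next[id]))
                    return id; // the tag makes a recycled id compare unequal
            }
        }
    }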
284 |
< |
* 5. Managing suspension of extra workers. When a worker is about |
285 |
< |
* to block waiting for a join (or via ManagedBlockers), we may |
286 |
< |
* create a new thread to maintain parallelism level, or at least |
287 |
< |
* avoid starvation. Usually, extra threads are needed for only |
288 |
< |
* very short periods, yet join dependencies are such that we |
289 |
< |
* sometimes need them in bursts. Rather than create new threads |
290 |
< |
* each time this happens, we suspend no-longer-needed extra ones |
291 |
< |
* as "spares". For most purposes, we don't distinguish "extra" |
292 |
< |
* spare threads from normal "core" threads: On each call to |
293 |
< |
* preStep (the only point at which we can do this) a worker |
294 |
< |
* checks to see if there are now too many running workers, and if |
295 |
< |
* so, suspends itself. Methods tryAwaitJoin and awaitBlocker |
296 |
< |
* look for suspended threads to resume before considering |
297 |
< |
* creating a new replacement. We don't need a special data |
298 |
< |
* structure to maintain spares; simply scanning the workers array |
299 |
< |
* looking for worker.isSuspended() is fine because the calling |
300 |
< |
* thread is otherwise not doing anything useful anyway; we are at |
301 |
< |
* least as happy if after locating a spare, the caller doesn't |
302 |
< |
* actually block because the join is ready before we try to |
303 |
< |
* adjust and compensate. Note that this is intrinsically racy. |
304 |
< |
* One thread may become a spare at about the same time as another |
305 |
< |
* is needlessly being created. We counteract this and related |
306 |
< |
* slop in part by requiring resumed spares to immediately recheck |
307 |
< |
* (in preStep) to see whether they they should re-suspend. The |
308 |
< |
* only effective difference between "extra" and "core" threads is |
309 |
< |
* that we allow the "extra" ones to time out and die if they are |
310 |
< |
* not resumed within a keep-alive interval of a few seconds. This |
311 |
< |
* is implemented mainly within ForkJoinWorkerThread, but requires |
312 |
< |
* some coordination (isTrimmed() -- meaning killed while |
313 |
< |
* suspended) to correctly maintain pool counts. |
314 |
< |
* |
315 |
< |
* 6. Deciding when to create new workers. The main dynamic |
316 |
< |
* control in this class is deciding when to create extra threads, |
317 |
< |
* in methods awaitJoin and awaitBlocker. We always need to create |
318 |
< |
* one when the number of running threads would become zero and |
319 |
< |
* all workers are busy. However, this is not easy to detect |
320 |
< |
* reliably in the presence of transients so we use retries and |
321 |
< |
* allow slack (in tryAwaitJoin) to reduce false alarms. These |
322 |
< |
* effectively reduce churn at the price of systematically |
323 |
< |
* undershooting target parallelism when many threads are blocked. |
324 |
< |
* However, biasing toward undeshooting partially compensates for |
325 |
< |
* the above mechanics to suspend extra threads, that normally |
326 |
< |
* lead to overshoot because we can only suspend workers |
327 |
< |
* in-between top-level actions. It also better copes with the |
328 |
< |
* fact that some of the methods in this class tend to never |
329 |
< |
* become compiled (but are interpreted), so some components of |
330 |
< |
* the entire set of controls might execute many times faster than |
331 |
< |
* others. And similarly for cases where the apparent lack of work |
332 |
< |
* is just due to GC stalls and other transient system activity. |
284 |
> |
* 5. Managing suspension of extra workers. When a worker notices |
285 |
> |
* (usually upon timeout of a wait()) that there are too few |
286 |
> |
* running threads, we may create a new thread to maintain |
287 |
> |
* parallelism level, or at least avoid starvation. Usually, extra |
288 |
> |
* threads are needed for only very short periods, yet join |
289 |
> |
* dependencies are such that we sometimes need them in |
290 |
> |
* bursts. Rather than create new threads each time this happens, |
291 |
> |
* we suspend no-longer-needed extra ones as "spares". For most |
292 |
> |
* purposes, we don't distinguish "extra" spare threads from |
293 |
> |
* normal "core" threads: On each call to preStep (the only point |
294 |
> |
* at which we can do this) a worker checks to see if there are |
295 |
> |
* now too many running workers, and if so, suspends itself. |
296 |
> |
* Method helpMaintainParallelism looks for suspended threads to |
297 |
> |
* resume before considering creating a new replacement. The |
298 |
> |
* spares themselves are encoded on another variant of a Treiber |
299 |
> |
* Stack, headed at field "spareWaiters". Note that the use of |
300 |
> |
* spares is intrinsically racy. One thread may become a spare at |
301 |
> |
* about the same time as another is needlessly being created. We |
302 |
> |
* counteract this and related slop in part by requiring resumed |
303 |
> |
* spares to immediately recheck (in preStep) to see whether they |
304 |
> |
* should re-suspend. |
305 |
> |
* |
306 |
> |
* 6. Killing off unneeded workers. A timeout mechanism is used to |
307 |
> |
* shed unused workers: The oldest (first) event queue waiter uses |
308 |
> |
* a timed rather than hard wait. When this wait times out without |
309 |
> |
* a normal wakeup, it tries to shutdown any one (for convenience |
310 |
> |
* the newest) other spare or event waiter via |
311 |
> |
* tryShutdownUnusedWorker. This eventually reduces the number of |
312 |
> |
* worker threads to a minimum of one after a long enough period |
313 |
> |
* without use. |
314 |
> |
* |
315 |
> |
* 7. Deciding when to create new workers. The main dynamic |
316 |
> |
* control in this class is deciding when to create extra threads |
317 |
> |
* in method helpMaintainParallelism. We would like to keep |
318 |
> |
* exactly #parallelism threads running, which is an impossible |
319 |
> |
* task. We always need to create one when the number of running |
320 |
> |
* threads would become zero and all workers are busy. Beyond |
321 |
> |
* this, we must rely on heuristics that work well in the |
322 |
> |
* presence of transient phenomena such as GC stalls, dynamic |
323 |
> |
* compilation, and wake-up lags. These transients are extremely |
324 |
> |
* common -- we are normally trying to fully saturate the CPUs on |
325 |
> |
* a machine, so almost any activity other than running tasks |
326 |
> |
* impedes accuracy. Our main defense is to allow parallelism to |
327 |
> |
* lapse for a while during joins, and use a timeout to see if, |
328 |
> |
* after the resulting settling, there is still a need for |
329 |
> |
* additional workers. This also better copes with the fact that |
330 |
> |
* some of the methods in this class tend to never become compiled |
331 |
> |
* (but are interpreted), so some components of the entire set of |
332 |
> |
* controls might execute 100 times faster than others. And |
333 |
> |
* similarly for cases where the apparent lack of work is just due |
334 |
> |
* to GC stalls and other transient system activity. |
335 |
|
* |
336 |
|
* Beware that there is a lot of representation-level coupling |
337 |
|
* among classes ForkJoinPool, ForkJoinWorkerThread, and |
344 |
|
* |
345 |
|
* Style notes: There are lots of inline assignments (of form |
346 |
|
* "while ((local = field) != 0)") which are usually the simplest |
347 |
< |
* way to ensure read orderings. Also several occurrences of the |
348 |
< |
* unusual "do {} while(!cas...)" which is the simplest way to |
349 |
< |
* force an update of a CAS'ed variable. There are also other |
350 |
< |
* coding oddities that help some methods perform reasonably even |
351 |
< |
* when interpreted (not compiled), at the expense of messiness. |
347 |
> |
* way to ensure the required read orderings (which are sometimes |
348 |
> |
* critical). Also several occurrences of the unusual "do {} |
349 |
> |
* while(!cas...)" which is the simplest way to force an update of |
350 |
> |
* a CAS'ed variable. There are also other coding oddities that |
351 |
> |
* help some methods perform reasonably even when interpreted (not |
352 |
> |
* compiled), at the expense of some messy constructions that |
353 |
> |
* reduce byte code counts. |
354 |
|
* |
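A hedged mini-example of the two idioms just described, using AtomicInteger rather than the Unsafe calls this file relies on:

    import java.util.concurrent.atomic.AtomicInteger;

    final class StyleIdioms {
        volatile int field = 42;
        final AtomicInteger count = new AtomicInteger();

        void demo() {
            // Inline assignment: one volatile read, then reuse the local.
            int local;
            if ((local = field) != 0)
                System.out.println(local);
            // "do {} while (!cas...)": spin until the update takes effect.
            int c;
            do {} while (!count.compareAndSet(c = count.get(), c + 1));
        }
    }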
355 |
|
* The order of declarations in this file is: (1) statics (2) |
356 |
|
* fields (along with constants used when unpacking some of them) |
418 |
|
new AtomicInteger(); |
419 |
|
|
420 |
|
/** |
421 |
< |
* Absolute bound for parallelism level. Twice this number must |
422 |
< |
* fit into a 16bit field to enable word-packing for some counts. |
421 |
> |
* The time to block in a join (see awaitJoin) before checking if |
422 |
> |
* a new worker should be (re)started to maintain parallelism |
423 |
> |
* level. The value should be short enough to maintain global |
424 |
> |
* responsiveness and progress but long enough to avoid |
425 |
> |
* counterproductive firings during GC stalls or unrelated system |
426 |
> |
* activity, and to not bog down systems with continual re-firings |
427 |
> |
* on GCs or legitimately long waits. |
428 |
> |
*/ |
429 |
> |
private static final long JOIN_TIMEOUT_MILLIS = 250L; // 4 per second |
430 |
> |
|
431 |
> |
/** |
432 |
> |
* The wakeup interval (in nanoseconds) for the oldest worker |
433 |
> |
* waiting for an event to invoke tryShutdownUnusedWorker to shrink |
434 |
> |
* the number of workers. The exact value does not matter too |
435 |
> |
* much, but should be long enough to slowly release resources |
436 |
> |
* during long periods without use without disrupting normal use. |
437 |
> |
*/ |
438 |
> |
private static final long SHRINK_RATE_NANOS = |
439 |
> |
30L * 1000L * 1000L * 1000L; // 2 per minute |
440 |
> |
|
441 |
> |
/** |
442 |
> |
* Absolute bound for parallelism level. Twice this number plus |
443 |
> |
* one (i.e., 0xffff) must fit into a 16bit field to enable |
444 |
> |
* word-packing for some counts and indices. |
445 |
|
*/ |
446 |
< |
private static final int MAX_THREADS = 0x7fff; |
446 |
> |
private static final int MAX_WORKERS = 0x7fff; |
447 |
|
|
448 |
|
/** |
449 |
|
* Array holding all worker threads in the pool. Array size must |
483 |
|
private volatile long stealCount; |
484 |
|
|
485 |
|
/** |
486 |
< |
* Encoded record of top of treiber stack of threads waiting for |
486 |
> |
* Encoded record of top of Treiber stack of threads waiting for |
487 |
|
* events. The top 32 bits contain the count being waited for. The |
488 |
< |
* bottom word contains one plus the pool index of waiting worker |
489 |
< |
* thread. |
488 |
> |
* bottom 16 bits contain one plus the pool index of waiting |
489 |
> |
* worker thread. (Bits 16-31 are unused.) |
490 |
|
*/ |
491 |
|
private volatile long eventWaiters; |
492 |
|
|
493 |
|
private static final int EVENT_COUNT_SHIFT = 32; |
494 |
< |
private static final long WAITER_ID_MASK = (1L << EVENT_COUNT_SHIFT)-1L; |
494 |
> |
private static final long WAITER_ID_MASK = (1L << 16) - 1L; |
495 |
|
|
496 |
|
/** |
497 |
|
* A counter for events that may wake up worker threads: |
498 |
|
* - Submission of a new task to the pool |
499 |
|
* - A worker pushing a task on an empty queue |
500 |
< |
* - termination and reconfiguration |
500 |
> |
* - termination |
501 |
|
*/ |
502 |
|
private volatile int eventCount; |
503 |
|
|
504 |
|
/** |
505 |
+ |
* Encoded record of top of Treiber stack of spare threads waiting |
506 |
+ |
* for resumption. The top 16 bits contain an arbitrary count to |
507 |
+ |
* avoid ABA effects. The bottom 16 bits contain one plus the pool |
508 |
+ |
* index of waiting worker thread. |
509 |
+ |
*/ |
510 |
+ |
private volatile int spareWaiters; |
511 |
+ |
|
512 |
+ |
private static final int SPARE_COUNT_SHIFT = 16; |
513 |
+ |
private static final int SPARE_ID_MASK = (1 << 16) - 1; |
514 |
+ |
|
515 |
+ |
/** |
516 |
|
* Lifecycle control. The low word contains the number of workers |
517 |
|
* that are (probably) executing tasks. This value is atomically |
518 |
|
* incremented before a worker gets a task to run, and decremented |
523 |
|
* These are bundled together to ensure consistent read for |
524 |
|
* termination checks (i.e., that runLevel is at least SHUTDOWN |
525 |
|
* and active threads is zero). |
526 |
+ |
* |
527 |
+ |
* Notes: Most direct CASes are dependent on these bitfield |
528 |
+ |
* positions. Also, this field is non-private to enable direct |
529 |
+ |
* performance-sensitive CASes in ForkJoinWorkerThread. |
530 |
|
*/ |
531 |
< |
private volatile int runState; |
531 |
> |
volatile int runState; |
532 |
|
|
533 |
|
// Note: The order among run level values matters. |
534 |
|
private static final int RUNLEVEL_SHIFT = 16; |
536 |
|
private static final int TERMINATING = 1 << (RUNLEVEL_SHIFT + 1); |
537 |
|
private static final int TERMINATED = 1 << (RUNLEVEL_SHIFT + 2); |
538 |
|
private static final int ACTIVE_COUNT_MASK = (1 << RUNLEVEL_SHIFT) - 1; |
491 |
– |
private static final int ONE_ACTIVE = 1; // active update delta |
539 |
|
|
540 |
|
/** |
541 |
|
* Holds number of total (i.e., created and not yet terminated) |
576 |
|
*/ |
577 |
|
private final int poolNumber; |
578 |
|
|
579 |
< |
// Utilities for CASing fields. Note that several of these |
580 |
< |
// are manually inlined by callers |
579 |
> |
// Utilities for CASing fields. Note that most of these |
580 |
> |
// are usually manually inlined by callers |
581 |
|
|
582 |
|
/** |
583 |
< |
* Increments running count. Also used by ForkJoinTask. |
583 |
> |
* Increments running count part of workerCounts |
584 |
|
*/ |
585 |
|
final void incrementRunningCount() { |
586 |
|
int c; |
601 |
|
} |
602 |
|
|
603 |
|
/** |
604 |
< |
* Tries to increment running count |
605 |
< |
*/ |
559 |
< |
final boolean tryIncrementRunningCount() { |
560 |
< |
int wc; |
561 |
< |
return UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
562 |
< |
wc = workerCounts, wc + ONE_RUNNING); |
563 |
< |
} |
564 |
< |
|
565 |
< |
/** |
566 |
< |
* Tries incrementing active count; fails on contention. |
567 |
< |
* Called by workers before executing tasks. |
604 |
> |
* Forces decrement of encoded workerCounts, awaiting nonzero if |
605 |
> |
* (rarely) necessary when other count updates lag. |
606 |
|
* |
607 |
< |
* @return true on success |
607 |
> |
* @param dr -- either zero or ONE_RUNNING |
608 |
> |
* @param dt -- either zero or ONE_TOTAL |
609 |
|
*/ |
610 |
< |
final boolean tryIncrementActiveCount() { |
611 |
< |
int c; |
612 |
< |
return UNSAFE.compareAndSwapInt(this, runStateOffset, |
613 |
< |
c = runState, c + ONE_ACTIVE); |
610 |
> |
private void decrementWorkerCounts(int dr, int dt) { |
611 |
> |
for (;;) { |
612 |
> |
int wc = workerCounts; |
613 |
> |
if ((wc & RUNNING_COUNT_MASK) - dr < 0 || |
614 |
> |
(wc >>> TOTAL_COUNT_SHIFT) - dt < 0) { |
615 |
> |
if ((runState & TERMINATED) != 0) |
616 |
> |
return; // lagging termination on a backout |
617 |
> |
Thread.yield(); |
618 |
> |
} |
619 |
> |
if (UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
620 |
> |
wc, wc - (dr + dt))) |
621 |
> |
return; |
622 |
> |
} |
623 |
|
} |
624 |
|
|
625 |
|
/** |
629 |
|
final boolean tryDecrementActiveCount() { |
630 |
|
int c; |
631 |
|
return UNSAFE.compareAndSwapInt(this, runStateOffset, |
632 |
< |
c = runState, c - ONE_ACTIVE); |
632 |
> |
c = runState, c - 1); |
633 |
|
} |
634 |
|
|
635 |
|
/** |
658 |
|
lock.lock(); |
659 |
|
try { |
660 |
|
ForkJoinWorkerThread[] ws = workers; |
661 |
< |
int nws = ws.length; |
662 |
< |
if (k < 0 || k >= nws || ws[k] != null) { |
663 |
< |
for (k = 0; k < nws && ws[k] != null; ++k) |
661 |
> |
int n = ws.length; |
662 |
> |
if (k < 0 || k >= n || ws[k] != null) { |
663 |
> |
for (k = 0; k < n && ws[k] != null; ++k) |
664 |
|
; |
665 |
< |
if (k == nws) |
666 |
< |
ws = Arrays.copyOf(ws, nws << 1); |
665 |
> |
if (k == n) |
666 |
> |
ws = Arrays.copyOf(ws, n << 1); |
667 |
|
} |
668 |
|
ws[k] = w; |
669 |
|
workers = ws; // volatile array write ensures slot visibility |
678 |
|
*/ |
679 |
|
private void forgetWorker(ForkJoinWorkerThread w) { |
680 |
|
int idx = w.poolIndex; |
681 |
< |
// Locking helps method recordWorker avoid unecessary expansion |
681 |
> |
// Locking helps method recordWorker avoid unnecessary expansion |
682 |
|
final ReentrantLock lock = this.workerLock; |
683 |
|
lock.lock(); |
684 |
|
try { |
690 |
|
} |
691 |
|
} |
692 |
|
|
645 |
– |
// adding and removing workers |
646 |
– |
|
693 |
|
/** |
694 |
< |
* Tries to create and add new worker. Assumes that worker counts |
695 |
< |
* are already updated to accommodate the worker, so adjusts on |
696 |
< |
* failure. |
694 |
> |
* Final callback from terminating worker. Removes record of |
695 |
> |
* worker from array, and adjusts counts. If pool is shutting |
696 |
> |
* down, tries to complete termination. |
697 |
|
* |
698 |
< |
* @return new worker or null if creation failed |
698 |
> |
* @param w the worker |
699 |
|
*/ |
700 |
< |
private ForkJoinWorkerThread addWorker() { |
701 |
< |
ForkJoinWorkerThread w = null; |
702 |
< |
try { |
703 |
< |
w = factory.newThread(this); |
704 |
< |
} finally { // Adjust on either null or exceptional factory return |
705 |
< |
if (w == null) |
660 |
< |
onWorkerCreationFailure(); |
661 |
< |
} |
662 |
< |
if (w != null) |
663 |
< |
w.start(recordWorker(w), ueh); |
664 |
< |
return w; |
700 |
> |
final void workerTerminated(ForkJoinWorkerThread w) { |
701 |
> |
forgetWorker(w); |
702 |
> |
decrementWorkerCounts(w.isTrimmed()? 0 : ONE_RUNNING, ONE_TOTAL); |
703 |
> |
while (w.stealCount != 0) // collect final count |
704 |
> |
tryAccumulateStealCount(w); |
705 |
> |
tryTerminate(false); |
706 |
|
} |
707 |
|
|
708 |
+ |
// Waiting for and signalling events |
709 |
+ |
|
710 |
|
/** |
711 |
< |
* Adjusts counts upon failure to create worker |
711 |
> |
* Releases workers blocked on a count not equal to current count. |
712 |
> |
* Normally called after precheck that eventWaiters isn't zero to |
713 |
> |
* avoid wasted array checks. Gives up upon a change in count or |
714 |
> |
* upon releasing two workers, letting others take over. |
715 |
|
*/ |
716 |
< |
private void onWorkerCreationFailure() { |
717 |
< |
for (;;) { |
718 |
< |
int wc = workerCounts; |
719 |
< |
int rc = wc & RUNNING_COUNT_MASK; |
720 |
< |
int tc = wc >>> TOTAL_COUNT_SHIFT; |
721 |
< |
if (rc == 0 || wc == 0) |
722 |
< |
Thread.yield(); // must wait for other counts to settle |
723 |
< |
else if (UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
724 |
< |
wc - (ONE_RUNNING|ONE_TOTAL))) |
716 |
> |
private void releaseEventWaiters() { |
717 |
> |
ForkJoinWorkerThread[] ws = workers; |
718 |
> |
int n = ws.length; |
719 |
> |
long h = eventWaiters; |
720 |
> |
int ec = eventCount; |
721 |
> |
boolean releasedOne = false; |
722 |
> |
ForkJoinWorkerThread w; int id; |
723 |
> |
while ((id = ((int)(h & WAITER_ID_MASK)) - 1) >= 0 && |
724 |
> |
(int)(h >>> EVENT_COUNT_SHIFT) != ec && |
725 |
> |
id < n && (w = ws[id]) != null) { |
726 |
> |
if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
727 |
> |
h, w.nextWaiter)) { |
728 |
> |
LockSupport.unpark(w); |
729 |
> |
if (releasedOne) // exit on second release |
730 |
> |
break; |
731 |
> |
releasedOne = true; |
732 |
> |
} |
733 |
> |
if (eventCount != ec) |
734 |
|
break; |
735 |
+ |
h = eventWaiters; |
736 |
|
} |
681 |
– |
tryTerminate(false); // in case of failure during shutdown |
737 |
|
} |
738 |
|
|
739 |
|
/** |
740 |
< |
* Creates enough total workers to establish target parallelism, |
741 |
< |
* giving up if terminating or addWorker fails |
740 |
> |
* Tries to advance eventCount and releases waiters. Called only |
741 |
> |
* from workers. |
742 |
|
*/ |
743 |
< |
private void ensureEnoughTotalWorkers() { |
744 |
< |
int wc; |
745 |
< |
while (((wc = workerCounts) >>> TOTAL_COUNT_SHIFT) < parallelism && |
746 |
< |
runState < TERMINATING) { |
747 |
< |
if ((UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
693 |
< |
wc, wc + (ONE_RUNNING|ONE_TOTAL)) && |
694 |
< |
addWorker() == null)) |
695 |
< |
break; |
696 |
< |
} |
743 |
> |
final void signalWork() { |
744 |
> |
int c; // try to increment event count -- CAS failure OK |
745 |
> |
UNSAFE.compareAndSwapInt(this, eventCountOffset, c = eventCount, c+1); |
746 |
> |
if (eventWaiters != 0L) |
747 |
> |
releaseEventWaiters(); |
748 |
|
} |
749 |
|
|
750 |
|
/** |
751 |
< |
* Final callback from terminating worker. Removes record of |
752 |
< |
* worker from array, and adjusts counts. If pool is shutting |
702 |
< |
* down, tries to complete terminatation, else possibly replaces |
703 |
< |
* the worker. |
751 |
> |
* Adds the given worker to event queue and blocks until |
752 |
> |
* terminating or event count advances from the given value |
753 |
|
* |
754 |
< |
* @param w the worker |
754 |
> |
* @param w the calling worker thread |
755 |
> |
* @param ec the count |
756 |
|
*/ |
757 |
< |
final void workerTerminated(ForkJoinWorkerThread w) { |
758 |
< |
if (w.active) { // force inactive |
759 |
< |
w.active = false; |
760 |
< |
do {} while (!tryDecrementActiveCount()); |
761 |
< |
} |
762 |
< |
forgetWorker(w); |
763 |
< |
|
764 |
< |
// Decrement total count, and if was running, running count |
765 |
< |
// Spin (waiting for other updates) if either would be negative |
766 |
< |
int nr = w.isTrimmed() ? 0 : ONE_RUNNING; |
717 |
< |
int unit = ONE_TOTAL + nr; |
718 |
< |
for (;;) { |
719 |
< |
int wc = workerCounts; |
720 |
< |
int rc = wc & RUNNING_COUNT_MASK; |
721 |
< |
int tc = wc >>> TOTAL_COUNT_SHIFT; |
722 |
< |
if (rc - nr < 0 || tc == 0) |
723 |
< |
Thread.yield(); // back off if waiting for other updates |
724 |
< |
else if (UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
725 |
< |
wc, wc - unit)) |
757 |
> |
private void eventSync(ForkJoinWorkerThread w, int ec) { |
758 |
> |
long nh = (((long)ec) << EVENT_COUNT_SHIFT) | ((long)(w.poolIndex+1)); |
759 |
> |
long h; |
760 |
> |
while ((runState < SHUTDOWN || !tryTerminate(false)) && |
761 |
> |
(((int)((h = eventWaiters) & WAITER_ID_MASK)) == 0 || |
762 |
> |
(int)(h >>> EVENT_COUNT_SHIFT) == ec) && |
763 |
> |
eventCount == ec) { |
764 |
> |
if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
765 |
> |
w.nextWaiter = h, nh)) { |
766 |
> |
awaitEvent(w, ec); |
767 |
|
break; |
768 |
+ |
} |
769 |
|
} |
728 |
– |
|
729 |
– |
accumulateStealCount(w); // collect final count |
730 |
– |
if (!tryTerminate(false)) |
731 |
– |
ensureEnoughTotalWorkers(); |
770 |
|
} |
771 |
|
|
734 |
– |
// Waiting for and signalling events |
735 |
– |
|
772 |
|
/** |
773 |
< |
* Releases workers blocked on a count not equal to current count. |
774 |
< |
* @return true if any released |
773 |
> |
* Blocks the given worker (that has already been entered as an |
774 |
> |
* event waiter) until terminating or event count advances from |
775 |
> |
* the given value. The oldest (first) waiter uses a timed wait to |
776 |
> |
* occasionally one-by-one shrink the number of workers (to a |
777 |
> |
* minimum of one) if the pool has not been used for extended |
778 |
> |
* periods. |
779 |
> |
* |
780 |
> |
* @param w the calling worker thread |
781 |
> |
* @param ec the count |
782 |
|
*/ |
783 |
< |
private void releaseWaiters() { |
784 |
< |
long top; |
785 |
< |
while ((top = eventWaiters) != 0L) { |
786 |
< |
ForkJoinWorkerThread[] ws = workers; |
787 |
< |
int n = ws.length; |
788 |
< |
for (;;) { |
789 |
< |
int i = ((int)(top & WAITER_ID_MASK)) - 1; |
790 |
< |
int e = (int)(top >>> EVENT_COUNT_SHIFT); |
791 |
< |
if (i < 0 || e == eventCount) |
792 |
< |
return; |
793 |
< |
ForkJoinWorkerThread w; |
794 |
< |
if (i < n && (w = ws[i]) != null && |
795 |
< |
UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
796 |
< |
top, w.nextWaiter)) { |
797 |
< |
LockSupport.unpark(w); |
798 |
< |
top = eventWaiters; |
783 |
> |
private void awaitEvent(ForkJoinWorkerThread w, int ec) { |
784 |
> |
while (eventCount == ec) { |
785 |
> |
if (tryAccumulateStealCount(w)) { // transfer while idle |
786 |
> |
boolean untimed = (w.nextWaiter != 0L || |
787 |
> |
(workerCounts & RUNNING_COUNT_MASK) <= 1); |
788 |
> |
long startTime = untimed? 0 : System.nanoTime(); |
789 |
> |
Thread.interrupted(); // clear/ignore interrupt |
790 |
> |
if (eventCount != ec || w.runState != 0 || |
791 |
> |
runState >= TERMINATING) // recheck after clear |
792 |
> |
break; |
793 |
> |
if (untimed) |
794 |
> |
LockSupport.park(w); |
795 |
> |
else { |
796 |
> |
LockSupport.parkNanos(w, SHRINK_RATE_NANOS); |
797 |
> |
if (eventCount != ec || w.runState != 0 || |
798 |
> |
runState >= TERMINATING) |
799 |
> |
break; |
800 |
> |
if (System.nanoTime() - startTime >= SHRINK_RATE_NANOS) |
801 |
> |
tryShutdownUnusedWorker(ec); |
802 |
|
} |
757 |
– |
else |
758 |
– |
break; // possibly stale; reread |
803 |
|
} |
804 |
|
} |
805 |
|
} |
806 |
|
|
807 |
+ |
// Maintaining parallelism |
808 |
+ |
|
809 |
|
/** |
810 |
< |
* Ensures eventCount on exit is different (mod 2^32) than on |
765 |
< |
* entry and wakes up all waiters |
810 |
> |
* Pushes worker onto the spare stack |
811 |
|
*/ |
812 |
< |
private void signalEvent() { |
813 |
< |
int c; |
814 |
< |
do {} while (!UNSAFE.compareAndSwapInt(this, eventCountOffset, |
815 |
< |
c = eventCount, c+1)); |
771 |
< |
releaseWaiters(); |
812 |
> |
final void pushSpare(ForkJoinWorkerThread w) { |
813 |
> |
int ns = (++w.spareCount << SPARE_COUNT_SHIFT) | (w.poolIndex + 1); |
814 |
> |
do {} while (!UNSAFE.compareAndSwapInt(this, spareWaitersOffset, |
815 |
> |
w.nextSpare = spareWaiters, ns)); |
816 |
|
} |
817 |
|
|
818 |
|
/** |
819 |
< |
* Advances eventCount and releases waiters until interference by |
820 |
< |
* other releasing threads is detected. |
819 |
> |
* Tries (once) to resume a spare if the number of running |
820 |
> |
* threads is less than target. |
821 |
|
*/ |
822 |
< |
final void signalWork() { |
823 |
< |
int c; |
824 |
< |
UNSAFE.compareAndSwapInt(this, eventCountOffset, c=eventCount, c+1); |
825 |
< |
long top; |
826 |
< |
while ((top = eventWaiters) != 0L) { |
827 |
< |
int ec = eventCount; |
828 |
< |
ForkJoinWorkerThread[] ws = workers; |
829 |
< |
int n = ws.length; |
830 |
< |
for (;;) { |
831 |
< |
int i = ((int)(top & WAITER_ID_MASK)) - 1; |
832 |
< |
int e = (int)(top >>> EVENT_COUNT_SHIFT); |
833 |
< |
if (i < 0 || e == ec) |
834 |
< |
return; |
835 |
< |
ForkJoinWorkerThread w; |
836 |
< |
if (i < n && (w = ws[i]) != null && |
837 |
< |
UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
838 |
< |
top, top = w.nextWaiter)) { |
839 |
< |
LockSupport.unpark(w); |
840 |
< |
if (top != eventWaiters) // let someone else take over |
841 |
< |
return; |
822 |
> |
private void tryResumeSpare() { |
823 |
> |
int sw, id; |
824 |
> |
ForkJoinWorkerThread[] ws = workers; |
825 |
> |
int n = ws.length; |
826 |
> |
ForkJoinWorkerThread w; |
827 |
> |
if ((sw = spareWaiters) != 0 && |
828 |
> |
(id = (sw & SPARE_ID_MASK) - 1) >= 0 && |
829 |
> |
id < n && (w = ws[id]) != null && |
830 |
> |
(workerCounts & RUNNING_COUNT_MASK) < parallelism && |
831 |
> |
spareWaiters == sw && |
832 |
> |
UNSAFE.compareAndSwapInt(this, spareWaitersOffset, |
833 |
> |
sw, w.nextSpare)) { |
834 |
> |
int c; // increment running count before resume |
835 |
> |
do {} while (!UNSAFE.compareAndSwapInt |
836 |
> |
(this, workerCountsOffset, |
837 |
> |
c = workerCounts, c + ONE_RUNNING)); |
838 |
> |
if (w.tryUnsuspend()) |
839 |
> |
LockSupport.unpark(w); |
840 |
> |
else // back out if w was shutdown |
841 |
> |
decrementWorkerCounts(ONE_RUNNING, 0); |
842 |
> |
} |
843 |
> |
} |
844 |
> |
|
845 |
> |
/** |
846 |
> |
* Tries to increase the number of running workers if below target |
847 |
> |
* parallelism: If a spare exists tries to resume it via |
848 |
> |
* tryResumeSpare. Otherwise, if not enough total workers or all |
849 |
> |
* existing workers are busy, adds a new worker. In all cases also |
850 |
> |
* helps wake up releasable workers waiting for work. |
851 |
> |
*/ |
852 |
> |
private void helpMaintainParallelism() { |
853 |
> |
int pc = parallelism; |
854 |
> |
int wc, rs, tc; |
855 |
> |
while (((wc = workerCounts) & RUNNING_COUNT_MASK) < pc && |
856 |
> |
(rs = runState) < TERMINATING) { |
857 |
> |
if (spareWaiters != 0) |
858 |
> |
tryResumeSpare(); |
859 |
> |
else if ((tc = wc >>> TOTAL_COUNT_SHIFT) >= MAX_WORKERS || |
860 |
> |
(tc >= pc && (rs & ACTIVE_COUNT_MASK) != tc)) |
861 |
> |
break; // enough total |
862 |
> |
else if (runState == rs && workerCounts == wc && |
863 |
> |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
864 |
> |
wc + (ONE_RUNNING|ONE_TOTAL))) { |
865 |
> |
ForkJoinWorkerThread w = null; |
866 |
> |
try { |
867 |
> |
w = factory.newThread(this); |
868 |
> |
} finally { // adjust on null or exceptional factory return |
869 |
> |
if (w == null) { |
870 |
> |
decrementWorkerCounts(ONE_RUNNING, ONE_TOTAL); |
871 |
> |
tryTerminate(false); // handle failure during shutdown |
872 |
> |
} |
873 |
> |
} |
874 |
> |
if (w == null) |
875 |
> |
break; |
876 |
> |
w.start(recordWorker(w), ueh); |
877 |
> |
if ((workerCounts >>> TOTAL_COUNT_SHIFT) >= pc) { |
878 |
> |
int c; // advance event count |
879 |
> |
UNSAFE.compareAndSwapInt(this, eventCountOffset, |
880 |
> |
c = eventCount, c+1); |
881 |
> |
break; // add at most one unless total below target |
882 |
|
} |
799 |
– |
else |
800 |
– |
break; // possibly stale; reread |
883 |
|
} |
884 |
|
} |
885 |
+ |
if (eventWaiters != 0L) |
886 |
+ |
releaseEventWaiters(); |
887 |
|
} |
888 |
|
|
889 |
|
/** |
890 |
< |
* Blockss worker until terminating or event count |
891 |
< |
* advances from last value held by worker |
890 |
> |
* Callback from the oldest waiter in awaitEvent waking up after a |
891 |
> |
* period of non-use. If all workers are idle, tries (once) to |
892 |
> |
* shutdown an event waiter or a spare, if one exists. Note that |
893 |
> |
* we don't need CAS or locks here because the method is called |
894 |
> |
* only from one thread occasionally waking (and even misfires are |
895 |
> |
* OK). Note that until the shutdown worker fully terminates, |
896 |
> |
* workerCounts will overestimate total count, which is tolerable. |
897 |
|
* |
898 |
< |
* @param w the calling worker thread |
898 |
> |
* @param ec the event count waited on by caller (to abort |
899 |
> |
* attempt if count has since changed). |
900 |
|
*/ |
901 |
< |
private void eventSync(ForkJoinWorkerThread w) { |
902 |
< |
int wec = w.lastEventCount; |
903 |
< |
long nextTop = (((long)wec << EVENT_COUNT_SHIFT) | |
904 |
< |
((long)(w.poolIndex + 1))); |
905 |
< |
long top; |
906 |
< |
while ((runState < SHUTDOWN || !tryTerminate(false)) && |
907 |
< |
(((int)(top = eventWaiters) & WAITER_ID_MASK) == 0 || |
908 |
< |
(int)(top >>> EVENT_COUNT_SHIFT) == wec) && |
909 |
< |
eventCount == wec) { |
910 |
< |
if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset, |
911 |
< |
w.nextWaiter = top, nextTop)) { |
912 |
< |
accumulateStealCount(w); // transfer steals while idle |
913 |
< |
Thread.interrupted(); // clear/ignore interrupt |
914 |
< |
while (eventCount == wec) |
915 |
< |
w.doPark(); |
916 |
< |
break; |
901 |
> |
private void tryShutdownUnusedWorker(int ec) { |
902 |
> |
if (runState == 0 && eventCount == ec) { // only trigger if all idle |
903 |
> |
ForkJoinWorkerThread[] ws = workers; |
904 |
> |
int n = ws.length; |
905 |
> |
ForkJoinWorkerThread w = null; |
906 |
> |
boolean shutdown = false; |
907 |
> |
int sw; |
908 |
> |
long h; |
909 |
> |
if ((sw = spareWaiters) != 0) { // prefer killing spares |
910 |
> |
int id = (sw & SPARE_ID_MASK) - 1; |
911 |
> |
if (id >= 0 && id < n && (w = ws[id]) != null && |
912 |
> |
UNSAFE.compareAndSwapInt(this, spareWaitersOffset, |
913 |
> |
sw, w.nextSpare)) |
914 |
> |
shutdown = true; |
915 |
> |
} |
916 |
> |
else if ((h = eventWaiters) != 0L) { |
917 |
> |
long nh; |
918 |
> |
int id = ((int)(h & WAITER_ID_MASK)) - 1; |
919 |
> |
if (id >= 0 && id < n && (w = ws[id]) != null && |
920 |
> |
(nh = w.nextWaiter) != 0L && // keep at least one worker |
921 |
> |
UNSAFE.compareAndSwapLong(this, eventWaitersOffset, h, nh)) |
922 |
> |
shutdown = true; |
923 |
> |
} |
924 |
> |
if (w != null && shutdown) { |
925 |
> |
w.shutdown(); |
926 |
> |
LockSupport.unpark(w); |
927 |
|
} |
928 |
|
} |
929 |
< |
w.lastEventCount = eventCount; |
929 |
> |
releaseEventWaiters(); // in case of interference |
930 |
|
} |
931 |
|
|
932 |
|
/** |
933 |
|
* Callback from workers invoked upon each top-level action (i.e., |
934 |
< |
* stealing a task or taking a submission and running |
935 |
< |
* it). Performs one or both of the following: |
934 |
> |
* stealing a task or taking a submission and running it). |
935 |
> |
* Performs one or more of the following: |
936 |
|
* |
937 |
< |
* * If the worker cannot find work, updates its active status to |
938 |
< |
* inactive and updates activeCount unless there is contention, in |
939 |
< |
* which case it may try again (either in this or a subsequent |
940 |
< |
* call). Additionally, awaits the next task event and/or helps |
941 |
< |
* wake up other releasable waiters. |
942 |
< |
* |
943 |
< |
* * If there are too many running threads, suspends this worker |
944 |
< |
* (first forcing inactivation if necessary). If it is not |
945 |
< |
* resumed before a keepAlive elapses, the worker may be "trimmed" |
946 |
< |
* -- killed while suspended within suspendAsSpare. Otherwise, |
947 |
< |
* upon resume it rechecks to make sure that it is still needed. |
937 |
> |
* 1. If the worker is active and either did not run a task |
938 |
> |
* or there are too many workers, try to set its active status |
939 |
> |
* to inactive and update activeCount. On contention, we may |
940 |
> |
* try again in this or a subsequent call. |
941 |
> |
* |
942 |
> |
* 2. If not enough total workers, help create some. |
943 |
> |
* |
944 |
> |
* 3. If there are too many running workers, suspend this worker |
945 |
> |
* (first forcing inactive if necessary). If it is not needed, |
946 |
> |
* it may be shutdown while suspended (via |
947 |
> |
* tryShutdownUnusedWorker). Otherwise, upon resume it |
948 |
> |
* rechecks running thread count and need for event sync. |
949 |
> |
* |
950 |
> |
* 4. If worker did not run a task, await the next task event via |
951 |
> |
* eventSync if necessary (first forcing inactivation), upon |
952 |
> |
* which the worker may be shutdown via |
953 |
> |
* tryShutdownUnusedWorker. Otherwise, help release any |
954 |
> |
* existing event waiters that are now releasable. |
955 |
|
* |
956 |
|
* @param w the worker |
957 |
< |
* @param retries the number of scans by caller failing to find work |
851 |
< |
* find any (in which case it may block waiting for work). |
957 |
> |
* @param ran true if worker ran a task since last call to this method |
958 |
|
*/ |
959 |
< |
final void preStep(ForkJoinWorkerThread w, int retries) { |
959 |
> |
final void preStep(ForkJoinWorkerThread w, boolean ran) { |
960 |
> |
int wec = w.lastEventCount; |
961 |
|
boolean active = w.active; |
962 |
< |
boolean inactivate = active && retries > 0; |
963 |
< |
for (;;) { |
964 |
< |
int rs, wc; |
965 |
< |
if (inactivate && |
966 |
< |
UNSAFE.compareAndSwapInt(this, runStateOffset, |
967 |
< |
rs = runState, rs - ONE_ACTIVE)) |
962 |
> |
boolean inactivate = false; |
963 |
> |
int pc = parallelism; |
964 |
> |
int rs; |
965 |
> |
while (w.runState == 0 && (rs = runState) < TERMINATING) { |
966 |
> |
if ((inactivate || (active && (rs & ACTIVE_COUNT_MASK) >= pc)) && |
967 |
> |
UNSAFE.compareAndSwapInt(this, runStateOffset, rs, rs - 1)) |
968 |
|
inactivate = active = w.active = false; |
969 |
< |
if (((wc = workerCounts) & RUNNING_COUNT_MASK) <= parallelism) { |
970 |
< |
if (retries > 0) { |
971 |
< |
if (retries > 1 && !active) |
972 |
< |
eventSync(w); |
973 |
< |
releaseWaiters(); |
969 |
> |
int wc = workerCounts; |
970 |
> |
if ((wc & RUNNING_COUNT_MASK) > pc) { |
971 |
> |
if (!(inactivate |= active) && // must inactivate to suspend |
972 |
> |
workerCounts == wc && // try to suspend as spare |
973 |
> |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
974 |
> |
wc, wc - ONE_RUNNING)) |
975 |
> |
w.suspendAsSpare(); |
976 |
> |
} |
977 |
> |
else if ((wc >>> TOTAL_COUNT_SHIFT) < pc) |
978 |
> |
helpMaintainParallelism(); // not enough workers |
979 |
> |
else if (!ran) { |
980 |
> |
long h = eventWaiters; |
981 |
> |
int ec = eventCount; |
982 |
> |
if (h != 0L && (int)(h >>> EVENT_COUNT_SHIFT) != ec) |
983 |
> |
releaseEventWaiters(); // release others before waiting |
984 |
> |
else if (ec != wec) { |
985 |
> |
w.lastEventCount = ec; // no need to wait |
986 |
> |
break; |
987 |
|
} |
988 |
< |
break; |
988 |
> |
else if (!(inactivate |= active)) |
989 |
> |
eventSync(w, wec); // must inactivate before sync |
990 |
|
} |
991 |
< |
if (!(inactivate |= active) && // must inactivate to suspend |
871 |
< |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
872 |
< |
wc, wc - ONE_RUNNING) && |
873 |
< |
!w.suspendAsSpare()) // false if trimmed |
991 |
> |
else |
992 |
|
break; |
993 |
|
} |
994 |
|
} |
995 |
|
|
996 |
|
/** |
997 |
< |
* Awaits join of the given task if enough threads, or can resume |
998 |
< |
* or create a spare. Fails (in which case the given task might |
881 |
< |
* not be done) upon contention or lack of decision about |
882 |
< |
* blocking. |
883 |
< |
* |
884 |
< |
* We allow blocking if: |
885 |
< |
* |
886 |
< |
* 1. There would still be at least as many running threads as |
887 |
< |
* parallelism level if this thread blocks. |
888 |
< |
* |
889 |
< |
* 2. A spare is resumed to replace this worker. We tolerate |
890 |
< |
* races in the decision to replace when a spare is found. |
891 |
< |
* This may release too many, but if so, the superfluous ones |
892 |
< |
* will re-suspend via preStep(). |
893 |
< |
* |
894 |
< |
* 3. After #spares repeated retries, there are fewer than #spare |
895 |
< |
* threads not running. We allow this slack to avoid hysteresis |
896 |
< |
* and as a hedge against lag/uncertainty of running count |
897 |
< |
* estimates when signalling or unblocking stalls. |
898 |
< |
* |
899 |
< |
* 4. All existing workers are busy (as rechecked via #spares |
900 |
< |
* repeated retries by caller) and a new spare is created. |
901 |
< |
* |
902 |
< |
* If none of the above hold, we escape out by re-incrementing |
903 |
< |
* count and returning to caller, which can retry later. |
997 |
> |
* Helps and/or blocks awaiting join of the given task. |
998 |
> |
* See above for explanation. |
999 |
|
* |
1000 |
|
* @param joinMe the task to join |
1001 |
< |
* @param retries the number of calls to this method for this join |
1001 |
> |
* @param worker the current worker thread |
1002 |
|
*/ |
1003 |
< |
final void tryAwaitJoin(ForkJoinTask<?> joinMe, int retries) { |
1004 |
< |
int pc = parallelism; |
1005 |
< |
boolean running = true; // false when running count decremented |
1006 |
< |
outer:while (joinMe.status >= 0) { |
1007 |
< |
int wc = workerCounts; |
1008 |
< |
int rc = wc & RUNNING_COUNT_MASK; |
1009 |
< |
int tc = wc >>> TOTAL_COUNT_SHIFT; |
1010 |
< |
if (running) { // replace with spare or decrement count |
1011 |
< |
if (rc <= pc && tc > pc && |
1012 |
< |
(retries > 0 || tc > (runState & ACTIVE_COUNT_MASK))) { |
1013 |
< |
ForkJoinWorkerThread[] ws = workers; // search for spare |
1014 |
< |
int nws = ws.length; |
1015 |
< |
for (int i = 0; i < nws; ++i) { |
1016 |
< |
ForkJoinWorkerThread w = ws[i]; |
1017 |
< |
if (w != null && w.isSuspended()) { |
1018 |
< |
if ((workerCounts & RUNNING_COUNT_MASK) > pc) |
1019 |
< |
continue outer; |
1020 |
< |
if (joinMe.status < 0) |
1021 |
< |
break outer; |
1022 |
< |
if (w.tryResumeSpare()) { |
1023 |
< |
running = false; |
1024 |
< |
break outer; |
1025 |
< |
} |
1026 |
< |
continue outer; // rescan on failure to resume |
1027 |
< |
} |
1028 |
< |
} |
1029 |
< |
} |
935 |
< |
if ((rc <= pc && (rc == 0 || --retries < 0)) || // no retry |
936 |
< |
joinMe.status < 0) |
937 |
< |
break; |
938 |
< |
if (workerCounts == wc && |
939 |
< |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
940 |
< |
wc, wc - ONE_RUNNING)) |
941 |
< |
running = false; |
942 |
< |
} |
943 |
< |
else { // allow blocking if enough threads |
944 |
< |
int sc = tc - pc + 1; // = spares, plus the one to add |
945 |
< |
if (sc > 0 && rc > 0 && rc >= pc - sc && rc > pc - retries) |
946 |
< |
break; |
947 |
< |
if (--retries > sc && tc < MAX_THREADS && |
948 |
< |
tc == (runState & ACTIVE_COUNT_MASK) && |
949 |
< |
workerCounts == wc && |
950 |
< |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
951 |
< |
wc + (ONE_RUNNING|ONE_TOTAL))) { |
952 |
< |
addWorker(); |
953 |
< |
break; |
954 |
< |
} |
955 |
< |
if (workerCounts == wc && |
956 |
< |
UNSAFE.compareAndSwapInt (this, workerCountsOffset, |
957 |
< |
wc, wc + ONE_RUNNING)) { |
958 |
< |
running = true; // back out; allow retry |
959 |
< |
break; |
960 |
< |
} |
1003 |
> |
final void awaitJoin(ForkJoinTask<?> joinMe, ForkJoinWorkerThread worker) { |
1004 |
> |
int retries = 2 + (parallelism >> 2); // #helpJoins before blocking |
1005 |
> |
while (joinMe.status >= 0) { |
1006 |
> |
int wc; |
1007 |
> |
worker.helpJoinTask(joinMe); |
1008 |
> |
if (joinMe.status < 0) |
1009 |
> |
break; |
1010 |
> |
else if (retries > 0) |
1011 |
> |
--retries; |
1012 |
> |
else if (((wc = workerCounts) & RUNNING_COUNT_MASK) != 0 && |
1013 |
> |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1014 |
> |
wc, wc - ONE_RUNNING)) { |
1015 |
> |
int stat, c; long h; |
1016 |
> |
while ((stat = joinMe.status) >= 0 && |
1017 |
> |
(h = eventWaiters) != 0L && // help release others |
1018 |
> |
(int)(h >>> EVENT_COUNT_SHIFT) != eventCount) |
1019 |
> |
releaseEventWaiters(); |
1020 |
> |
if (stat >= 0 && |
1021 |
> |
((workerCounts & RUNNING_COUNT_MASK) == 0 || |
1022 |
> |
(stat = |
1023 |
> |
joinMe.internalAwaitDone(JOIN_TIMEOUT_MILLIS)) >= 0)) |
1024 |
> |
helpMaintainParallelism(); // timeout or no running workers |
1025 |
> |
do {} while (!UNSAFE.compareAndSwapInt |
1026 |
> |
(this, workerCountsOffset, |
1027 |
> |
c = workerCounts, c + ONE_RUNNING)); |
1028 |
> |
if (stat < 0) |
1029 |
> |
break; // else restart |
1030 |
|
} |
1031 |
|
} |
963 |
– |
if (!running) { // can block |
964 |
– |
int c; // to inline incrementRunningCount |
965 |
– |
joinMe.internalAwaitDone(); |
966 |
– |
do {} while (!UNSAFE.compareAndSwapInt |
967 |
– |
(this, workerCountsOffset, |
968 |
– |
c = workerCounts, c + ONE_RUNNING)); |
969 |
– |
} |
1032 |
|
} |
1033 |
|
|
1034 |
|
/** |
1035 |
< |
* Same idea as (and shares many code snippets with) tryAwaitJoin, |
974 |
< |
* but self-contained because there are no caller retries. |
975 |
< |
* TODO: Rework to use simpler API. |
1035 |
> |
* Same idea as awaitJoin, but no helping, retries, or timeouts. |
1036 |
|
*/ |
1037 |
|
final void awaitBlocker(ManagedBlocker blocker) |
1038 |
|
throws InterruptedException { |
1039 |
< |
int pc = parallelism; |
980 |
< |
boolean running = true; |
981 |
< |
int retries = 0; |
982 |
< |
boolean done; |
983 |
< |
outer:while (!(done = blocker.isReleasable())) { |
1039 |
> |
while (!blocker.isReleasable()) { |
1040 |
|
int wc = workerCounts; |
1041 |
< |
int rc = wc & RUNNING_COUNT_MASK; |
1042 |
< |
int tc = wc >>> TOTAL_COUNT_SHIFT; |
1043 |
< |
if (running) { |
1044 |
< |
if (rc <= pc && tc > pc && |
1045 |
< |
(retries > 0 || tc > (runState & ACTIVE_COUNT_MASK))) { |
1046 |
< |
ForkJoinWorkerThread[] ws = workers; |
1047 |
< |
int nws = ws.length; |
1048 |
< |
for (int i = 0; i < nws; ++i) { |
1049 |
< |
ForkJoinWorkerThread w = ws[i]; |
1050 |
< |
if (w != null && w.isSuspended()) { |
1051 |
< |
if ((workerCounts & RUNNING_COUNT_MASK) > pc) |
1052 |
< |
continue outer; |
1053 |
< |
if (done = blocker.isReleasable()) |
1054 |
< |
break outer; |
999 |
< |
if (w.tryResumeSpare()) { |
1000 |
< |
running = false; |
1001 |
< |
break outer; |
1002 |
< |
} |
1003 |
< |
continue outer; |
1004 |
< |
} |
1041 |
> |
if ((wc & RUNNING_COUNT_MASK) != 0 && |
1042 |
> |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1043 |
> |
wc, wc - ONE_RUNNING)) { |
1044 |
> |
try { |
1045 |
> |
while (!blocker.isReleasable()) { |
1046 |
> |
long h = eventWaiters; |
1047 |
> |
if (h != 0L && |
1048 |
> |
(int)(h >>> EVENT_COUNT_SHIFT) != eventCount) |
1049 |
> |
releaseEventWaiters(); |
1050 |
> |
else if ((workerCounts & RUNNING_COUNT_MASK) == 0 && |
1051 |
> |
runState < TERMINATING) |
1052 |
> |
helpMaintainParallelism(); |
1053 |
> |
else if (blocker.block()) |
1054 |
> |
break; |
1055 |
|
} |
1056 |
< |
if (done = blocker.isReleasable()) |
1057 |
< |
break; |
1056 |
> |
} finally { |
1057 |
> |
int c; |
1058 |
> |
do {} while (!UNSAFE.compareAndSwapInt |
1059 |
> |
(this, workerCountsOffset, |
1060 |
> |
c = workerCounts, c + ONE_RUNNING)); |
1061 |
|
} |
1009 |
– |
if (rc > 0 && workerCounts == wc && |
1010 |
– |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1011 |
– |
wc, wc - ONE_RUNNING)) { |
1012 |
– |
running = false; |
1013 |
– |
if (rc > pc) |
1014 |
– |
break; |
1015 |
– |
} |
1016 |
– |
} |
1017 |
– |
else if (rc >= pc) |
1018 |
– |
break; |
1019 |
– |
else if (tc < MAX_THREADS && |
1020 |
– |
tc == (runState & ACTIVE_COUNT_MASK) && |
1021 |
– |
workerCounts == wc && |
1022 |
– |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
1023 |
– |
wc + (ONE_RUNNING|ONE_TOTAL))) { |
1024 |
– |
addWorker(); |
1062 |
|
break; |
1063 |
|
} |
1027 |
– |
else if (workerCounts == wc && |
1028 |
– |
UNSAFE.compareAndSwapInt (this, workerCountsOffset, |
1029 |
– |
wc, wc + ONE_RUNNING)) { |
1030 |
– |
Thread.yield(); |
1031 |
– |
++retries; |
1032 |
– |
running = true; // allow rescan |
1033 |
– |
} |
1034 |
– |
} |
1035 |
– |
|
1036 |
– |
try { |
1037 |
– |
if (!done) |
1038 |
– |
do {} while (!blocker.isReleasable() && !blocker.block()); |
1039 |
– |
} finally { |
1040 |
– |
if (!running) { |
1041 |
– |
int c; |
1042 |
– |
do {} while (!UNSAFE.compareAndSwapInt |
1043 |
– |
(this, workerCountsOffset, |
1044 |
– |
c = workerCounts, c + ONE_RUNNING)); |
1045 |
– |
} |
1064 |
|
} |
1065 |
|
} |
1066 |
|
|
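For context, awaitBlocker backs the public ManagedBlocker protocol. A typical implementation in the style of the class-level javadoc examples (a BlockingQueue taker; the wrapper name is illustrative):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ForkJoinPool;

    class QueueTaker<E> implements ForkJoinPool.ManagedBlocker {
        final BlockingQueue<E> queue;
        volatile E item = null;
        QueueTaker(BlockingQueue<E> q) { this.queue = q; }
        public boolean block() throws InterruptedException {
            if (item == null)
                item = queue.take();    // really block
            return true;
        }
        public boolean isReleasable() { // avoid blocking when a poll suffices
            return item != null || (item = queue.poll()) != null;
        }
        public E getItem() { return item; }
    }

    // Used as: ForkJoinPool.managedBlock(new QueueTaker<String>(q)); then getItem().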
1092 |
|
|
1093 |
|
/** |
1094 |
|
* Actions on transition to TERMINATING |
1095 |
+ |
* |
1096 |
+ |
* Runs up to four passes through workers: (0) shutting down each |
1097 |
+ |
* (without waking up if parked) to quickly spread notifications |
1098 |
+ |
* without unnecessary bouncing around event queues etc (1) wake |
1099 |
+ |
* up and help cancel tasks (2) interrupt (3) mop up races with |
1100 |
+ |
* interrupted workers |
1101 |
|
*/ |
1102 |
|
private void startTerminating() { |
1103 |
< |
for (int i = 0; i < 2; ++i) { // twice to mop up newly created workers |
1104 |
< |
cancelSubmissions(); |
1105 |
< |
shutdownWorkers(); |
1106 |
< |
cancelWorkerTasks(); |
1107 |
< |
signalEvent(); |
1108 |
< |
interruptWorkers(); |
1103 |
> |
cancelSubmissions(); |
1104 |
> |
for (int passes = 0; passes < 4 && workerCounts != 0; ++passes) { |
1105 |
> |
int c; // advance event count |
1106 |
> |
UNSAFE.compareAndSwapInt(this, eventCountOffset, |
1107 |
> |
c = eventCount, c+1); |
1108 |
> |
eventWaiters = 0L; // clobber lists |
1109 |
> |
spareWaiters = 0; |
1110 |
> |
ForkJoinWorkerThread[] ws = workers; |
1111 |
> |
int n = ws.length; |
1112 |
> |
for (int i = 0; i < n; ++i) { |
1113 |
> |
ForkJoinWorkerThread w = ws[i]; |
1114 |
> |
if (w != null) { |
1115 |
> |
w.shutdown(); |
1116 |
> |
if (passes > 0 && !w.isTerminated()) { |
1117 |
> |
w.cancelTasks(); |
1118 |
> |
LockSupport.unpark(w); |
1119 |
> |
if (passes > 1) { |
1120 |
> |
try { |
1121 |
> |
w.interrupt(); |
1122 |
> |
} catch (SecurityException ignore) { |
1123 |
> |
} |
1124 |
> |
} |
1125 |
> |
} |
1126 |
> |
} |
1127 |
> |
} |
1128 |
|
} |
1129 |
|
} |
1130 |
|
|
1141 |   | }
1142 |   | }
1143 |   |
1101 | – | /**
1102 | – | * Sets all worker run states to at least shutdown,
1103 | – | * also resuming suspended workers
1104 | – | */
1105 | – | private void shutdownWorkers() {
1106 | – | ForkJoinWorkerThread[] ws = workers;
1107 | – | int nws = ws.length;
1108 | – | for (int i = 0; i < nws; ++i) {
1109 | – | ForkJoinWorkerThread w = ws[i];
1110 | – | if (w != null)
1111 | – | w.shutdown();
1112 | – | }
1113 | – | }
1114 | – |
1115 | – | /**
1116 | – | * Clears out and cancels all locally queued tasks
1117 | – | */
1118 | – | private void cancelWorkerTasks() {
1119 | – | ForkJoinWorkerThread[] ws = workers;
1120 | – | int nws = ws.length;
1121 | – | for (int i = 0; i < nws; ++i) {
1122 | – | ForkJoinWorkerThread w = ws[i];
1123 | – | if (w != null)
1124 | – | w.cancelTasks();
1125 | – | }
1126 | – | }
1127 | – |
1128 | – | /**
1129 | – | * Unsticks all workers blocked on joins etc
1130 | – | */
1131 | – | private void interruptWorkers() {
1132 | – | ForkJoinWorkerThread[] ws = workers;
1133 | – | int nws = ws.length;
1134 | – | for (int i = 0; i < nws; ++i) {
1135 | – | ForkJoinWorkerThread w = ws[i];
1136 | – | if (w != null && !w.isTerminated()) {
1137 | – | try {
1138 | – | w.interrupt();
1139 | – | } catch (SecurityException ignore) {
1140 | – | }
1141 | – | }
1142 | – | }
1143 | – | }
1144 | – |
1144 |   | // misc support for ForkJoinWorkerThread
1145 |   |
1146 |   | /**
1151 |   | }
1152 |   |
1153 |   | /**
1154 | < | * Accumulates steal count from a worker, clearing
1155 | < | * the worker's value
1154 | > | * Tries to accumulate steal count from a worker, clearing
1155 | > | * the worker's value.
1156 | > | *
1157 | > | * @return true if worker steal count now zero
1158 |   | */
1159 | < | final void accumulateStealCount(ForkJoinWorkerThread w) {
1159 | > | final boolean tryAccumulateStealCount(ForkJoinWorkerThread w) {
1160 |   | int sc = w.stealCount;
1161 | < | if (sc != 0) {
1162 | < | long c;
1163 | < | w.stealCount = 0;
1164 | < | do {} while (!UNSAFE.compareAndSwapLong(this, stealCountOffset,
1165 | < | c = stealCount, c + sc));
1161 | > | long c = stealCount;
1162 | > | // CAS even if zero, for fence effects
1163 | > | if (UNSAFE.compareAndSwapLong(this, stealCountOffset, c, c + sc)) {
1164 | > | if (sc != 0)
1165 | > | w.stealCount = 0;
1166 | > | return true;
1167 |   | }
1168 | + | return sc == 0;
1169 |   | }
1170 |   |
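Where the old version spun until its CAS succeeded, the rewrite makes one attempt and reports failure so the caller can retry on a later pass; it also CASes even when the count is zero so the CAS doubles as a memory fence. A standalone analogue of the idiom (a sketch only: AtomicLong and AtomicInteger stand in for the Unsafe-based fields, and the class and method names are invented for illustration):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class TryAccumulateDemo {
    final AtomicLong totalSteals = new AtomicLong();    // shared pool-wide total

    // Folds a worker-local count into the shared total with one CAS attempt.
    // A failed CAS just reports false; the caller retries later, so no
    // thread ever spins here under contention.
    boolean tryAccumulate(AtomicInteger workerSteals) {
        int sc = workerSteals.get();
        long c = totalSteals.get();
        if (totalSteals.compareAndSet(c, c + sc)) {     // attempt even if sc == 0
            if (sc != 0)
                workerSteals.addAndGet(-sc);            // clear what was transferred
            return true;
        }
        return sc == 0;                                 // nothing to transfer anyway
    }

    public static void main(String[] args) {
        TryAccumulateDemo d = new TryAccumulateDemo();
        AtomicInteger w = new AtomicInteger(5);
        while (!d.tryAccumulate(w)) { }                 // uncontended: succeeds at once
        System.out.println(d.totalSteals.get());        // prints 5
    }
}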
1171 |   | /**
1174 |   | */
1175 |   | final int idlePerActive() {
1176 |   | int pc = parallelism; // use parallelism, not rc
1177 | < | int ac = runState; // no mask -- artifically boosts during shutdown
1177 | > | int ac = runState; // no mask -- artificially boosts during shutdown
1178 |   | // Use exact results for small values, saturate past 4
1179 |   | return pc <= ac? 0 : pc >>> 1 <= ac? 1 : pc >>> 2 <= ac? 3 : pc >>> 3;
1180 |   | }
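The return value is a throttle: how many idle workers are tolerated per active one, exact for small ratios and saturating so that large pools do not overreact to a few blocked threads. A worked sketch of the saturating expression above (a standalone reimplementation, for illustration only):

public class IdlePerActiveDemo {
    // Mirrors the return expression above: 0, 1, or 3 for small
    // idle-to-active ratios, capped at pc >>> 3 beyond that.
    static int idlePerActive(int pc, int ac) {
        return pc <= ac ? 0
            : (pc >>> 1) <= ac ? 1
            : (pc >>> 2) <= ac ? 3
            : pc >>> 3;
    }

    public static void main(String[] args) {
        int pc = 16; // parallelism level
        for (int ac : new int[] { 16, 8, 4, 1 })
            System.out.println("active=" + ac + " -> " + idlePerActive(pc, ac));
        // prints 0, 1, 3, 2 -- saturating at pc >>> 3 == 2
    }
}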
1248 |   | checkPermission();
1249 |   | if (factory == null)
1250 |   | throw new NullPointerException();
1251 | < | if (parallelism <= 0 || parallelism > MAX_THREADS)
1251 | > | if (parallelism <= 0 || parallelism > MAX_WORKERS)
1252 |   | throw new IllegalArgumentException();
1253 |   | this.parallelism = parallelism;
1254 |   | this.factory = factory;
1267 |   | * @param pc the initial parallelism level
1268 |   | */
1269 |   | private static int initialArraySizeFor(int pc) {
1270 | < | // See Hackers Delight, sec 3.2. We know MAX_THREADS < (1 >>> 16)
1271 | < | int size = pc < MAX_THREADS ? pc + 1 : MAX_THREADS;
1270 | > | // If possible, initially allocate enough space for one spare
1271 | > | int size = pc < MAX_WORKERS ? pc + 1 : MAX_WORKERS;
1272 | > | // See Hacker's Delight, sec 3.2. We know MAX_WORKERS < (1 << 16)
1273 |   | size |= size >>> 1;
1274 |   | size |= size >>> 2;
1275 |   | size |= size >>> 4;
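The shift-or cascade smears the highest set bit of size into every lower position; adding one then yields the next power of two, which becomes the workers array length. The hunk shown here cuts off mid-method, presumably before a final shift and the concluding size + 1. A standalone sketch of the complete trick under the comment's stated assumption that inputs stay below 1 << 16 (names invented for illustration):

public class NextPowerOfTwoDemo {
    // Hacker's Delight sec 3.2: propagate the highest set bit into all
    // lower positions, then add one, giving the next power of two > size.
    static int roundUp(int size) {
        size |= size >>> 1;
        size |= size >>> 2;
        size |= size >>> 4;
        size |= size >>> 8;   // sufficient for values below 1 << 16
        return size + 1;
    }

    public static void main(String[] args) {
        System.out.println(roundUp(5));   // 8
        System.out.println(roundUp(7));   // 8
        System.out.println(roundUp(16));  // 32 (strictly greater than the input)
    }
}

Since the pool passes pc + 1 when possible, the resulting array always has room for at least one spare beyond the parallelism level.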
1288 |   | if (runState >= SHUTDOWN)
1289 |   | throw new RejectedExecutionException();
1290 |   | submissionQueue.offer(task);
1291 | < | signalEvent();
1292 | < | ensureEnoughTotalWorkers();
1291 | > | int c; // try to increment event count -- CAS failure OK
1292 | > | UNSAFE.compareAndSwapInt(this, eventCountOffset, c = eventCount, c+1);
1293 | > | helpMaintainParallelism(); // create, start, or resume some workers
1294 |   | }
1295 |   |
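The single unchecked CAS works because any advance of eventCount is enough to signal waiting workers: if this CAS fails, some other thread has already advanced the count, which serves the same purpose. A standalone analogue of the idiom (an AtomicInteger stands in for the Unsafe-based eventCount field; names are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class AdvanceEventCountDemo {
    final AtomicInteger eventCount = new AtomicInteger();

    // One CAS attempt; the result is deliberately ignored, since a failed
    // CAS means another thread already advanced the count for us.
    void advance() {
        int c = eventCount.get();
        eventCount.compareAndSet(c, c + 1);
    }

    public static void main(String[] args) {
        AdvanceEventCountDemo d = new AdvanceEventCountDemo();
        d.advance();
        System.out.println(d.eventCount.get()); // prints 1
    }
}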
1296 |   | /**
1297 |   | * Performs the given task, returning its result upon completion.
1293 | – | * If the caller is already engaged in a fork/join computation in
1294 | – | * the current pool, this method is equivalent in effect to
1295 | – | * {@link ForkJoinTask#invoke}.
1298 |   | *
1299 |   | * @param task the task
1300 |   | * @return the task's result
1309 |   |
1310 |   | /**
1311 |   | * Arranges for (asynchronous) execution of the given task.
1310 | – | * If the caller is already engaged in a fork/join computation in
1311 | – | * the current pool, this method is equivalent in effect to
1312 | – | * {@link ForkJoinTask#fork}.
1312 |   | *
1313 |   | * @param task the task
1314 |   | * @throws NullPointerException if the task is null
1337 |   |
1338 |   | /**
1339 |   | * Submits a ForkJoinTask for execution.
1341 | – | * If the caller is already engaged in a fork/join computation in
1342 | – | * the current pool, this method is equivalent in effect to
1343 | – | * {@link ForkJoinTask#fork}.
1340 |   | *
1341 |   | * @param task the task to submit
1342 |   | * @return the task
1528 |   | public long getQueuedTaskCount() {
1529 |   | long count = 0;
1530 |   | ForkJoinWorkerThread[] ws = workers;
1531 | < | int nws = ws.length;
1532 | < | for (int i = 0; i < nws; ++i) {
1531 | > | int n = ws.length;
1532 | > | for (int i = 0; i < n; ++i) {
1533 |   | ForkJoinWorkerThread w = ws[i];
1534 |   | if (w != null)
1535 |   | count += w.getQueueSize();
1587 |   | * @return the number of elements transferred
1588 |   | */
1589 |   | protected int drainTasksTo(Collection<? super ForkJoinTask<?>> c) {
1590 | < | int n = submissionQueue.drainTo(c);
1590 | > | int count = submissionQueue.drainTo(c);
1591 |   | ForkJoinWorkerThread[] ws = workers;
1592 | < | int nws = ws.length;
1593 | < | for (int i = 0; i < nws; ++i) {
1592 | > | int n = ws.length;
1593 | > | for (int i = 0; i < n; ++i) {
1594 |   | ForkJoinWorkerThread w = ws[i];
1595 |   | if (w != null)
1596 | < | n += w.drainTasksTo(c);
1601 | < | }
1602 | < | return n;
1603 | < | }
1604 | < |
1605 | < | /**
1606 | < | * Returns count of total parks by existing workers.
1607 | < | * Used during development only since not meaningful to users.
1608 | < | */
1609 | < | private int collectParkCount() {
1610 | < | int count = 0;
1611 | < | ForkJoinWorkerThread[] ws = workers;
1612 | < | int nws = ws.length;
1613 | < | for (int i = 0; i < nws; ++i) {
1614 | < | ForkJoinWorkerThread w = ws[i];
1615 | < | if (w != null)
1616 | < | count += w.parkCount;
1596 | > | count += w.drainTasksTo(c);
1597 |   | }
1598 |   | return count;
1599 |   | }
1615 |   | int pc = parallelism;
1616 |   | int rs = runState;
1617 |   | int ac = rs & ACTIVE_COUNT_MASK;
1638 | – | // int pk = collectParkCount();
1618 |   | return super.toString() +
1619 |   | "[" + runLevelToString(rs) +
1620 |   | ", parallelism = " + pc +
1624 |   | ", steals = " + st +
1625 |   | ", tasks = " + qt +
1626 |   | ", submissions = " + qs +
1648 | – | // ", parks = " + pk +
1627 |   | "]";
1628 |   | }
1629 |   |
1730 |   | * Interface for extending managed parallelism for tasks running
1731 |   | * in {@link ForkJoinPool}s.
1732 |   | *
1733 | < | * <p>A {@code ManagedBlocker} provides two methods.
1734 | < | * Method {@code isReleasable} must return {@code true} if
1735 | < | * blocking is not necessary. Method {@code block} blocks the
1736 | < | * current thread if necessary (perhaps internally invoking
1737 | < | * {@code isReleasable} before actually blocking).
1733 | > | * <p>A {@code ManagedBlocker} provides two methods. Method
1734 | > | * {@code isReleasable} must return {@code true} if blocking is
1735 | > | * not necessary. Method {@code block} blocks the current thread
1736 | > | * if necessary (perhaps internally invoking {@code isReleasable}
1737 | > | * before actually blocking). The unusual methods in this API
1738 | > | * accommodate synchronizers that may, but don't usually, block
1739 | > | * for long periods. Similarly, they allow more efficient internal
1740 | > | * handling of cases in which additional workers may be, but
1741 | > | * usually are not, needed to ensure sufficient parallelism.
1742 | > | * Toward this end, implementations of method {@code isReleasable}
1743 | > | * must be amenable to repeated invocation.
1744 |   | *
1745 |   | * <p>For example, here is a ManagedBlocker based on a
1746 |   | * ReentrantLock:
1758 |   | * return hasLock || (hasLock = lock.tryLock());
1759 |   | * }
1760 |   | * }}</pre>
1761 | + | *
1762 | + | * <p>Here is a class that possibly blocks waiting for an
1763 | + | * item on a given queue:
1764 | + | * <pre> {@code
1765 | + | * class QueueTaker<E> implements ManagedBlocker {
1766 | + | * final BlockingQueue<E> queue;
1767 | + | * volatile E item = null;
1768 | + | * QueueTaker(BlockingQueue<E> q) { this.queue = q; }
1769 | + | * public boolean block() throws InterruptedException {
1770 | + | * if (item == null)
1771 | + | * item = queue.take();
1772 | + | * return true;
1773 | + | * }
1774 | + | * public boolean isReleasable() {
1775 | + | * return item != null || (item = queue.poll()) != null;
1776 | + | * }
1777 | + | * public E getItem() { // call after pool.managedBlock completes
1778 | + | * return item;
1779 | + | * }
1780 | + | * }}</pre>
1781 |   | */
1782 |   | public static interface ManagedBlocker {
1783 |   | /**
1820 |   | public static void managedBlock(ManagedBlocker blocker)
1821 |   | throws InterruptedException {
1822 |   | Thread t = Thread.currentThread();
1823 | < | if (t instanceof ForkJoinWorkerThread)
1824 | < | ((ForkJoinWorkerThread) t).pool.awaitBlocker(blocker);
1823 | > | if (t instanceof ForkJoinWorkerThread) {
1824 | > | ForkJoinWorkerThread w = (ForkJoinWorkerThread) t;
1825 | > | w.pool.awaitBlocker(blocker);
1826 | > | }
1827 |   | else {
1828 |   | do {} while (!blocker.isReleasable() && !blocker.block());
1829 |   | }
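Tying the pieces together: a task running inside the pool can block for a queue element via managedBlock, letting the pool compensate with a spare thread while it waits; on a non-pool thread the same call just falls through to the plain isReleasable/block loop. A usage sketch (it assumes the QueueTaker class from the new javadoc example above is compiled alongside, and that ForkJoinPool is imported from wherever this version lives, e.g. jsr166y or java.util.concurrent):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ForkJoinPool;

public class ManagedBlockUsageDemo {
    // Blocks for a queue element without starving the pool: managedBlock may
    // activate a spare worker while the calling worker waits in block().
    static <E> E takeManaged(BlockingQueue<E> queue) throws InterruptedException {
        QueueTaker<E> taker = new QueueTaker<E>(queue); // from the javadoc example
        ForkJoinPool.managedBlock(taker);
        return taker.getItem();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new LinkedBlockingQueue<String>();
        q.offer("hello");
        System.out.println(takeManaged(q)); // isReleasable() polls it immediately
    }
}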
1854 |   | objectFieldOffset("eventWaiters",ForkJoinPool.class);
1855 |   | private static final long stealCountOffset =
1856 |   | objectFieldOffset("stealCount",ForkJoinPool.class);
1857 | + | private static final long spareWaitersOffset =
1858 | + | objectFieldOffset("spareWaiters",ForkJoinPool.class);
1859 |   |
1860 |   | private static long objectFieldOffset(String field, Class<?> klazz) {
1861 |   | try {