 69 |   * <td ALIGN=CENTER> <b>Call from within fork/join computations</b></td>
 70 |   * </tr>
 71 |   * <tr>
 72 <   * <td> <b>Arange async execution</td>
 72 >   * <td> <b>Arrange async execution</td>
 73 |   * <td> {@link #execute(ForkJoinTask)}</td>
 74 |   * <td> {@link ForkJoinTask#fork}</td>
 75 |   * </tr>
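The two table rows in the hunk above contrast arranging asynchronous execution from outside the pool (`ForkJoinPool.execute(ForkJoinTask)`) with doing so from inside an already-running task (`ForkJoinTask.fork()`). A minimal runnable sketch of both call sites (the class and counter names here are ours, not the source's):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the two rows above: pool.execute(...) from a client thread,
// and child.fork() from within a fork/join computation.
public class AsyncArrangeDemo {
    static final AtomicInteger hits = new AtomicInteger();

    static class Leaf extends RecursiveAction {
        protected void compute() { hits.incrementAndGet(); }
    }

    static class Parent extends RecursiveAction {
        protected void compute() {
            Leaf child = new Leaf();
            child.fork();              // async arrangement from within a task
            hits.incrementAndGet();
            child.join();
        }
    }

    public static int run() throws Exception {
        hits.set(0);
        ForkJoinPool pool = new ForkJoinPool();
        Parent p = new Parent();
        pool.execute(p);               // async arrangement from a client thread
        p.join();                      // wait for completion
        pool.shutdown();
        return hits.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());     // parent + leaf both ran: 2
    }
}
```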
140 |   * Beyond work-stealing support and essential bookkeeping, the
141 |   * main responsibility of this framework is to take actions when
142 |   * one worker is waiting to join a task stolen (or always held by)
143 <   * another. Becauae we are multiplexing many tasks on to a pool
143 >   * another. Because we are multiplexing many tasks on to a pool
144 |   * of workers, we can't just let them block (as in Thread.join).
145 |   * We also cannot just reassign the joiner's run-time stack with
146 |   * another and replace it later, which would be a form of
226 |   * ManagedBlocker), we may create or resume others to take their
227 |   * place until they unblock (see below). Implementing this
228 |   * requires counts of the number of "running" threads (i.e., those
229 <   * that are neither blocked nor artifically suspended) as well as
229 >   * that are neither blocked nor artificially suspended) as well as
230 |   * the total number. These two values are packed into one field,
231 |   * "workerCounts" because we need accurate snapshots when deciding
232 |   * to create, resume or suspend. Note however that the
233 <   * correspondance of these counts to reality is not guaranteed. In
233 >   * correspondence of these counts to reality is not guaranteed. In
234 |   * particular updates for unblocked threads may lag until they
235 |   * actually wake up.
236 |   *
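The comment above explains that the running count and total count are packed into one field so both can be read in a single volatile snapshot and updated with one CAS. A sketch of such packing (the bit layout and names here are illustrative, not the class's actual `workerCounts` constants):

```java
// Illustrative packing of two counts into one int: total workers in the
// high 16 bits, running workers in the low 16 bits. Reading the field
// once yields a consistent pair; a CAS updates both atomically.
public class WorkerCountsDemo {
    static final int TOTAL_SHIFT = 16;
    static final int RUNNING_MASK = (1 << 16) - 1;

    static int pack(int total, int running) {
        return (total << TOTAL_SHIFT) | (running & RUNNING_MASK);
    }
    static int totalOf(int wc)   { return wc >>> TOTAL_SHIFT; }
    static int runningOf(int wc) { return wc & RUNNING_MASK; }

    public static void main(String[] args) {
        int wc = pack(8, 5);                     // 8 total, 5 running
        System.out.println(totalOf(wc) + " " + runningOf(wc)); // 8 5
    }
}
```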
315 |   * 7. Deciding when to create new workers. The main dynamic
316 |   * control in this class is deciding when to create extra threads
317 |   * in method helpMaintainParallelism. We would like to keep
318 <   * exactly #parallelism threads running, which is an impossble
318 >   * exactly #parallelism threads running, which is an impossible
319 |   * task. We always need to create one when the number of running
320 |   * threads would become zero and all workers are busy. Beyond
321 <   * this, we must rely on heuristics that work well in the the
321 >   * this, we must rely on heuristics that work well in the
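The one non-heuristic rule stated above is that a worker must be created when the running count would otherwise drop to zero while every existing worker is busy. A hypothetical sketch of that liveness predicate (names and inputs are stand-ins, not the pool's real fields):

```java
// Hypothetical sketch of the hard case above: if blocking this thread
// would leave zero running workers and all workers are busy, a new
// worker is mandatory to preserve liveness; anything beyond this rule
// is heuristic.
public class SpawnDecisionDemo {
    static boolean mustAddWorker(int runningAfterBlock,
                                 int busyWorkers, int totalWorkers) {
        return runningAfterBlock == 0 && busyWorkers == totalWorkers;
    }

    public static void main(String[] args) {
        System.out.println(mustAddWorker(0, 4, 4)); // true: liveness at risk
        System.out.println(mustAddWorker(2, 4, 4)); // false: others still run
    }
}
```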
420 |  /**
421 |   * The time to block in a join (see awaitJoin) before checking if
422 |   * a new worker should be (re)started to maintain parallelism
423 <   * level. The value should be short enough to maintain gloabal
423 >   * level. The value should be short enough to maintain global
424 |   * responsiveness and progress but long enough to avoid
425 |   * counterproductive firings during GC stalls or unrelated system
426 |   * activity, and to not bog down systems with continual re-firings
483 |  private volatile long stealCount;
484 |
485 |  /**
486 <   * Encoded record of top of treiber stack of threads waiting for
486 >   * Encoded record of top of Treiber stack of threads waiting for
487 |   * events. The top 32 bits contain the count being waited for. The
488 |   * bottom 16 bits contains one plus the pool index of waiting
489 |   * worker thread. (Bits 16-31 are unused.)
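The encoding described in the comment above (awaited count in the top 32 bits, one plus the pool index in the bottom 16, zero meaning "no waiter") can be sketched as a pair of pure functions; method names here are ours:

```java
// Sketch of the Treiber-stack "top" record described above: high 32 bits
// hold the event count being waited for, low 16 bits hold poolIndex + 1
// (so an all-zero word unambiguously means an empty stack).
public class EventWaiterDemo {
    static long encode(int awaitedCount, int poolIndex) {
        return ((long) awaitedCount << 32) | (long) (poolIndex + 1);
    }
    static int countOf(long rec) { return (int) (rec >>> 32); }
    static int indexOf(long rec) { return ((int) rec & 0xffff) - 1; }

    public static void main(String[] args) {
        long rec = encode(7, 3);
        System.out.println(countOf(rec) + " " + indexOf(rec)); // 7 3
    }
}
```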
502 |  private volatile int eventCount;
503 |
504 |  /**
505 <   * Encoded record of top of treiber stack of spare threads waiting
505 >   * Encoded record of top of Treiber stack of spare threads waiting
506 |   * for resumption. The top 16 bits contain an arbitrary count to
507 <   * avoid ABA effects. The bottom 16bits contains one plus the pool
507 >   * avoid ABA effects. The bottom 16 bits contains one plus the pool
508 |   * index of waiting worker thread.
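The spare-stack word above pairs a bumped tag with the index precisely so that CAS-based pushes cannot suffer ABA: re-pushing the same worker always yields a different word. A sketch of that push, under the assumption that linking to the previous top (done in the real pool through per-worker fields) is omitted:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the spare-stack encoding above: a 32-bit word whose high 16
// bits carry a monotonically bumped tag (defeating ABA during CAS) and
// whose low 16 bits carry poolIndex + 1 (zero = empty stack).
public class SpareStackDemo {
    static final AtomicInteger top = new AtomicInteger(); // 0 = empty

    static int encode(int tag, int poolIndex) {
        return (tag << 16) | ((poolIndex + 1) & 0xffff);
    }
    static int indexOf(int word) { return (word & 0xffff) - 1; }

    // Bump the tag on every attempt so a popped-and-repushed index never
    // reproduces an earlier word (the ABA hazard plain CAS would miss).
    static void push(int poolIndex) {
        int t, w;
        do {
            t = top.get();
            w = encode((t >>> 16) + 1, poolIndex);
        } while (!top.compareAndSet(t, w));
    }

    public static void main(String[] args) {
        push(4);
        System.out.println(indexOf(top.get())); // 4
    }
}
```

A real Treiber stack also threads each pushed node to the previous top; that linking is elided here to isolate the tag-plus-index encoding.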
678 |   */
679 |  private void forgetWorker(ForkJoinWorkerThread w) {
680 |      int idx = w.poolIndex;
681 <      // Locking helps method recordWorker avoid unecessary expansion
681 >      // Locking helps method recordWorker avoid unnecessary expansion
682 |      final ReentrantLock lock = this.workerLock;
683 |      lock.lock();
684 |      try {
693 |  /**
694 |   * Final callback from terminating worker. Removes record of
695 |   * worker from array, and adjusts counts. If pool is shutting
696 <   * down, tries to complete terminatation.
696 >   * down, tries to complete termination.
697 |   *
698 |   * @param w the worker
699 |   */
846 |   * Tries to increase the number of running workers if below target
847 |   * parallelism: If a spare exists tries to resume it via
848 |   * tryResumeSpare. Otherwise, if not enough total workers or all
849 <   * existing workers are busy, adds a new worker. In all casses also
849 >   * existing workers are busy, adds a new worker. In all cases also
850 |   * helps wake up releasable workers waiting for work.
851 |   */
852 |  private void helpMaintainParallelism() {
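The javadoc above describes a cascade: below target, prefer resuming a spare, else add a worker when totals are short or everyone is busy, and in every case release waiting workers. A hypothetical outline of that decision order (predicates and action names are stand-ins for the pool's real mechanics):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical outline of the maintenance cascade described above; the
// boolean inputs stand in for checks the pool performs on its own state.
public class MaintainParallelismDemo {
    static List<String> decide(boolean belowTarget,
                               boolean spareAvailable,
                               boolean tooFewTotalOrAllBusy) {
        List<String> actions = new ArrayList<>();
        if (belowTarget) {
            if (spareAvailable) actions.add("resumeSpare");
            else if (tooFewTotalOrAllBusy) actions.add("addWorker");
        }
        actions.add("releaseWaiters"); // "in all cases" per the comment
        return actions;
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true, false));
        System.out.println(decide(true, false, true));
        System.out.println(decide(false, false, false));
    }
}
```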
1174 |   */
1175 |  final int idlePerActive() {
1176 |      int pc = parallelism; // use parallelism, not rc
1177 <      int ac = runState;    // no mask -- artifically boosts during shutdown
1177 >      int ac = runState;    // no mask -- artificially boosts during shutdown
1178 |      // Use exact results for small values, saturate past 4
1179 |      return pc <= ac? 0 : pc >>> 1 <= ac? 1 : pc >>> 2 <= ac? 3 : pc >>> 3;
1180 |  }
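The saturating ratio computed on line 1179 above ("exact results for small values, saturate past 4") can be checked in isolation; this standalone copy of the expression just makes the shift-vs-comparison precedence explicit with parentheses:

```java
// Standalone copy of the idlePerActive expression: 0 when active count ac
// covers parallelism pc, stepping through 1 and 3 as ac falls to half and
// a quarter of pc, then saturating at pc/8 for heavily idle pools.
public class IdlePerActiveDemo {
    static int idlePerActive(int pc, int ac) {
        return pc <= ac ? 0
             : (pc >>> 1) <= ac ? 1
             : (pc >>> 2) <= ac ? 3
             : pc >>> 3;
    }

    public static void main(String[] args) {
        System.out.println(idlePerActive(8, 8)); // 0: fully active
        System.out.println(idlePerActive(8, 4)); // 1: half active
        System.out.println(idlePerActive(8, 2)); // 3: quarter active
        System.out.println(idlePerActive(8, 1)); // 1: saturated at 8 >>> 3
    }
}
```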