21       /**
22        * An {@link ExecutorService} for running {@link ForkJoinTask}s.
23        * A {@code ForkJoinPool} provides the entry point for submissions
24  <     * from non-{@code ForkJoinTask}s, as well as management and
24  >     * from non-{@code ForkJoinTask} clients, as well as management and
25        * monitoring operations.
26        *
27        * <p>A {@code ForkJoinPool} differs from other kinds of {@link
30        * execute subtasks created by other active tasks (eventually blocking
31        * waiting for work if none exist). This enables efficient processing
32        * when most tasks spawn other subtasks (as do most {@code
33  <     * ForkJoinTask}s). A {@code ForkJoinPool} may also be used for mixed
34  <     * execution of some plain {@code Runnable}- or {@code Callable}-
35  <     * based activities along with {@code ForkJoinTask}s. When setting
36  <     * {@linkplain #setAsyncMode async mode}, a {@code ForkJoinPool} may
37  <     * also be appropriate for use with fine-grained tasks of any form
38  <     * that are never joined. Otherwise, other {@code ExecutorService}
39  <     * implementations are typically more appropriate choices.
33  >     * ForkJoinTask}s). When setting <em>asyncMode</em> to true in
34  >     * constructors, {@code ForkJoinPool}s may also be appropriate for use
35  >     * with event-style tasks that are never joined.
36        *
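For instance, a minimal sketch of an event-style setup (assuming the constructor form with a trailing asyncMode flag that this revision describes; the null handler argument and core count are illustrative):

    import java.util.concurrent.ForkJoinPool;

    // Sketch only: an asyncMode pool for event-style tasks that are
    // forked but never joined (FIFO processing of local queues).
    ForkJoinPool asyncPool = new ForkJoinPool(
        Runtime.getRuntime().availableProcessors(),
        ForkJoinPool.defaultForkJoinWorkerThreadFactory,
        null,   // no custom uncaught exception handler
        true);  // asyncMode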
37        * <p>A {@code ForkJoinPool} is constructed with a given target
38        * parallelism level; by default, equal to the number of available
39  <     * processors. Unless configured otherwise via {@link
40  <     * #setMaintainsParallelism}, the pool attempts to maintain this
41  <     * number of active (or available) threads by dynamically adding,
42  <     * suspending, or resuming internal worker threads, even if some tasks
43  <     * are stalled waiting to join others. However, no such adjustments
44  <     * are performed in the face of blocked IO or other unmanaged
45  <     * synchronization. The nested {@link ManagedBlocker} interface
50  <     * enables extension of the kinds of synchronization accommodated.
51  <     * The target parallelism level may also be changed dynamically
52  <     * ({@link #setParallelism}). The total number of threads may be
53  <     * limited using method {@link #setMaximumPoolSize}, in which case it
54  <     * may become possible for the activities of a pool to stall due to
55  <     * the lack of available threads to process new tasks. When the pool
56  <     * is executing tasks, these and other configuration setting methods
57  <     * may only gradually affect actual pool sizes. It is normally best
58  <     * practice to invoke these methods only when the pool is known to be
59  <     * quiescent.
39  >     * processors. The pool attempts to maintain enough active (or
40  >     * available) threads by dynamically adding, suspending, or resuming
41  >     * internal worker threads, even if some tasks are stalled waiting to
42  >     * join others. However, no such adjustments are guaranteed in the
43  >     * face of blocked IO or other unmanaged synchronization. The nested
44  >     * {@link ManagedBlocker} interface enables extension of the kinds of
45  >     * synchronization accommodated.
46        *
47        * <p>In addition to execution and lifecycle control methods, this
48        * class provides status check methods (for example
51        * {@link #toString} returns indications of pool state in a
52        * convenient form for informal monitoring.
53        *
54  +     * <p> As is the case with other ExecutorServices, there are three
55  +     * main task execution methods summarized in the following
56  +     * table. These are designed to be used by clients not already engaged
57  +     * in fork/join computations in the current pool. The main forms of
58  +     * these methods accept instances of {@code ForkJoinTask}, but
59  +     * overloaded forms also allow mixed execution of plain {@code
60  +     * Runnable}- or {@code Callable}- based activities as well. However,
61  +     * tasks that are already executing in a pool should normally
62  +     * <em>NOT</em> use these pool execution methods, but instead use the
63  +     * within-computation forms listed in the table.
64  +     *
65  +     * <table BORDER CELLPADDING=3 CELLSPACING=1>
66  +     * <tr>
67  +     *   <td></td>
68  +     *   <td ALIGN=CENTER> <b>Call from non-fork/join clients</b></td>
69  +     *   <td ALIGN=CENTER> <b>Call from within fork/join computations</b></td>
70  +     * </tr>
71  +     * <tr>
72  +     *   <td> <b>Arrange async execution</td>
73  +     *   <td> {@link #execute(ForkJoinTask)}</td>
74  +     *   <td> {@link ForkJoinTask#fork}</td>
75  +     * </tr>
76  +     * <tr>
77  +     *   <td> <b>Await and obtain result</td>
78  +     *   <td> {@link #invoke(ForkJoinTask)}</td>
79  +     *   <td> {@link ForkJoinTask#invoke}</td>
80  +     * </tr>
81  +     * <tr>
82  +     *   <td> <b>Arrange exec and obtain Future</td>
83  +     *   <td> {@link #submit(ForkJoinTask)}</td>
84  +     *   <td> {@link ForkJoinTask#fork} (ForkJoinTasks <em>are</em> Futures)</td>
85  +     * </tr>
86  +     * </table>
87  +     *
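To make the three client-side methods and their within-computation counterparts concrete, here is a small runnable sketch (the Fib class and the numbers are illustrative, not part of this file):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.Future;
    import java.util.concurrent.RecursiveTask;

    class Fib extends RecursiveTask<Integer> {
        final int n;
        Fib(int n) { this.n = n; }
        protected Integer compute() {
            if (n <= 1) return n;
            Fib f1 = new Fib(n - 1);
            f1.fork();                        // within computation: arrange async execution
            int r2 = new Fib(n - 2).invoke(); // within computation: await and obtain result
            return f1.join() + r2;            // a ForkJoinTask is itself a Future
        }
    }

    class Demo {
        public static void main(String[] args) throws Exception {
            ForkJoinPool pool = new ForkJoinPool();
            pool.execute(new Fib(10));                    // client: arrange async execution
            int r = pool.invoke(new Fib(10));             // client: await and obtain result
            Future<Integer> f = pool.submit(new Fib(10)); // client: arrange exec, obtain Future
            System.out.println(r + " " + f.get());
        }
    }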
88        * <p><b>Sample Usage.</b> Normally a single {@code ForkJoinPool} is
89        * used for all parallel task execution in a program or subsystem.
90        * Otherwise, use would not usually outweigh the construction and
109       * {@code IllegalArgumentException}.
110       *
111       * <p>This implementation rejects submitted tasks (that is, by throwing
112 <     * {@link RejectedExecutionException}) only when the pool is shut down.
112 >     * {@link RejectedExecutionException}) only when the pool is shut down
113 >     * or internal resources have been exhausted.
114       *
115       * @since 1.7
116       * @author Doug Lea
137       * of tasks profit from cache affinities, but others are harmed by
138       * cache pollution effects.)
139       *
140 +     * Beyond work-stealing support and essential bookkeeping, the
141 +     * main responsibility of this framework is to take actions when
142 +     * one worker is waiting to join a task stolen (or always held by)
143 +     * another. Because we are multiplexing many tasks on to a pool
144 +     * of workers, we can't just let them block (as in Thread.join).
145 +     * We also cannot just reassign the joiner's run-time stack with
146 +     * another and replace it later, which would be a form of
147 +     * "continuation", that even if possible is not necessarily a good
148 +     * idea. Given that the creation costs of most threads on most
149 +     * systems mainly surround setting up runtime stacks, thread
150 +     * creation and switching is usually not much more expensive than
151 +     * stack creation and switching, and is more flexible. Instead we
152 +     * combine two tactics:
153 +     *
154 +     *   Helping: Arranging for the joiner to execute some task that it
155 +     *     would be running if the steal had not occurred. Method
156 +     *     ForkJoinWorkerThread.helpJoinTask tracks joining->stealing
157 +     *     links to try to find such a task.
158 +     *
159 +     *   Compensating: Unless there are already enough live threads,
160 +     *     method helpMaintainParallelism() may create or
161 +     *     re-activate a spare thread to compensate for blocked
162 +     *     joiners until they unblock.
163 +     *
164 +     * Because determining the existence of conservatively safe
165 +     * helping targets, the availability of already-created spares,
166 +     * and the apparent need to create new spares are all racy and
167 +     * require heuristic guidance, we rely on multiple retries of
168 +     * each. Further, because it is impossible to keep exactly the
169 +     * target (parallelism) number of threads running at any given
170 +     * time, we allow compensation during joins to fail, and enlist
171 +     * all other threads to help out whenever they are not otherwise
172 +     * occupied (i.e., mainly in method preStep).
173 +     *
174 +     * The ManagedBlocker extension API can't use helping so relies
175 +     * only on compensation in method awaitBlocker.
176 +     *
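The compensation path is what a ManagedBlocker buys you. A minimal illustrative implementation wrapping lock acquisition (the ManagedLocker class is a sketch, not part of this file):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.locks.ReentrantLock;

    class ManagedLocker implements ForkJoinPool.ManagedBlocker {
        final ReentrantLock lock;
        boolean hasLock = false;
        ManagedLocker(ReentrantLock lock) { this.lock = lock; }
        public boolean block() {         // pool may add/resume a spare before this blocks
            if (!hasLock)
                lock.lock();
            return true;                 // done: no further blocking necessary
        }
        public boolean isReleasable() {  // skip blocking if immediately available
            return hasLock || (hasLock = lock.tryLock());
        }
    }

A worker runs such a blocker through the static ForkJoinPool.managedBlock entry point (whose exact signature varies across revisions of this file), which is what funnels into awaitBlocker.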
177       * The main throughput advantages of work-stealing stem from
178       * decentralized control -- workers mostly steal tasks from each
179       * other. We do not want to negate this by creating bottlenecks
180 <     * implementing the management responsibilities of this class. So
181 <     * we use a collection of techniques that avoid, reduce, or cope
182 <     * well with contention. These entail several instances of
183 <     * bit-packing into CASable fields to maintain only the minimally
184 <     * required atomicity. To enable such packing, we restrict maximum
185 <     * parallelism to (1<<15)-1 (enabling twice this to fit into a 16
186 <     * bit field), which is far in excess of normal operating range.
187 <     * Even though updates to some of these bookkeeping fields do
188 <     * sometimes contend with each other, they don't normally
189 <     * cache-contend with updates to others enough to warrant memory
190 <     * padding or isolation. So they are all held as fields of
191 <     * ForkJoinPool objects. The main capabilities are as follows:
180 >     * implementing other management responsibilities. So we use a
181 >     * collection of techniques that avoid, reduce, or cope well with
182 >     * contention. These entail several instances of bit-packing into
183 >     * CASable fields to maintain only the minimally required
184 >     * atomicity. To enable such packing, we restrict maximum
185 >     * parallelism to (1<<15)-1 (enabling twice this (to accommodate
186 >     * unbalanced increments and decrements) to fit into a 16 bit
187 >     * field), which is far in excess of normal operating range. Even
188 >     * though updates to some of these bookkeeping fields do sometimes
189 >     * contend with each other, they don't normally cache-contend with
190 >     * updates to others enough to warrant memory padding or
191 >     * isolation. So they are all held as fields of ForkJoinPool
192 >     * objects. The main capabilities are as follows:
193       *
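In isolation, this style of packing looks like the following sketch (AtomicInteger stands in for the Unsafe-based CAS used in this file; the class and names are illustrative, though they mirror workerCounts below):

    import java.util.concurrent.atomic.AtomicInteger;

    class PackedCounts {
        static final int TOTAL_SHIFT  = 16;            // total count in high half
        static final int RUNNING_MASK = (1 << 16) - 1; // running count in low half
        final AtomicInteger counts = new AtomicInteger();

        void addWorker() {   // bump both halves in one atomic step
            int c;
            do {} while (!counts.compareAndSet(c = counts.get(),
                                               c + (1 << TOTAL_SHIFT) + 1));
        }
        int running() { return counts.get() & RUNNING_MASK; }
        int total()   { return counts.get() >>> TOTAL_SHIFT; }
    }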
194       * 1. Creating and removing workers. Workers are recorded in the
195       * "workers" array. This is an array as opposed to some other data
205       * blocked workers. However, all other support code is set up to
206       * work with other policies.
207       *
208 +     * To ensure that we do not hold on to worker references that
209 +     * would prevent GC, ALL accesses to workers are via indices into
210 +     * the workers array (which is one source of some of the unusual
211 +     * code constructions here). In essence, the workers array serves
212 +     * as a WeakReference mechanism. Thus for example the event queue
213 +     * stores worker indices, not worker references. Access to the
214 +     * workers in associated methods (for example releaseEventWaiters)
215 +     * must both index-check and null-check the IDs. All such accesses
216 +     * ignore bad IDs by returning out early from what they are doing,
217 +     * since this can only be associated with shutdown, in which case
218 +     * it is OK to give up. On termination, we just clobber these
219 +     * data structures without trying to use them.
220 +     *
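In code form the discipline reads roughly as follows (a sketch: the method name is ours, but the fields are the ones declared in this file):

    private void signalWorkerById(int id) {
        ForkJoinWorkerThread[] ws = workers;  // volatile read of the array
        if (id < 0 || id >= ws.length)
            return;                           // bad index: can only mean shutdown
        ForkJoinWorkerThread w = ws[id];
        if (w == null)
            return;                           // slot already cleared by termination
        java.util.concurrent.locks.LockSupport.unpark(w);
    }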
221       * 2. Bookkeeping for dynamically adding and removing workers. We
222 <     * maintain a given level of parallelism (or, if
223 <     * maintainsParallelism is false, at least avoid starvation). When
152 <     * some workers are known to be blocked (on joins or via
222 >     * aim to approximately maintain the given level of parallelism.
223 >     * When some workers are known to be blocked (on joins or via
224       * ManagedBlocker), we may create or resume others to take their
225       * place until they unblock (see below). Implementing this
226       * requires counts of the number of "running" threads (i.e., those
227       * that are neither blocked nor artifically suspended) as well as
228       * the total number. These two values are packed into one field,
229       * "workerCounts" because we need accurate snapshots when deciding
230 <     * to create, resume or suspend. To support these decisions,
231 <     * updates to spare counts must be prospective (not
232 <     * retrospective). For example, the running count is decremented
233 <     * before blocking by a thread about to block as a spare, but
163 <     * incremented by the thread about to unblock it. Updates upon
164 <     * resumption ofr threads blocking in awaitJoin or awaitBlocker
165 <     * cannot usually be prospective, so the running count is in
166 <     * general an upper bound of the number of productively running
167 <     * threads Updates to the workerCounts field sometimes transiently
168 <     * encounter a fair amount of contention when join dependencies
169 <     * are such that many threads block or unblock at about the same
170 <     * time. We alleviate this by sometimes bundling updates (for
171 <     * example blocking one thread on join and resuming a spare cancel
172 <     * each other out), and in most other cases performing an
173 <     * alternative action like releasing waiters or locating spares.
230 >     * to create, resume or suspend. Note however that the
231 >     * correspondence of these counts to reality is not guaranteed. In
232 >     * particular updates for unblocked threads may lag until they
233 >     * actually wake up.
234       *
235       * 3. Maintaining global run state. The run state of the pool
236       * consists of a runLevel (SHUTDOWN, TERMINATING, etc) similar to
259       * workers that previously could not find a task to now find one:
260       * Submission of a new task to the pool, or another worker pushing
261       * a task onto a previously empty queue. (We also use this
262 <     * mechanism for termination and reconfiguration actions that
263 <     * require wakeups of idle workers). Each worker maintains its
264 <     * last known event count, and blocks when a scan for work did not
265 <     * find a task AND its lastEventCount matches the current
266 <     * eventCount. Waiting idle workers are recorded in a variant of
267 <     * Treiber stack headed by field eventWaiters which, when nonzero,
268 <     * encodes the thread index and count awaited for by the worker
269 <     * thread most recently calling eventSync. This thread in turn has
270 <     * a record (field nextEventWaiter) for the next waiting worker.
271 <     * In addition to allowing simpler decisions about need for
272 <     * wakeup, the event count bits in eventWaiters serve the role of
273 <     * tags to avoid ABA errors in Treiber stacks. To reduce delays
274 <     * in task diffusion, workers not otherwise occupied may invoke
275 <     * method releaseWaiters, that removes and signals (unparks)
276 <     * workers not waiting on current count. To minimize task
277 <     * production stalls associate with signalling, any worker pushing
278 <     * a task on an empty queue invokes the weaker method signalWork,
279 <     * that only releases idle workers until it detects interference
280 <     * by other threads trying to release, and lets them take
281 <     * over. The net effect is a tree-like diffusion of signals, where
282 <     * released threads (and possibly others) help with unparks. To
283 <     * further reduce contention effects a bit, failed CASes to
262 >     * mechanism for termination actions that require wakeups of idle
263 >     * workers). Each worker maintains its last known event count,
264 >     * and blocks when a scan for work did not find a task AND its
265 >     * lastEventCount matches the current eventCount. Waiting idle
266 >     * workers are recorded in a variant of Treiber stack headed by
267 >     * field eventWaiters which, when nonzero, encodes the thread
268 >     * index and count awaited for by the worker thread most recently
269 >     * calling eventSync. This thread in turn has a record (field
270 >     * nextEventWaiter) for the next waiting worker. In addition to
271 >     * allowing simpler decisions about need for wakeup, the event
272 >     * count bits in eventWaiters serve the role of tags to avoid ABA
273 >     * errors in Treiber stacks. To reduce delays in task diffusion,
274 >     * workers not otherwise occupied may invoke method
275 >     * releaseEventWaiters, that removes and signals (unparks) workers
276 >     * not waiting on current count. To minimize task production
277 >     * stalls associated with signalling, any worker
278 >     * pushing a task on an empty queue invokes the weaker method
279 >     * signalWork, that only releases idle workers until it detects
280 >     * interference by other threads trying to release, and lets them
281 >     * take over. The net effect is a tree-like diffusion of signals,
282 >     * where released threads (and possibly others) help with unparks.
283 >     * To further reduce contention effects a bit, failed CASes to
284       * increment field eventCount are tolerated without retries.
285       * Conceptually they are merged into the same event, which is OK
286       * when their only purpose is to enable workers to scan for work.
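The count-tagged Treiber stack can be shown in isolation (a sketch using AtomicLong; the encoding mirrors eventWaiters, but the class itself is illustrative):

    import java.util.concurrent.atomic.AtomicLong;

    class TaggedIdStack {
        final AtomicLong head = new AtomicLong(); // (tag << 32) | (id + 1)
        final long[] next = new long[64];         // per-id successor record, like nextWaiter

        void push(int id, int tag) {
            long h, nh = ((long) tag << 32) | (id + 1);
            do {
                h = head.get();
                next[id] = h;                     // record current head before publishing
            } while (!head.compareAndSet(h, nh));
        }

        int pop() {                               // returns -1 if empty
            for (;;) {
                long h = head.get();
                int id = (int) (h & 0xffffffffL) - 1;
                if (id < 0)
                    return -1;
                if (head.compareAndSet(h, next[id])) // tag bits defeat ABA
                    return id;
            }
        }
    }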
288       * 5. Managing suspension of extra workers. When a worker is about
289       * to block waiting for a join (or via ManagedBlockers), we may
290       * create a new thread to maintain parallelism level, or at least
291 <     * avoid starvation (see below). Usually, extra threads are needed
292 <     * for only very short periods, yet join dependencies are such
293 <     * that we sometimes need them in bursts. Rather than create new
294 <     * threads each time this happens, we suspend no-longer-needed
295 <     * extra ones as "spares". For most purposes, we don't distinguish
296 <     * "extra" spare threads from normal "core" threads: On each call
297 <     * to preStep (the only point at which we can do this) a worker
291 >     * avoid starvation. Usually, extra threads are needed for only
292 >     * very short periods, yet join dependencies are such that we
293 >     * sometimes need them in bursts. Rather than create new threads
294 >     * each time this happens, we suspend no-longer-needed extra ones
295 >     * as "spares". For most purposes, we don't distinguish "extra"
296 >     * spare threads from normal "core" threads: On each call to
297 >     * preStep (the only point at which we can do this) a worker
298       * checks to see if there are now too many running workers, and if
299 <     * so, suspends itself. Methods awaitJoin and awaitBlocker look
300 <     * for suspended threads to resume before considering creating a
301 <     * new replacement. We don't need a special data structure to
302 <     * maintain spares; simply scanning the workers array looking for
303 <     * worker.isSuspended() is fine because the calling thread is
244 <     * otherwise not doing anything useful anyway; we are at least as
245 <     * happy if after locating a spare, the caller doesn't actually
246 <     * block because the join is ready before we try to adjust and
247 <     * compensate. Note that this is intrinsically racy. One thread
299 >     * so, suspends itself. Method helpMaintainParallelism looks for
300 >     * suspended threads to resume before considering creating a new
301 >     * replacement. The spares themselves are encoded on another
302 >     * variant of a Treiber Stack, headed at field "spareWaiters".
303 >     * Note that the use of spares is intrinsically racy. One thread
304       * may become a spare at about the same time as another is
305       * needlessly being created. We counteract this and related slop
306       * in part by requiring resumed spares to immediately recheck (in
307 <     * preStep) to see whether they they should re-suspend. The only
308 <     * effective difference between "extra" and "core" threads is that
309 <     * we allow the "extra" ones to time out and die if they are not
310 <     * resumed within a keep-alive interval of a few seconds. This is
311 <     * implemented mainly within ForkJoinWorkerThread, but requires
312 <     * some coordination (isTrimmed() -- meaning killed while
313 <     * suspended) to correctly maintain pool counts.
307 >     * preStep) to see whether they should re-suspend. To avoid
308 >     * long-term build-up of spares, the oldest spare (see
309 >     * ForkJoinWorkerThread.suspendAsSpare) occasionally wakes up if
310 >     * not signalled and calls tryTrimSpare, which uses two different
311 >     * thresholds: Always killing if the number of spares is greater
312 >     * than 25% of total, and killing others only at a slower rate
313 >     * (UNUSED_SPARE_TRIM_RATE_NANOS).
314       *
315       * 6. Deciding when to create new workers. The main dynamic
316 <     * control in this class is deciding when to create extra threads,
317 <     * in methods awaitJoin and awaitBlocker. We always
318 <     * need to create one when the number of running threads becomes
319 <     * zero. But because blocked joins are typically dependent, we
320 <     * don't necessarily need or want one-to-one replacement. Using a
321 <     * one-to-one compensation rule often leads to enough useless
322 <     * overhead creating, suspending, resuming, and/or killing threads
323 <     * to signficantly degrade throughput. We use a rule reflecting
324 <     * the idea that, the more spare threads you already have, the
325 <     * more evidence you need to create another one. The "evidence"
326 <     * here takes two forms: (1) Using a creation threshold expressed
327 <     * in terms of the current deficit -- target minus running
328 <     * threads. To reduce flickering and drift around target values,
329 <     * the relation is quadratic: adding a spare if (dc*dc)>=(sc*pc)
330 <     * (where dc is deficit, sc is number of spare threads and pc is
331 <     * target parallelism.) (2) Using a form of adaptive
332 <     * spionning. requiring a number of threshold checks proportional
333 <     * to the number of spare threads. This effectively reduces churn
334 <     * at the price of systematically undershooting target parallelism
279 <     * when many threads are blocked. However, biasing toward
280 <     * undeshooting partially compensates for the above mechanics to
281 <     * suspend extra threads, that normally lead to overshoot because
282 <     * we can only suspend workers in-between top-level actions. It
283 <     * also better copes with the fact that some of the methods in
284 <     * this class tend to never become compiled (but are interpreted),
285 <     * so some components of the entire set of controls might execute
286 <     * many times faster than others. And similarly for cases where
287 <     * the apparent lack of work is just due to GC stalls and other
316 >     * control in this class is deciding when to create extra threads
317 >     * in method helpMaintainParallelism. We would like to keep
318 >     * exactly #parallelism threads running, which is an impossible
319 >     * task. We always need to create one when the number of running
320 >     * threads would become zero and all workers are busy. Beyond
321 >     * this, we must rely on heuristics that work well in the
322 >     * presence of transient phenomena such as GC stalls, dynamic
323 >     * compilation, and wake-up lags. These transients are extremely
324 >     * common -- we are normally trying to fully saturate the CPUs on
325 >     * a machine, so almost any activity other than running tasks
326 >     * impedes accuracy. Our main defense is to allow some slack in
327 >     * creation thresholds, using rules that reflect the fact that the
328 >     * more threads we have running, the more likely that we are
329 >     * underestimating the number of running threads. The rules also
330 >     * better cope with the fact that some of the methods in this
331 >     * class tend to never become compiled (but are interpreted), so
332 >     * some components of the entire set of controls might execute 100
333 >     * times faster than others. And similarly for cases where the
334 >     * apparent lack of work is just due to GC stalls and other
335       * transient system activity.
336       *
290 -     * 7. Maintaining other configuration parameters and monitoring
291 -     * statistics. Updates to fields controlling parallelism level,
292 -     * max size, etc can only meaningfully take effect for individual
293 -     * threads upon their next top-level actions; i.e., between
294 -     * stealing/running tasks/submission, which are separated by calls
295 -     * to preStep. Memory ordering for these (assumed infrequent)
296 -     * reconfiguration calls is ensured by using reads and writes to
297 -     * volatile field workerCounts (that must be read in preStep anyway)
298 -     * as "fences" -- user-level reads are preceded by reads of
299 -     * workCounts, and writes are followed by no-op CAS to
300 -     * workerCounts. The values reported by other management and
301 -     * monitoring methods are either computed on demand, or are kept
302 -     * in fields that are only updated when threads are otherwise
303 -     * idle.
304 -     *
337       * Beware that there is a lot of representation-level coupling
338       * among classes ForkJoinPool, ForkJoinWorkerThread, and
339       * ForkJoinTask. For example, direct access to "workers" array by
345       *
346       * Style notes: There are lots of inline assignments (of form
347       * "while ((local = field) != 0)") which are usually the simplest
348 <     * way to ensure read orderings. Also several occurrences of the
349 <     * unusual "do {} while(!cas...)" which is the simplest way to
350 <     * force an update of a CAS'ed variable. There are also a few
351 <     * other coding oddities that help some methods perform reasonably
352 <     * even when interpreted (not compiled).
348 >     * way to ensure the required read orderings (which are sometimes
349 >     * critical). Also several occurrences of the unusual "do {}
350 >     * while(!cas...)" which is the simplest way to force an update of
351 >     * a CAS'ed variable. There are also other coding oddities that
352 >     * help some methods perform reasonably even when interpreted (not
353 >     * compiled), at the expense of some messy constructions that
354 >     * reduce byte code counts.
355       *
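These idioms can be demonstrated outside this file (a sketch; AtomicInteger stands in for the Unsafe-based CASes used here):

    import java.util.concurrent.atomic.AtomicInteger;

    class IdiomDemo {
        volatile int field = 1;
        final AtomicInteger count = new AtomicInteger();

        int readOnce() {
            int local;
            if ((local = field) != 0)  // inline assignment: one volatile read,
                return local;          // then work on the stable local snapshot
            return -1;
        }

        void forceIncrement() {
            int c;                     // retry until the CAS of c -> c+1 succeeds
            do {} while (!count.compareAndSet(c = count.get(), c + 1));
        }
    }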
356       * The order of declarations in this file is: (1) statics (2)
357       * fields (along with constants used when unpacking some of them)
380       * Default ForkJoinWorkerThreadFactory implementation; creates a
381       * new ForkJoinWorkerThread.
382       */
383 <    static class DefaultForkJoinWorkerThreadFactory
383 >    static class DefaultForkJoinWorkerThreadFactory
384          implements ForkJoinWorkerThreadFactory {
385          public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
386              return new ForkJoinWorkerThread(pool);
419          new AtomicInteger();
420
421      /**
422 <     * Absolute bound for parallelism level. Twice this number must
423 <     * fit into a 16bit field to enable word-packing for some counts.
422 >     * Absolute bound for parallelism level. Twice this number plus
423 >     * one (i.e., 0xffff) must fit into a 16bit field to enable
424 >     * word-packing for some counts and indices.
425       */
426 <    private static final int MAX_THREADS = 0x7fff;
426 >    private static final int MAX_WORKERS = 0x7fff;
427
428      /**
429       * Array holding all worker threads in the pool. Array size must
449      /**
450       * Latch released upon termination.
451       */
452 <    private final CountDownLatch terminationLatch;
452 >    private final Phaser termination;
453
454      /**
455       * Creation factory for worker threads.
463      private volatile long stealCount;
464
465      /**
466 +     * The last nanoTime that a spare thread was trimmed
467 +     */
468 +    private volatile long trimTime;
469 +
470 +    /**
471 +     * The rate at which to trim unused spares
472 +     */
473 +    static final long UNUSED_SPARE_TRIM_RATE_NANOS =
474 +        1000L * 1000L * 1000L; // 1 sec
475 +
476 +    /**
477       * Encoded record of top of treiber stack of threads waiting for
478       * events. The top 32 bits contain the count being waited for. The
479 <     * bottom word contains one plus the pool index of waiting worker
480 <     * thread.
479 >     * bottom 16 bits contains one plus the pool index of waiting
480 >     * worker thread. (Bits 16-31 are unused.)
481       */
482      private volatile long eventWaiters;
483
484      private static final int EVENT_COUNT_SHIFT = 32;
485 <    private static final long WAITER_INDEX_MASK = (1L << EVENT_COUNT_SHIFT)-1L;
485 >    private static final long WAITER_ID_MASK = (1L << 16) - 1L;
486
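Unpacking that encoding looks like this (a sketch; the helper name is ours, the fields and masks are the ones declared here):

    private boolean hasReleasableWaiter() {
        long h = eventWaiters;                         // one volatile read
        int id = (int) (h & WAITER_ID_MASK) - 1;       // pool index, -1 if stack empty
        int awaited = (int) (h >>> EVENT_COUNT_SHIFT); // count the waiter last saw
        return id >= 0 && awaited != eventCount;       // a waiter can be released
    }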
487      /**
488       * A counter for events that may wake up worker threads:
489       *   - Submission of a new task to the pool
490       *   - A worker pushing a task on an empty queue
491 <     *   - termination and reconfiguration
491 >     *   - termination
492       */
493      private volatile int eventCount;
494
495      /**
496 +     * Encoded record of top of treiber stack of spare threads waiting
497 +     * for resumption. The top 16 bits contain an arbitrary count to
498 +     * avoid ABA effects. The bottom 16bits contains one plus the pool
499 +     * index of waiting worker thread.
500 +     */
501 +    private volatile int spareWaiters;
502 +
503 +    private static final int SPARE_COUNT_SHIFT = 16;
504 +    private static final int SPARE_ID_MASK = (1 << 16) - 1;
505 +
506 +    /**
507       * Lifecycle control. The low word contains the number of workers
508       * that are (probably) executing tasks. This value is atomically
509       * incremented before a worker gets a task to run, and decremented
532       * making decisions about creating and suspending spare
533       * threads. Updated only by CAS. Note that adding a new worker
534       * requires incrementing both counts, since workers start off in
535 <     * running state. This field is also used for memory-fencing
479 <     * configuration parameters.
535 >     * running state.
536       */
537      private volatile int workerCounts;
538
541      private static final int ONE_RUNNING = 1;
542      private static final int ONE_TOTAL = 1 << TOTAL_COUNT_SHIFT;
543
488 -    /*
489 -     * Fields parallelism. maxPoolSize, and maintainsParallelism are
490 -     * non-volatile, but external reads/writes use workerCount fences
491 -     * to ensure visability.
492 -     */
493 -
544      /**
545       * The target parallelism level.
546 +     * Accessed directly by ForkJoinWorkerThreads.
547       */
548 <    private int parallelism;
498 <
499 <    /**
500 <     * The maximum allowed pool size.
501 <     */
502 <    private int maxPoolSize;
548 >    final int parallelism;
549
550      /**
551       * True if use local fifo, not default lifo, for local polling
552 <     * Replicated by ForkJoinWorkerThreads
552 >     * Read by, and replicated by ForkJoinWorkerThreads
553       */
554 <    private volatile boolean locallyFifo;
554 >    final boolean locallyFifo;
555
556      /**
557 <     * Controls whether to add spares to maintain parallelism
557 >     * The uncaught exception handler used when any worker abruptly
558 >     * terminates.
559       */
560 <    private boolean maintainsParallelism;
514 <
515 <    /**
516 <     * The uncaught exception handler used when any worker
517 <     * abruptly terminates
518 <     */
519 <    private volatile Thread.UncaughtExceptionHandler ueh;
560 >    private final Thread.UncaughtExceptionHandler ueh;
561
562      /**
563       * Pool number, just for assigning useful names to worker threads
564       */
565      private final int poolNumber;
566
567 <    // utilities for updating fields
567 >
568 >    // Utilities for CASing fields. Note that several of these
569 >    // are manually inlined by callers
570
571      /**
572 <     * Adds delta to running count. Used mainly by ForkJoinTask.
572 >     * Increments running count part of workerCounts
573       */
574 <    final void updateRunningCount(int delta) {
575 <        int wc;
574 >    final void incrementRunningCount() {
575 >        int c;
576          do {} while (!UNSAFE.compareAndSwapInt(this, workerCountsOffset,
577 <                                               wc = workerCounts,
578 <                                               wc + delta));
577 >                                               c = workerCounts,
578 >                                               c + ONE_RUNNING));
579      }
580
581      /**
582 <     * Decrements running count unless already zero
582 >     * Tries to decrement running count unless already zero
583       */
584      final boolean tryDecrementRunningCount() {
585          int wc = workerCounts;
590      }
591
592      /**
593 <     * Write fence for user modifications of pool parameters
594 <     * (parallelism. etc). Note that it doesn't matter if CAS fails.
593 >     * Forces decrement of encoded workerCounts, awaiting nonzero if
594 >     * (rarely) necessary when other count updates lag.
595 >     *
596 >     * @param dr -- either zero or ONE_RUNNING
597 >     * @param dt -- either zero or ONE_TOTAL
598       */
599 <    private void workerCountWriteFence() {
600 <        int wc;
601 <        UNSAFE.compareAndSwapInt(this, workerCountsOffset,
602 <                                 wc = workerCounts, wc);
599 >    private void decrementWorkerCounts(int dr, int dt) {
600 >        for (;;) {
601 >            int wc = workerCounts;
602 >            if (wc == 0 && (runState & TERMINATED) != 0)
603 >                return; // lagging termination on a backout
604 >            if ((wc & RUNNING_COUNT_MASK) - dr < 0 ||
605 >                (wc >>> TOTAL_COUNT_SHIFT) - dt < 0)
606 >                Thread.yield();
607 >            if (UNSAFE.compareAndSwapInt(this, workerCountsOffset,
608 >                                         wc, wc - (dr + dt)))
609 >                return;
610 >        }
611      }
612
613      /**
614 <     * Read fence for external reads of pool parameters
561 <     * (parallelism. maxPoolSize, etc).
614 >     * Increments event count
615       */
616 <    private void workerCountReadFence() {
617 <        int ignore = workerCounts;
616 >    private void advanceEventCount() {
617 >        int c;
618 >        do {} while(!UNSAFE.compareAndSwapInt(this, eventCountOffset,
619 >                                              c = eventCount, c+1));
620      }
621
622      /**
667          lock.lock();
668          try {
669              ForkJoinWorkerThread[] ws = workers;
670 <            int nws = ws.length;
671 <            if (k < 0 || k >= nws || ws[k] != null) {
672 <                for (k = 0; k < nws && ws[k] != null; ++k)
670 >            int n = ws.length;
671 >            if (k < 0 || k >= n || ws[k] != null) {
672 >                for (k = 0; k < n && ws[k] != null; ++k)
673                      ;
674 <                if (k == nws)
675 <                    ws = Arrays.copyOf(ws, nws << 1);
674 >                if (k == n)
675 >                    ws = Arrays.copyOf(ws, n << 1);
676              }
677              ws[k] = w;
678              workers = ws; // volatile array write ensures slot visibility
705       * Tries to create and add new worker. Assumes that worker counts
706       * are already updated to accommodate the worker, so adjusts on
707       * failure.
653 -     *
654 -     * @return new worker or null if creation failed
708       */
709 <    private ForkJoinWorkerThread addWorker() {
709 >    private void addWorker() {
710          ForkJoinWorkerThread w = null;
711          try {
712              w = factory.newThread(this);
713          } finally { // Adjust on either null or exceptional factory return
714              if (w == null) {
715 <                onWorkerCreationFailure();
716 <                return null;
715 >                decrementWorkerCounts(ONE_RUNNING, ONE_TOTAL);
716 >                tryTerminate(false); // in case of failure during shutdown
717              }
718          }
719 <        w.start(recordWorker(w), locallyFifo, ueh);
720 <        return w;
668 <    }
669 <
670 <    /**
671 <     * Adjusts counts upon failure to create worker
672 <     */
673 <    private void onWorkerCreationFailure() {
674 <        for (;;) {
675 <            int wc = workerCounts;
676 <            if ((wc >>> TOTAL_COUNT_SHIFT) > 0 &&
677 <                UNSAFE.compareAndSwapInt(this, workerCountsOffset,
678 <                                         wc, wc - (ONE_RUNNING|ONE_TOTAL)))
679 <                break;
680 <        }
681 <        tryTerminate(false); // in case of failure during shutdown
682 <    }
683 <
684 <    /**
685 <     * Create enough total workers to establish target parallelism,
686 <     * giving up if terminating or addWorker fails
687 <     */
688 <    private void ensureEnoughTotalWorkers() {
689 <        int wc;
690 <        while (((wc = workerCounts) >>> TOTAL_COUNT_SHIFT) < parallelism &&
691 <               runState < TERMINATING) {
692 <            if ((UNSAFE.compareAndSwapInt(this, workerCountsOffset,
693 <                                          wc, wc + (ONE_RUNNING|ONE_TOTAL)) &&
694 <                 addWorker() == null))
695 <                break;
696 <        }
719 >        if (w != null)
720 >            w.start(recordWorker(w), ueh);
721      }
722
723      /**
724       * Final callback from terminating worker. Removes record of
725       * worker from array, and adjusts counts. If pool is shutting
726 <     * down, tries to complete terminatation, else possibly replaces
703 <     * the worker.
726 >     * down, tries to complete termination.
727       *
728       * @param w the worker
729       */
730      final void workerTerminated(ForkJoinWorkerThread w) {
708 -        if (w.active) { // force inactive
709 -            w.active = false;
710 -            do {} while (!tryDecrementActiveCount());
711 -        }
731          forgetWorker(w);
732 <
733 <        // Decrement total count, and if was running, running count
734 <        // Spin (waiting for other updates) if either would be negative
735 <        int nr = w.isTrimmed() ? 0 : ONE_RUNNING;
717 <        int unit = ONE_TOTAL + nr;
718 <        for (;;) {
719 <            int wc = workerCounts;
720 <            int rc = wc & RUNNING_COUNT_MASK;
721 <            if (rc - nr < 0 || (wc >>> TOTAL_COUNT_SHIFT) == 0)
722 <                Thread.yield(); // back off if waiting for other updates
723 <            else if (UNSAFE.compareAndSwapInt(this, workerCountsOffset,
724 <                                              wc, wc - unit))
725 <                break;
726 <        }
727 <
728 <        accumulateStealCount(w); // collect final count
729 <        if (!tryTerminate(false))
730 <            ensureEnoughTotalWorkers();
732 >        decrementWorkerCounts(w.isTrimmed()? 0 : ONE_RUNNING, ONE_TOTAL);
733 >        while (w.stealCount != 0) // collect final count
734 >            tryAccumulateStealCount(w);
735 >        tryTerminate(false);
736      }
737
738      // Waiting for and signalling events
739
740      /**
736 -     * Ensures eventCount on exit is different (mod 2^32) than on
737 -     * entry. CAS failures are OK -- any change in count suffices.
738 -     */
739 -    private void advanceEventCount() {
740 -        int c;
741 -        UNSAFE.compareAndSwapInt(this, eventCountOffset, c = eventCount, c+1);
742 -    }
743 -
744 -    /**
741       * Releases workers blocked on a count not equal to current count.
742 +     * Normally called after precheck that eventWaiters isn't zero to
743 +     * avoid wasted array checks.
744 +     *
745 +     * @param signalling true if caller is a signalling worker so can
746 +     * exit upon (conservatively) detected contention by other threads
747 +     * who will continue to release
748       */
749 <    final void releaseWaiters() {
750 <        long top;
751 <        int id;
752 <        while ((id = (int)((top = eventWaiters) & WAITER_INDEX_MASK)) > 0 &&
753 <               (int)(top >>> EVENT_COUNT_SHIFT) != eventCount) {
754 <            ForkJoinWorkerThread[] ws = workers;
755 <            ForkJoinWorkerThread w;
756 <            if (ws.length >= id && (w = ws[id - 1]) != null &&
757 <                UNSAFE.compareAndSwapLong(this, eventWaitersOffset,
758 <                                          top, w.nextWaiter))
749 >    private void releaseEventWaiters(boolean signalling) {
750 >        ForkJoinWorkerThread[] ws = workers;
751 >        int n = ws.length;
752 >        long h; // head of stack
753 >        ForkJoinWorkerThread w; int id, ec;
754 >        while ((id = ((int)((h = eventWaiters) & WAITER_ID_MASK)) - 1) >= 0 &&
755 >               (int)(h >>> EVENT_COUNT_SHIFT) != (ec = eventCount) &&
756 >               id < n && (w = ws[id]) != null) {
757 >            if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset,
758 >                                          h, h = w.nextWaiter))
759                  LockSupport.unpark(w);
760 +            if (signalling && (eventCount != ec || eventWaiters != h))
761 +                break;
762          }
763      }
764
765      /**
766 <     * Advances eventCount and releases waiters until interference by
767 <     * other releasing threads is detected.
766 >     * Tries to advance eventCount and releases waiters. Called only
767 >     * from workers.
768       */
769      final void signalWork() {
770 <        int ec;
771 <        UNSAFE.compareAndSwapInt(this, eventCountOffset, ec=eventCount, ec+1);
772 <        outer:for (;;) {
773 <            long top = eventWaiters;
770 <            ec = eventCount;
771 <            for (;;) {
772 <                ForkJoinWorkerThread[] ws; ForkJoinWorkerThread w;
773 <                int id = (int)(top & WAITER_INDEX_MASK);
774 <                if (id <= 0 || (int)(top >>> EVENT_COUNT_SHIFT) == ec)
775 <                    return;
776 <                if ((ws = workers).length < id || (w = ws[id - 1]) == null ||
777 <                    !UNSAFE.compareAndSwapLong(this, eventWaitersOffset,
778 <                                               top, top = w.nextWaiter))
779 <                    continue outer; // possibly stale; reread
780 <                LockSupport.unpark(w);
781 <                if (top != eventWaiters) // let someone else take over
782 <                    return;
783 <            }
784 <        }
770 >        int c; // try to increment event count -- CAS failure OK
771 >        UNSAFE.compareAndSwapInt(this, eventCountOffset, c = eventCount, c+1);
772 >        if (eventWaiters != 0L)
773 >            releaseEventWaiters(true);
774      }
775
776      /**
777 <     * If worker is inactive, blocks until terminating or event count
778 <     * advances from last value held by worker; in any case helps
790 <     * release others.
777 >     * Blocks worker until terminating or event count
778 >     * advances from last value held by worker
779       *
780       * @param w the calling worker thread
781       */
782      private void eventSync(ForkJoinWorkerThread w) {
783 <        if (!w.active) {
784 <            int prev = w.lastEventCount;
785 <            long nextTop = (((long)prev << EVENT_COUNT_SHIFT) |
786 <                            ((long)(w.poolIndex + 1)));
787 <            long top;
788 <            while ((runState < SHUTDOWN || !tryTerminate(false)) &&
789 <                   (((int)(top = eventWaiters) & WAITER_INDEX_MASK) == 0 ||
790 <                    (int)(top >>> EVENT_COUNT_SHIFT) == prev) &&
791 <                   eventCount == prev) {
792 <                if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset,
793 <                                              w.nextWaiter = top, nextTop)) {
794 <                    accumulateStealCount(w); // transfer steals while idle
795 <                    Thread.interrupted();    // clear/ignore interrupt
796 <                    while (eventCount == prev)
797 <                        w.doPark();
798 <                    break;
783 >        int wec = w.lastEventCount;
784 >        long nh = (((long)wec) << EVENT_COUNT_SHIFT) | ((long)(w.poolIndex+1));
785 >        long h;
786 >        while ((runState < SHUTDOWN || !tryTerminate(false)) &&
787 >               ((h = eventWaiters) == 0L ||
788 >                (int)(h >>> EVENT_COUNT_SHIFT) == wec) &&
789 >               eventCount == wec) {
790 >            if (UNSAFE.compareAndSwapLong(this, eventWaitersOffset,
791 >                                          w.nextWaiter = h, nh)) {
792 >                while (runState < TERMINATING && eventCount == wec) {
793 >                    if (!tryAccumulateStealCount(w)) // transfer while idle
794 >                        continue;
795 >                    Thread.interrupted(); // clear/ignore interrupt
796 >                    if (eventCount != wec)
797 >                        break;
798 >                    LockSupport.park(w);
799                  }
800 +                break;
801              }
813 -            w.lastEventCount = eventCount;
802          }
803 <        releaseWaiters();
803 >        w.lastEventCount = eventCount;
804      }
805
806 +    // Maintaining spares
807 +
808      /**
809 <     * Callback from workers invoked upon each top-level action (i.e.,
820 <     * stealing a task or taking a submission and running
821 <     * it). Performs one or both of the following:
822 <     *
823 <     * * If the worker cannot find work, updates its active status to
824 <     * inactive and updates activeCount unless there is contention, in
825 <     * which case it may try again (either in this or a subsequent
826 <     * call). Additionally, awaits the next task event and/or helps
827 <     * wake up other releasable waiters.
828 <     *
829 <     * * If there are too many running threads, suspends this worker
830 <     * (first forcing inactivation if necessary). If it is not
831 <     * resumed before a keepAlive elapses, the worker may be "trimmed"
832 <     * -- killed while suspended within suspendAsSpare. Otherwise,
833 <     * upon resume it rechecks to make sure that it is still needed.
834 <     *
835 <     * @param w the worker
836 <     * @param worked false if the worker scanned for work but didn't
837 <     * find any (in which case it may block waiting for work).
809 >     * Pushes worker onto the spare stack
810       */
811 <    final void preStep(ForkJoinWorkerThread w, boolean worked) {
812 <        boolean active = w.active;
813 <        boolean inactivate = !worked & active;
814 <        for (;;) {
815 <            if (inactivate) {
816 <                int c = runState;
817 <                if (UNSAFE.compareAndSwapInt(this, runStateOffset,
818 <                                             c, c - ONE_ACTIVE))
819 <                    inactivate = active = w.active = false;
820 <            }
821 <            int wc = workerCounts;
822 <            if ((wc & RUNNING_COUNT_MASK) <= parallelism) {
823 <                if (!worked)
824 <                    eventSync(w);
825 <                return;
811 >    final void pushSpare(ForkJoinWorkerThread w) {
812 >        int ns = (++w.spareCount << SPARE_COUNT_SHIFT) | (w.poolIndex+1);
813 >        do {} while (!UNSAFE.compareAndSwapInt(this, spareWaitersOffset,
814 >                                               w.nextSpare = spareWaiters, ns));
815 >    }
816 >
817 >    /**
818 >     * Tries (once) to resume a spare if running count is less than
819 >     * target parallelism. Fails on contention or stale workers.
820 >     */
821 >    private void tryResumeSpare() {
822 >        int sw, id;
823 >        ForkJoinWorkerThread w;
824 >        ForkJoinWorkerThread[] ws;
825 >        if ((id = ((sw = spareWaiters) & SPARE_ID_MASK) - 1) >= 0 &&
826 >            id < (ws = workers).length && (w = ws[id]) != null &&
827 >            (workerCounts & RUNNING_COUNT_MASK) < parallelism &&
828 >            eventWaiters == 0L &&
829 >            spareWaiters == sw &&
830 >            UNSAFE.compareAndSwapInt(this, spareWaitersOffset,
831 >                                     sw, w.nextSpare) &&
832 >            w.tryUnsuspend()) {
833 >            int c; // try increment; if contended, finish after unpark
834 >            boolean inc = UNSAFE.compareAndSwapInt(this, workerCountsOffset,
835 >                                                   c = workerCounts,
836 >                                                   c + ONE_RUNNING);
837 >            LockSupport.unpark(w);
838 >            if (!inc) {
839 >                do {} while(!UNSAFE.compareAndSwapInt(this, workerCountsOffset,
840 >                                                      c = workerCounts,
841 >                                                      c + ONE_RUNNING));
842              }
855 -            if (!(inactivate |= active) && // must inactivate to suspend
856 -                UNSAFE.compareAndSwapInt(this, workerCountsOffset,
857 -                                         wc, wc - ONE_RUNNING) &&
858 -                !w.suspendAsSpare()) // false if trimmed
859 -                return;
843          }
844      }
845
846      /**
847 <     * Adjusts counts and creates or resumes compensating threads for
848 <     * a worker blocking on task joinMe. First tries resuming an
849 <     * existing spare (which usually also avoids any count
850 <     * adjustment), but must then decrement running count to determine
851 <     * whether a new thread is needed. See above for fuller
852 <     * explanation. This code is sprawled out non-modularly mainly
853 <     * because adaptive spinning works best if the entire method is
854 <     * either interpreted or compiled vs having only some pieces of it
855 <     * compiled.
856 <     *
857 <     * @param joinMe the task to join
858 <     * @return task status on exit (to simplify usage by callers)
859 <     */
860 <    final int awaitJoin(ForkJoinTask<?> joinMe) {
847 >     * Callback from oldest spare occasionally waking up. Tries
848 >     * (once) to shutdown a spare if more than 25% spare overage, or
849 >     * if UNUSED_SPARE_TRIM_RATE_NANOS have elapsed and there are at
850 >     * least #parallelism running threads. Note that we don't need CAS
851 >     * or locks here because the method is called only from the oldest
852 >     * suspended spare occasionally waking (and even misfires are OK).
853 >     *
854 >     * @param now the wake up nanoTime of caller
855 >     */
856 >    final void tryTrimSpare(long now) {
857 >        long lastTrim = trimTime;
858 >        trimTime = now;
859 >        helpMaintainParallelism(); // first, help wake up any needed spares
860 >        int sw, id;
861 >        ForkJoinWorkerThread w;
862 >        ForkJoinWorkerThread[] ws;
863          int pc = parallelism;
864 <        boolean adj = false; // true when running count adjusted
865 <        int scans = 0;
866 <
867 <        while (joinMe.status >= 0) {
868 <            ForkJoinWorkerThread spare = null;
869 <            if ((workerCounts & RUNNING_COUNT_MASK) < pc) {
870 <                ForkJoinWorkerThread[] ws = workers;
871 <                int nws = ws.length;
872 <                for (int i = 0; i < nws; ++i) {
873 <                    ForkJoinWorkerThread w = ws[i];
889 <                    if (w != null && w.isSuspended()) {
890 <                        spare = w;
891 <                        break;
892 <                    }
893 <                }
894 <                if (joinMe.status < 0)
895 <                    break;
896 <            }
897 <            int wc = workerCounts;
898 <            int rc = wc & RUNNING_COUNT_MASK;
899 <            int dc = pc - rc;
900 <            if (dc > 0 && spare != null && spare.tryUnsuspend()) {
901 <                if (adj) {
902 <                    int c;
903 <                    do {} while (!UNSAFE.compareAndSwapInt
904 <                                 (this, workerCountsOffset,
905 <                                  c = workerCounts, c + ONE_RUNNING));
906 <                }
907 <                adj = true;
908 <                LockSupport.unpark(spare);
909 <            }
910 <            else if (adj) {
911 <                if (dc <= 0)
912 <                    break;
913 <                int tc = wc >>> TOTAL_COUNT_SHIFT;
914 <                if (scans > tc) {
915 <                    int ts = (tc - pc) * pc;
916 <                    if (rc != 0 && (dc * dc < ts || !maintainsParallelism))
917 <                        break;
918 <                    if (scans > ts && tc < maxPoolSize &&
919 <                        UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc,
920 <                                                 wc+(ONE_RUNNING|ONE_TOTAL))){
921 <                        addWorker();
922 <                        break;
923 <                    }
924 <                }
925 <            }
926 <            else if (rc != 0)
927 <                adj = UNSAFE.compareAndSwapInt (this, workerCountsOffset,
928 <                                                wc, wc - ONE_RUNNING);
929 <            if ((scans++ & 1) == 0)
930 <                releaseWaiters(); // help others progress
931 <            else
932 <                Thread.yield(); // avoid starving productive threads
933 <        }
864 >        int wc = workerCounts;
865 >        if ((wc & RUNNING_COUNT_MASK) >= pc &&
866 >            (((wc >>> TOTAL_COUNT_SHIFT) - pc) > (pc >>> 2) + 1 || // approx 25%
867 >             now - lastTrim >= UNUSED_SPARE_TRIM_RATE_NANOS) &&
868 >            (id = ((sw = spareWaiters) & SPARE_ID_MASK) - 1) >= 0 &&
869 >            id < (ws = workers).length && (w = ws[id]) != null &&
870 >            UNSAFE.compareAndSwapInt(this, spareWaitersOffset,
871 >                                     sw, w.nextSpare))
872 >            w.shutdown(false);
873 >    }
874
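Worked numbers for the two trim thresholds above (illustrative): with parallelism pc = 8, the eager path fires when the spare overage (wc >>> TOTAL_COUNT_SHIFT) - pc exceeds (pc >>> 2) + 1 = 3, so a pool holding 12 total workers (4 spares) kills its oldest spare immediately, while one holding 11 (3 spares) trims it only at the slower once-per-second UNUSED_SPARE_TRIM_RATE_NANOS pace.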
875 <        if (adj) {
876 <            joinMe.internalAwaitDone();
877 <            int c;
878 <            do {} while (!UNSAFE.compareAndSwapInt
879 <                         (this, workerCountsOffset,
880 <                          c = workerCounts, c + ONE_RUNNING));
875 >    /**
876 >     * Does at most one of:
877 >     *
878 >     * 1. Help wake up existing workers waiting for work via
879 >     *    releaseEventWaiters. (If any exist, then it probably doesn't
880 >     *    matter right now if under target parallelism level.)
881 >     *
882 >     * 2. If below parallelism level and a spare exists, try (once)
883 >     *    to resume it via tryResumeSpare.
884 >     *
885 >     * 3. If neither of the above, tries (once) to add a new
886 >     *    worker if either there are not enough total, or if all
887 >     *    existing workers are busy, there are either no running
888 >     *    workers or the deficit is at least twice the surplus.
889 >     */
890 >    private void helpMaintainParallelism() {
891 >        // uglified to work better when not compiled
892 >        int pc, wc, rc, tc, rs; long h;
893 >        if ((h = eventWaiters) != 0L) {
894 >            if ((int)(h >>> EVENT_COUNT_SHIFT) != eventCount)
895 >                releaseEventWaiters(false); // avoid useless call
896 >        }
897 >        else if ((pc = parallelism) >
898 >                 (rc = ((wc = workerCounts) & RUNNING_COUNT_MASK))) {
899 >            if (spareWaiters != 0)
900 >                tryResumeSpare();
901 >            else if ((rs = runState) < TERMINATING &&
902 >                     ((tc = wc >>> TOTAL_COUNT_SHIFT) < pc ||
903 >                      (tc == (rs & ACTIVE_COUNT_MASK) && // all busy
904 >                       (rc == 0 || // must add
905 >                        rc < pc - ((tc - pc) << 1)) && // within slack
906 >                       tc < MAX_WORKERS && runState == rs)) && // recheck busy
907 >                     workerCounts == wc &&
908 >                     UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc,
909 >                                              wc + (ONE_RUNNING|ONE_TOTAL)))
910 >                addWorker();
911          }
942 -        return joinMe.status;
912      }
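Worked numbers for the slack rule above (illustrative): with pc = 8 and tc = 10 the surplus is tc - pc = 2, so case 3 adds a worker only when rc < pc - ((tc - pc) << 1) = 8 - 4 = 4, i.e. when fewer than half the target number of threads are actually running; when tc < pc the first disjunct applies and any running deficit qualifies, and rc == 0 always forces an add.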
913 |
|
|
914 |
|
/** |
915 |
< |
* Same idea as awaitJoin |
915 |
> |
* Callback from workers invoked upon each top-level action (i.e., |
916 |
> |
* stealing a task or taking a submission and running |
917 |
> |
* it). Performs one or more of the following: |
918 |
> |
* |
919 |
> |
* 1. If the worker cannot find work (misses > 0), updates its |
920 |
> |
* active status to inactive and updates activeCount unless |
921 |
> |
* this is the first miss and there is contention, in which |
922 |
> |
* case it may try again (either in this or a subsequent |
923 |
> |
* call). |
924 |
> |
* |
925 |
> |
* 2. If there are at least 2 misses, awaits the next task event |
926 |
> |
* via eventSync |
927 |
> |
* |
928 |
> |
* 3. If there are too many running threads, suspends this worker |
929 |
> |
* (first forcing inactivation if necessary). If it is not |
930 |
> |
* needed, it may be killed while suspended via |
931 |
> |
* tryTrimSpare. Otherwise, upon resume it rechecks to make |
932 |
> |
* sure that it is still needed. |
933 |
> |
* |
934 |
> |
* 4. Helps release and/or reactivate other workers via |
935 |
> |
* helpMaintainParallelism |
936 |
> |
* |
937 |
> |
* @param w the worker |
938 |
> |
* @param misses the number of scans by caller failing to find work |
939 |
> |
* (saturating at 2 just to avoid wraparound) |
940 |
|
*/ |
941 |
< |
final void awaitBlocker(ManagedBlocker blocker, boolean maintainPar) |
942 |
< |
throws InterruptedException { |
950 |
< |
maintainPar &= maintainsParallelism; |
941 |
> |
final void preStep(ForkJoinWorkerThread w, int misses) { |
942 |
> |
boolean active = w.active; |
943 |
|
int pc = parallelism; |
952 |
– |
boolean adj = false; // true when running count adjusted |
953 |
– |
int scans = 0; |
954 |
– |
boolean done; |
955 |
– |
|
944 |
|
for (;;) { |
957 |
– |
if (done = blocker.isReleasable()) |
958 |
– |
break; |
959 |
– |
ForkJoinWorkerThread spare = null; |
960 |
– |
if ((workerCounts & RUNNING_COUNT_MASK) < pc) { |
961 |
– |
ForkJoinWorkerThread[] ws = workers; |
962 |
– |
int nws = ws.length; |
963 |
– |
for (int i = 0; i < nws; ++i) { |
964 |
– |
ForkJoinWorkerThread w = ws[i]; |
965 |
– |
if (w != null && w.isSuspended()) { |
966 |
– |
spare = w; |
967 |
– |
break; |
968 |
– |
} |
969 |
– |
} |
970 |
– |
if (done = blocker.isReleasable()) |
971 |
– |
break; |
972 |
– |
} |
945 |
|
int wc = workerCounts; |
946 |
|
int rc = wc & RUNNING_COUNT_MASK; |
947 |
< |
int dc = pc - rc; |
948 |
< |
if (dc > 0 && spare != null && spare.tryUnsuspend()) { |
949 |
< |
if (adj) { |
950 |
< |
int c; |
951 |
< |
do {} while (!UNSAFE.compareAndSwapInt |
952 |
< |
(this, workerCountsOffset, |
953 |
< |
c = workerCounts, c + ONE_RUNNING)); |
954 |
< |
} |
983 |
< |
adj = true; |
984 |
< |
LockSupport.unpark(spare); |
947 |
> |
if (active && (misses > 0 || rc > pc)) { |
948 |
> |
int rs; // try inactivate |
949 |
> |
if (UNSAFE.compareAndSwapInt(this, runStateOffset, |
950 |
> |
rs = runState, rs - ONE_ACTIVE)) |
951 |
> |
active = w.active = false; |
952 |
> |
else if (misses > 1 || rc > pc || |
953 |
> |
(rs & ACTIVE_COUNT_MASK) >= pc) |
954 |
> |
continue; // force inactivate |
955 |
|
} |
956 |
< |
else if (adj) { |
957 |
< |
if (dc <= 0) |
956 |
> |
if (misses > 1) { |
957 |
> |
misses = 0; // don't re-sync |
958 |
> |
eventSync(w); // continue loop to recheck rc |
959 |
> |
} |
960 |
> |
else if (rc > pc) { |
961 |
> |
if (workerCounts == wc && // try to suspend as spare |
962 |
> |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
963 |
> |
wc, wc - ONE_RUNNING) && |
964 |
> |
!w.suspendAsSpare()) // false if killed |
965 |
|
break; |
989 |
– |
int tc = wc >>> TOTAL_COUNT_SHIFT; |
990 |
– |
if (scans > tc) { |
991 |
– |
int ts = (tc - pc) * pc; |
992 |
– |
if (rc != 0 && (dc * dc < ts || !maintainPar)) |
993 |
– |
break; |
994 |
– |
if (scans > ts && tc < maxPoolSize && |
995 |
– |
UNSAFE.compareAndSwapInt(this, workerCountsOffset, wc, |
996 |
– |
wc+(ONE_RUNNING|ONE_TOTAL))){ |
997 |
– |
addWorker(); |
998 |
– |
break; |
999 |
– |
} |
1000 |
– |
} |
966 |
|
} |
967 |
< |
else if (rc != 0) |
968 |
< |
adj = UNSAFE.compareAndSwapInt (this, workerCountsOffset, |
969 |
< |
wc, wc - ONE_RUNNING); |
970 |
< |
if ((++scans & 1) == 0) |
971 |
< |
releaseWaiters(); // help others progress |
1007 |
< |
else |
1008 |
< |
Thread.yield(); // avoid starving productive threads |
967 |
> |
else { |
968 |
> |
if (rc < pc || eventWaiters != 0L) |
969 |
> |
helpMaintainParallelism(); |
970 |
> |
break; |
971 |
> |
} |
972 |
|
} |
973 |
+ |
} |
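Per the @param note above, callers keep the miss count saturated at 2. A one-line sketch of that bookkeeping (nextMisses is an illustrative name, not part of the patch):

    // 0 after finding work; otherwise count failed scans, saturating
    // at 2 so repeated misses cannot wrap around.
    static int nextMisses(int misses, boolean foundWork) {
        return foundWork ? 0 : (misses < 2 ? misses + 1 : 2);
    }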
974 |
|
|
975 |
< |
try { |
976 |
< |
if (!done) |
977 |
< |
do {} while (!blocker.isReleasable() && !blocker.block()); |
978 |
< |
} finally { |
979 |
< |
if (adj) { |
975 |
> |
/** |
976 |
> |
* Helps and/or blocks awaiting join of the given task. |
977 |
> |
* Alternates between helpJoinTask() and helpMaintainParallelism() |
978 |
> |
* as many times as there is a deficit in running count (or longer |
979 |
> |
* if running count would become zero), then blocks if task still |
980 |
> |
* not done. |
981 |
> |
* |
982 |
> |
* @param joinMe the task to join |
983 |
> |
*/ |
984 |
> |
final void awaitJoin(ForkJoinTask<?> joinMe, ForkJoinWorkerThread worker) { |
985 |
> |
int threshold = parallelism; // descend blocking thresholds |
986 |
> |
while (joinMe.status >= 0) { |
987 |
> |
boolean block; int wc; |
988 |
> |
worker.helpJoinTask(joinMe); |
989 |
> |
if (joinMe.status < 0) |
990 |
> |
break; |
991 |
> |
if (((wc = workerCounts) & RUNNING_COUNT_MASK) <= threshold) { |
992 |
> |
if (threshold > 0) |
993 |
> |
--threshold; |
994 |
> |
else |
995 |
> |
advanceEventCount(); // force release |
996 |
> |
block = false; |
997 |
> |
} |
998 |
> |
else |
999 |
> |
block = UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1000 |
> |
wc, wc - ONE_RUNNING); |
1001 |
> |
helpMaintainParallelism(); |
1002 |
> |
if (block) { |
1003 |
|
int c; |
1004 |
+ |
joinMe.internalAwaitDone(); |
1005 |
|
do {} while (!UNSAFE.compareAndSwapInt |
1006 |
|
(this, workerCountsOffset, |
1007 |
|
c = workerCounts, c + ONE_RUNNING)); |
1008 |
+ |
break; |
1009 |
|
} |
1010 |
|
} |
1011 |
|
} |
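The loop above is a descending-threshold pattern: a pass that finds the running count at or below the current threshold lowers the threshold and helps again, so blocking is deferred until repeated passes confirm the deficit. A simplified, self-contained skeleton of that control flow (the abstract methods are hypothetical stand-ins for the pool internals noted in the comments):

    abstract class DescendingThresholdJoin {
        abstract boolean taskDone();      // joinMe.status < 0
        abstract void helpOnce();         // worker.helpJoinTask(joinMe)
        abstract int runningCount();      // workerCounts & RUNNING_COUNT_MASK
        abstract boolean blockOnce();     // the CAS + internalAwaitDone() above

        final void awaitJoin(int parallelism) {
            int threshold = parallelism;  // descend blocking thresholds
            while (!taskDone()) {
                helpOnce();
                if (taskDone())
                    break;
                if (runningCount() <= threshold) {
                    if (threshold > 0)
                        --threshold;      // help again rather than block
                }
                else if (blockOnce())     // gave up a running slot and blocked
                    break;
            }
        }
    }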
1012 |
|
|
1013 |
|
/** |
1014 |
< |
* Unless there are not enough other running threads, adjusts |
1026 |
< |
* counts and blocks a worker performing helpJoin that cannot find |
1027 |
< |
* any work. |
1028 |
< |
* |
1029 |
< |
* @return true if joinMe now done |
1014 |
> |
* Same idea as awaitJoin, but no helping |
1015 |
|
*/ |
1016 |
< |
final boolean tryAwaitBusyJoin(ForkJoinTask<?> joinMe) { |
1017 |
< |
int pc = parallelism; |
1018 |
< |
outer:for (;;) { |
1019 |
< |
releaseWaiters(); |
1020 |
< |
if ((workerCounts & RUNNING_COUNT_MASK) < pc) { |
1021 |
< |
ForkJoinWorkerThread[] ws = workers; |
1022 |
< |
int nws = ws.length; |
1023 |
< |
for (int i = 0; i < nws; ++i) { |
1024 |
< |
ForkJoinWorkerThread w = ws[i]; |
1025 |
< |
if (w != null && w.isSuspended()) { |
1026 |
< |
if (joinMe.status < 0) |
1042 |
< |
return true; |
1043 |
< |
if ((workerCounts & RUNNING_COUNT_MASK) > pc) |
1044 |
< |
break; |
1045 |
< |
if (w.tryUnsuspend()) { |
1046 |
< |
LockSupport.unpark(w); |
1047 |
< |
break outer; |
1048 |
< |
} |
1049 |
< |
continue outer; |
1050 |
< |
} |
1051 |
< |
} |
1016 |
> |
final void awaitBlocker(ManagedBlocker blocker) |
1017 |
> |
throws InterruptedException { |
1018 |
> |
int threshold = parallelism; |
1019 |
> |
while (!blocker.isReleasable()) { |
1020 |
> |
boolean block; int wc; |
1021 |
> |
if (((wc = workerCounts) & RUNNING_COUNT_MASK) <= threshold) { |
1022 |
> |
if (threshold > 0) |
1023 |
> |
--threshold; |
1024 |
> |
else |
1025 |
> |
advanceEventCount(); |
1026 |
> |
block = false; |
1027 |
|
} |
1028 |
< |
if (joinMe.status < 0) |
1029 |
< |
return true; |
1030 |
< |
int wc = workerCounts; |
1031 |
< |
if ((wc & RUNNING_COUNT_MASK) <= 2 || |
1032 |
< |
(wc >>> TOTAL_COUNT_SHIFT) < pc) |
1033 |
< |
return false; // keep this thread alive |
1034 |
< |
if (UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1035 |
< |
wc, wc - ONE_RUNNING)) |
1028 |
> |
else |
1029 |
> |
block = UNSAFE.compareAndSwapInt(this, workerCountsOffset, |
1030 |
> |
wc, wc - ONE_RUNNING); |
1031 |
> |
helpMaintainParallelism(); |
1032 |
> |
if (block) { |
1033 |
> |
try { |
1034 |
> |
do {} while (!blocker.isReleasable() && !blocker.block()); |
1035 |
> |
} finally { |
1036 |
> |
int c; |
1037 |
> |
do {} while (!UNSAFE.compareAndSwapInt |
1038 |
> |
(this, workerCountsOffset, |
1039 |
> |
c = workerCounts, c + ONE_RUNNING)); |
1040 |
> |
} |
1041 |
|
break; |
1042 |
+ |
} |
1043 |
|
} |
1063 |
– |
|
1064 |
– |
joinMe.internalAwaitDone(); |
1065 |
– |
int c; |
1066 |
– |
do {} while (!UNSAFE.compareAndSwapInt |
1067 |
– |
(this, workerCountsOffset, |
1068 |
– |
c = workerCounts, c + ONE_RUNNING)); |
1069 |
– |
return true; |
1044 |
|
} |
1045 |
|
|
1046 |
|
/** |
1064 |
|
// Finish now if all threads terminated; else in some subsequent call |
1065 |
|
if ((workerCounts >>> TOTAL_COUNT_SHIFT) == 0) { |
1066 |
|
advanceRunLevel(TERMINATED); |
1067 |
< |
terminationLatch.countDown(); |
1067 |
> |
termination.arrive(); |
1068 |
|
} |
1069 |
|
return true; |
1070 |
|
} |
1071 |
|
|
1072 |
|
/** |
1073 |
|
* Actions on transition to TERMINATING |
1074 |
+ |
* |
1075 |
+ |
* Runs up to four passes through workers: (0) shutting down each |
1076 |
+ |
* quietly (without waking up if parked) to quickly spread |
1077 |
+ |
* notifications without unnecessary bouncing around event queues |
1078 |
+ |
* etc.; (1) wake up and help cancel tasks; (2) interrupt; (3) mop up |
1079 |
+ |
* races with interrupted workers |
1080 |
|
*/ |
1081 |
|
private void startTerminating() { |
1082 |
< |
for (int i = 0; i < 2; ++i) { // twice to mop up newly created workers |
1083 |
< |
cancelSubmissions(); |
1104 |
< |
shutdownWorkers(); |
1105 |
< |
cancelWorkerTasks(); |
1082 |
> |
cancelSubmissions(); |
1083 |
> |
for (int passes = 0; passes < 4 && workerCounts != 0; ++passes) { |
1084 |
|
advanceEventCount(); |
1085 |
< |
releaseWaiters(); |
1086 |
< |
interruptWorkers(); |
1085 |
> |
eventWaiters = 0L; // clobber lists |
1086 |
> |
spareWaiters = 0; |
1087 |
> |
ForkJoinWorkerThread[] ws = workers; |
1088 |
> |
int n = ws.length; |
1089 |
> |
for (int i = 0; i < n; ++i) { |
1090 |
> |
ForkJoinWorkerThread w = ws[i]; |
1091 |
> |
if (w != null) { |
1092 |
> |
w.shutdown(true); |
1093 |
> |
if (passes > 0 && !w.isTerminated()) { |
1094 |
> |
w.cancelTasks(); |
1095 |
> |
LockSupport.unpark(w); |
1096 |
> |
if (passes > 1) { |
1097 |
> |
try { |
1098 |
> |
w.interrupt(); |
1099 |
> |
} catch (SecurityException ignore) { |
1100 |
> |
} |
1101 |
> |
} |
1102 |
> |
} |
1103 |
> |
} |
1104 |
> |
} |
1105 |
|
} |
1106 |
|
} |
1107 |
|
|
1118 |
|
} |
1119 |
|
} |
1120 |
|
|
1125 |
– |
/** |
1126 |
– |
* Sets all worker run states to at least shutdown, |
1127 |
– |
* also resuming suspended workers |
1128 |
– |
*/ |
1129 |
– |
private void shutdownWorkers() { |
1130 |
– |
ForkJoinWorkerThread[] ws = workers; |
1131 |
– |
int nws = ws.length; |
1132 |
– |
for (int i = 0; i < nws; ++i) { |
1133 |
– |
ForkJoinWorkerThread w = ws[i]; |
1134 |
– |
if (w != null) |
1135 |
– |
w.shutdown(); |
1136 |
– |
} |
1137 |
– |
} |
1138 |
– |
|
1139 |
– |
/** |
1140 |
– |
* Clears out and cancels all locally queued tasks |
1141 |
– |
*/ |
1142 |
– |
private void cancelWorkerTasks() { |
1143 |
– |
ForkJoinWorkerThread[] ws = workers; |
1144 |
– |
int nws = ws.length; |
1145 |
– |
for (int i = 0; i < nws; ++i) { |
1146 |
– |
ForkJoinWorkerThread w = ws[i]; |
1147 |
– |
if (w != null) |
1148 |
– |
w.cancelTasks(); |
1149 |
– |
} |
1150 |
– |
} |
1151 |
– |
|
1152 |
– |
/** |
1153 |
– |
* Unsticks all workers blocked on joins etc |
1154 |
– |
*/ |
1155 |
– |
private void interruptWorkers() { |
1156 |
– |
ForkJoinWorkerThread[] ws = workers; |
1157 |
– |
int nws = ws.length; |
1158 |
– |
for (int i = 0; i < nws; ++i) { |
1159 |
– |
ForkJoinWorkerThread w = ws[i]; |
1160 |
– |
if (w != null && !w.isTerminated()) { |
1161 |
– |
try { |
1162 |
– |
w.interrupt(); |
1163 |
– |
} catch (SecurityException ignore) { |
1164 |
– |
} |
1165 |
– |
} |
1166 |
– |
} |
1167 |
– |
} |
1168 |
– |
|
1121 |
|
// misc support for ForkJoinWorkerThread |
1122 |
|
|
1123 |
|
/** |
1128 |
|
} |
1129 |
|
|
1130 |
|
/** |
1131 |
< |
* Accumulates steal count from a worker, clearing |
1132 |
< |
* the worker's value |
1131 |
> |
* Tries to accumulate the steal count from a worker, clearing |
1132 |
> |
* the worker's value. |
1133 |
> |
* |
1134 |
> |
* @return true if worker steal count now zero |
1135 |
|
*/ |
1136 |
< |
final void accumulateStealCount(ForkJoinWorkerThread w) { |
1136 |
> |
final boolean tryAccumulateStealCount(ForkJoinWorkerThread w) { |
1137 |
|
int sc = w.stealCount; |
1138 |
< |
if (sc != 0) { |
1139 |
< |
long c; |
1140 |
< |
w.stealCount = 0; |
1141 |
< |
do {} while (!UNSAFE.compareAndSwapLong(this, stealCountOffset, |
1142 |
< |
c = stealCount, c + sc)); |
1138 |
> |
long c = stealCount; |
1139 |
> |
// CAS even if zero, for fence effects |
1140 |
> |
if (UNSAFE.compareAndSwapLong(this, stealCountOffset, c, c + sc)) { |
1141 |
> |
if (sc != 0) |
1142 |
> |
w.stealCount = 0; |
1143 |
> |
return true; |
1144 |
|
} |
1145 |
+ |
return sc == 0; |
1146 |
|
} |
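The same try-once accumulate-and-clear idea, sketched against the public atomic API instead of Unsafe (Worker here is a hypothetical holder, and AtomicLong stands in for the CASed stealCount field):

    import java.util.concurrent.atomic.AtomicLong;

    class StealCounts {
        static class Worker { int stealCount; }    // hypothetical holder
        final AtomicLong total = new AtomicLong();

        // Folds a worker's local count into the shared total, clearing the
        // local count on success; the CAS is attempted even when sc == 0,
        // mirroring the fence-effect comment above.
        boolean tryAccumulate(Worker w) {
            int sc = w.stealCount;
            long c = total.get();
            if (total.compareAndSet(c, c + sc)) {
                if (sc != 0)
                    w.stealCount = 0;
                return true;
            }
            return sc == 0;
        }
    }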
1147 |
|
|
1148 |
|
/** |
1150 |
|
* active thread. |
1151 |
|
*/ |
1152 |
|
final int idlePerActive() { |
1153 |
+ |
int pc = parallelism; // use parallelism, not rc |
1154 |
|
int ac = runState; // no mask -- artificially boosts during shutdown |
1198 |
– |
int pc = parallelism; // use targeted parallelism, not rc |
1155 |
|
// Use exact results for small values, saturate past 4 |
1156 |
|
return pc <= ac? 0 : pc >>> 1 <= ac? 1 : pc >>> 2 <= ac? 3 : pc >>> 3; |
1157 |
|
} |
1162 |
|
|
1163 |
|
/** |
1164 |
|
* Creates a {@code ForkJoinPool} with parallelism equal to {@link |
1165 |
< |
* java.lang.Runtime#availableProcessors}, and using the {@linkplain |
1166 |
< |
* #defaultForkJoinWorkerThreadFactory default thread factory}. |
1165 |
> |
* java.lang.Runtime#availableProcessors}, using the {@linkplain |
1166 |
> |
* #defaultForkJoinWorkerThreadFactory default thread factory}, |
1167 |
> |
* no UncaughtExceptionHandler, and non-async LIFO processing mode. |
1168 |
|
* |
1169 |
|
* @throws SecurityException if a security manager exists and |
1170 |
|
* the caller is not permitted to modify threads |
1173 |
|
*/ |
1174 |
|
public ForkJoinPool() { |
1175 |
|
this(Runtime.getRuntime().availableProcessors(), |
1176 |
< |
defaultForkJoinWorkerThreadFactory); |
1176 |
> |
defaultForkJoinWorkerThreadFactory, null, false); |
1177 |
|
} |
1178 |
|
|
1179 |
|
/** |
1180 |
|
* Creates a {@code ForkJoinPool} with the indicated parallelism |
1181 |
< |
* level and using the {@linkplain |
1182 |
< |
* #defaultForkJoinWorkerThreadFactory default thread factory}. |
1181 |
> |
* level, the {@linkplain |
1182 |
> |
* #defaultForkJoinWorkerThreadFactory default thread factory}, |
1183 |
> |
* no UncaughtExceptionHandler, and non-async LIFO processing mode. |
1184 |
|
* |
1185 |
|
* @param parallelism the parallelism level |
1186 |
|
* @throws IllegalArgumentException if parallelism less than or |
1191 |
|
* java.lang.RuntimePermission}{@code ("modifyThread")} |
1192 |
|
*/ |
1193 |
|
public ForkJoinPool(int parallelism) { |
1194 |
< |
this(parallelism, defaultForkJoinWorkerThreadFactory); |
1237 |
< |
} |
1238 |
< |
|
1239 |
< |
/** |
1240 |
< |
* Creates a {@code ForkJoinPool} with parallelism equal to {@link |
1241 |
< |
* java.lang.Runtime#availableProcessors}, and using the given |
1242 |
< |
* thread factory. |
1243 |
< |
* |
1244 |
< |
* @param factory the factory for creating new threads |
1245 |
< |
* @throws NullPointerException if the factory is null |
1246 |
< |
* @throws SecurityException if a security manager exists and |
1247 |
< |
* the caller is not permitted to modify threads |
1248 |
< |
* because it does not hold {@link |
1249 |
< |
* java.lang.RuntimePermission}{@code ("modifyThread")} |
1250 |
< |
*/ |
1251 |
< |
public ForkJoinPool(ForkJoinWorkerThreadFactory factory) { |
1252 |
< |
this(Runtime.getRuntime().availableProcessors(), factory); |
1194 |
> |
this(parallelism, defaultForkJoinWorkerThreadFactory, null, false); |
1195 |
|
} |
1196 |
|
|
1197 |
|
/** |
1198 |
< |
* Creates a {@code ForkJoinPool} with the given parallelism and |
1257 |
< |
* thread factory. |
1198 |
> |
* Creates a {@code ForkJoinPool} with the given parameters. |
1199 |
|
* |
1200 |
< |
* @param parallelism the parallelism level |
1201 |
< |
* @param factory the factory for creating new threads |
1200 |
> |
* @param parallelism the parallelism level. For default value, |
1201 |
> |
* use {@link java.lang.Runtime#availableProcessors}. |
1202 |
> |
* @param factory the factory for creating new threads. For default value, |
1203 |
> |
* use {@link #defaultForkJoinWorkerThreadFactory}. |
1204 |
> |
* @param handler the handler for internal worker threads that |
1205 |
> |
* terminate due to unrecoverable errors encountered while executing |
1206 |
> |
* tasks. For default value, use {@code null}. |
1207 |
> |
* @param asyncMode if true, |
1208 |
> |
* establishes local first-in-first-out scheduling mode for forked |
1209 |
> |
* tasks that are never joined. This mode may be more appropriate |
1210 |
> |
* than the default locally stack-based mode in applications in which |
1211 |
> |
* worker threads only process event-style asynchronous tasks. |
1212 |
> |
* For default value, use {@code false}. |
1213 |
|
* @throws IllegalArgumentException if parallelism less than or |
1214 |
|
* equal to zero, or greater than implementation limit |
1215 |
|
* @throws NullPointerException if the factory is null |
1218 |
|
* because it does not hold {@link |
1219 |
|
* java.lang.RuntimePermission}{@code ("modifyThread")} |
1220 |
|
*/ |
1221 |
< |
public ForkJoinPool(int parallelism, ForkJoinWorkerThreadFactory factory) { |
1221 |
> |
public ForkJoinPool(int parallelism, |
1222 |
> |
ForkJoinWorkerThreadFactory factory, |
1223 |
> |
Thread.UncaughtExceptionHandler handler, |
1224 |
> |
boolean asyncMode) { |
1225 |
|
checkPermission(); |
1226 |
|
if (factory == null) |
1227 |
|
throw new NullPointerException(); |
1228 |
< |
if (parallelism <= 0 || parallelism > MAX_THREADS) |
1228 |
> |
if (parallelism <= 0 || parallelism > MAX_WORKERS) |
1229 |
|
throw new IllegalArgumentException(); |
1275 |
– |
this.poolNumber = poolNumberGenerator.incrementAndGet(); |
1276 |
– |
int arraySize = initialArraySizeFor(parallelism); |
1230 |
|
this.parallelism = parallelism; |
1231 |
|
this.factory = factory; |
1232 |
< |
this.maxPoolSize = MAX_THREADS; |
1233 |
< |
this.maintainsParallelism = true; |
1232 |
> |
this.ueh = handler; |
1233 |
> |
this.locallyFifo = asyncMode; |
1234 |
> |
int arraySize = initialArraySizeFor(parallelism); |
1235 |
|
this.workers = new ForkJoinWorkerThread[arraySize]; |
1236 |
|
this.submissionQueue = new LinkedTransferQueue<ForkJoinTask<?>>(); |
1237 |
|
this.workerLock = new ReentrantLock(); |
1238 |
< |
this.terminationLatch = new CountDownLatch(1); |
1238 |
> |
this.termination = new Phaser(1); |
1239 |
> |
this.poolNumber = poolNumberGenerator.incrementAndGet(); |
1240 |
> |
this.trimTime = System.nanoTime(); |
1241 |
|
} |
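For illustration, a pool configured through this four-argument constructor (the argument values are arbitrary):

    import java.util.concurrent.ForkJoinPool;

    class PoolConfigExample {
        static ForkJoinPool newAsyncPool() {
            return new ForkJoinPool(
                4,                                               // parallelism level
                ForkJoinPool.defaultForkJoinWorkerThreadFactory, // default factory
                null,                                            // default handler
                true);                                           // asyncMode: local FIFO
        }
    }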
1242 |
|
|
1243 |
|
/** |
1245 |
|
* @param pc the initial parallelism level |
1246 |
|
*/ |
1247 |
|
private static int initialArraySizeFor(int pc) { |
1248 |
< |
// See Hackers Delight, sec 3.2. We know MAX_THREADS < (1 >>> 16) |
1249 |
< |
int size = pc < MAX_THREADS ? pc + 1 : MAX_THREADS; |
1248 |
> |
// See Hacker's Delight, sec 3.2. We know MAX_WORKERS < (1 << 16) |
1249 |
> |
int size = pc < MAX_WORKERS ? pc + 1 : MAX_WORKERS; |
1250 |
|
size |= size >>> 1; |
1251 |
|
size |= size >>> 2; |
1252 |
|
size |= size >>> 4; |
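The OR-shift cascade is the Hacker's Delight round-up-to-a-power-of-two trick; since MAX_WORKERS fits in 16 bits, no shift past >>> 8 is needed. A standalone illustration (assuming the elided remainder of the method follows the usual pattern of a final >>> 8 step and size + 1):

    // Rounds n up to the next power of two, valid for 0 < n < (1 << 16):
    // OR-ing in right-shifted copies smears the highest set bit downward,
    // leaving all ones below it; adding 1 then yields the power of two.
    static int nextPowerOfTwo(int n) {
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;      // enough for 16-bit inputs
        return n + 1;
    }
    // nextPowerOfTwo(5) == 8, nextPowerOfTwo(8) == 16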
1266 |
|
throw new RejectedExecutionException(); |
1267 |
|
submissionQueue.offer(task); |
1268 |
|
advanceEventCount(); |
1269 |
< |
releaseWaiters(); |
1314 |
< |
ensureEnoughTotalWorkers(); |
1269 |
> |
helpMaintainParallelism(); // start or wake up workers |
1270 |
|
} |
1271 |
|
|
1272 |
|
/** |
1285 |
|
|
1286 |
|
/** |
1287 |
|
* Arranges for (asynchronous) execution of the given task. |
1288 |
+ |
* If the caller is already engaged in a fork/join computation in |
1289 |
+ |
* the current pool, this method is equivalent in effect to |
1290 |
+ |
* {@link ForkJoinTask#fork}. |
1291 |
|
* |
1292 |
|
* @param task the task |
1293 |
|
* @throws NullPointerException if the task is null |
1315 |
|
} |
1316 |
|
|
1317 |
|
/** |
1318 |
+ |
* Submits a ForkJoinTask for execution. |
1319 |
+ |
* If the caller is already engaged in a fork/join computation in |
1320 |
+ |
* the current pool, this method is equivalent in effect to |
1321 |
+ |
* {@link ForkJoinTask#fork}. |
1322 |
+ |
* |
1323 |
+ |
* @param task the task to submit |
1324 |
+ |
* @return the task |
1325 |
+ |
* @throws NullPointerException if the task is null |
1326 |
+ |
* @throws RejectedExecutionException if the task cannot be |
1327 |
+ |
* scheduled for execution |
1328 |
+ |
*/ |
1329 |
+ |
public <T> ForkJoinTask<T> submit(ForkJoinTask<T> task) { |
1330 |
+ |
doSubmit(task); |
1331 |
+ |
return task; |
1332 |
+ |
} |
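A small usage sketch of the method added above (the constant computation is stand-in work):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.ForkJoinTask;
    import java.util.concurrent.RecursiveTask;

    class SubmitExample {
        public static void main(String[] args) {
            ForkJoinPool pool = new ForkJoinPool();
            ForkJoinTask<Integer> task = pool.submit(new RecursiveTask<Integer>() {
                protected Integer compute() { return 6 * 7; }  // stand-in work
            });
            System.out.println(task.join());                   // prints 42
        }
    }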
1333 |
+ |
|
1334 |
+ |
/** |
1335 |
|
* @throws NullPointerException if the task is null |
1336 |
|
* @throws RejectedExecutionException if the task cannot be |
1337 |
|
* scheduled for execution |
1369 |
|
} |
1370 |
|
|
1371 |
|
/** |
1397 |
– |
* Submits a ForkJoinTask for execution. |
1398 |
– |
* |
1399 |
– |
* @param task the task to submit |
1400 |
– |
* @return the task |
1401 |
– |
* @throws NullPointerException if the task is null |
1402 |
– |
* @throws RejectedExecutionException if the task cannot be |
1403 |
– |
* scheduled for execution |
1404 |
– |
*/ |
1405 |
– |
public <T> ForkJoinTask<T> submit(ForkJoinTask<T> task) { |
1406 |
– |
doSubmit(task); |
1407 |
– |
return task; |
1408 |
– |
} |
1409 |
– |
|
1410 |
– |
/** |
1372 |
|
* @throws NullPointerException {@inheritDoc} |
1373 |
|
* @throws RejectedExecutionException {@inheritDoc} |
1374 |
|
*/ |
1410 |
|
* @return the handler, or {@code null} if none |
1411 |
|
*/ |
1412 |
|
public Thread.UncaughtExceptionHandler getUncaughtExceptionHandler() { |
1452 |
– |
workerCountReadFence(); |
1413 |
|
return ueh; |
1414 |
|
} |
1415 |
|
|
1416 |
|
/** |
1457 |
– |
* Sets the handler for internal worker threads that terminate due |
1458 |
– |
* to unrecoverable errors encountered while executing tasks. |
1459 |
– |
* Unless set, the current default or ThreadGroup handler is used |
1460 |
– |
* as handler. |
1461 |
– |
* |
1462 |
– |
* @param h the new handler |
1463 |
– |
* @return the old handler, or {@code null} if none |
1464 |
– |
* @throws SecurityException if a security manager exists and |
1465 |
– |
* the caller is not permitted to modify threads |
1466 |
– |
* because it does not hold {@link |
1467 |
– |
* java.lang.RuntimePermission}{@code ("modifyThread")} |
1468 |
– |
*/ |
1469 |
– |
public Thread.UncaughtExceptionHandler |
1470 |
– |
setUncaughtExceptionHandler(Thread.UncaughtExceptionHandler h) { |
1471 |
– |
checkPermission(); |
1472 |
– |
Thread.UncaughtExceptionHandler old = ueh; |
1473 |
– |
if (h != old) { |
1474 |
– |
ueh = h; |
1475 |
– |
ForkJoinWorkerThread[] ws = workers; |
1476 |
– |
int nws = ws.length; |
1477 |
– |
for (int i = 0; i < nws; ++i) { |
1478 |
– |
ForkJoinWorkerThread w = ws[i]; |
1479 |
– |
if (w != null) |
1480 |
– |
w.setUncaughtExceptionHandler(h); |
1481 |
– |
} |
1482 |
– |
} |
1483 |
– |
return old; |
1484 |
– |
} |
1485 |
– |
|
1486 |
– |
/** |
1487 |
– |
* Sets the target parallelism level of this pool. |
1488 |
– |
* |
1489 |
– |
* @param parallelism the target parallelism |
1490 |
– |
* @throws IllegalArgumentException if parallelism less than or |
1491 |
– |
* equal to zero or greater than maximum size bounds |
1492 |
– |
* @throws SecurityException if a security manager exists and |
1493 |
– |
* the caller is not permitted to modify threads |
1494 |
– |
* because it does not hold {@link |
1495 |
– |
* java.lang.RuntimePermission}{@code ("modifyThread")} |
1496 |
– |
*/ |
1497 |
– |
public void setParallelism(int parallelism) { |
1498 |
– |
checkPermission(); |
1499 |
– |
if (parallelism <= 0 || parallelism > maxPoolSize) |
1500 |
– |
throw new IllegalArgumentException(); |
1501 |
– |
workerCountReadFence(); |
1502 |
– |
int pc = this.parallelism; |
1503 |
– |
if (pc != parallelism) { |
1504 |
– |
this.parallelism = parallelism; |
1505 |
– |
workerCountWriteFence(); |
1506 |
– |
// Release spares. If too many, some will die after re-suspend |
1507 |
– |
ForkJoinWorkerThread[] ws = workers; |
1508 |
– |
int nws = ws.length; |
1509 |
– |
for (int i = 0; i < nws; ++i) { |
1510 |
– |
ForkJoinWorkerThread w = ws[i]; |
1511 |
– |
if (w != null && w.tryUnsuspend()) { |
1512 |
– |
int c; |
1513 |
– |
do {} while (!UNSAFE.compareAndSwapInt |
1514 |
– |
(this, workerCountsOffset, |
1515 |
– |
c = workerCounts, c + ONE_RUNNING)); |
1516 |
– |
LockSupport.unpark(w); |
1517 |
– |
} |
1518 |
– |
} |
1519 |
– |
ensureEnoughTotalWorkers(); |
1520 |
– |
advanceEventCount(); |
1521 |
– |
releaseWaiters(); // force config recheck by existing workers |
1522 |
– |
} |
1523 |
– |
} |
1524 |
– |
|
1525 |
– |
/** |
1417 |
|
* Returns the targeted parallelism level of this pool. |
1418 |
|
* |
1419 |
|
* @return the targeted parallelism level of this pool |
1420 |
|
*/ |
1421 |
|
public int getParallelism() { |
1531 |
– |
// workerCountReadFence(); // inlined below |
1532 |
– |
int ignore = workerCounts; |
1422 |
|
return parallelism; |
1423 |
|
} |
1424 |
|
|
1435 |
|
} |
1436 |
|
|
1437 |
|
/** |
1549 |
– |
* Returns the maximum number of threads allowed to exist in the |
1550 |
– |
* pool. Unless set using {@link #setMaximumPoolSize}, the |
1551 |
– |
* maximum is an implementation-defined value designed only to |
1552 |
– |
* prevent runaway growth. |
1553 |
– |
* |
1554 |
– |
* @return the maximum |
1555 |
– |
*/ |
1556 |
– |
public int getMaximumPoolSize() { |
1557 |
– |
workerCountReadFence(); |
1558 |
– |
return maxPoolSize; |
1559 |
– |
} |
1560 |
– |
|
1561 |
– |
/** |
1562 |
– |
* Sets the maximum number of threads allowed to exist in the |
1563 |
– |
* pool. The given value should normally be greater than or equal |
1564 |
– |
* to the {@link #getParallelism parallelism} level. Setting this |
1565 |
– |
* value has no effect on current pool size. It controls |
1566 |
– |
* construction of new threads. The use of this method may cause |
1567 |
– |
* tasks that intrinsically require extra threads for dependent |
1568 |
– |
* computations to indefinitely stall. If you are instead trying |
1569 |
– |
* to minimize internal thread creation, consider setting {@link |
1570 |
– |
* #setMaintainsParallelism} as false. |
1571 |
– |
* |
1572 |
– |
* @throws IllegalArgumentException if negative or greater than |
1573 |
– |
* internal implementation limit |
1574 |
– |
*/ |
1575 |
– |
public void setMaximumPoolSize(int newMax) { |
1576 |
– |
if (newMax < 0 || newMax > MAX_THREADS) |
1577 |
– |
throw new IllegalArgumentException(); |
1578 |
– |
maxPoolSize = newMax; |
1579 |
– |
workerCountWriteFence(); |
1580 |
– |
} |
1581 |
– |
|
1582 |
– |
/** |
1583 |
– |
* Returns {@code true} if this pool dynamically maintains its |
1584 |
– |
* target parallelism level. If false, new threads are added only |
1585 |
– |
* to avoid possible starvation. This setting is by default true. |
1586 |
– |
* |
1587 |
– |
* @return {@code true} if maintains parallelism |
1588 |
– |
*/ |
1589 |
– |
public boolean getMaintainsParallelism() { |
1590 |
– |
workerCountReadFence(); |
1591 |
– |
return maintainsParallelism; |
1592 |
– |
} |
1593 |
– |
|
1594 |
– |
/** |
1595 |
– |
* Sets whether this pool dynamically maintains its target |
1596 |
– |
* parallelism level. If false, new threads are added only to |
1597 |
– |
* avoid possible starvation. |
1598 |
– |
* |
1599 |
– |
* @param enable {@code true} to maintain parallelism |
1600 |
– |
*/ |
1601 |
– |
public void setMaintainsParallelism(boolean enable) { |
1602 |
– |
maintainsParallelism = enable; |
1603 |
– |
workerCountWriteFence(); |
1604 |
– |
} |
1605 |
– |
|
1606 |
– |
/** |
1607 |
– |
* Establishes local first-in-first-out scheduling mode for forked |
1608 |
– |
* tasks that are never joined. This mode may be more appropriate |
1609 |
– |
* than default locally stack-based mode in applications in which |
1610 |
– |
* worker threads only process asynchronous tasks. This method is |
1611 |
– |
* designed to be invoked only when the pool is quiescent, and |
1612 |
– |
* typically only before any tasks are submitted. The effects of |
1613 |
– |
* invocations at other times may be unpredictable. |
1614 |
– |
* |
1615 |
– |
* @param async if {@code true}, use locally FIFO scheduling |
1616 |
– |
* @return the previous mode |
1617 |
– |
* @see #getAsyncMode |
1618 |
– |
*/ |
1619 |
– |
public boolean setAsyncMode(boolean async) { |
1620 |
– |
workerCountReadFence(); |
1621 |
– |
boolean oldMode = locallyFifo; |
1622 |
– |
if (oldMode != async) { |
1623 |
– |
locallyFifo = async; |
1624 |
– |
workerCountWriteFence(); |
1625 |
– |
ForkJoinWorkerThread[] ws = workers; |
1626 |
– |
int nws = ws.length; |
1627 |
– |
for (int i = 0; i < nws; ++i) { |
1628 |
– |
ForkJoinWorkerThread w = ws[i]; |
1629 |
– |
if (w != null) |
1630 |
– |
w.setAsyncMode(async); |
1631 |
– |
} |
1632 |
– |
} |
1633 |
– |
return oldMode; |
1634 |
– |
} |
1635 |
– |
|
1636 |
– |
/** |
1438 |
|
* Returns {@code true} if this pool uses local first-in-first-out |
1439 |
|
* scheduling mode for forked tasks that are never joined. |
1440 |
|
* |
1441 |
|
* @return {@code true} if this pool uses async mode |
1641 |
– |
* @see #setAsyncMode |
1442 |
|
*/ |
1443 |
|
public boolean getAsyncMode() { |
1644 |
– |
workerCountReadFence(); |
1444 |
|
return locallyFifo; |
1445 |
|
} |
1446 |
|
|
1510 |
|
public long getQueuedTaskCount() { |
1511 |
|
long count = 0; |
1512 |
|
ForkJoinWorkerThread[] ws = workers; |
1513 |
< |
int nws = ws.length; |
1514 |
< |
for (int i = 0; i < nws; ++i) { |
1513 |
> |
int n = ws.length; |
1514 |
> |
for (int i = 0; i < n; ++i) { |
1515 |
|
ForkJoinWorkerThread w = ws[i]; |
1516 |
|
if (w != null) |
1517 |
|
count += w.getQueueSize(); |
1569 |
|
* @return the number of elements transferred |
1570 |
|
*/ |
1571 |
|
protected int drainTasksTo(Collection<? super ForkJoinTask<?>> c) { |
1572 |
< |
int n = submissionQueue.drainTo(c); |
1572 |
> |
int count = submissionQueue.drainTo(c); |
1573 |
|
ForkJoinWorkerThread[] ws = workers; |
1574 |
< |
int nws = ws.length; |
1575 |
< |
for (int i = 0; i < nws; ++i) { |
1574 |
> |
int n = ws.length; |
1575 |
> |
for (int i = 0; i < n; ++i) { |
1576 |
|
ForkJoinWorkerThread w = ws[i]; |
1577 |
|
if (w != null) |
1578 |
< |
n += w.drainTasksTo(c); |
1578 |
> |
count += w.drainTasksTo(c); |
1579 |
|
} |
1580 |
< |
return n; |
1580 |
> |
return count; |
1581 |
|
} |
1582 |
|
|
1583 |
|
/** |
1701 |
|
*/ |
1702 |
|
public boolean awaitTermination(long timeout, TimeUnit unit) |
1703 |
|
throws InterruptedException { |
1704 |
< |
return terminationLatch.await(timeout, unit); |
1704 |
> |
try { |
1705 |
> |
return termination.awaitAdvanceInterruptibly(0, timeout, unit) > 0; |
1706 |
> |
} catch (TimeoutException ex) { |
1707 |
> |
return false; |
1708 |
> |
} |
1709 |
|
} |
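Typical shutdown sequencing around this method (the 60-second timeout is arbitrary):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.TimeUnit;

    class ShutdownExample {
        static void shutdownAndWait(ForkJoinPool pool) throws InterruptedException {
            pool.shutdown();                                  // no new submissions
            if (!pool.awaitTermination(60, TimeUnit.SECONDS)) // wait for quiescence
                pool.shutdownNow();                           // cancel lingering tasks
        }
    }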
1710 |
|
|
1711 |
|
/** |
1712 |
|
* Interface for extending managed parallelism for tasks running |
1713 |
|
* in {@link ForkJoinPool}s. |
1714 |
|
* |
1715 |
< |
* <p>A {@code ManagedBlocker} provides two methods. |
1716 |
< |
* Method {@code isReleasable} must return {@code true} if |
1717 |
< |
* blocking is not necessary. Method {@code block} blocks the |
1718 |
< |
* current thread if necessary (perhaps internally invoking |
1719 |
< |
* {@code isReleasable} before actually blocking). |
1715 |
> |
* <p>A {@code ManagedBlocker} provides two methods. Method |
1716 |
> |
* {@code isReleasable} must return {@code true} if blocking is |
1717 |
> |
* not necessary. Method {@code block} blocks the current thread |
1718 |
> |
* if necessary (perhaps internally invoking {@code isReleasable} |
1719 |
> |
* before actually blocking). The unusual methods in this API |
1720 |
> |
* accommodate synchronizers that may, but don't usually, block |
1721 |
> |
* for long periods. Similarly, they allow more efficient internal |
1722 |
> |
* handling of cases in which additional workers may be, but |
1723 |
> |
* usually are not, needed to ensure sufficient parallelism. |
1724 |
> |
* Toward this end, implementations of method {@code isReleasable} |
1725 |
> |
* must be amenable to repeated invocation. |
1726 |
|
* |
1727 |
|
* <p>For example, here is a ManagedBlocker based on a |
1728 |
|
* ReentrantLock: |
1740 |
|
* return hasLock || (hasLock = lock.tryLock()); |
1741 |
|
* } |
1742 |
|
* }}</pre> |
1743 |
+ |
* |
1744 |
+ |
* <p>Here is a class that possibly blocks waiting for an |
1745 |
+ |
* item on a given queue: |
1746 |
+ |
* <pre> {@code |
1747 |
+ |
* class QueueTaker<E> implements ManagedBlocker { |
1748 |
+ |
* final BlockingQueue<E> queue; |
1749 |
+ |
* volatile E item = null; |
1750 |
+ |
* QueueTaker(BlockingQueue<E> q) { this.queue = q; } |
1751 |
+ |
* public boolean block() throws InterruptedException { |
1752 |
+ |
* if (item == null) |
1753 |
+ |
* item = queue.take(); |
1754 |
+ |
* return true; |
1755 |
+ |
* } |
1756 |
+ |
* public boolean isReleasable() { |
1757 |
+ |
* return item != null || (item = queue.poll()) != null; |
1758 |
+ |
* } |
1759 |
+ |
* public E getItem() { // call after pool.managedBlock completes |
1760 |
+ |
* return item; |
1761 |
+ |
* } |
1762 |
+ |
* }}</pre> |
1763 |
|
*/ |
1764 |
|
public static interface ManagedBlocker { |
1765 |
|
/** |
1783 |
|
* Blocks in accord with the given blocker. If the current thread |
1784 |
|
* is a {@link ForkJoinWorkerThread}, this method possibly |
1785 |
|
* arranges for a spare thread to be activated if necessary to |
1786 |
< |
* ensure parallelism while the current thread is blocked. |
1958 |
< |
* |
1959 |
< |
* <p>If {@code maintainParallelism} is {@code true} and the pool |
1960 |
< |
* supports it ({@link #getMaintainsParallelism}), this method |
1961 |
< |
* attempts to maintain the pool's nominal parallelism. Otherwise |
1962 |
< |
* it activates a thread only if necessary to avoid complete |
1963 |
< |
* starvation. This option may be preferable when blockages use |
1964 |
< |
* timeouts, or are almost always brief. |
1786 |
> |
* ensure sufficient parallelism while the current thread is blocked. |
1787 |
|
* |
1788 |
|
* <p>If the caller is not a {@link ForkJoinTask}, this method is |
1789 |
|
* behaviorally equivalent to |
1797 |
|
* first be expanded to ensure parallelism, and later adjusted. |
1798 |
|
* |
1799 |
|
* @param blocker the blocker |
1978 |
– |
* @param maintainParallelism if {@code true} and supported by |
1979 |
– |
* this pool, attempt to maintain the pool's nominal parallelism; |
1980 |
– |
* otherwise activate a thread only if necessary to avoid |
1981 |
– |
* complete starvation. |
1800 |
|
* @throws InterruptedException if blocker.block did so |
1801 |
|
*/ |
1802 |
< |
public static void managedBlock(ManagedBlocker blocker, |
1985 |
< |
boolean maintainParallelism) |
1802 |
> |
public static void managedBlock(ManagedBlocker blocker) |
1803 |
|
throws InterruptedException { |
1804 |
|
Thread t = Thread.currentThread(); |
1805 |
< |
if (t instanceof ForkJoinWorkerThread) |
1806 |
< |
((ForkJoinWorkerThread) t).pool. |
1807 |
< |
awaitBlocker(blocker, maintainParallelism); |
1808 |
< |
else |
1809 |
< |
awaitBlocker(blocker); |
1810 |
< |
} |
1811 |
< |
|
1995 |
< |
/** |
1996 |
< |
* Performs Non-FJ blocking |
1997 |
< |
*/ |
1998 |
< |
private static void awaitBlocker(ManagedBlocker blocker) |
1999 |
< |
throws InterruptedException { |
2000 |
< |
do {} while (!blocker.isReleasable() && !blocker.block()); |
1805 |
> |
if (t instanceof ForkJoinWorkerThread) { |
1806 |
> |
ForkJoinWorkerThread w = (ForkJoinWorkerThread) t; |
1807 |
> |
w.pool.awaitBlocker(blocker); |
1808 |
> |
} |
1809 |
> |
else { |
1810 |
> |
do {} while (!blocker.isReleasable() && !blocker.block()); |
1811 |
> |
} |
1812 |
|
} |
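Putting the QueueTaker blocker from the javadoc together with this method; the import location assumes the java.util.concurrent packaging, so adjust it when building against a standalone jsr166 jar:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.ForkJoinPool.ManagedBlocker;

    class ManagedBlockExample {
        // The QueueTaker blocker sketched in the javadoc above, in full.
        static class QueueTaker<E> implements ManagedBlocker {
            final BlockingQueue<E> queue;
            volatile E item = null;
            QueueTaker(BlockingQueue<E> q) { this.queue = q; }
            public boolean block() throws InterruptedException {
                if (item == null)
                    item = queue.take();
                return true;
            }
            public boolean isReleasable() {
                return item != null || (item = queue.poll()) != null;
            }
            E getItem() { return item; }  // call after managedBlock completes
        }

        static <E> E takeManaged(BlockingQueue<E> queue)
            throws InterruptedException {
            QueueTaker<E> taker = new QueueTaker<E>(queue);
            ForkJoinPool.managedBlock(taker); // may activate a spare while blocked
            return taker.getItem();
        }
    }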
1813 |
|
|
1814 |
|
// AbstractExecutorService overrides. These rely on undocumented |
1836 |
|
objectFieldOffset("eventWaiters",ForkJoinPool.class); |
1837 |
|
private static final long stealCountOffset = |
1838 |
|
objectFieldOffset("stealCount",ForkJoinPool.class); |
1839 |
< |
|
1839 |
> |
private static final long spareWaitersOffset = |
1840 |
> |
objectFieldOffset("spareWaiters",ForkJoinPool.class); |
1841 |
|
|
1842 |
|
private static long objectFieldOffset(String field, Class<?> klazz) { |
1843 |
|
try { |