root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.117
Committed: Tue Feb 6 04:13:43 2007 UTC (17 years, 3 months ago) by jsr166
Branch: MAIN
Changes since 1.116: +212 -187 lines
Log Message:
More TPE/STPE review rework

File Contents

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/licenses/publicdomain
5 */
6
7 package java.util.concurrent;
8 import java.util.concurrent.locks.*;
9 import java.util.concurrent.atomic.*;
10 import java.util.*;
11
12 /**
13 * An {@link ExecutorService} that executes each submitted task using
14 * one of possibly several pooled threads, normally configured
15 * using {@link Executors} factory methods.
16 *
17 * <p>Thread pools address two different problems: they usually
18 * provide improved performance when executing large numbers of
19 * asynchronous tasks, due to reduced per-task invocation overhead,
20 * and they provide a means of bounding and managing the resources,
21 * including threads, consumed when executing a collection of tasks.
22 * Each {@code ThreadPoolExecutor} also maintains some basic
23 * statistics, such as the number of completed tasks.
24 *
25 * <p>To be useful across a wide range of contexts, this class
26 * provides many adjustable parameters and extensibility
27 * hooks. However, programmers are urged to use the more convenient
28 * {@link Executors} factory methods {@link
29 * Executors#newCachedThreadPool} (unbounded thread pool, with
30 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
31 * (fixed size thread pool) and {@link
32 * Executors#newSingleThreadExecutor} (single background thread), that
33 * preconfigure settings for the most common usage
34 * scenarios. Otherwise, use the following guide when manually
35 * configuring and tuning this class:
36 *
37 * <dl>
38 *
39 * <dt>Core and maximum pool sizes</dt>
40 *
41 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
42 * pool size (see {@link #getPoolSize})
43 * according to the bounds set by
44 * corePoolSize (see {@link #getCorePoolSize}) and
45 * maximumPoolSize (see {@link #getMaximumPoolSize}).
46 *
47 * When a new task is submitted in method {@link #execute}, and fewer
48 * than corePoolSize threads are running, a new thread is created to
49 * handle the request, even if other worker threads are idle. If
50 * there are more than corePoolSize but fewer than maximumPoolSize
51 * threads running, a new thread will be created only if the queue is
52 * full. By setting corePoolSize and maximumPoolSize the same, you
53 * create a fixed-size thread pool. By setting maximumPoolSize to an
54 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
55 * allow the pool to accommodate an arbitrary number of concurrent
56 * tasks. Most typically, core and maximum pool sizes are set only
57 * upon construction, but they may also be changed dynamically using
58 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
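*
* <p>As a reading aid, here is a minimal sketch of configuring these
* bounds directly (the sizes and queue capacity are arbitrary
* illustrative values, not recommendations):
*
* <pre> {@code
* ThreadPoolExecutor pool = new ThreadPoolExecutor(
*     2,                                       // corePoolSize
*     4,                                       // maximumPoolSize
*     60L, TimeUnit.SECONDS,                   // keep-alive for excess threads
*     new LinkedBlockingQueue<Runnable>(100)); // bounded work queue
* // Both bounds may also be adjusted later:
* pool.setCorePoolSize(3);
* pool.setMaximumPoolSize(8);}</pre>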
59 *
60 * <dt>On-demand construction</dt>
61 *
62 * <dd> By default, even core threads are initially created and
63 * started only when new tasks arrive, but this can be overridden
64 * dynamically using method {@link #prestartCoreThread} or {@link
65 * #prestartAllCoreThreads}. You probably want to prestart threads if
66 * you construct the pool with a non-empty queue. </dd>
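*
* <p>For example (illustrative only; {@code pool} denotes any
* already-constructed {@code ThreadPoolExecutor}):
*
* <pre> {@code
* // Start all core threads up front, for instance when the pool was
* // constructed around a pre-populated work queue:
* int started = pool.prestartAllCoreThreads();}</pre>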
67 *
68 * <dt>Creating new threads</dt>
69 *
70 * <dd>New threads are created using a {@link ThreadFactory}. If not
71 * otherwise specified, an {@link Executors#defaultThreadFactory} is
72 * used, which creates threads that are all in the same {@link
73 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
74 * non-daemon status. By supplying a different ThreadFactory, you can
75 * alter the thread's name, thread group, priority, daemon status,
76 * etc. If a {@code ThreadFactory} fails to create a thread when asked
77 * by returning null from {@code newThread}, the executor will
78 * continue, but might not be able to execute any tasks. Threads
79 * should possess the "modifyThread" {@code RuntimePermission}. If
80 * worker threads or other threads using the pool do not possess this
81 * permission, service may be degraded: configuration changes may not
82 * take effect in a timely manner, and a shutdown pool may remain in a
83 * state in which termination is possible but not completed.</dd>
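*
* <p>A minimal sketch of a custom {@code ThreadFactory} (class and
* thread names here are purely illustrative) that renames threads and
* marks them as daemons while delegating creation to the default factory:
*
* <pre> {@code
* class NamedDaemonThreadFactory implements ThreadFactory {
*   private final ThreadFactory delegate = Executors.defaultThreadFactory();
*   private final AtomicInteger count = new AtomicInteger();
*   public Thread newThread(Runnable r) {
*     Thread t = delegate.newThread(r);
*     t.setName("worker-" + count.incrementAndGet());
*     t.setDaemon(true);
*     return t;
*   }
* }}</pre>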
84 *
85 * <dt>Keep-alive times</dt>
86 *
87 * <dd>If the pool currently has more than corePoolSize threads,
88 * excess threads will be terminated if they have been idle for more
89 * than the keepAliveTime (see {@link #getKeepAliveTime}). This
90 * provides a means of reducing resource consumption when the pool is
91 * not being actively used. If the pool becomes more active later, new
92 * threads will be constructed. This parameter can also be changed
93 * dynamically using method {@link #setKeepAliveTime}. Using a value
94 * of {@code Long.MAX_VALUE} {@link TimeUnit#NANOSECONDS} effectively
95 * disables idle threads from ever terminating prior to shut down. By
96 * default, the keep-alive policy applies only when there are more
97 * than corePoolSize threads. But method {@link
98 * #allowCoreThreadTimeOut(boolean)} can be used to apply this
99 * time-out policy to core threads as well, so long as the
100 * keepAliveTime value is non-zero. </dd>
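*
* <p>For example (values illustrative only; {@code pool} denotes any
* {@code ThreadPoolExecutor}):
*
* <pre> {@code
* pool.setKeepAliveTime(30, TimeUnit.SECONDS); // idle excess threads die after 30s
* pool.allowCoreThreadTimeOut(true);           // apply the same policy to core threads}</pre>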
101 *
102 * <dt>Queuing</dt>
103 *
104 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
105 * submitted tasks. The use of this queue interacts with pool sizing:
106 *
107 * <ul>
108 *
109 * <li> If fewer than corePoolSize threads are running, the Executor
110 * always prefers adding a new thread
111 * rather than queuing.</li>
112 *
113 * <li> If corePoolSize or more threads are running, the Executor
114 * always prefers queuing a request rather than adding a new
115 * thread.</li>
116 *
117 * <li> If a request cannot be queued, a new thread is created unless
118 * this would exceed maximumPoolSize, in which case, the task will be
119 * rejected.</li>
120 *
121 * </ul>
122 *
123 * There are three general strategies for queuing:
124 * <ol>
125 *
126 * <li> <em> Direct handoffs.</em> A good default choice for a work
127 * queue is a {@link SynchronousQueue} that hands off tasks to threads
128 * without otherwise holding them. Here, an attempt to queue a task
129 * will fail if no threads are immediately available to run it, so a
130 * new thread will be constructed. This policy avoids lockups when
131 * handling sets of requests that might have internal dependencies.
132 * Direct handoffs generally require unbounded maximumPoolSizes to
133 * avoid rejection of newly submitted tasks. This in turn admits the
134 * possibility of unbounded thread growth when commands continue to
135 * arrive on average faster than they can be processed. </li>
136 *
137 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
138 * example a {@link LinkedBlockingQueue} without a predefined
139 * capacity) will cause new tasks to wait in the queue when all
140 * corePoolSize threads are busy. Thus, no more than corePoolSize
141 * threads will ever be created. (And the value of the maximumPoolSize
142 * therefore doesn't have any effect.) This may be appropriate when
143 * each task is completely independent of others, so tasks cannot
144 * affect each other's execution; for example, in a web page server.
145 * While this style of queuing can be useful in smoothing out
146 * transient bursts of requests, it admits the possibility of
147 * unbounded work queue growth when commands continue to arrive on
148 * average faster than they can be processed. </li>
149 *
150 * <li><em>Bounded queues.</em> A bounded queue (for example, an
151 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
152 * used with finite maximumPoolSizes, but can be more difficult to
153 * tune and control. Queue sizes and maximum pool sizes may be traded
154 * off for each other: Using large queues and small pools minimizes
155 * CPU usage, OS resources, and context-switching overhead, but can
156 * lead to artificially low throughput. If tasks frequently block (for
157 * example if they are I/O bound), a system may be able to schedule
158 * time for more threads than you otherwise allow. Use of small queues
159 * generally requires larger pool sizes, which keeps CPUs busier but
160 * may encounter unacceptable scheduling overhead, which also
161 * decreases throughput. </li>
162 *
163 * </ol>
164 *
165 * </dd>
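*
* <p>As one illustration (not a recommendation), a direct-handoff
* configuration resembling {@link Executors#newCachedThreadPool} pairs a
* {@link SynchronousQueue} with an effectively unbounded maximum pool size:
*
* <pre> {@code
* ThreadPoolExecutor handoffPool = new ThreadPoolExecutor(
*     0, Integer.MAX_VALUE,
*     60L, TimeUnit.SECONDS,
*     new SynchronousQueue<Runnable>());}</pre>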
166 *
167 * <dt>Rejected tasks</dt>
168 *
169 * <dd> New tasks submitted in method {@link #execute} will be
170 * <em>rejected</em> when the Executor has been shut down, and also
171 * when the Executor uses finite bounds for both maximum threads and
172 * work queue capacity, and is saturated. In either case, the {@code
173 * execute} method invokes the {@link
174 * RejectedExecutionHandler#rejectedExecution} method of its {@link
175 * RejectedExecutionHandler}. Four predefined handler policies are
176 * provided:
177 *
178 * <ol>
179 *
180 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
181 * handler throws a runtime {@link RejectedExecutionException} upon
182 * rejection. </li>
183 *
184 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
185 * that invokes {@code execute} itself runs the task. This provides a
186 * simple feedback control mechanism that will slow down the rate that
187 * new tasks are submitted. </li>
188 *
189 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
190 * cannot be executed is simply dropped. </li>
191 *
192 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
193 * executor is not shut down, the task at the head of the work queue
194 * is dropped, and then execution is retried (which can fail again,
195 * causing this to be repeated). </li>
196 *
197 * </ol>
198 *
199 * It is possible to define and use other kinds of {@link
200 * RejectedExecutionHandler} classes. Doing so requires some care,
201 * especially when policies are designed to work only under particular
202 * capacity or queuing policies. </dd>
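*
* <p>For example (illustrative only; {@code pool} denotes any {@code
* ThreadPoolExecutor}), installing one of the predefined policies, or a
* trivial custom handler that merely records rejections:
*
* <pre> {@code
* pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
*
* // Alternatively, a custom handler:
* pool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
*   public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
*     System.err.println("Rejected: " + r);
*   }
* });}</pre>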
203 *
204 * <dt>Hook methods</dt>
205 *
206 * <dd>This class provides {@code protected} overridable {@link
207 * #beforeExecute} and {@link #afterExecute} methods that are called
208 * before and after execution of each task. These can be used to
209 * manipulate the execution environment; for example, reinitializing
210 * ThreadLocals, gathering statistics, or adding log
211 * entries. Additionally, method {@link #terminated} can be overridden
212 * to perform any special processing that needs to be done once the
213 * Executor has fully terminated.
214 *
215 * <p>If hook or callback methods throw exceptions, internal worker
216 * threads may in turn fail and abruptly terminate.</dd>
217 *
218 * <dt>Queue maintenance</dt>
219 *
220 * <dd> Method {@link #getQueue} allows access to the work queue for
221 * purposes of monitoring and debugging. Use of this method for any
222 * other purpose is strongly discouraged. Two supplied methods,
223 * {@link #remove} and {@link #purge} are available to assist in
224 * storage reclamation when large numbers of queued tasks become
225 * cancelled.</dd>
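*
* <p>For example (illustrative only; {@code pool} and {@code future}
* denote an executor and a previously submitted {@link Future}):
*
* <pre> {@code
* future.cancel(false); // the cancelled task may still sit in the work queue
* pool.purge();         // removes cancelled Future tasks from the queue}</pre>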
226 *
227 * <dt>Finalization</dt>
228 *
229 * <dd> A pool that is no longer referenced in a program <em>AND</em>
230 * has no remaining threads will be {@code shutdown} automatically. If
231 * you would like to ensure that unreferenced pools are reclaimed even
232 * if users forget to call {@link #shutdown}, then you must arrange
233 * that unused threads eventually die, by setting appropriate
234 * keep-alive times, using a lower bound of zero core threads and/or
235 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
236 *
237 * </dl>
238 *
239 * <p> <b>Extension example</b>. Most extensions of this class
240 * override one or more of the protected hook methods. For example,
241 * here is a subclass that adds a simple pause/resume feature:
242 *
243 * <pre> {@code
244 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
245 * private boolean isPaused;
246 * private ReentrantLock pauseLock = new ReentrantLock();
247 * private Condition unpaused = pauseLock.newCondition();
248 *
249 * public PausableThreadPoolExecutor(...) { super(...); }
250 *
251 * protected void beforeExecute(Thread t, Runnable r) {
252 * super.beforeExecute(t, r);
253 * pauseLock.lock();
254 * try {
255 * while (isPaused) unpaused.await();
256 * } catch (InterruptedException ie) {
257 * t.interrupt();
258 * } finally {
259 * pauseLock.unlock();
260 * }
261 * }
262 *
263 * public void pause() {
264 * pauseLock.lock();
265 * try {
266 * isPaused = true;
267 * } finally {
268 * pauseLock.unlock();
269 * }
270 * }
271 *
272 * public void resume() {
273 * pauseLock.lock();
274 * try {
275 * isPaused = false;
276 * unpaused.signalAll();
277 * } finally {
278 * pauseLock.unlock();
279 * }
280 * }
281 * }}</pre>
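*
* <p>A typical (illustrative) use of the subclass above:
*
* <pre> {@code
* PausableThreadPoolExecutor pool = ...;
* pool.pause();   // subsequently dequeued tasks wait in beforeExecute
* pool.resume();  // signals waiting workers to proceed}</pre>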
282 *
283 * @since 1.5
284 * @author Doug Lea
285 */
286 public class ThreadPoolExecutor extends AbstractExecutorService {
287 /**
288 * The main pool control state, ctl, is an atomic integer packing
289 * two conceptual fields
290 * workerCount, indicating the effective number of threads
291 * runState, indicating whether running, shutting down etc
292 *
293 * In order to pack them into one int, we limit workerCount to
294 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
295 * billion) otherwise representable. If this is ever an issue in
296 * the future, the variable can be changed to be an AtomicLong,
297 * and the shift/mask constants below adjusted. But until the need
298 * arises, this code is a bit faster and simpler using an int.
299 *
300 * The workerCount is the number of workers that have been
301 * permitted to start and not permitted to stop. The value may be
302 * transiently different from the actual number of live threads,
303 * for example when a ThreadFactory fails to create a thread when
304 * asked, and when exiting threads are still performing
305 * bookkeeping before terminating. The user-visible pool size is
306 * reported as the current size of the workers set.
307 *
308 * The runState provides the main lifecycle control, taking on values:
309 *
310 * RUNNING: Accept new tasks and process queued tasks
311 * SHUTDOWN: Don't accept new tasks, but process queued tasks
312 * STOP: Don't accept new tasks, don't process queued tasks,
313 * and interrupt in-progress tasks
314 * TIDYING: All tasks have terminated, workerCount is zero,
315 * the thread transitioning to state TIDYING
316 * will run the terminated() hook method
317 * TERMINATED: terminated() has completed
318 *
319 * The numerical order among these values matters, to allow
320 * ordered comparisons. The runState monotonically increases over
321 * time, but need not hit each state. The transitions are:
322 *
323 * RUNNING -> SHUTDOWN
324 * On invocation of shutdown(), perhaps implicitly in finalize()
325 * (RUNNING or SHUTDOWN) -> STOP
326 * On invocation of shutdownNow()
327 * SHUTDOWN -> TIDYING
328 * When both queue and pool are empty
329 * STOP -> TIDYING
330 * When pool is empty
331 * TIDYING -> TERMINATED
332 * When the terminated() hook method has completed
333 *
334 * Threads waiting in awaitTermination() will return when the
335 * state reaches TERMINATED.
336 *
337 * Detecting the transition from SHUTDOWN to TIDYING is less
338 * straightforward than you'd like because the queue may become
339 * empty after non-empty and vice versa during SHUTDOWN state, but
340 * we can only terminate if, after seeing that it is empty, we see
341 * that workerCount is 0 (which sometimes entails a recheck -- see
342 * below).
343 */
344 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
345 private static final int COUNT_BITS = Integer.SIZE - 3;
346 private static final int CAPACITY = (1 << COUNT_BITS) - 1;
347
348 // runState is stored in the high-order bits
349 private static final int RUNNING = -1 << COUNT_BITS;
350 private static final int SHUTDOWN = 0 << COUNT_BITS;
351 private static final int STOP = 1 << COUNT_BITS;
352 private static final int TIDYING = 2 << COUNT_BITS;
353 private static final int TERMINATED = 3 << COUNT_BITS;
354
355 // Packing and unpacking ctl
356 private static int runStateOf(int c) { return c & ~CAPACITY; }
357 private static int workerCountOf(int c) { return c & CAPACITY; }
358 private static int ctlOf(int rs, int wc) { return rs | wc; }
359
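    // Worked illustration of the packing (a reading aid only; all values
    // follow directly from the constants above):
    //   COUNT_BITS = 29, CAPACITY = 0x1FFFFFFF ((1 << 29) - 1)
    //   RUNNING = 0xE0000000, SHUTDOWN = 0x00000000, STOP = 0x20000000,
    //   TIDYING = 0x40000000, TERMINATED = 0x60000000
    //   A RUNNING pool with 3 workers has ctl value 0xE0000003, so
    //   runStateOf(ctl) == RUNNING and workerCountOf(ctl) == 3.
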
360 /*
361 * Bit field accessors that don't require unpacking ctl.
362 * These depend on the bit layout and on workerCount being never negative.
363 */
364
365 private static boolean runStateLessThan(int c, int s) {
366 return c < s;
367 }
368
369 private static boolean runStateAtLeast(int c, int s) {
370 return c >= s;
371 }
372
373 private static boolean isRunning(int c) {
374 return c < SHUTDOWN;
375 }
376
377 /**
378 * Attempt to CAS-increment the workerCount field of ctl.
379 */
380 private boolean compareAndIncrementWorkerCount(int expect) {
381 return ctl.compareAndSet(expect, expect + 1);
382 }
383
384 /**
385 * Attempt to CAS-decrement the workerCount field of ctl.
386 */
387 private boolean compareAndDecrementWorkerCount(int expect) {
388 return ctl.compareAndSet(expect, expect - 1);
389 }
390
391 /**
392 * Decrements the workerCount field of ctl. This is called only on
393 * abrupt termination of a thread (see processWorkerExit). Other
394 * decrements are performed within getTask.
395 */
396 private void decrementWorkerCount() {
397 do {} while (! compareAndDecrementWorkerCount(ctl.get()));
398 }
399
400 /**
401 * The queue used for holding tasks and handing off to worker
402 * threads. We do not require that workQueue.poll() returning
403 * null necessarily means that workQueue.isEmpty(), so we rely
404 * solely on isEmpty to see if the queue is empty (which we must
405 * do for example when deciding whether to transition from
406 * SHUTDOWN to TIDYING). This accommodates special-purpose
407 * queues such as DelayQueues for which poll() is allowed to
408 * return null even if it may later return non-null when delays
409 * expire.
410 */
411 private final BlockingQueue<Runnable> workQueue;
412
413 /**
414 * Lock held on access to workers set and related bookkeeping.
415 * While we could use a concurrent set of some sort, it turns out
416 * to be generally preferable to use a lock. Among the reasons is
417 * that this serializes interruptIdleWorkers, which avoids
418 * unnecessary interrupt storms, especially during shutdown.
419 * Otherwise exiting threads would concurrently interrupt those
420 * that have not yet interrupted. It also simplifies some of the
421 * associated statistics bookkeeping of largestPoolSize etc. We
422 * also hold mainLock on shutdown and shutdownNow, for the sake of
423 * ensuring workers set is stable while separately checking
424 * permission to interrupt and actually interrupting.
425 */
426 private final ReentrantLock mainLock = new ReentrantLock();
427
428 /**
429 * Set containing all worker threads in pool. Accessed only when
430 * holding mainLock.
431 */
432 private final HashSet<Worker> workers = new HashSet<Worker>();
433
434 /**
435 * Wait condition to support awaitTermination
436 */
437 private final Condition termination = mainLock.newCondition();
438
439 /**
440 * Tracks largest attained pool size. Accessed only under
441 * mainLock.
442 */
443 private int largestPoolSize;
444
445 /**
446 * Counter for completed tasks. Updated only on termination of
447 * worker threads. Accessed only under mainLock.
448 */
449 private long completedTaskCount;
450
451 /*
452 * All user control parameters are declared as volatiles so that
453 * ongoing actions are based on freshest values, but without need
454 * for locking, since no internal invariants depend on them
455 * changing synchronously with respect to other actions.
456 */
457
458 /**
459 * Factory for new threads. All threads are created using this
460 * factory (via method addWorker). All callers must be prepared
461 * for addWorker to fail, which may reflect a system or user's
462 * policy limiting the number of threads. Even though it is not
463 * treated as an error, failure to create threads may result in
464 * new tasks being rejected or existing ones remaining stuck in
465 * the queue. On the other hand, no special precautions exist to
466 * handle OutOfMemoryErrors that might be thrown while trying to
467 * create threads, since there is generally no recourse from
468 * within this class.
469 */
470 private volatile ThreadFactory threadFactory;
471
472 /**
473 * Handler called when saturated or shutdown in execute.
474 */
475 private volatile RejectedExecutionHandler handler;
476
477 /**
478 * Timeout in nanoseconds for idle threads waiting for work.
479 * Threads use this timeout when there are more than corePoolSize
480 * threads present or if allowCoreThreadTimeOut is set. Otherwise they wait
481 * forever for new work.
482 */
483 private volatile long keepAliveTime;
484
485 /**
486 * If false (default), core threads stay alive even when idle.
487 * If true, core threads use keepAliveTime to time out waiting
488 * for work.
489 */
490 private volatile boolean allowCoreThreadTimeOut;
491
492 /**
493 * Core pool size is the minimum number of workers to keep alive
494 * (and not allow to time out etc) unless allowCoreThreadTimeOut
495 * is set, in which case the minimum is zero.
496 */
497 private volatile int corePoolSize;
498
499 /**
500 * Maximum pool size. Note that the actual maximum is internally
501 * bounded by CAPACITY.
502 */
503 private volatile int maximumPoolSize;
504
505 /**
506 * The default rejected execution handler
507 */
508 private static final RejectedExecutionHandler defaultHandler =
509 new AbortPolicy();
510
511 /**
512 * Permission required for callers of shutdown and shutdownNow.
513 * We additionally require (see checkShutdownAccess) that callers
514 * have permission to actually interrupt threads in the worker set
515 * (as governed by Thread.interrupt, which relies on
516 * ThreadGroup.checkAccess, which in turn relies on
517 * SecurityManager.checkAccess). Shutdowns are attempted only if
518 * these checks pass.
519 *
520 * All actual invocations of Thread.interrupt (see
521 * interruptIdleWorkers and interruptWorkers) ignore
522 * SecurityExceptions, meaning that the attempted interrupts
523 * silently fail. In the case of shutdown, they should not fail
524 * unless the SecurityManager has inconsistent policies, sometimes
525 * allowing access to a thread and sometimes not. In such cases,
526 * failure to actually interrupt threads may disable or delay full
527 * termination. Other uses of interruptIdleWorkers are advisory,
528 * and failure to actually interrupt will merely delay response to
529 * configuration changes so is not handled exceptionally.
530 */
531 private static final RuntimePermission shutdownPerm =
532 new RuntimePermission("modifyThread");
533
534 /**
535 * Class Worker mainly maintains interrupt control state for
536 * threads running tasks, along with other minor bookkeeping. This
537 * class opportunistically extends ReentrantLock to simplify
538 * acquiring and releasing a lock surrounding each task execution.
539 * This protects against interrupts that are intended to wake up a
540 * worker thread waiting for a task from instead interrupting a
541 * task being run.
542 */
543 private final class Worker extends ReentrantLock implements Runnable {
544 /**
545 * This class will never be serialized, but we provide a
546 * serialVersionUID to suppress a javac warning.
547 */
548 private static final long serialVersionUID = 6138294804551838833L;
549
550 /** Thread this worker is running in. Null if factory fails. */
551 final Thread thread;
552 /** Initial task to run. Possibly null. */
553 Runnable firstTask;
554 /** Per-thread task counter */
555 volatile long completedTasks;
556
557 /**
558 * Creates with given first task and thread from ThreadFactory.
559 * @param firstTask the first task (null if none)
560 */
561 Worker(Runnable firstTask) {
562 this.firstTask = firstTask;
563 this.thread = getThreadFactory().newThread(this);
564 }
565
566 /** Delegates main run loop to outer runWorker */
567 public void run() {
568 runWorker(this);
569 }
570 }
571
572 /*
573 * Methods for setting control state
574 */
575
576 /**
577 * Transitions runState to given target, or leaves it alone if
578 * already at least the given target.
579 *
580 * @param targetState the desired state, either SHUTDOWN or STOP
581 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
582 */
583 private void advanceRunState(int targetState) {
584 for (;;) {
585 int c = ctl.get();
586 if (runStateAtLeast(c, targetState) ||
587 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
588 break;
589 }
590 }
591
592 /**
593 * Transitions to TERMINATED state if either (SHUTDOWN and pool
594 * and queue empty) or (STOP and pool empty). If otherwise
595 * eligible to terminate but workerCount is nonzero, interrupts an
596 * idle worker to ensure that shutdown signals propagate. This
597 * method must be called following any action that might make
598 * termination possible -- reducing worker count or removing tasks
599 * from the queue during shutdown. The method is non-private to
600 * allow access from ScheduledThreadPoolExecutor.
601 */
602 final void tryTerminate() {
603 for (;;) {
604 int c = ctl.get();
605 if (isRunning(c)
606 || runStateAtLeast(c, TIDYING)
607 || (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
608 return;
609 if (workerCountOf(c) != 0) { // Eligible to terminate
610 interruptIdleWorkers(ONLY_ONE);
611 return;
612 }
613 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
614 mainLock.lock();
615 try {
616 try {
617 terminated();
618 } finally {
619 ctl.set(ctlOf(TERMINATED, 0));
620 termination.signalAll();
621 }
622 } finally {
623 mainLock.unlock();
624 }
625 return;
626 }
627 // else retry on failed CAS
628 }
629 }
630
631 /*
632 * Methods for controlling interrupts to worker threads.
633 */
634
635 /**
636 * If there is a security manager, makes sure caller has
637 * permission to shut down threads in general (see shutdownPerm).
638 * If this passes, additionally makes sure the caller is allowed
639 * to interrupt each worker thread. This might not be true even if
640 * the first check passed, if the SecurityManager treats some threads
641 * specially.
642 */
643 private void checkShutdownAccess() {
644 SecurityManager security = System.getSecurityManager();
645 if (security != null) {
646 security.checkPermission(shutdownPerm);
647 final ReentrantLock mainLock = this.mainLock;
648 mainLock.lock();
649 try {
650 for (Worker w : workers)
651 security.checkAccess(w.thread);
652 } finally {
653 mainLock.unlock();
654 }
655 }
656 }
657
658 /**
659 * Interrupts all threads, even if active. Ignores SecurityExceptions
660 * (in which case some threads may remain uninterrupted).
661 */
662 private void interruptWorkers() {
663 final ReentrantLock mainLock = this.mainLock;
664 mainLock.lock();
665 try {
666 for (Worker w : workers) {
667 try {
668 w.thread.interrupt();
669 } catch (SecurityException ignore) {
670 }
671 }
672 } finally {
673 mainLock.unlock();
674 }
675 }
676
677 /**
678 * Interrupts threads that might be waiting for tasks (as
679 * indicated by not being locked) so they can check for
680 * termination or configuration changes. Ignores
681 * SecurityExceptions (in which case some threads may remain
682 * uninterrupted).
683 *
684 * @param onlyOne If true, interrupt at most one worker. This is
685 * called only from tryTerminate when termination is otherwise
686 * enabled but there are still other workers. In this case, at
687 * most one waiting worker is interrupted to propagate shutdown
688 * signals in case all threads are currently waiting.
689 * Interrupting any arbitrary thread ensures that newly arriving
690 * workers since shutdown began will also eventually exit.
691 * To guarantee eventual termination, it suffices to always
692 * interrupt only one idle worker, but shutdown() interrupts all
693 * idle workers so that redundant workers exit promptly, not
694 * waiting for a straggler task to finish.
695 */
696 private void interruptIdleWorkers(boolean onlyOne) {
697 final ReentrantLock mainLock = this.mainLock;
698 mainLock.lock();
699 try {
700 for (Worker w : workers) {
701 Thread t = w.thread;
702 if (!t.isInterrupted() && w.tryLock()) {
703 try {
704 t.interrupt();
705 } catch (SecurityException ignore) {
706 } finally {
707 w.unlock();
708 }
709 }
710 if (onlyOne)
711 break;
712 }
713 } finally {
714 mainLock.unlock();
715 }
716 }
717
718 private void interruptIdleWorkers() { interruptIdleWorkers(false); }
719 private static final boolean ONLY_ONE = true;
720
721 /**
722 * Ensures that unless the pool is stopping, the current thread
723 * does not have its interrupt set. This requires a double-check
724 * of state in case the interrupt was cleared concurrently with a
725 * shutdownNow -- if so, the interrupt is re-enabled.
726 */
727 private void clearInterruptsForTaskRun() {
728 if (runStateLessThan(ctl.get(), STOP) &&
729 Thread.interrupted() &&
730 runStateAtLeast(ctl.get(), STOP))
731 Thread.currentThread().interrupt();
732 }
733
734 /*
735 * Misc utilities, most of which are also exported to
736 * ScheduledThreadPoolExecutor
737 */
738
739 /**
740 * Invokes the rejected execution handler for the given command.
741 * Package-protected for use by ScheduledThreadPoolExecutor.
742 */
743 final void reject(Runnable command) {
744 handler.rejectedExecution(command, this);
745 }
746
747 /**
748 * Performs any further cleanup following run state transition on
749 * invocation of shutdown. A no-op here, but used by
750 * ScheduledThreadPoolExecutor to cancel delayed tasks.
751 */
752 void onShutdown() {
753 }
754
755 /**
756 * State check needed by ScheduledThreadPoolExecutor to
757 * enable running tasks during shutdown.
758 *
759 * @param shutdownOK true if this method should return true when the state is SHUTDOWN
760 */
761 final boolean isRunningOrShutdown(boolean shutdownOK) {
762 int rs = runStateOf(ctl.get());
763 return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
764 }
765
766 /**
767 * Drains the task queue into a new list, normally using
768 * drainTo. But if the queue is a DelayQueue or any other kind of
769 * queue for which poll or drainTo may fail to remove some
770 * elements, it deletes them one by one.
771 */
772 private List<Runnable> drainQueue() {
773 BlockingQueue<Runnable> q = workQueue;
774 List<Runnable> taskList = new ArrayList<Runnable>();
775 q.drainTo(taskList);
776 if (!q.isEmpty()) {
777 for (Runnable r : q.toArray(new Runnable[0])) {
778 if (q.remove(r))
779 taskList.add(r);
780 }
781 }
782 return taskList;
783 }
784
785 /*
786 * Methods for creating, running and cleaning up after workers
787 */
788
789 /**
790 * Checks if a new worker can be added with respect to current
791 * pool state and the given bound (either core or maximum). If so,
792 * the worker count is adjusted accordingly, and, if possible, a
793 * new worker is created and started running firstTask as its
794 * first task. This method returns false if the pool is stopped or
795 * eligible to shut down. It also returns false if the thread
796 * factory fails to create a thread when asked, which requires a
797 * backout of workerCount, and a recheck for termination, in case
798 * the existence of this worker was holding up termination.
799 *
800 * @param firstTask the task the new thread should run first (or
801 * null if none). Workers are created with an initial first task
802 * (in method execute()) to bypass queuing when there are fewer
803 * than corePoolSize threads (in which case we always start one),
804 * or when the queue is full (in which case we must bypass the queue).
805 * Initially idle threads are usually created via
806 * prestartCoreThread or to replace other dying workers.
807 *
808 * @param core if true use corePoolSize as bound, else
809 * maximumPoolSize. (A boolean indicator is used here rather than a
810 * value to ensure reads of fresh values after checking other pool
811 * state).
812 * @return true if successful
813 */
814 private boolean addWorker(Runnable firstTask, boolean core) {
815 for (;;) {
816 int c = ctl.get();
817 // Check if queue empty only if necessary.
818 if (runStateAtLeast(c, SHUTDOWN)
819 && ! (runStateOf(c) == SHUTDOWN
820 && firstTask == null
821 && ! workQueue.isEmpty()))
822 return false;
823 int wc = workerCountOf(c);
824 if (wc >= CAPACITY ||
825 wc >= (core ? corePoolSize : maximumPoolSize))
826 return false;
827 if (compareAndIncrementWorkerCount(c))
828 break;
829 }
830
831 Worker w = new Worker(firstTask);
832 Thread t = w.thread;
833
834 final ReentrantLock mainLock = this.mainLock;
835 mainLock.lock();
836 try {
837 // Back out on ThreadFactory failure or if
838 // shut down before lock acquired.
839 int c = ctl.get();
840 if (t == null
841 || (runStateAtLeast(c, SHUTDOWN)
842 && (! (runStateOf(c) == SHUTDOWN
843 && firstTask == null)))) {
844 decrementWorkerCount();
845 tryTerminate();
846 return false;
847 }
848 workers.add(w);
849 int s = workers.size();
850 if (s > largestPoolSize)
851 largestPoolSize = s;
852 } finally {
853 mainLock.unlock();
854 }
855
856 t.start();
857 return true;
858 }
859
860 /**
861 * Performs cleanup and bookkeeping for a dying worker. Called
862 * only from worker threads. Unless completedAbruptly is set,
863 * assumes that workerCount has already been adjusted to account
864 * for exit. This method removes the thread from the worker set, and
865 * possibly terminates the pool or replaces the worker if either
866 * it exited due to user task exception or if fewer than
867 * corePoolSize workers are running or queue is non-empty but
868 * there are no workers.
869 *
870 * @param w the worker
871 * @param completedAbruptly if the worker died due to user exception
872 */
873 private void processWorkerExit(Worker w, boolean completedAbruptly) {
874 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
875 decrementWorkerCount();
876
877 final ReentrantLock mainLock = this.mainLock;
878 mainLock.lock();
879 try {
880 completedTaskCount += w.completedTasks;
881 workers.remove(w);
882 } finally {
883 mainLock.unlock();
884 }
885
886 tryTerminate();
887
888 int c = ctl.get();
889 if (runStateLessThan(c, STOP)) {
890 if (!completedAbruptly) {
891 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
892 if (min == 0 && ! workQueue.isEmpty())
893 min = 1;
894 if (workerCountOf(c) >= min)
895 return; // replacement not needed
896 }
897 addWorker(null, false);
898 }
899 }
900
901 /**
902 * Performs blocking or timed wait for a task, depending on
903 * current configuration settings, or returns null if this worker
904 * must exit because of any of:
905 * 1. There are more than maximumPoolSize workers (due to
906 * a call to setMaximumPoolSize).
907 * 2. The pool is stopped.
908 * 3. The queue is empty, and either the pool is shutdown,
909 * or the thread has already timed out at least once
910 * waiting for a task, and would otherwise enter another
911 * timed wait.
912 *
913 * @return task, or null if the worker must exit, in which case
914 * workerCount is decremented
915 */
916 private Runnable getTask() {
917 /*
918 * Variable "empty" tracks whether the queue appears to be
919 * empty in case we need to know to check exit. This is set
920 * true on time-out from timed poll as an indicator of likely
921 * emptiness, in which case it is rechecked explicitly via
922 * isEmpty when deciding whether to exit. Emptiness must also
923 * be checked in state SHUTDOWN. The variable is initialized
924 * false to indicate lack of prior timeout, and left false
925 * until otherwise required to check.
926 */
927 boolean empty = false;
928 for (;;) {
929 int c = ctl.get();
930 int rs = runStateOf(c);
931 if (rs == SHUTDOWN || empty) {
932 empty = workQueue.isEmpty();
933 if (runStateOf(c = ctl.get()) != rs)
934 continue; // retry if state changed
935 }
936
937 int wc = workerCountOf(c);
938 boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
939
940 // Try to exit if too many threads, shutting down, and/or timed out
941 if (wc > maximumPoolSize || rs > SHUTDOWN ||
942 (empty && (timed || rs == SHUTDOWN))) {
943 if (compareAndDecrementWorkerCount(c))
944 return null;
945 else
946 continue; // retry on CAS failure
947 }
948
949 try {
950 Runnable r = timed ?
951 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
952 workQueue.take();
953 if (r != null)
954 return r;
955 empty = true; // queue probably empty; recheck above
956 } catch (InterruptedException retry) {
957 }
958 }
959 }
960
961 /**
962 * Main worker run loop. Repeatedly gets tasks from queue and
963 * executes them, while coping with a number of issues:
964 *
965 * 1. We may start out with an initial task, in which case we
966 * don't need to get the first one. Otherwise, as long as pool is
967 * running, we get tasks from getTask. If it returns null then the
968 * worker exits due to changed pool state or configuration
969 * parameters. Other exits result from exception throws in
970 * external code, in which case completedAbruptly holds, which
971 * usually leads processWorkerExit to replace this thread.
972 *
973 * 2. Before running any task, the lock is acquired to prevent
974 * other pool interrupts while the task is executing, and
975 * clearInterruptsForTaskRun called to ensure that unless pool is
976 * stopping, this thread does not have its interrupt set.
977 *
978 * 3. Each task run is preceded by a call to beforeExecute, which
979 * might throw an exception, in which case we cause thread to die
980 * (breaking loop with completedAbruptly true) without processing
981 * the task.
982 *
983 * 4. Assuming beforeExecute completes normally, we run the task,
984 * gathering any of its thrown exceptions to send to
985 * afterExecute. We separately handle RuntimeException, Error
986 * (both of which the specs guarantee that we trap) and arbitrary
987 * Throwables. Because we cannot rethrow Throwables within
988 * Runnable.run, we wrap them within Errors on the way out (to the
989 * thread's UncaughtExceptionHandler). Any thrown exception also
990 * conservatively causes thread to die.
991 *
992 * 5. After task.run completes, we call afterExecute, which may
993 * also throw an exception, which will also cause thread to
994 * die. According to JLS Sec 14.20, this exception is the one that
995 * will be in effect even if task.run throws.
996 *
997 * The net effect of the exception mechanics is that afterExecute
998 * and the thread's UncaughtExceptionHandler have as accurate
999 * information as we can provide about any problems encountered by
1000 * user code.
1001 *
1002 * @param w the worker
1003 */
1004 final void runWorker(Worker w) {
1005 Runnable task = w.firstTask;
1006 w.firstTask = null;
1007 boolean completedAbruptly = true;
1008 try {
1009 while (task != null || (task = getTask()) != null) {
1010 w.lock();
1011 clearInterruptsForTaskRun();
1012 try {
1013 beforeExecute(w.thread, task);
1014 Throwable thrown = null;
1015 try {
1016 task.run();
1017 } catch (RuntimeException x) {
1018 thrown = x; throw x;
1019 } catch (Error x) {
1020 thrown = x; throw x;
1021 } catch (Throwable x) {
1022 thrown = x; throw new Error(x);
1023 } finally {
1024 afterExecute(task, thrown);
1025 }
1026 } finally {
1027 task = null;
1028 w.completedTasks++;
1029 w.unlock();
1030 }
1031 }
1032 completedAbruptly = false;
1033 } finally {
1034 processWorkerExit(w, completedAbruptly);
1035 }
1036 }
1037
1038 // Public constructors and methods
1039
1040 /**
1041 * Creates a new {@code ThreadPoolExecutor} with the given initial
1042 * parameters and default thread factory and rejected execution handler.
1043 * It may be more convenient to use one of the {@link Executors} factory
1044 * methods instead of this general purpose constructor.
1045 *
1046 * @param corePoolSize the number of threads to keep in the pool, even
1047 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1048 * @param maximumPoolSize the maximum number of threads to allow in the
1049 * pool
1050 * @param keepAliveTime when the number of threads is greater than
1051 * the core, this is the maximum time that excess idle threads
1052 * will wait for new tasks before terminating.
1053 * @param unit the time unit for the {@code keepAliveTime} argument
1054 * @param workQueue the queue to use for holding tasks before they are
1055 * executed. This queue will hold only the {@code Runnable}
1056 * tasks submitted by the {@code execute} method.
1057 * @throws IllegalArgumentException if one of the following holds:<br>
1058 * {@code corePoolSize < 0}<br>
1059 * {@code keepAliveTime < 0}<br>
1060 * {@code maximumPoolSize <= 0}<br>
1061 * {@code maximumPoolSize < corePoolSize}
1062 * @throws NullPointerException if {@code workQueue} is null
1063 */
1064 public ThreadPoolExecutor(int corePoolSize,
1065 int maximumPoolSize,
1066 long keepAliveTime,
1067 TimeUnit unit,
1068 BlockingQueue<Runnable> workQueue) {
1069 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1070 Executors.defaultThreadFactory(), defaultHandler);
1071 }
1072
1073 /**
1074 * Creates a new {@code ThreadPoolExecutor} with the given initial
1075 * parameters and default rejected execution handler.
1076 *
1077 * @param corePoolSize the number of threads to keep in the pool, even
1078 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1079 * @param maximumPoolSize the maximum number of threads to allow in the
1080 * pool
1081 * @param keepAliveTime when the number of threads is greater than
1082 * the core, this is the maximum time that excess idle threads
1083 * will wait for new tasks before terminating.
1084 * @param unit the time unit for the {@code keepAliveTime} argument
1085 * @param workQueue the queue to use for holding tasks before they are
1086 * executed. This queue will hold only the {@code Runnable}
1087 * tasks submitted by the {@code execute} method.
1088 * @param threadFactory the factory to use when the executor
1089 * creates a new thread
1090 * @throws IllegalArgumentException if one of the following holds:<br>
1091 * {@code corePoolSize < 0}<br>
1092 * {@code keepAliveTime < 0}<br>
1093 * {@code maximumPoolSize <= 0}<br>
1094 * {@code maximumPoolSize < corePoolSize}
1095 * @throws NullPointerException if {@code workQueue}
1096 * or {@code threadFactory} is null
1097 */
1098 public ThreadPoolExecutor(int corePoolSize,
1099 int maximumPoolSize,
1100 long keepAliveTime,
1101 TimeUnit unit,
1102 BlockingQueue<Runnable> workQueue,
1103 ThreadFactory threadFactory) {
1104 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1105 threadFactory, defaultHandler);
1106 }
1107
1108 /**
1109 * Creates a new {@code ThreadPoolExecutor} with the given initial
1110 * parameters and default thread factory.
1111 *
1112 * @param corePoolSize the number of threads to keep in the pool, even
1113 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1114 * @param maximumPoolSize the maximum number of threads to allow in the
1115 * pool
1116 * @param keepAliveTime when the number of threads is greater than
1117 * the core, this is the maximum time that excess idle threads
1118 * will wait for new tasks before terminating.
1119 * @param unit the time unit for the {@code keepAliveTime} argument
1120 * @param workQueue the queue to use for holding tasks before they are
1121 * executed. This queue will hold only the {@code Runnable}
1122 * tasks submitted by the {@code execute} method.
1123 * @param handler the handler to use when execution is blocked
1124 * because the thread bounds and queue capacities are reached
1125 * @throws IllegalArgumentException if one of the following holds:<br>
1126 * {@code corePoolSize < 0}<br>
1127 * {@code keepAliveTime < 0}<br>
1128 * {@code maximumPoolSize <= 0}<br>
1129 * {@code maximumPoolSize < corePoolSize}
1130 * @throws NullPointerException if {@code workQueue}
1131 * or {@code handler} is null
1132 */
1133 public ThreadPoolExecutor(int corePoolSize,
1134 int maximumPoolSize,
1135 long keepAliveTime,
1136 TimeUnit unit,
1137 BlockingQueue<Runnable> workQueue,
1138 RejectedExecutionHandler handler) {
1139 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1140 Executors.defaultThreadFactory(), handler);
1141 }
1142
1143 /**
1144 * Creates a new {@code ThreadPoolExecutor} with the given initial
1145 * parameters.
1146 *
1147 * @param corePoolSize the number of threads to keep in the pool, even
1148 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1149 * @param maximumPoolSize the maximum number of threads to allow in the
1150 * pool
1151 * @param keepAliveTime when the number of threads is greater than
1152 * the core, this is the maximum time that excess idle threads
1153 * will wait for new tasks before terminating.
1154 * @param unit the time unit for the {@code keepAliveTime} argument
1155 * @param workQueue the queue to use for holding tasks before they are
1156 * executed. This queue will hold only the {@code Runnable}
1157 * tasks submitted by the {@code execute} method.
1158 * @param threadFactory the factory to use when the executor
1159 * creates a new thread
1160 * @param handler the handler to use when execution is blocked
1161 * because the thread bounds and queue capacities are reached
1162 * @throws IllegalArgumentException if one of the following holds:<br>
1163 * {@code corePoolSize < 0}<br>
1164 * {@code keepAliveTime < 0}<br>
1165 * {@code maximumPoolSize <= 0}<br>
1166 * {@code maximumPoolSize < corePoolSize}
1167 * @throws NullPointerException if {@code workQueue}
1168 * or {@code threadFactory} or {@code handler} is null
1169 */
1170 public ThreadPoolExecutor(int corePoolSize,
1171 int maximumPoolSize,
1172 long keepAliveTime,
1173 TimeUnit unit,
1174 BlockingQueue<Runnable> workQueue,
1175 ThreadFactory threadFactory,
1176 RejectedExecutionHandler handler) {
1177 if (corePoolSize < 0 ||
1178 maximumPoolSize <= 0 ||
1179 maximumPoolSize < corePoolSize ||
1180 keepAliveTime < 0)
1181 throw new IllegalArgumentException();
1182 if (workQueue == null || threadFactory == null || handler == null)
1183 throw new NullPointerException();
1184 this.corePoolSize = corePoolSize;
1185 this.maximumPoolSize = maximumPoolSize;
1186 this.workQueue = workQueue;
1187 this.keepAliveTime = unit.toNanos(keepAliveTime);
1188 this.threadFactory = threadFactory;
1189 this.handler = handler;
1190 }
1191
1192 /**
1193 * Executes the given task sometime in the future. The task
1194 * may execute in a new thread or in an existing pooled thread.
1195 *
1196 * If the task cannot be submitted for execution, either because this
1197 * executor has been shut down or because its capacity has been reached,
1198 * the task is handled by the current {@code RejectedExecutionHandler}.
1199 *
1200 * @param command the task to execute
1201 * @throws RejectedExecutionException at discretion of
1202 * {@code RejectedExecutionHandler}, if the task
1203 * cannot be accepted for execution
1204 * @throws NullPointerException if {@code command} is null
1205 */
1206 public void execute(Runnable command) {
1207 if (command == null)
1208 throw new NullPointerException();
1209 /*
1210 * Proceed in 3 steps:
1211 *
1212 * 1. If fewer than corePoolSize threads are running, try to
1213 * start a new thread with the given command as its first
1214 * task. The call to addWorker atomically checks runState and
1215 * workerCount, and so prevents false alarms that would add
1216 * threads when it shouldn't, by returning false.
1217 *
1218 * 2. If a task can be successfully queued, then we still need
1219 * to double-check whether we should have added a thread
1220 * (because existing ones died since last checking) or that
1221 * the pool shut down since entry into this method. So we
1222 * recheck state and if necessary roll back the enqueuing if
1223 * stopped, or start a new thread if there are none.
1224 *
1225 * 3. If we cannot queue task, then we try to add a new
1226 * thread. If it fails, we know we are shut down or saturated
1227 * and so reject the task.
1228 */
1229 int c = ctl.get();
1230 if (workerCountOf(c) < corePoolSize) {
1231 if (addWorker(command, true))
1232 return;
1233 c = ctl.get();
1234 }
1235 if (isRunning(c) && workQueue.offer(command)) {
1236 int recheck = ctl.get();
1237 if (! isRunning(recheck) && remove(command))
1238 reject(command);
1239 else if (workerCountOf(recheck) == 0)
1240 addWorker(null, false);
1241 }
1242 else if (!addWorker(command, false))
1243 reject(command);
1244 }
1245
1246 /**
1247 * Initiates an orderly shutdown in which previously submitted
1248 * tasks are executed, but no new tasks will be accepted.
1249 * Invocation has no additional effect if already shut down.
1250 *
1251 * @throws SecurityException {@inheritDoc}
1252 */
1253 public void shutdown() {
1254 final ReentrantLock mainLock = this.mainLock;
1255 mainLock.lock();
1256 try {
1257 checkShutdownAccess();
1258 advanceRunState(SHUTDOWN);
1259 interruptIdleWorkers();
1260 onShutdown(); // hook for ScheduledThreadPoolExecutor
1261 } finally {
1262 mainLock.unlock();
1263 }
1264 tryTerminate();
1265 }
1266
1267 /**
1268 * Attempts to stop all actively executing tasks, halts the
1269 * processing of waiting tasks, and returns a list of the tasks
1270 * that were awaiting execution. These tasks are drained (removed)
1271 * from the task queue upon return from this method.
1272 *
1273 * <p>There are no guarantees beyond best-effort attempts to stop
1274 * processing actively executing tasks. This implementation
1275 * cancels tasks via {@link Thread#interrupt}, so any task that
1276 * fails to respond to interrupts may never terminate.
1277 *
1278 * @throws SecurityException {@inheritDoc}
1279 */
1280 public List<Runnable> shutdownNow() {
1281 List<Runnable> tasks;
1282 final ReentrantLock mainLock = this.mainLock;
1283 mainLock.lock();
1284 try {
1285 checkShutdownAccess();
1286 advanceRunState(STOP);
1287 interruptWorkers();
1288 tasks = drainQueue();
1289 } finally {
1290 mainLock.unlock();
1291 }
1292 tryTerminate();
1293 return tasks;
1294 }
1295
1296 public boolean isShutdown() {
1297 return ! isRunning(ctl.get());
1298 }
1299
1300 /**
1301 * Returns true if this executor is in the process of terminating
1302 * after {@link #shutdown} or {@link #shutdownNow} but has not
1303 * completely terminated. This method may be useful for
1304 * debugging. A return of {@code true} reported a sufficient
1305 * period after shutdown may indicate that submitted tasks have
1306 * ignored or suppressed interruption, causing this executor not
1307 * to properly terminate.
1308 *
1309 * @return true if terminating but not yet terminated
1310 */
1311 public boolean isTerminating() {
1312 int c = ctl.get();
1313 return ! isRunning(c) && runStateLessThan(c, TERMINATED);
1314 }
1315
1316 public boolean isTerminated() {
1317 return runStateAtLeast(ctl.get(), TERMINATED);
1318 }
1319
1320 public boolean awaitTermination(long timeout, TimeUnit unit)
1321 throws InterruptedException {
1322 long nanos = unit.toNanos(timeout);
1323 final ReentrantLock mainLock = this.mainLock;
1324 mainLock.lock();
1325 try {
1326 for (;;) {
1327 if (runStateAtLeast(ctl.get(), TERMINATED))
1328 return true;
1329 if (nanos <= 0)
1330 return false;
1331 nanos = termination.awaitNanos(nanos);
1332 }
1333 } finally {
1334 mainLock.unlock();
1335 }
1336 }
1337
1338 /**
1339 * Invokes {@code shutdown} when this executor is no longer
1340 * referenced and it has no threads.
1341 */
1342 protected void finalize() {
1343 shutdown();
1344 }
1345
1346 /**
1347 * Sets the thread factory used to create new threads.
1348 *
1349 * @param threadFactory the new thread factory
1350 * @throws NullPointerException if threadFactory is null
1351 * @see #getThreadFactory
1352 */
1353 public void setThreadFactory(ThreadFactory threadFactory) {
1354 if (threadFactory == null)
1355 throw new NullPointerException();
1356 this.threadFactory = threadFactory;
1357 }
1358
1359 /**
1360 * Returns the thread factory used to create new threads.
1361 *
1362 * @return the current thread factory
1363 * @see #setThreadFactory
1364 */
1365 public ThreadFactory getThreadFactory() {
1366 return threadFactory;
1367 }
1368
1369 /**
1370 * Sets a new handler for unexecutable tasks.
1371 *
1372 * @param handler the new handler
1373 * @throws NullPointerException if handler is null
1374 * @see #getRejectedExecutionHandler
1375 */
1376 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1377 if (handler == null)
1378 throw new NullPointerException();
1379 this.handler = handler;
1380 }
1381
1382 /**
1383 * Returns the current handler for unexecutable tasks.
1384 *
1385 * @return the current handler
1386 * @see #setRejectedExecutionHandler
1387 */
1388 public RejectedExecutionHandler getRejectedExecutionHandler() {
1389 return handler;
1390 }
1391
1392 /**
1393 * Sets the core number of threads. This overrides any value set
1394 * in the constructor. If the new value is smaller than the
1395 * current value, excess existing threads will be terminated when
1396 * they next become idle. If larger, new threads will, if needed,
1397 * be started to execute any queued tasks.
1398 *
1399 * @param corePoolSize the new core size
1400 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1401 * @see #getCorePoolSize
1402 */
1403 public void setCorePoolSize(int corePoolSize) {
1404 if (corePoolSize < 0)
1405 throw new IllegalArgumentException();
1406 int delta = corePoolSize - this.corePoolSize;
1407 this.corePoolSize = corePoolSize;
1408 if (workerCountOf(ctl.get()) > corePoolSize)
1409 interruptIdleWorkers();
1410 else if (delta > 0) {
1411 // We don't really know how many new threads are "needed".
1412 // As a heuristic, prestart enough new workers (up to new
1413 // core size) to handle the current number of tasks in
1414 // queue, but stop if queue becomes empty while doing so.
1415 int k = Math.min(delta, workQueue.size());
1416 while (k-- > 0 && addWorker(null, true)) {
1417 if (workQueue.isEmpty())
1418 break;
1419 }
1420 }
1421 }
1422
1423 /**
1424 * Returns the core number of threads.
1425 *
1426 * @return the core number of threads
1427 * @see #setCorePoolSize
1428 */
1429 public int getCorePoolSize() {
1430 return corePoolSize;
1431 }
1432
1433 /**
1434 * Starts a core thread, causing it to idly wait for work. This
1435 * overrides the default policy of starting core threads only when
1436 * new tasks are executed. This method will return {@code false}
1437 * if all core threads have already been started.
1438 *
1439 * @return {@code true} if a thread was started
1440 */
1441 public boolean prestartCoreThread() {
1442 return workerCountOf(ctl.get()) < corePoolSize &&
1443 addWorker(null, true);
1444 }
1445
1446 /**
1447 * Starts all core threads, causing them to idly wait for work. This
1448 * overrides the default policy of starting core threads only when
1449 * new tasks are executed.
1450 *
1451 * @return the number of threads started
1452 */
1453 public int prestartAllCoreThreads() {
1454 int n = 0;
1455 while (addWorker(null, true))
1456 ++n;
1457 return n;
1458 }
1459
1460 /**
1461 * Returns true if this pool allows core threads to time out and
1462 * terminate if no tasks arrive within the keepAlive time, being
1463 * replaced if needed when new tasks arrive. When true, the same
1464 * keep-alive policy applying to non-core threads applies also to
1465 * core threads. When false (the default), core threads are never
1466 * terminated due to lack of incoming tasks.
1467 *
1468 * @return {@code true} if core threads are allowed to time out,
1469 * else {@code false}
1470 *
1471 * @since 1.6
1472 */
1473 public boolean allowsCoreThreadTimeOut() {
1474 return allowCoreThreadTimeOut;
1475 }
1476
1477 /**
1478 * Sets the policy governing whether core threads may time out and
1479 * terminate if no tasks arrive within the keep-alive time, being
1480 * replaced if needed when new tasks arrive. When false, core
1481 * threads are never terminated due to lack of incoming
1482 * tasks. When true, the same keep-alive policy applying to
1483 * non-core threads applies also to core threads. To avoid
1484 * continual thread replacement, the keep-alive time must be
1485 * greater than zero when setting {@code true}. This method
1486 * should in general be called before the pool is actively used.
1487 *
1488 * @param value {@code true} if threads should time out, else {@code false}
1489 * @throws IllegalArgumentException if value is {@code true}
1490 * and the current keep-alive time is not greater than zero
1491 *
1492 * @since 1.6
1493 */
1494 public void allowCoreThreadTimeOut(boolean value) {
1495 if (value && keepAliveTime <= 0)
1496 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1497 if (value != allowCoreThreadTimeOut) {
1498 allowCoreThreadTimeOut = value;
1499 if (value)
1500 interruptIdleWorkers();
1501 }
1502 }
1503
1504 /**
1505 * Sets the maximum allowed number of threads. This overrides any
1506 * value set in the constructor. If the new value is smaller than
1507 * the current value, excess existing threads will be
1508 * terminated when they next become idle.
1509 *
1510 * @param maximumPoolSize the new maximum
1511 * @throws IllegalArgumentException if the new maximum is
1512 * less than or equal to zero, or
1513 * less than the {@linkplain #getCorePoolSize core pool size}
1514 * @see #getMaximumPoolSize
1515 */
1516 public void setMaximumPoolSize(int maximumPoolSize) {
1517 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1518 throw new IllegalArgumentException();
1519 this.maximumPoolSize = maximumPoolSize;
1520 if (workerCountOf(ctl.get()) > maximumPoolSize)
1521 interruptIdleWorkers();
1522 }
1523
1524 /**
1525 * Returns the maximum allowed number of threads.
1526 *
1527 * @return the maximum allowed number of threads
1528 * @see #setMaximumPoolSize
1529 */
1530 public int getMaximumPoolSize() {
1531 return maximumPoolSize;
1532 }
1533
1534 /**
1535 * Sets the time limit for which threads may remain idle before
1536 * being terminated. If there are more than the core number of
1537 * threads currently in the pool, after waiting this amount of
1538 * time without processing a task, excess threads will be
1539 * terminated. This overrides any value set in the constructor.
1540 *
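     * <p>For illustration only, given some existing
     * {@code ThreadPoolExecutor pool} (the value below is arbitrary),
     * the keep-alive time can be tightened while the pool is running:
     *
     * <pre> {@code
     * pool.setKeepAliveTime(10L, TimeUnit.SECONDS);
     * long nanos = pool.getKeepAliveTime(TimeUnit.NANOSECONDS); // read back
     * }</pre>
     *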
1541 * @param time the time to wait. A time value of zero will cause
1542 * excess threads to terminate immediately after executing tasks.
1543 * @param unit the time unit of the {@code time} argument
1544      * @throws IllegalArgumentException if {@code time} is less than zero, or if
1545      *         {@code time} is zero and {@code allowsCoreThreadTimeOut} is enabled
1546 * @see #getKeepAliveTime
1547 */
1548 public void setKeepAliveTime(long time, TimeUnit unit) {
1549 if (time < 0)
1550 throw new IllegalArgumentException();
1551 if (time == 0 && allowsCoreThreadTimeOut())
1552 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1553 long keepAliveTime = unit.toNanos(time);
1554 long delta = keepAliveTime - this.keepAliveTime;
1555 this.keepAliveTime = keepAliveTime;
1556 if (delta < 0)
1557 interruptIdleWorkers();
1558 }
1559
1560 /**
1561 * Returns the thread keep-alive time, which is the amount of time
1562 * that threads in excess of the core pool size may remain
1563 * idle before being terminated.
1564 *
1565 * @param unit the desired time unit of the result
1566 * @return the time limit
1567 * @see #setKeepAliveTime
1568 */
1569 public long getKeepAliveTime(TimeUnit unit) {
1570 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1571 }
1572
1573 /* User-level queue utilities */
1574
1575 /**
1576 * Returns the task queue used by this executor. Access to the
1577 * task queue is intended primarily for debugging and monitoring.
1578 * This queue may be in active use. Retrieving the task queue
1579 * does not prevent queued tasks from executing.
1580 *
1581 * @return the task queue
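     * <p>A monitoring sketch; {@code pool} and the reporting call are
     * placeholders for whatever is actually in use:
     *
     * <pre> {@code
     * int backlog = pool.getQueue().size(); // approximate; may change at any time
     * if (backlog > 1000)
     *   System.err.println("executor backlog: " + backlog);
     * }</pre>
     *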
1582 */
1583 public BlockingQueue<Runnable> getQueue() {
1584 return workQueue;
1585 }
1586
1587 /**
1588 * Removes this task from the executor's internal queue if it is
1589 * present, thus causing it not to be run if it has not already
1590 * started.
1591 *
1592 * <p> This method may be useful as one part of a cancellation
1593 * scheme. It may fail to remove tasks that have been converted
1594 * into other forms before being placed on the internal queue. For
1595 * example, a task entered using {@code submit} might be
1596 * converted into a form that maintains {@code Future} status.
1597 * However, in such cases, method {@link #purge} may be used to
1598 * remove those Futures that have been cancelled.
1599 *
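     * <p>An illustrative sketch of this distinction ({@code pool} is a
     * placeholder for an existing executor):
     *
     * <pre> {@code
     * Runnable job = new Runnable() { public void run() {} };
     * pool.execute(job);
     * pool.remove(job);   // may succeed: the queue holds job itself
     *
     * Future<?> f = pool.submit(job);
     * f.cancel(false);    // the queued element wraps job, so remove(job)
     * pool.purge();       // would fail; purge reclaims the cancelled Future
     * }</pre>
     *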
1600 * @param task the task to remove
1601      * @return {@code true} if the task was removed
1602 */
1603 public boolean remove(Runnable task) {
1604 boolean removed = workQueue.remove(task);
1605 tryTerminate(); // In case SHUTDOWN and now empty
1606 return removed;
1607 }
1608
1609 /**
1610 * Tries to remove from the work queue all {@link Future}
1611 * tasks that have been cancelled. This method can be useful as a
1612      * storage reclamation operation that has no other impact on
1613 * functionality. Cancelled tasks are never executed, but may
1614 * accumulate in work queues until worker threads can actively
1615 * remove them. Invoking this method instead tries to remove them now.
1616 * However, this method may fail to remove tasks in
1617 * the presence of interference by other threads.
1618 */
1619 public void purge() {
1620 final BlockingQueue<Runnable> q = workQueue;
1621 try {
1622 Iterator<Runnable> it = q.iterator();
1623 while (it.hasNext()) {
1624 Runnable r = it.next();
1625 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1626 it.remove();
1627 }
1628 } catch (ConcurrentModificationException fallThrough) {
1629 // Take slow path if we encounter interference during traversal.
1630 // Make copy for traversal and call remove for cancelled entries.
1631 // The slow path is more likely to be O(N*N).
1632 for (Object r : q.toArray())
1633 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1634 q.remove(r);
1635 }
1636
1637 tryTerminate(); // In case SHUTDOWN and now empty
1638 }
1639
1640 /* Statistics */
1641
1642 /**
1643 * Returns the current number of threads in the pool.
1644 *
1645 * @return the number of threads
1646 */
1647 public int getPoolSize() {
1648 final ReentrantLock mainLock = this.mainLock;
1649 mainLock.lock();
1650 try {
1651 // Remove rare and surprising possibility of
1652 // isTerminated() && getPoolSize() > 0
1653 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1654 : workers.size();
1655 } finally {
1656 mainLock.unlock();
1657 }
1658 }
1659
1660 /**
1661 * Returns the approximate number of threads that are actively
1662 * executing tasks.
1663 *
1664 * @return the number of threads
1665 */
1666 public int getActiveCount() {
1667 final ReentrantLock mainLock = this.mainLock;
1668 mainLock.lock();
1669 try {
1670 int n = 0;
1671 for (Worker w : workers)
1672 if (w.isLocked())
1673 ++n;
1674 return n;
1675 } finally {
1676 mainLock.unlock();
1677 }
1678 }
1679
1680 /**
1681 * Returns the largest number of threads that have ever
1682 * simultaneously been in the pool.
1683 *
1684 * @return the number of threads
1685 */
1686 public int getLargestPoolSize() {
1687 final ReentrantLock mainLock = this.mainLock;
1688 mainLock.lock();
1689 try {
1690 return largestPoolSize;
1691 } finally {
1692 mainLock.unlock();
1693 }
1694 }
1695
1696 /**
1697 * Returns the approximate total number of tasks that have ever been
1698 * scheduled for execution. Because the states of tasks and
1699 * threads may change dynamically during computation, the returned
1700 * value is only an approximation.
1701 *
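     * <p>A sketch combining the statistics methods ({@code pool} is a
     * placeholder); both values may change while the arithmetic runs,
     * so the result is only an estimate of the outstanding work:
     *
     * <pre> {@code
     * long scheduled = pool.getTaskCount();
     * long completed = pool.getCompletedTaskCount();
     * long outstanding = scheduled - completed; // queued + running, roughly
     * }</pre>
     *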
1702 * @return the number of tasks
1703 */
1704 public long getTaskCount() {
1705 final ReentrantLock mainLock = this.mainLock;
1706 mainLock.lock();
1707 try {
1708 long n = completedTaskCount;
1709 for (Worker w : workers) {
1710 n += w.completedTasks;
1711 if (w.isLocked())
1712 ++n;
1713 }
1714 return n + workQueue.size();
1715 } finally {
1716 mainLock.unlock();
1717 }
1718 }
1719
1720 /**
1721 * Returns the approximate total number of tasks that have
1722 * completed execution. Because the states of tasks and threads
1723 * may change dynamically during computation, the returned value
1724 * is only an approximation, but one that does not ever decrease
1725 * across successive calls.
1726 *
1727 * @return the number of tasks
1728 */
1729 public long getCompletedTaskCount() {
1730 final ReentrantLock mainLock = this.mainLock;
1731 mainLock.lock();
1732 try {
1733 long n = completedTaskCount;
1734 for (Worker w : workers)
1735 n += w.completedTasks;
1736 return n;
1737 } finally {
1738 mainLock.unlock();
1739 }
1740 }
1741
1742 /* Extension hooks */
1743
1744 /**
1745 * Method invoked prior to executing the given Runnable in the
1746 * given thread. This method is invoked by thread {@code t} that
1747 * will execute task {@code r}, and may be used to re-initialize
1748 * ThreadLocals, or to perform logging.
1749 *
1750 * <p>This implementation does nothing, but may be customized in
1751 * subclasses. Note: To properly nest multiple overridings, subclasses
1752 * should generally invoke {@code super.beforeExecute} at the end of
1753 * this method.
1754 *
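     * <p>A sketch of a subclass that logs task starts (the class name
     * and output are illustrative):
     *
     * <pre> {@code
     * class LoggingExecutor extends ThreadPoolExecutor {
     *   // ...
     *   protected void beforeExecute(Thread t, Runnable r) {
     *     System.out.println(t.getName() + " about to run " + r);
     *     super.beforeExecute(t, r); // invoked last, per the note above
     *   }
     * }}</pre>
     *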
1755 * @param t the thread that will run task {@code r}
1756 * @param r the task that will be executed
1757 */
1758 protected void beforeExecute(Thread t, Runnable r) { }
1759
1760 /**
1761 * Method invoked upon completion of execution of the given Runnable.
1762 * This method is invoked by the thread that executed the task. If
1763 * non-null, the Throwable is the uncaught {@code RuntimeException}
1764 * or {@code Error} that caused execution to terminate abruptly.
1765 *
1766 * <p>This implementation does nothing, but may be customized in
1767 * subclasses. Note: To properly nest multiple overridings, subclasses
1768 * should generally invoke {@code super.afterExecute} at the
1769 * beginning of this method.
1770 *
1771 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1772 * {@link FutureTask}) either explicitly or via methods such as
1773 * {@code submit}, these task objects catch and maintain
1774 * computational exceptions, and so they do not cause abrupt
1775 * termination, and the internal exceptions are <em>not</em>
1776 * passed to this method. If you would like to trap both kinds of
1777 * failures in this method, you can further probe for such cases,
1778 * as in this sample subclass that prints either the direct cause
1779 * or the underlying exception if a task has been aborted:
1780 *
1781 * <pre> {@code
1782 * class ExtendedExecutor extends ThreadPoolExecutor {
1783 * // ...
1784 * protected void afterExecute(Runnable r, Throwable t) {
1785 * super.afterExecute(r, t);
1786 * if (t == null && r instanceof Future<?>) {
1787 * try {
1788 * Object result = ((Future<?>) r).get();
1789 * } catch (CancellationException ce) {
1790 * t = ce;
1791 * } catch (ExecutionException ee) {
1792 * t = ee.getCause();
1793 * } catch (InterruptedException ie) {
1794 * Thread.currentThread().interrupt(); // ignore/reset
1795 * }
1796 * }
1797 * if (t != null)
1798 * System.out.println(t);
1799 * }
1800 * }}</pre>
1801 *
1802 * @param r the runnable that has completed
1803 * @param t the exception that caused termination, or null if
1804 * execution completed normally
1805 */
1806 protected void afterExecute(Runnable r, Throwable t) { }
1807
1808 /**
1809 * Method invoked when the Executor has terminated. Default
1810 * implementation does nothing. Note: To properly nest multiple
1811 * overridings, subclasses should generally invoke
1812 * {@code super.terminated} within this method.
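     *
     * <p>A minimal sketch of an override (the class name and message
     * are illustrative):
     *
     * <pre> {@code
     * class NotifyingExecutor extends ThreadPoolExecutor {
     *   // ...
     *   protected void terminated() {
     *     try {
     *       System.out.println("executor terminated");
     *     } finally {
     *       super.terminated();
     *     }
     *   }
     * }}</pre>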
1813 */
1814 protected void terminated() { }
1815
1816 /* Predefined RejectedExecutionHandlers */
1817
1818 /**
1819 * A handler for rejected tasks that runs the rejected task
1820 * directly in the calling thread of the {@code execute} method,
1821 * unless the executor has been shut down, in which case the task
1822 * is discarded.
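     *
     * <p>An illustrative construction (sizes are arbitrary) that uses
     * this policy so that, once the queue is full, submitters run tasks
     * themselves, which naturally slows the submission rate:
     *
     * <pre> {@code
     * ThreadPoolExecutor pool = new ThreadPoolExecutor(
     *     2, 4, 60L, TimeUnit.SECONDS,
     *     new ArrayBlockingQueue<Runnable>(100),
     *     new ThreadPoolExecutor.CallerRunsPolicy());
     * }</pre>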
1823 */
1824 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1825 /**
1826 * Creates a {@code CallerRunsPolicy}.
1827 */
1828 public CallerRunsPolicy() { }
1829
1830 /**
1831 * Executes task r in the caller's thread, unless the executor
1832 * has been shut down, in which case the task is discarded.
1833 *
1834 * @param r the runnable task requested to be executed
1835 * @param e the executor attempting to execute this task
1836 */
1837 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1838 if (!e.isShutdown()) {
1839 r.run();
1840 }
1841 }
1842 }
1843
1844 /**
1845 * A handler for rejected tasks that throws a
1846 * {@code RejectedExecutionException}.
1847 */
1848 public static class AbortPolicy implements RejectedExecutionHandler {
1849 /**
1850 * Creates an {@code AbortPolicy}.
1851 */
1852 public AbortPolicy() { }
1853
1854 /**
1855          * Always throws {@code RejectedExecutionException}.
1856 *
1857 * @param r the runnable task requested to be executed
1858 * @param e the executor attempting to execute this task
1859          * @throws RejectedExecutionException always
1860 */
1861 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1862 throw new RejectedExecutionException();
1863 }
1864 }
1865
1866 /**
1867 * A handler for rejected tasks that silently discards the
1868 * rejected task.
1869 */
1870 public static class DiscardPolicy implements RejectedExecutionHandler {
1871 /**
1872 * Creates a {@code DiscardPolicy}.
1873 */
1874 public DiscardPolicy() { }
1875
1876 /**
1877 * Does nothing, which has the effect of discarding task r.
1878 *
1879 * @param r the runnable task requested to be executed
1880 * @param e the executor attempting to execute this task
1881 */
1882 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1883 }
1884 }
1885
1886 /**
1887 * A handler for rejected tasks that discards the oldest unhandled
1888 * request and then retries {@code execute}, unless the executor
1889 * is shut down, in which case the task is discarded.
1890 */
1891 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
1892 /**
1893          * Creates a {@code DiscardOldestPolicy}.
1894 */
1895 public DiscardOldestPolicy() { }
1896
1897 /**
1898 * Obtains and ignores the next task that the executor
1899 * would otherwise execute, if one is immediately available,
1900 * and then retries execution of task r, unless the executor
1901 * is shut down, in which case task r is instead discarded.
1902 *
1903 * @param r the runnable task requested to be executed
1904 * @param e the executor attempting to execute this task
1905 */
1906 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1907 if (!e.isShutdown()) {
1908 e.getQueue().poll();
1909 e.execute(r);
1910 }
1911 }
1912 }
1913 }