root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.119
Committed: Mon Feb 19 00:59:54 2007 UTC by jsr166
Branch: MAIN
Changes since 1.118: +87 -66 lines
Log Message:
6523756: ThreadPoolExecutor shutdownNow vs execute race

File Contents

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/licenses/publicdomain
5 */
6
7 package java.util.concurrent;
8 import java.util.concurrent.locks.*;
9 import java.util.concurrent.atomic.*;
10 import java.util.*;
11
12 /**
13 * An {@link ExecutorService} that executes each submitted task using
14 * one of possibly several pooled threads, normally configured
15 * using {@link Executors} factory methods.
16 *
17 * <p>Thread pools address two different problems: they usually
18 * provide improved performance when executing large numbers of
19 * asynchronous tasks, due to reduced per-task invocation overhead,
20 * and they provide a means of bounding and managing the resources,
21 * including threads, consumed when executing a collection of tasks.
22 * Each {@code ThreadPoolExecutor} also maintains some basic
23 * statistics, such as the number of completed tasks.
24 *
25 * <p>To be useful across a wide range of contexts, this class
26 * provides many adjustable parameters and extensibility
27 * hooks. However, programmers are urged to use the more convenient
28 * {@link Executors} factory methods {@link
29 * Executors#newCachedThreadPool} (unbounded thread pool, with
30 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
31 * (fixed size thread pool) and {@link
32 * Executors#newSingleThreadExecutor} (single background thread), that
33 * preconfigure settings for the most common usage
34 * scenarios. Otherwise, use the following guide when manually
35 * configuring and tuning this class:
36 *
37 * <dl>
38 *
39 * <dt>Core and maximum pool sizes</dt>
40 *
41 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
42 * pool size (see {@link #getPoolSize})
43 * according to the bounds set by
44 * corePoolSize (see {@link #getCorePoolSize}) and
45 * maximumPoolSize (see {@link #getMaximumPoolSize}).
46 *
47 * When a new task is submitted in method {@link #execute}, and fewer
48 * than corePoolSize threads are running, a new thread is created to
49 * handle the request, even if other worker threads are idle. If
50 * there are more than corePoolSize but fewer than maximumPoolSize
51 * threads running, a new thread will be created only if the queue is
52 * full. By setting corePoolSize and maximumPoolSize the same, you
53 * create a fixed-size thread pool. By setting maximumPoolSize to an
54 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
55 * allow the pool to accommodate an arbitrary number of concurrent
56 * tasks. Most typically, core and maximum pool sizes are set only
57 * upon construction, but they may also be changed dynamically using
58 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
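 *
 * <dd>For illustration only (not a prescription), a minimal sketch of a
 * pool with two core and four maximum threads, whose bounds are later
 * adjusted dynamically:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2, 4, 60L, TimeUnit.SECONDS,
 *     new LinkedBlockingQueue<Runnable>(100));
 * pool.setMaximumPoolSize(8);  // raise the maximum after construction
 * }</pre></dd>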
59 *
60 * <dt>On-demand construction</dt>
61 *
62 * <dd> By default, even core threads are initially created and
63 * started only when new tasks arrive, but this can be overridden
64 * dynamically using method {@link #prestartCoreThread} or {@link
65 * #prestartAllCoreThreads}. You probably want to prestart threads if
66 * you construct the pool with a non-empty queue. </dd>
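 *
 * <dd>For illustration only, a sketch of prestarting core threads when the
 * pool is constructed over a queue that already holds tasks ({@code
 * pendingTasks} is a placeholder for an existing collection of Runnables):
 *
 * <pre> {@code
 * BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
 * queue.addAll(pendingTasks);            // placeholder collection
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 4, 0L, TimeUnit.SECONDS, queue);
 * pool.prestartAllCoreThreads();
 * }</pre></dd>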
67 *
68 * <dt>Creating new threads</dt>
69 *
70 * <dd>New threads are created using a {@link ThreadFactory}. If not
71 * otherwise specified, an {@link Executors#defaultThreadFactory} is
72 * used, which creates threads that are all in the same {@link
73 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
74 * non-daemon status. By supplying a different ThreadFactory, you can
75 * alter the thread's name, thread group, priority, daemon status,
76 * etc. If a {@code ThreadFactory} fails to create a thread when asked
77 * by returning null from {@code newThread}, the executor will
78 * continue, but might not be able to execute any tasks. Threads
79 * should possess the "modifyThread" {@code RuntimePermission}. If
80 * worker threads or other threads using the pool do not possess this
81 * permission, service may be degraded: configuration changes may not
82 * take effect in a timely manner, and a shutdown pool may remain in a
83 * state in which termination is possible but not completed.</dd>
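 *
 * <dd>For illustration only, a sketch of a {@code ThreadFactory} that
 * customizes thread names while delegating creation to the default factory:
 *
 * <pre> {@code
 * class NamedThreadFactory implements ThreadFactory {
 *   private final AtomicInteger count = new AtomicInteger();
 *   public Thread newThread(Runnable r) {
 *     Thread t = Executors.defaultThreadFactory().newThread(r);
 *     t.setName("pool-worker-" + count.incrementAndGet());
 *     return t;
 *   }
 * }}</pre></dd>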
84 *
85 * <dt>Keep-alive times</dt>
86 *
87 * <dd>If the pool currently has more than corePoolSize threads,
88 * excess threads will be terminated if they have been idle for more
89 * than the keepAliveTime (see {@link #getKeepAliveTime}). This
90 * provides a means of reducing resource consumption when the pool is
91 * not being actively used. If the pool becomes more active later, new
92 * threads will be constructed. This parameter can also be changed
93 * dynamically using method {@link #setKeepAliveTime}. Using a value
94 * of {@code Long.MAX_VALUE} {@link TimeUnit#NANOSECONDS} effectively
95 * disables idle threads from ever terminating prior to shut down. By
96 * default, the keep-alive policy applies only when there are more
97 * than corePoolSize threads. But method {@link
98 * #allowCoreThreadTimeOut(boolean)} can be used to apply this
99 * time-out policy to core threads as well, so long as the
100 * keepAliveTime value is non-zero. </dd>
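 *
 * <dd>For illustration only, assuming {@code pool} is an existing
 * {@code ThreadPoolExecutor}, the keep-alive policy can be adjusted as:
 *
 * <pre> {@code
 * pool.setKeepAliveTime(30L, TimeUnit.SECONDS);
 * pool.allowCoreThreadTimeOut(true);  // core threads may now time out too
 * }</pre></dd>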
101 *
102 * <dt>Queuing</dt>
103 *
104 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
105 * submitted tasks. The use of this queue interacts with pool sizing:
106 *
107 * <ul>
108 *
109 * <li> If fewer than corePoolSize threads are running, the Executor
110 * always prefers adding a new thread
111 * rather than queuing.</li>
112 *
113 * <li> If corePoolSize or more threads are running, the Executor
114 * always prefers queuing a request rather than adding a new
115 * thread.</li>
116 *
117 * <li> If a request cannot be queued, a new thread is created unless
118 * this would exceed maximumPoolSize, in which case, the task will be
119 * rejected.</li>
120 *
121 * </ul>
122 *
123 * There are three general strategies for queuing:
124 * <ol>
125 *
126 * <li> <em> Direct handoffs.</em> A good default choice for a work
127 * queue is a {@link SynchronousQueue} that hands off tasks to threads
128 * without otherwise holding them. Here, an attempt to queue a task
129 * will fail if no threads are immediately available to run it, so a
130 * new thread will be constructed. This policy avoids lockups when
131 * handling sets of requests that might have internal dependencies.
132 * Direct handoffs generally require unbounded maximumPoolSizes to
133 * avoid rejection of newly submitted tasks. This in turn admits the
134 * possibility of unbounded thread growth when commands continue to
135 * arrive on average faster than they can be processed. </li>
136 *
137 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
138 * example a {@link LinkedBlockingQueue} without a predefined
139 * capacity) will cause new tasks to wait in the queue when all
140 * corePoolSize threads are busy. Thus, no more than corePoolSize
141 * threads will ever be created. (And the value of the maximumPoolSize
142 * therefore doesn't have any effect.) This may be appropriate when
143 * each task is completely independent of others, so tasks cannot
144 * affect each other's execution; for example, in a web page server.
145 * While this style of queuing can be useful in smoothing out
146 * transient bursts of requests, it admits the possibility of
147 * unbounded work queue growth when commands continue to arrive on
148 * average faster than they can be processed. </li>
149 *
150 * <li><em>Bounded queues.</em> A bounded queue (for example, an
151 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
152 * used with finite maximumPoolSizes, but can be more difficult to
153 * tune and control. Queue sizes and maximum pool sizes may be traded
154 * off for each other: Using large queues and small pools minimizes
155 * CPU usage, OS resources, and context-switching overhead, but can
156 * lead to artificially low throughput. If tasks frequently block (for
157 * example if they are I/O bound), a system may be able to schedule
158 * time for more threads than you otherwise allow. Use of small queues
159 * generally requires larger pool sizes, which keeps CPUs busier but
160 * may encounter unacceptable scheduling overhead, which also
161 * decreases throughput. </li>
162 *
163 * </ol>
164 *
165 * </dd>
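 *
 * <dd>For illustration only, one queue choice per strategy described above:
 *
 * <pre> {@code
 * BlockingQueue<Runnable> handoff   = new SynchronousQueue<Runnable>();
 * BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<Runnable>();
 * BlockingQueue<Runnable> bounded   = new ArrayBlockingQueue<Runnable>(1000);
 * }</pre></dd>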
166 *
167 * <dt>Rejected tasks</dt>
168 *
169 * <dd> New tasks submitted in method {@link #execute} will be
170 * <em>rejected</em> when the Executor has been shut down, and also
171 * when the Executor uses finite bounds for both maximum threads and
172 * work queue capacity, and is saturated. In either case, the {@code
173 * execute} method invokes the {@link
174 * RejectedExecutionHandler#rejectedExecution} method of its {@link
175 * RejectedExecutionHandler}. Four predefined handler policies are
176 * provided:
177 *
178 * <ol>
179 *
180 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
181 * handler throws a runtime {@link RejectedExecutionException} upon
182 * rejection. </li>
183 *
184 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
185 * that invokes {@code execute} itself runs the task. This provides a
186 * simple feedback control mechanism that will slow down the rate that
187 * new tasks are submitted. </li>
188 *
189 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
190 * cannot be executed is simply dropped. </li>
191 *
192 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
193 * executor is not shut down, the task at the head of the work queue
194 * is dropped, and then execution is retried (which can fail again,
195 * causing this to be repeated.) </li>
196 *
197 * </ol>
198 *
199 * It is possible to define and use other kinds of {@link
200 * RejectedExecutionHandler} classes. Doing so requires some care
201 * especially when policies are designed to work only under particular
202 * capacity or queuing policies. </dd>
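 *
 * <dd>For illustration only, assuming {@code pool} is an existing
 * {@code ThreadPoolExecutor}, a predefined policy can be installed, or a
 * custom handler supplied:
 *
 * <pre> {@code
 * pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
 *
 * pool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
 *   public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
 *     // placeholder: log, enqueue elsewhere, or drop the task
 *   }
 * });
 * }</pre></dd>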
203 *
204 * <dt>Hook methods</dt>
205 *
206 * <dd>This class provides {@code protected} overridable {@link
207 * #beforeExecute} and {@link #afterExecute} methods that are called
208 * before and after execution of each task. These can be used to
209 * manipulate the execution environment; for example, reinitializing
210 * ThreadLocals, gathering statistics, or adding log
211 * entries. Additionally, method {@link #terminated} can be overridden
212 * to perform any special processing that needs to be done once the
213 * Executor has fully terminated.
214 *
215 * <p>If hook or callback methods throw exceptions, internal worker
216 * threads may in turn fail and abruptly terminate.</dd>
217 *
218 * <dt>Queue maintenance</dt>
219 *
220 * <dd> Method {@link #getQueue} allows access to the work queue for
221 * purposes of monitoring and debugging. Use of this method for any
222 * other purpose is strongly discouraged. Two supplied methods,
223 * {@link #remove} and {@link #purge} are available to assist in
224 * storage reclamation when large numbers of queued tasks become
225 * cancelled.</dd>
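 *
 * <dd>For illustration only, assuming {@code pool} is an existing
 * {@code ThreadPoolExecutor} and {@code task} a previously submitted
 * {@code Runnable}:
 *
 * <pre> {@code
 * pool.remove(task);   // removes one specific queued task, if present
 * pool.purge();        // removes queued Future tasks that have been cancelled
 * }</pre></dd>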
226 *
227 * <dt>Finalization</dt>
228 *
229 * <dd> A pool that is no longer referenced in a program <em>AND</em>
230 * has no remaining threads will be {@code shutdown} automatically. If
231 * you would like to ensure that unreferenced pools are reclaimed even
232 * if users forget to call {@link #shutdown}, then you must arrange
233 * that unused threads eventually die, by setting appropriate
234 * keep-alive times, using a lower bound of zero core threads and/or
235 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
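 *
 * <dd>For illustration only, a configuration in which all threads eventually
 * die when idle, so an unreferenced pool can shut down on its own:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     0, 10, 30L, TimeUnit.SECONDS,
 *     new SynchronousQueue<Runnable>());
 * }</pre></dd>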
236 *
237 * </dl>
238 *
239 * <p> <b>Extension example</b>. Most extensions of this class
240 * override one or more of the protected hook methods. For example,
241 * here is a subclass that adds a simple pause/resume feature:
242 *
243 * <pre> {@code
244 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
245 * private boolean isPaused;
246 * private ReentrantLock pauseLock = new ReentrantLock();
247 * private Condition unpaused = pauseLock.newCondition();
248 *
249 * public PausableThreadPoolExecutor(...) { super(...); }
250 *
251 * protected void beforeExecute(Thread t, Runnable r) {
252 * super.beforeExecute(t, r);
253 * pauseLock.lock();
254 * try {
255 * while (isPaused) unpaused.await();
256 * } catch (InterruptedException ie) {
257 * t.interrupt();
258 * } finally {
259 * pauseLock.unlock();
260 * }
261 * }
262 *
263 * public void pause() {
264 * pauseLock.lock();
265 * try {
266 * isPaused = true;
267 * } finally {
268 * pauseLock.unlock();
269 * }
270 * }
271 *
272 * public void resume() {
273 * pauseLock.lock();
274 * try {
275 * isPaused = false;
276 * unpaused.signalAll();
277 * } finally {
278 * pauseLock.unlock();
279 * }
280 * }
281 * }}</pre>
282 *
283 * @since 1.5
284 * @author Doug Lea
285 */
286 public class ThreadPoolExecutor extends AbstractExecutorService {
287 /**
288 * The main pool control state, ctl, is an atomic integer packing
289 * two conceptual fields
290 * workerCount, indicating the effective number of threads
291 * runState, indicating whether running, shutting down etc
292 *
293 * In order to pack them into one int, we limit workerCount to
294 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
295 * billion) otherwise representable. If this is ever an issue in
296 * the future, the variable can be changed to be an AtomicLong,
297 * and the shift/mask constants below adjusted. But until the need
298 * arises, this code is a bit faster and simpler using an int.
299 *
300 * The workerCount is the number of workers that have been
301 * permitted to start and not permitted to stop. The value may be
302 * transiently different from the actual number of live threads,
303 * for example when a ThreadFactory fails to create a thread when
304 * asked, and when exiting threads are still performing
305 * bookkeeping before terminating. The user-visible pool size is
306 * reported as the current size of the workers set.
307 *
308 * The runState provides the main lifecycle control, taking on values:
309 *
310 * RUNNING: Accept new tasks and process queued tasks
311 * SHUTDOWN: Don't accept new tasks, but process queued tasks
312 * STOP: Don't accept new tasks, don't process queued tasks,
313 * and interrupt in-progress tasks
314 * TIDYING: All tasks have terminated, workerCount is zero,
315 * the thread transitioning to state TIDYING
316 * will run the terminated() hook method
317 * TERMINATED: terminated() has completed
318 *
319 * The numerical order among these values matters, to allow
320 * ordered comparisons. The runState monotonically increases over
321 * time, but need not hit each state. The transitions are:
322 *
323 * RUNNING -> SHUTDOWN
324 * On invocation of shutdown(), perhaps implicitly in finalize()
325 * (RUNNING or SHUTDOWN) -> STOP
326 * On invocation of shutdownNow()
327 * SHUTDOWN -> TIDYING
328 * When both queue and pool are empty
329 * STOP -> TIDYING
330 * When pool is empty
331 * TIDYING -> TERMINATED
332 * When the terminated() hook method has completed
333 *
334 * Threads waiting in awaitTermination() will return when the
335 * state reaches TERMINATED.
336 *
337 * Detecting the transition from SHUTDOWN to TIDYING is less
338 * straightforward than you'd like because the queue may become
339 * empty after non-empty and vice versa during SHUTDOWN state, but
340 * we can only terminate if, after seeing that it is empty, we see
341 * that workerCount is 0 (which sometimes entails a recheck -- see
342 * below).
343 */
344 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
345 private static final int COUNT_BITS = Integer.SIZE - 3;
346 private static final int CAPACITY = (1 << COUNT_BITS) - 1;
347
348 // runState is stored in the high-order bits
349 private static final int RUNNING = -1 << COUNT_BITS;
350 private static final int SHUTDOWN = 0 << COUNT_BITS;
351 private static final int STOP = 1 << COUNT_BITS;
352 private static final int TIDYING = 2 << COUNT_BITS;
353 private static final int TERMINATED = 3 << COUNT_BITS;
354
355 // Packing and unpacking ctl
356 private static int runStateOf(int c) { return c & ~CAPACITY; }
357 private static int workerCountOf(int c) { return c & CAPACITY; }
358 private static int ctlOf(int rs, int wc) { return rs | wc; }
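// Worked example (illustrative only) of the bit layout above:
//   COUNT_BITS = 29, CAPACITY = 0x1FFFFFFF (low 29 bits hold workerCount)
//   RUNNING = 0xE0000000, SHUTDOWN = 0x00000000, STOP = 0x20000000,
//   TIDYING = 0x40000000, TERMINATED = 0x60000000 (high 3 bits hold runState)
//   e.g. ctlOf(RUNNING, 3) == 0xE0000003, so runStateOf(0xE0000003) == RUNNING
//   and workerCountOf(0xE0000003) == 3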
359
360 /*
361 * Bit field accessors that don't require unpacking ctl.
362 * These depend on the bit layout and on workerCount never being negative.
363 */
364
365 private static boolean runStateLessThan(int c, int s) {
366 return c < s;
367 }
368
369 private static boolean runStateAtLeast(int c, int s) {
370 return c >= s;
371 }
372
373 private static boolean isRunning(int c) {
374 return c < SHUTDOWN;
375 }
376
377 /**
378 * Attempt to CAS-increment the workerCount field of ctl.
379 */
380 private boolean compareAndIncrementWorkerCount(int expect) {
381 return ctl.compareAndSet(expect, expect + 1);
382 }
383
384 /**
385 * Attempt to CAS-decrement the workerCount field of ctl.
386 */
387 private boolean compareAndDecrementWorkerCount(int expect) {
388 return ctl.compareAndSet(expect, expect - 1);
389 }
390
391 /**
392 * Decrements the workerCount field of ctl. This is called only on
393 * abrupt termination of a thread (see processWorkerExit). Other
394 * decrements are performed within getTask.
395 */
396 private void decrementWorkerCount() {
397 do {} while (! compareAndDecrementWorkerCount(ctl.get()));
398 }
399
400 /**
401 * The queue used for holding tasks and handing off to worker
402 * threads. We do not require that workQueue.poll() returning
403 * null necessarily means that workQueue.isEmpty(), so we rely
404 * solely on isEmpty to see if the queue is empty (which we must
405 * do for example when deciding whether to transition from
406 * SHUTDOWN to TIDYING). This accommodates special-purpose
407 * queues such as DelayQueues for which poll() is allowed to
408 * return null even if it may later return non-null when delays
409 * expire.
410 */
411 private final BlockingQueue<Runnable> workQueue;
412
413 /**
414 * Lock held on access to workers set and related bookkeeping.
415 * While we could use a concurrent set of some sort, it turns out
416 * to be generally preferable to use a lock. Among the reasons is
417 * that this serializes interruptIdleWorkers, which avoids
418 * unnecessary interrupt storms, especially during shutdown.
419 * Otherwise exiting threads would concurrently interrupt those
420 * that have not yet interrupted. It also simplifies some of the
421 * associated statistics bookkeeping of largestPoolSize etc. We
422 * also hold mainLock on shutdown and shutdownNow, for the sake of
423 * ensuring workers set is stable while separately checking
424 * permission to interrupt and actually interrupting.
425 */
426 private final ReentrantLock mainLock = new ReentrantLock();
427
428 /**
429 * Set containing all worker threads in pool. Accessed only when
430 * holding mainLock.
431 */
432 private final HashSet<Worker> workers = new HashSet<Worker>();
433
434 /**
435 * Wait condition to support awaitTermination
436 */
437 private final Condition termination = mainLock.newCondition();
438
439 /**
440 * Tracks largest attained pool size. Accessed only under
441 * mainLock.
442 */
443 private int largestPoolSize;
444
445 /**
446 * Counter for completed tasks. Updated only on termination of
447 * worker threads. Accessed only under mainLock.
448 */
449 private long completedTaskCount;
450
451 /*
452 * All user control parameters are declared as volatiles so that
453 * ongoing actions are based on freshest values, but without need
454 * for locking, since no internal invariants depend on them
455 * changing synchronously with respect to other actions.
456 */
457
458 /**
459 * Factory for new threads. All threads are created using this
460 * factory (via method addWorker). All callers must be prepared
461 * for addWorker to fail, which may reflect a system or user's
462 * policy limiting the number of threads. Even though it is not
463 * treated as an error, failure to create threads may result in
464 * new tasks being rejected or existing ones remaining stuck in
465 * the queue. On the other hand, no special precautions exist to
466 * handle OutOfMemoryErrors that might be thrown while trying to
467 * create threads, since there is generally no recourse from
468 * within this class.
469 */
470 private volatile ThreadFactory threadFactory;
471
472 /**
473 * Handler called when saturated or shutdown in execute.
474 */
475 private volatile RejectedExecutionHandler handler;
476
477 /**
478 * Timeout in nanoseconds for idle threads waiting for work.
479 * Threads use this timeout when there are more than corePoolSize
480 * threads present or if allowCoreThreadTimeOut is set. Otherwise they wait
481 * forever for new work.
482 */
483 private volatile long keepAliveTime;
484
485 /**
486 * If false (default), core threads stay alive even when idle.
487 * If true, core threads use keepAliveTime to time out waiting
488 * for work.
489 */
490 private volatile boolean allowCoreThreadTimeOut;
491
492 /**
493 * Core pool size is the minimum number of workers to keep alive
494 * (and not allow to time out etc) unless allowCoreThreadTimeOut
495 * is set, in which case the minimum is zero.
496 */
497 private volatile int corePoolSize;
498
499 /**
500 * Maximum pool size. Note that the actual maximum is internally
501 * bounded by CAPACITY.
502 */
503 private volatile int maximumPoolSize;
504
505 /**
506 * The default rejected execution handler
507 */
508 private static final RejectedExecutionHandler defaultHandler =
509 new AbortPolicy();
510
511 /**
512 * Permission required for callers of shutdown and shutdownNow.
513 * We additionally require (see checkShutdownAccess) that callers
514 * have permission to actually interrupt threads in the worker set
515 * (as governed by Thread.interrupt, which relies on
516 * ThreadGroup.checkAccess, which in turn relies on
517 * SecurityManager.checkAccess). Shutdowns are attempted only if
518 * these checks pass.
519 *
520 * All actual invocations of Thread.interrupt (see
521 * interruptIdleWorkers and interruptWorkers) ignore
522 * SecurityExceptions, meaning that the attempted interrupts
523 * silently fail. In the case of shutdown, they should not fail
524 * unless the SecurityManager has inconsistent policies, sometimes
525 * allowing access to a thread and sometimes not. In such cases,
526 * failure to actually interrupt threads may disable or delay full
527 * termination. Other uses of interruptIdleWorkers are advisory,
528 * and failure to actually interrupt will merely delay response to
529 * configuration changes so is not handled exceptionally.
530 */
531 private static final RuntimePermission shutdownPerm =
532 new RuntimePermission("modifyThread");
533
534 /**
535 * Class Worker mainly maintains interrupt control state for
536 * threads running tasks, along with other minor bookkeeping. This
537 * class opportunistically extends ReentrantLock to simplify
538 * acquiring and releasing a lock surrounding each task execution.
539 * This protects against interrupts that are intended to wake up a
540 * worker thread waiting for a task from instead interrupting a
541 * task being run.
542 */
543 private final class Worker extends ReentrantLock implements Runnable {
544 /**
545 * This class will never be serialized, but we provide a
546 * serialVersionUID to suppress a javac warning.
547 */
548 private static final long serialVersionUID = 6138294804551838833L;
549
550 /** Thread this worker is running in. Null if factory fails. */
551 final Thread thread;
552 /** Initial task to run. Possibly null. */
553 Runnable firstTask;
554 /** Per-thread task counter */
555 volatile long completedTasks;
556
557 /**
558 * Creates with given first task and thread from ThreadFactory.
559 * @param firstTask the first task (null if none)
560 */
561 Worker(Runnable firstTask) {
562 this.firstTask = firstTask;
563 this.thread = getThreadFactory().newThread(this);
564 }
565
566 /** Delegates main run loop to outer runWorker */
567 public void run() {
568 runWorker(this);
569 }
570 }
571
572 /*
573 * Methods for setting control state
574 */
575
576 /**
577 * Transitions runState to given target, or leaves it alone if
578 * already at least the given target.
579 *
580 * @param targetState the desired state, either SHUTDOWN or STOP
581 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
582 */
583 private void advanceRunState(int targetState) {
584 for (;;) {
585 int c = ctl.get();
586 if (runStateAtLeast(c, targetState) ||
587 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
588 break;
589 }
590 }
591
592 /**
593 * Transitions to TERMINATED state if either (SHUTDOWN and pool
594 * and queue empty) or (STOP and pool empty). If otherwise
595 * eligible to terminate but workerCount is nonzero, interrupts an
596 * idle worker to ensure that shutdown signals propagate. This
597 * method must be called following any action that might make
598 * termination possible -- reducing worker count or removing tasks
599 * from the queue during shutdown. The method is non-private to
600 * allow access from ScheduledThreadPoolExecutor.
601 */
602 final void tryTerminate() {
603 for (;;) {
604 int c = ctl.get();
605 if (isRunning(c) ||
606 runStateAtLeast(c, TIDYING) ||
607 (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
608 return;
609 if (workerCountOf(c) != 0) { // Eligible to terminate
610 interruptIdleWorkers(ONLY_ONE);
611 return;
612 }
613
614 final ReentrantLock mainLock = this.mainLock;
615 mainLock.lock();
616 try {
617 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
618 try {
619 terminated();
620 } finally {
621 ctl.set(ctlOf(TERMINATED, 0));
622 termination.signalAll();
623 }
624 return;
625 }
626 } finally {
627 mainLock.unlock();
628 }
629 // else retry on failed CAS
630 }
631 }
632
633 /*
634 * Methods for controlling interrupts to worker threads.
635 */
636
637 /**
638 * If there is a security manager, makes sure caller has
639 * permission to shut down threads in general (see shutdownPerm).
640 * If this passes, additionally makes sure the caller is allowed
641 * to interrupt each worker thread. This might not be true even if
642 * first check passed, if the SecurityManager treats some threads
643 * specially.
644 */
645 private void checkShutdownAccess() {
646 SecurityManager security = System.getSecurityManager();
647 if (security != null) {
648 security.checkPermission(shutdownPerm);
649 final ReentrantLock mainLock = this.mainLock;
650 mainLock.lock();
651 try {
652 for (Worker w : workers)
653 security.checkAccess(w.thread);
654 } finally {
655 mainLock.unlock();
656 }
657 }
658 }
659
660 /**
661 * Interrupts all threads, even if active. Ignores SecurityExceptions
662 * (in which case some threads may remain uninterrupted).
663 */
664 private void interruptWorkers() {
665 final ReentrantLock mainLock = this.mainLock;
666 mainLock.lock();
667 try {
668 for (Worker w : workers) {
669 try {
670 w.thread.interrupt();
671 } catch (SecurityException ignore) {
672 }
673 }
674 } finally {
675 mainLock.unlock();
676 }
677 }
678
679 /**
680 * Interrupts threads that might be waiting for tasks (as
681 * indicated by not being locked) so they can check for
682 * termination or configuration changes. Ignores
683 * SecurityExceptions (in which case some threads may remain
684 * uninterrupted).
685 *
686 * @param onlyOne If true, interrupt at most one worker. This is
687 * called only from tryTerminate when termination is otherwise
688 * enabled but there are still other workers. In this case, at
689 * most one waiting worker is interrupted to propagate shutdown
690 * signals in case all threads are currently waiting.
691 * Interrupting any arbitrary thread ensures that newly arriving
692 * workers since shutdown began will also eventually exit.
693 * To guarantee eventual termination, it suffices to always
694 * interrupt only one idle worker, but shutdown() interrupts all
695 * idle workers so that redundant workers exit promptly, not
696 * waiting for a straggler task to finish.
697 */
698 private void interruptIdleWorkers(boolean onlyOne) {
699 final ReentrantLock mainLock = this.mainLock;
700 mainLock.lock();
701 try {
702 for (Worker w : workers) {
703 Thread t = w.thread;
704 if (!t.isInterrupted() && w.tryLock()) {
705 try {
706 t.interrupt();
707 } catch (SecurityException ignore) {
708 } finally {
709 w.unlock();
710 }
711 }
712 if (onlyOne)
713 break;
714 }
715 } finally {
716 mainLock.unlock();
717 }
718 }
719
720 /**
721 * Common form of interruptIdleWorkers, to avoid having to
722 * remember what the boolean argument means.
723 */
724 private void interruptIdleWorkers() {
725 interruptIdleWorkers(false);
726 }
727
728 private static final boolean ONLY_ONE = true;
729
730 /**
731 * Ensures that unless the pool is stopping, the current thread
732 * does not have its interrupt set. This requires a double-check
733 * of state in case the interrupt was cleared concurrently with a
734 * shutdownNow -- if so, the interrupt is re-enabled.
735 */
736 private void clearInterruptsForTaskRun() {
737 if (runStateLessThan(ctl.get(), STOP) &&
738 Thread.interrupted() &&
739 runStateAtLeast(ctl.get(), STOP))
740 Thread.currentThread().interrupt();
741 }
742
743 /*
744 * Misc utilities, most of which are also exported to
745 * ScheduledThreadPoolExecutor
746 */
747
748 /**
749 * Invokes the rejected execution handler for the given command.
750 * Package-protected for use by ScheduledThreadPoolExecutor.
751 */
752 final void reject(Runnable command) {
753 handler.rejectedExecution(command, this);
754 }
755
756 /**
757 * Performs any further cleanup following run state transition on
758 * invocation of shutdown. A no-op here, but used by
759 * ScheduledThreadPoolExecutor to cancel delayed tasks.
760 */
761 void onShutdown() {
762 }
763
764 /**
765 * State check needed by ScheduledThreadPoolExecutor to
766 * enable running tasks during shutdown.
767 *
768 * @param shutdownOK true if this method should return true when the state is SHUTDOWN
769 */
770 final boolean isRunningOrShutdown(boolean shutdownOK) {
771 int rs = runStateOf(ctl.get());
772 return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
773 }
774
775 /**
776 * Drains the task queue into a new list, normally using
777 * drainTo. But if the queue is a DelayQueue or any other kind of
778 * queue for which poll or drainTo may fail to remove some
779 * elements, it deletes them one by one.
780 */
781 private List<Runnable> drainQueue() {
782 BlockingQueue<Runnable> q = workQueue;
783 List<Runnable> taskList = new ArrayList<Runnable>();
784 q.drainTo(taskList);
785 if (!q.isEmpty()) {
786 for (Runnable r : q.toArray(new Runnable[0])) {
787 if (q.remove(r))
788 taskList.add(r);
789 }
790 }
791 return taskList;
792 }
793
794 /*
795 * Methods for creating, running and cleaning up after workers
796 */
797
798 /**
799 * Checks if a new worker can be added with respect to current
800 * pool state and the given bound (either core or maximum). If so,
801 * the worker count is adjusted accordingly, and, if possible, a
802 * new worker is created and started running firstTask as its
803 * first task. This method returns false if the pool is stopped or
804 * eligible to shut down. It also returns false if the thread
805 * factory fails to create a thread when asked, which requires a
806 * backout of workerCount, and a recheck for termination, in case
807 * the existence of this worker was holding up termination.
808 *
809 * @param firstTask the task the new thread should run first (or
810 * null if none). Workers are created with an initial first task
811 * (in method execute()) to bypass queuing when there are fewer
812 * than corePoolSize threads (in which case we always start one),
813 * or when the queue is full (in which case we must bypass the queue).
814 * Initially idle threads are usually created via
815 * prestartCoreThread or to replace other dying workers.
816 *
817 * @param core if true use corePoolSize as bound, else
818 * maximumPoolSize. (A boolean indicator is used here rather than a
819 * value to ensure reads of fresh values after checking other pool
820 * state).
821 * @return true if successful
822 */
823 private boolean addWorker(Runnable firstTask, boolean core) {
824 retry:
825 for (;;) {
826 int c = ctl.get();
827 int rs = runStateOf(c);
828
829 // Check if queue empty only if necessary.
830 if (rs >= SHUTDOWN &&
831 ! (rs == SHUTDOWN &&
832 firstTask == null &&
833 ! workQueue.isEmpty()))
834 return false;
835
836 for (;;) {
837 int wc = workerCountOf(c);
838 if (wc >= CAPACITY ||
839 wc >= (core ? corePoolSize : maximumPoolSize))
840 return false;
841 if (compareAndIncrementWorkerCount(c))
842 break retry;
843 c = ctl.get(); // Re-read ctl
844 if (runStateOf(c) != rs)
845 continue retry;
846 // else CAS failed due to workerCount change; retry inner loop
847 }
848 }
849
850 Worker w = new Worker(firstTask);
851 Thread t = w.thread;
852
853 final ReentrantLock mainLock = this.mainLock;
854 mainLock.lock();
855 try {
856 // Recheck while holding lock.
857 // Back out on ThreadFactory failure or if
858 // shut down before lock acquired.
859 int c = ctl.get();
860 int rs = runStateOf(c);
861
862 if (t == null ||
863 (rs >= SHUTDOWN &&
864 ! (rs == SHUTDOWN &&
865 firstTask == null))) {
866 decrementWorkerCount();
867 tryTerminate();
868 return false;
869 }
870
871 workers.add(w);
872
873 int s = workers.size();
874 if (s > largestPoolSize)
875 largestPoolSize = s;
876 } finally {
877 mainLock.unlock();
878 }
879
880 t.start();
881 // It is possible (but unlikely) for a thread to have been
882 // added to workers, but not yet started, during transition to
883 // STOP, which could result in a rare missed interrupt,
884 // because Thread.interrupt is not guaranteed to have any effect
885 * on a not-yet-started Thread (see Thread#interrupt).
886 if (runStateOf(ctl.get()) == STOP && ! t.isInterrupted())
887 t.interrupt();
888
889 return true;
890 }
891
892 /**
893 * Performs cleanup and bookkeeping for a dying worker. Called
894 * only from worker threads. Unless completedAbruptly is set,
895 * assumes that workerCount has already been adjusted to account
896 * for exit. This method removes the thread from the worker set, and
897 * possibly terminates the pool or replaces the worker if either
898 * it exited due to user task exception or if fewer than
899 * corePoolSize workers are running or queue is non-empty but
900 * there are no workers.
901 *
902 * @param w the worker
903 * @param completedAbruptly if the worker died due to user exception
904 */
905 private void processWorkerExit(Worker w, boolean completedAbruptly) {
906 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
907 decrementWorkerCount();
908
909 final ReentrantLock mainLock = this.mainLock;
910 mainLock.lock();
911 try {
912 completedTaskCount += w.completedTasks;
913 workers.remove(w);
914 } finally {
915 mainLock.unlock();
916 }
917
918 tryTerminate();
919
920 int c = ctl.get();
921 if (runStateLessThan(c, STOP)) {
922 if (!completedAbruptly) {
923 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
924 if (min == 0 && ! workQueue.isEmpty())
925 min = 1;
926 if (workerCountOf(c) >= min)
927 return; // replacement not needed
928 }
929 addWorker(null, false);
930 }
931 }
932
933 /**
934 * Performs blocking or timed wait for a task, depending on
935 * current configuration settings, or returns null if this worker
936 * must exit because of any of:
937 * 1. There are more than maximumPoolSize workers (due to
938 * a call to setMaximumPoolSize).
939 * 2. The pool is stopped.
940 * 3. The pool is shut down and the queue is empty.
941 * 4. This worker timed out waiting for a task, and timed-out
942 * workers are subject to termination (that is,
943 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
944 * both before and after the timed wait.
945 *
946 * @return task, or null if the worker must exit, in which case
947 * workerCount is decremented
948 */
949 private Runnable getTask() {
950 boolean timedOut = false; // Did the last poll() time out?
951
952 retry:
953 for (;;) {
954 int c = ctl.get();
955 int rs = runStateOf(c);
956
957 // Check if queue empty only if necessary.
958 if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
959 decrementWorkerCount();
960 return null;
961 }
962
963 boolean timed; // Are workers subject to culling?
964
965 for (;;) {
966 int wc = workerCountOf(c);
967 timed = allowCoreThreadTimeOut || wc > corePoolSize;
968
969 if (wc <= maximumPoolSize && ! (timedOut && timed))
970 break;
971 if (compareAndDecrementWorkerCount(c))
972 return null;
973 c = ctl.get(); // Re-read ctl
974 if (runStateOf(c) != rs)
975 continue retry;
976 // else CAS failed due to workerCount change; retry inner loop
977 }
978
979 try {
980 Runnable r = timed ?
981 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
982 workQueue.take();
983 if (r != null)
984 return r;
985 timedOut = true;
986 } catch (InterruptedException retry) {
987 timedOut = false;
988 }
989 }
990 }
991
992 /**
993 * Main worker run loop. Repeatedly gets tasks from queue and
994 * executes them, while coping with a number of issues:
995 *
996 * 1. We may start out with an initial task, in which case we
997 * don't need to get the first one. Otherwise, as long as pool is
998 * running, we get tasks from getTask. If it returns null then the
999 * worker exits due to changed pool state or configuration
1000 * parameters. Other exits result from exception throws in
1001 * external code, in which case completedAbruptly holds, which
1002 * usually leads processWorkerExit to replace this thread.
1003 *
1004 * 2. Before running any task, the lock is acquired to prevent
1005 * other pool interrupts while the task is executing, and
1006 * clearInterruptsForTaskRun called to ensure that unless pool is
1007 * stopping, this thread does not have its interrupt set.
1008 *
1009 * 3. Each task run is preceded by a call to beforeExecute, which
1010 * might throw an exception, in which case we cause thread to die
1011 * (breaking loop with completedAbruptly true) without processing
1012 * the task.
1013 *
1014 * 4. Assuming beforeExecute completes normally, we run the task,
1015 * gathering any of its thrown exceptions to send to
1016 * afterExecute. We separately handle RuntimeException, Error
1017 * (both of which the specs guarantee that we trap) and arbitrary
1018 * Throwables. Because we cannot rethrow Throwables within
1019 * Runnable.run, we wrap them within Errors on the way out (to the
1020 * thread's UncaughtExceptionHandler). Any thrown exception also
1021 * conservatively causes thread to die.
1022 *
1023 * 5. After task.run completes, we call afterExecute, which may
1024 * also throw an exception, which will also cause thread to
1025 * die. According to JLS Sec 14.20, this exception is the one that
1026 * will be in effect even if task.run throws.
1027 *
1028 * The net effect of the exception mechanics is that afterExecute
1029 * and the thread's UncaughtExceptionHandler have as accurate
1030 * information as we can provide about any problems encountered by
1031 * user code.
1032 *
1033 * @param w the worker
1034 */
1035 final void runWorker(Worker w) {
1036 Runnable task = w.firstTask;
1037 w.firstTask = null;
1038 boolean completedAbruptly = true;
1039 try {
1040 while (task != null || (task = getTask()) != null) {
1041 w.lock();
1042 clearInterruptsForTaskRun();
1043 try {
1044 beforeExecute(w.thread, task);
1045 Throwable thrown = null;
1046 try {
1047 task.run();
1048 } catch (RuntimeException x) {
1049 thrown = x; throw x;
1050 } catch (Error x) {
1051 thrown = x; throw x;
1052 } catch (Throwable x) {
1053 thrown = x; throw new Error(x);
1054 } finally {
1055 afterExecute(task, thrown);
1056 }
1057 } finally {
1058 task = null;
1059 w.completedTasks++;
1060 w.unlock();
1061 }
1062 }
1063 completedAbruptly = false;
1064 } finally {
1065 processWorkerExit(w, completedAbruptly);
1066 }
1067 }
1068
1069 // Public constructors and methods
1070
1071 /**
1072 * Creates a new {@code ThreadPoolExecutor} with the given initial
1073 * parameters and default thread factory and rejected execution handler.
1074 * It may be more convenient to use one of the {@link Executors} factory
1075 * methods instead of this general purpose constructor.
1076 *
1077 * @param corePoolSize the number of threads to keep in the pool, even
1078 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1079 * @param maximumPoolSize the maximum number of threads to allow in the
1080 * pool
1081 * @param keepAliveTime when the number of threads is greater than
1082 * the core, this is the maximum time that excess idle threads
1083 * will wait for new tasks before terminating.
1084 * @param unit the time unit for the {@code keepAliveTime} argument
1085 * @param workQueue the queue to use for holding tasks before they are
1086 * executed. This queue will hold only the {@code Runnable}
1087 * tasks submitted by the {@code execute} method.
1088 * @throws IllegalArgumentException if one of the following holds:<br>
1089 * {@code corePoolSize < 0}<br>
1090 * {@code keepAliveTime < 0}<br>
1091 * {@code maximumPoolSize <= 0}<br>
1092 * {@code maximumPoolSize < corePoolSize}
1093 * @throws NullPointerException if {@code workQueue} is null
1094 */
1095 public ThreadPoolExecutor(int corePoolSize,
1096 int maximumPoolSize,
1097 long keepAliveTime,
1098 TimeUnit unit,
1099 BlockingQueue<Runnable> workQueue) {
1100 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1101 Executors.defaultThreadFactory(), defaultHandler);
1102 }
1103
1104 /**
1105 * Creates a new {@code ThreadPoolExecutor} with the given initial
1106 * parameters and default rejected execution handler.
1107 *
1108 * @param corePoolSize the number of threads to keep in the pool, even
1109 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1110 * @param maximumPoolSize the maximum number of threads to allow in the
1111 * pool
1112 * @param keepAliveTime when the number of threads is greater than
1113 * the core, this is the maximum time that excess idle threads
1114 * will wait for new tasks before terminating.
1115 * @param unit the time unit for the {@code keepAliveTime} argument
1116 * @param workQueue the queue to use for holding tasks before they are
1117 * executed. This queue will hold only the {@code Runnable}
1118 * tasks submitted by the {@code execute} method.
1119 * @param threadFactory the factory to use when the executor
1120 * creates a new thread
1121 * @throws IllegalArgumentException if one of the following holds:<br>
1122 * {@code corePoolSize < 0}<br>
1123 * {@code keepAliveTime < 0}<br>
1124 * {@code maximumPoolSize <= 0}<br>
1125 * {@code maximumPoolSize < corePoolSize}
1126 * @throws NullPointerException if {@code workQueue}
1127 * or {@code threadFactory} is null
1128 */
1129 public ThreadPoolExecutor(int corePoolSize,
1130 int maximumPoolSize,
1131 long keepAliveTime,
1132 TimeUnit unit,
1133 BlockingQueue<Runnable> workQueue,
1134 ThreadFactory threadFactory) {
1135 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1136 threadFactory, defaultHandler);
1137 }
1138
1139 /**
1140 * Creates a new {@code ThreadPoolExecutor} with the given initial
1141 * parameters and default thread factory.
1142 *
1143 * @param corePoolSize the number of threads to keep in the pool, even
1144 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1145 * @param maximumPoolSize the maximum number of threads to allow in the
1146 * pool
1147 * @param keepAliveTime when the number of threads is greater than
1148 * the core, this is the maximum time that excess idle threads
1149 * will wait for new tasks before terminating.
1150 * @param unit the time unit for the {@code keepAliveTime} argument
1151 * @param workQueue the queue to use for holding tasks before they are
1152 * executed. This queue will hold only the {@code Runnable}
1153 * tasks submitted by the {@code execute} method.
1154 * @param handler the handler to use when execution is blocked
1155 * because the thread bounds and queue capacities are reached
1156 * @throws IllegalArgumentException if one of the following holds:<br>
1157 * {@code corePoolSize < 0}<br>
1158 * {@code keepAliveTime < 0}<br>
1159 * {@code maximumPoolSize <= 0}<br>
1160 * {@code maximumPoolSize < corePoolSize}
1161 * @throws NullPointerException if {@code workQueue}
1162 * or {@code handler} is null
1163 */
1164 public ThreadPoolExecutor(int corePoolSize,
1165 int maximumPoolSize,
1166 long keepAliveTime,
1167 TimeUnit unit,
1168 BlockingQueue<Runnable> workQueue,
1169 RejectedExecutionHandler handler) {
1170 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1171 Executors.defaultThreadFactory(), handler);
1172 }
1173
1174 /**
1175 * Creates a new {@code ThreadPoolExecutor} with the given initial
1176 * parameters.
1177 *
1178 * @param corePoolSize the number of threads to keep in the pool, even
1179 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1180 * @param maximumPoolSize the maximum number of threads to allow in the
1181 * pool
1182 * @param keepAliveTime when the number of threads is greater than
1183 * the core, this is the maximum time that excess idle threads
1184 * will wait for new tasks before terminating.
1185 * @param unit the time unit for the {@code keepAliveTime} argument
1186 * @param workQueue the queue to use for holding tasks before they are
1187 * executed. This queue will hold only the {@code Runnable}
1188 * tasks submitted by the {@code execute} method.
1189 * @param threadFactory the factory to use when the executor
1190 * creates a new thread
1191 * @param handler the handler to use when execution is blocked
1192 * because the thread bounds and queue capacities are reached
1193 * @throws IllegalArgumentException if one of the following holds:<br>
1194 * {@code corePoolSize < 0}<br>
1195 * {@code keepAliveTime < 0}<br>
1196 * {@code maximumPoolSize <= 0}<br>
1197 * {@code maximumPoolSize < corePoolSize}
1198 * @throws NullPointerException if {@code workQueue}
1199 * or {@code threadFactory} or {@code handler} is null
1200 */
1201 public ThreadPoolExecutor(int corePoolSize,
1202 int maximumPoolSize,
1203 long keepAliveTime,
1204 TimeUnit unit,
1205 BlockingQueue<Runnable> workQueue,
1206 ThreadFactory threadFactory,
1207 RejectedExecutionHandler handler) {
1208 if (corePoolSize < 0 ||
1209 maximumPoolSize <= 0 ||
1210 maximumPoolSize < corePoolSize ||
1211 keepAliveTime < 0)
1212 throw new IllegalArgumentException();
1213 if (workQueue == null || threadFactory == null || handler == null)
1214 throw new NullPointerException();
1215 this.corePoolSize = corePoolSize;
1216 this.maximumPoolSize = maximumPoolSize;
1217 this.workQueue = workQueue;
1218 this.keepAliveTime = unit.toNanos(keepAliveTime);
1219 this.threadFactory = threadFactory;
1220 this.handler = handler;
1221 }
1222
1223 /**
1224 * Executes the given task sometime in the future. The task
1225 * may execute in a new thread or in an existing pooled thread.
1226 *
1227 * If the task cannot be submitted for execution, either because this
1228 * executor has been shutdown or because its capacity has been reached,
1229 * the task is handled by the current {@code RejectedExecutionHandler}.
1230 *
1231 * @param command the task to execute
1232 * @throws RejectedExecutionException at discretion of
1233 * {@code RejectedExecutionHandler}, if the task
1234 * cannot be accepted for execution
1235 * @throws NullPointerException if {@code command} is null
1236 */
1237 public void execute(Runnable command) {
1238 if (command == null)
1239 throw new NullPointerException();
1240 /*
1241 * Proceed in 3 steps:
1242 *
1243 * 1. If fewer than corePoolSize threads are running, try to
1244 * start a new thread with the given command as its first
1245 * task. The call to addWorker atomically checks runState and
1246 * workerCount, and so prevents false alarms that would add
1247 * threads when it shouldn't, by returning false.
1248 *
1249 * 2. If a task can be successfully queued, then we still need
1250 * to double-check whether we should have added a thread
1251 * (because existing ones died since last checking) or that
1252 * the pool shut down since entry into this method. So we
1253 * recheck state and if necessary roll back the enqueuing if
1254 * stopped, or start a new thread if there are none.
1255 *
1256 * 3. If we cannot queue the task, then we try to add a new
1257 * thread. If it fails, we know we are shut down or saturated
1258 * and so reject the task.
1259 */
1260 int c = ctl.get();
1261 if (workerCountOf(c) < corePoolSize) {
1262 if (addWorker(command, true))
1263 return;
1264 c = ctl.get();
1265 }
1266 if (isRunning(c) && workQueue.offer(command)) {
1267 int recheck = ctl.get();
1268 if (! isRunning(recheck) && remove(command))
1269 reject(command);
1270 else if (workerCountOf(recheck) == 0)
1271 addWorker(null, false);
1272 }
1273 else if (!addWorker(command, false))
1274 reject(command);
1275 }
1276
1277 /**
1278 * Initiates an orderly shutdown in which previously submitted
1279 * tasks are executed, but no new tasks will be accepted.
1280 * Invocation has no additional effect if already shut down.
1281 *
1282 * @throws SecurityException {@inheritDoc}
1283 */
1284 public void shutdown() {
1285 final ReentrantLock mainLock = this.mainLock;
1286 mainLock.lock();
1287 try {
1288 checkShutdownAccess();
1289 advanceRunState(SHUTDOWN);
1290 interruptIdleWorkers();
1291 onShutdown(); // hook for ScheduledThreadPoolExecutor
1292 } finally {
1293 mainLock.unlock();
1294 }
1295 tryTerminate();
1296 }
1297
1298 /**
1299 * Attempts to stop all actively executing tasks, halts the
1300 * processing of waiting tasks, and returns a list of the tasks
1301 * that were awaiting execution. These tasks are drained (removed)
1302 * from the task queue upon return from this method.
1303 *
1304 * <p>There are no guarantees beyond best-effort attempts to stop
1305 * processing actively executing tasks. This implementation
1306 * cancels tasks via {@link Thread#interrupt}, so any task that
1307 * fails to respond to interrupts may never terminate.
1308 *
1309 * @throws SecurityException {@inheritDoc}
1310 */
1311 public List<Runnable> shutdownNow() {
1312 List<Runnable> tasks;
1313 final ReentrantLock mainLock = this.mainLock;
1314 mainLock.lock();
1315 try {
1316 checkShutdownAccess();
1317 advanceRunState(STOP);
1318 interruptWorkers();
1319 tasks = drainQueue();
1320 } finally {
1321 mainLock.unlock();
1322 }
1323 tryTerminate();
1324 return tasks;
1325 }
1326
1327 public boolean isShutdown() {
1328 return ! isRunning(ctl.get());
1329 }
1330
1331 /**
1332 * Returns true if this executor is in the process of terminating
1333 * after {@link #shutdown} or {@link #shutdownNow} but has not
1334 * completely terminated. This method may be useful for
1335 * debugging. A return of {@code true} reported a sufficient
1336 * period after shutdown may indicate that submitted tasks have
1337 * ignored or suppressed interruption, causing this executor not
1338 * to properly terminate.
1339 *
1340 * @return true if terminating but not yet terminated
1341 */
1342 public boolean isTerminating() {
1343 int c = ctl.get();
1344 return ! isRunning(c) && runStateLessThan(c, TERMINATED);
1345 }
1346
1347 public boolean isTerminated() {
1348 return runStateAtLeast(ctl.get(), TERMINATED);
1349 }
1350
1351 public boolean awaitTermination(long timeout, TimeUnit unit)
1352 throws InterruptedException {
1353 long nanos = unit.toNanos(timeout);
1354 final ReentrantLock mainLock = this.mainLock;
1355 mainLock.lock();
1356 try {
1357 for (;;) {
1358 if (runStateAtLeast(ctl.get(), TERMINATED))
1359 return true;
1360 if (nanos <= 0)
1361 return false;
1362 nanos = termination.awaitNanos(nanos);
1363 }
1364 } finally {
1365 mainLock.unlock();
1366 }
1367 }
1368
1369 /**
1370 * Invokes {@code shutdown} when this executor is no longer
1371 * referenced and it has no threads.
1372 */
1373 protected void finalize() {
1374 shutdown();
1375 }
1376
1377 /**
1378 * Sets the thread factory used to create new threads.
1379 *
1380 * @param threadFactory the new thread factory
1381 * @throws NullPointerException if threadFactory is null
1382 * @see #getThreadFactory
1383 */
1384 public void setThreadFactory(ThreadFactory threadFactory) {
1385 if (threadFactory == null)
1386 throw new NullPointerException();
1387 this.threadFactory = threadFactory;
1388 }
1389
1390 /**
1391 * Returns the thread factory used to create new threads.
1392 *
1393 * @return the current thread factory
1394 * @see #setThreadFactory
1395 */
1396 public ThreadFactory getThreadFactory() {
1397 return threadFactory;
1398 }
1399
1400 /**
1401 * Sets a new handler for unexecutable tasks.
1402 *
1403 * @param handler the new handler
1404 * @throws NullPointerException if handler is null
1405 * @see #getRejectedExecutionHandler
1406 */
1407 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1408 if (handler == null)
1409 throw new NullPointerException();
1410 this.handler = handler;
1411 }
1412
1413 /**
1414 * Returns the current handler for unexecutable tasks.
1415 *
1416 * @return the current handler
1417 * @see #setRejectedExecutionHandler
1418 */
1419 public RejectedExecutionHandler getRejectedExecutionHandler() {
1420 return handler;
1421 }
1422
1423 /**
1424 * Sets the core number of threads. This overrides any value set
1425 * in the constructor. If the new value is smaller than the
1426 * current value, excess existing threads will be terminated when
1427 * they next become idle. If larger, new threads will, if needed,
1428 * be started to execute any queued tasks.
1429 *
1430 * @param corePoolSize the new core size
1431 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1432 * @see #getCorePoolSize
1433 */
1434 public void setCorePoolSize(int corePoolSize) {
1435 if (corePoolSize < 0)
1436 throw new IllegalArgumentException();
1437 int delta = corePoolSize - this.corePoolSize;
1438 this.corePoolSize = corePoolSize;
1439 if (workerCountOf(ctl.get()) > corePoolSize)
1440 interruptIdleWorkers();
1441 else if (delta > 0) {
1442 // We don't really know how many new threads are "needed".
1443 // As a heuristic, prestart enough new workers (up to new
1444 // core size) to handle the current number of tasks in
1445 // queue, but stop if queue becomes empty while doing so.
1446 int k = Math.min(delta, workQueue.size());
1447 while (k-- > 0 && addWorker(null, true)) {
1448 if (workQueue.isEmpty())
1449 break;
1450 }
1451 }
1452 }
1453
1454 /**
1455 * Returns the core number of threads.
1456 *
1457 * @return the core number of threads
1458 * @see #setCorePoolSize
1459 */
1460 public int getCorePoolSize() {
1461 return corePoolSize;
1462 }
1463
1464 /**
1465 * Starts a core thread, causing it to idly wait for work. This
1466 * overrides the default policy of starting core threads only when
1467 * new tasks are executed. This method will return {@code false}
1468 * if all core threads have already been started.
1469 *
1470 * @return {@code true} if a thread was started
1471 */
1472 public boolean prestartCoreThread() {
1473 return workerCountOf(ctl.get()) < corePoolSize &&
1474 addWorker(null, true);
1475 }
1476
1477 /**
1478 * Starts all core threads, causing them to idly wait for work. This
1479 * overrides the default policy of starting core threads only when
1480 * new tasks are executed.
1481 *
1482 * @return the number of threads started
1483 */
1484 public int prestartAllCoreThreads() {
1485 int n = 0;
1486 while (addWorker(null, true))
1487 ++n;
1488 return n;
1489 }
1490
1491 /**
1492 * Returns true if this pool allows core threads to time out and
1493 * terminate if no tasks arrive within the keepAlive time, being
1494 * replaced if needed when new tasks arrive. When true, the same
1495 * keep-alive policy applying to non-core threads applies also to
1496 * core threads. When false (the default), core threads are never
1497 * terminated due to lack of incoming tasks.
1498 *
1499 * @return {@code true} if core threads are allowed to time out,
1500 * else {@code false}
1501 *
1502 * @since 1.6
1503 */
1504 public boolean allowsCoreThreadTimeOut() {
1505 return allowCoreThreadTimeOut;
1506 }
1507
1508 /**
1509 * Sets the policy governing whether core threads may time out and
1510 * terminate if no tasks arrive within the keep-alive time, being
1511 * replaced if needed when new tasks arrive. When false, core
1512 * threads are never terminated due to lack of incoming
1513 * tasks. When true, the same keep-alive policy applying to
1514 * non-core threads applies also to core threads. To avoid
1515 * continual thread replacement, the keep-alive time must be
1516 * greater than zero when setting {@code true}. This method
1517 * should in general be called before the pool is actively used.
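 *
 * <p>For example (an illustrative sketch only), an otherwise fixed-size
 * pool can be allowed to shrink to zero threads after a minute of
 * inactivity:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
 * pool.allowCoreThreadTimeOut(true);  // keep-alive time is nonzero
 * }</pre>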
1518 *
1519 * @param value {@code true} if threads should time out, else {@code false}
1520 * @throws IllegalArgumentException if value is {@code true}
1521 * and the current keep-alive time is not greater than zero
1522 *
1523 * @since 1.6
1524 */
1525 public void allowCoreThreadTimeOut(boolean value) {
1526 if (value && keepAliveTime <= 0)
1527 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1528 if (value != allowCoreThreadTimeOut) {
1529 allowCoreThreadTimeOut = value;
1530 if (value)
1531 interruptIdleWorkers();
1532 }
1533 }
1534
1535 /**
1536 * Sets the maximum allowed number of threads. This overrides any
1537 * value set in the constructor. If the new value is smaller than
1538 * the current value, excess existing threads will be
1539 * terminated when they next become idle.
1540 *
1541 * @param maximumPoolSize the new maximum
1542 * @throws IllegalArgumentException if the new maximum is
1543 * less than or equal to zero, or
1544 * less than the {@linkplain #getCorePoolSize core pool size}
1545 * @see #getMaximumPoolSize
1546 */
1547 public void setMaximumPoolSize(int maximumPoolSize) {
1548 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1549 throw new IllegalArgumentException();
1550 this.maximumPoolSize = maximumPoolSize;
1551 if (workerCountOf(ctl.get()) > maximumPoolSize)
1552 interruptIdleWorkers();
1553 }
1554
1555 /**
1556 * Returns the maximum allowed number of threads.
1557 *
1558 * @return the maximum allowed number of threads
1559 * @see #setMaximumPoolSize
1560 */
1561 public int getMaximumPoolSize() {
1562 return maximumPoolSize;
1563 }
1564
1565 /**
1566 * Sets the time limit for which threads may remain idle before
1567 * being terminated. If there are more than the core number of
1568 * threads currently in the pool, after waiting this amount of
1569 * time without processing a task, excess threads will be
1570 * terminated. This overrides any value set in the constructor.
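 *
 * <p>For example (an illustrative sketch; {@code pool} is assumed to be
 * an existing {@code ThreadPoolExecutor}):
 *
 * <pre> {@code
 * pool.setKeepAliveTime(30L, TimeUnit.SECONDS);
 * long ms = pool.getKeepAliveTime(TimeUnit.MILLISECONDS);  // 30000
 * }</pre>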
1571 *
1572 * @param time the time to wait. A time value of zero will cause
1573 * excess threads to terminate immediately after executing tasks.
1574 * @param unit the time unit of the {@code time} argument
1575 * @throws IllegalArgumentException if {@code time} is less than zero or
1576 * if {@code time} is zero and {@code allowsCoreThreadTimeOut}
1577 * @see #getKeepAliveTime
1578 */
1579 public void setKeepAliveTime(long time, TimeUnit unit) {
1580 if (time < 0)
1581 throw new IllegalArgumentException();
1582 if (time == 0 && allowsCoreThreadTimeOut())
1583 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1584 long keepAliveTime = unit.toNanos(time);
1585 long delta = keepAliveTime - this.keepAliveTime;
1586 this.keepAliveTime = keepAliveTime;
1587 if (delta < 0)
1588 interruptIdleWorkers();
1589 }
1590
1591 /**
1592 * Returns the thread keep-alive time, which is the amount of time
1593 * that threads in excess of the core pool size may remain
1594 * idle before being terminated.
1595 *
1596 * @param unit the desired time unit of the result
1597 * @return the time limit
1598 * @see #setKeepAliveTime
1599 */
1600 public long getKeepAliveTime(TimeUnit unit) {
1601 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1602 }
1603
1604 /* User-level queue utilities */
1605
1606 /**
1607 * Returns the task queue used by this executor. Access to the
1608 * task queue is intended primarily for debugging and monitoring.
1609 * This queue may be in active use. Retrieving the task queue
1610 * does not prevent queued tasks from executing.
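 *
 * <p>For example (an illustrative monitoring sketch; {@code pool} is
 * assumed to be an existing {@code ThreadPoolExecutor}), the current
 * backlog can be sampled without disturbing queued tasks:
 *
 * <pre> {@code
 * int backlog = pool.getQueue().size();
 * }</pre>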
1611 *
1612 * @return the task queue
1613 */
1614 public BlockingQueue<Runnable> getQueue() {
1615 return workQueue;
1616 }
1617
1618 /**
1619 * Removes this task from the executor's internal queue if it is
1620 * present, thus causing it not to be run if it has not already
1621 * started.
1622 *
1623 * <p> This method may be useful as one part of a cancellation
1624 * scheme. It may fail to remove tasks that have been converted
1625 * into other forms before being placed on the internal queue. For
1626 * example, a task entered using {@code submit} might be
1627 * converted into a form that maintains {@code Future} status.
1628 * However, in such cases, method {@link #purge} may be used to
1629 * remove those Futures that have been cancelled.
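 *
 * <p>For example (an illustrative sketch; {@code pool} is assumed to be
 * an existing {@code ThreadPoolExecutor} and {@code someTask} is a
 * hypothetical {@code Runnable}), a task submitted via {@code submit}
 * is typically cancelled through its {@code Future} and then reclaimed
 * with {@link #purge} rather than removed directly:
 *
 * <pre> {@code
 * Future<?> f = pool.submit(someTask);
 * f.cancel(false);  // the task will not run if not already started
 * pool.purge();     // reclaims the cancelled entry from the queue
 * }</pre>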
1630 *
1631 * @param task the task to remove
1632 * @return {@code true} if the task was removed
1633 */
1634 public boolean remove(Runnable task) {
1635 boolean removed = workQueue.remove(task);
1636 tryTerminate(); // In case SHUTDOWN and now empty
1637 return removed;
1638 }
1639
1640 /**
1641 * Tries to remove from the work queue all {@link Future}
1642 * tasks that have been cancelled. This method can be useful as a
1643 * storage reclamation operation that has no other impact on
1644 * functionality. Cancelled tasks are never executed, but may
1645 * accumulate in work queues until worker threads can actively
1646 * remove them. Invoking this method instead tries to remove them now.
1647 * However, this method may fail to remove tasks in
1648 * the presence of interference by other threads.
1649 */
1650 public void purge() {
1651 final BlockingQueue<Runnable> q = workQueue;
1652 try {
1653 Iterator<Runnable> it = q.iterator();
1654 while (it.hasNext()) {
1655 Runnable r = it.next();
1656 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1657 it.remove();
1658 }
1659 } catch (ConcurrentModificationException fallThrough) {
1660 // Take slow path if we encounter interference during traversal.
1661 // Make copy for traversal and call remove for cancelled entries.
1662 // The slow path is more likely to be O(N*N).
1663 for (Object r : q.toArray())
1664 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1665 q.remove(r);
1666 }
1667
1668 tryTerminate(); // In case SHUTDOWN and now empty
1669 }
1670
1671 /* Statistics */
1672
1673 /**
1674 * Returns the current number of threads in the pool.
1675 *
1676 * @return the number of threads
1677 */
1678 public int getPoolSize() {
1679 final ReentrantLock mainLock = this.mainLock;
1680 mainLock.lock();
1681 try {
1682 // Remove rare and surprising possibility of
1683 // isTerminated() && getPoolSize() > 0
1684 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1685 : workers.size();
1686 } finally {
1687 mainLock.unlock();
1688 }
1689 }
1690
1691 /**
1692 * Returns the approximate number of threads that are actively
1693 * executing tasks.
1694 *
1695 * @return the number of threads
1696 */
1697 public int getActiveCount() {
1698 final ReentrantLock mainLock = this.mainLock;
1699 mainLock.lock();
1700 try {
1701 int n = 0;
1702 for (Worker w : workers)
1703 if (w.isLocked())
1704 ++n;
1705 return n;
1706 } finally {
1707 mainLock.unlock();
1708 }
1709 }
1710
1711 /**
1712 * Returns the largest number of threads that have ever
1713 * simultaneously been in the pool.
1714 *
1715 * @return the number of threads
1716 */
1717 public int getLargestPoolSize() {
1718 final ReentrantLock mainLock = this.mainLock;
1719 mainLock.lock();
1720 try {
1721 return largestPoolSize;
1722 } finally {
1723 mainLock.unlock();
1724 }
1725 }
1726
1727 /**
1728 * Returns the approximate total number of tasks that have ever been
1729 * scheduled for execution. Because the states of tasks and
1730 * threads may change dynamically during computation, the returned
1731 * value is only an approximation.
1732 *
1733 * @return the number of tasks
1734 */
1735 public long getTaskCount() {
1736 final ReentrantLock mainLock = this.mainLock;
1737 mainLock.lock();
1738 try {
1739 long n = completedTaskCount;
1740 for (Worker w : workers) {
1741 n += w.completedTasks;
1742 if (w.isLocked())
1743 ++n;
1744 }
1745 return n + workQueue.size();
1746 } finally {
1747 mainLock.unlock();
1748 }
1749 }
1750
1751 /**
1752 * Returns the approximate total number of tasks that have
1753 * completed execution. Because the states of tasks and threads
1754 * may change dynamically during computation, the returned value
1755 * is only an approximation, but one that does not ever decrease
1756 * across successive calls.
1757 *
1758 * @return the number of tasks
1759 */
1760 public long getCompletedTaskCount() {
1761 final ReentrantLock mainLock = this.mainLock;
1762 mainLock.lock();
1763 try {
1764 long n = completedTaskCount;
1765 for (Worker w : workers)
1766 n += w.completedTasks;
1767 return n;
1768 } finally {
1769 mainLock.unlock();
1770 }
1771 }
1772
1773 /* Extension hooks */
1774
1775 /**
1776 * Method invoked prior to executing the given Runnable in the
1777 * given thread. This method is invoked by thread {@code t} that
1778 * will execute task {@code r}, and may be used to re-initialize
1779 * ThreadLocals, or to perform logging.
1780 *
1781 * <p>This implementation does nothing, but may be customized in
1782 * subclasses. Note: To properly nest multiple overridings, subclasses
1783 * should generally invoke {@code super.beforeExecute} at the end of
1784 * this method.
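 *
 * <p>For example, a minimal logging sketch (the class name is a
 * placeholder and constructors are elided), in the same spirit as the
 * {@code afterExecute} sample below:
 *
 * <pre> {@code
 * class LoggingExecutor extends ThreadPoolExecutor {
 *   // ... constructors ...
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     System.out.println(t.getName() + " about to run " + r);
 *     super.beforeExecute(t, r);  // invoked last, to nest properly
 *   }
 * }}</pre>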
1785 *
1786 * @param t the thread that will run task {@code r}
1787 * @param r the task that will be executed
1788 */
1789 protected void beforeExecute(Thread t, Runnable r) { }
1790
1791 /**
1792 * Method invoked upon completion of execution of the given Runnable.
1793 * This method is invoked by the thread that executed the task. If
1794 * non-null, the Throwable is the uncaught {@code RuntimeException}
1795 * or {@code Error} that caused execution to terminate abruptly.
1796 *
1797 * <p>This implementation does nothing, but may be customized in
1798 * subclasses. Note: To properly nest multiple overridings, subclasses
1799 * should generally invoke {@code super.afterExecute} at the
1800 * beginning of this method.
1801 *
1802 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1803 * {@link FutureTask}) either explicitly or via methods such as
1804 * {@code submit}, these task objects catch and maintain
1805 * computational exceptions, and so they do not cause abrupt
1806 * termination, and the internal exceptions are <em>not</em>
1807 * passed to this method. If you would like to trap both kinds of
1808 * failures in this method, you can further probe for such cases,
1809 * as in this sample subclass that prints either the direct cause
1810 * or the underlying exception if a task has been aborted:
1811 *
1812 * <pre> {@code
1813 * class ExtendedExecutor extends ThreadPoolExecutor {
1814 * // ...
1815 * protected void afterExecute(Runnable r, Throwable t) {
1816 * super.afterExecute(r, t);
1817 * if (t == null && r instanceof Future<?>) {
1818 * try {
1819 * Object result = ((Future<?>) r).get();
1820 * } catch (CancellationException ce) {
1821 * t = ce;
1822 * } catch (ExecutionException ee) {
1823 * t = ee.getCause();
1824 * } catch (InterruptedException ie) {
1825 * Thread.currentThread().interrupt(); // ignore/reset
1826 * }
1827 * }
1828 * if (t != null)
1829 * System.out.println(t);
1830 * }
1831 * }}</pre>
1832 *
1833 * @param r the runnable that has completed
1834 * @param t the exception that caused termination, or null if
1835 * execution completed normally
1836 */
1837 protected void afterExecute(Runnable r, Throwable t) { }
1838
1839 /**
1840 * Method invoked when the Executor has terminated. Default
1841 * implementation does nothing. Note: To properly nest multiple
1842 * overridings, subclasses should generally invoke
1843 * {@code super.terminated} within this method.
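 *
 * <p>For example, a minimal sketch (the class name and
 * {@code releaseExternalResources} are hypothetical, and constructors
 * are elided) that releases some external resource once the executor
 * has terminated:
 *
 * <pre> {@code
 * class CleanupExecutor extends ThreadPoolExecutor {
 *   // ... constructors ...
 *   protected void terminated() {
 *     try {
 *       releaseExternalResources();  // hypothetical helper
 *     } finally {
 *       super.terminated();
 *     }
 *   }
 * }}</pre>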
1844 */
1845 protected void terminated() { }
1846
1847 /* Predefined RejectedExecutionHandlers */
1848
1849 /**
1850 * A handler for rejected tasks that runs the rejected task
1851 * directly in the calling thread of the {@code execute} method,
1852 * unless the executor has been shut down, in which case the task
1853 * is discarded.
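 *
 * <p>For example (an illustrative sketch only; the sizes shown are
 * arbitrary), installing this policy on a bounded pool throttles
 * submitters by running overflow tasks in the submitting thread:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2, 4, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue<Runnable>(100),
 *     new ThreadPoolExecutor.CallerRunsPolicy());
 * }</pre>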
1854 */
1855 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1856 /**
1857 * Creates a {@code CallerRunsPolicy}.
1858 */
1859 public CallerRunsPolicy() { }
1860
1861 /**
1862 * Executes task r in the caller's thread, unless the executor
1863 * has been shut down, in which case the task is discarded.
1864 *
1865 * @param r the runnable task requested to be executed
1866 * @param e the executor attempting to execute this task
1867 */
1868 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1869 if (!e.isShutdown()) {
1870 r.run();
1871 }
1872 }
1873 }
1874
1875 /**
1876 * A handler for rejected tasks that throws a
1877 * {@code RejectedExecutionException}.
1878 */
1879 public static class AbortPolicy implements RejectedExecutionHandler {
1880 /**
1881 * Creates an {@code AbortPolicy}.
1882 */
1883 public AbortPolicy() { }
1884
1885 /**
1886 * Always throws {@code RejectedExecutionException}.
1887 *
1888 * @param r the runnable task requested to be executed
1889 * @param e the executor attempting to execute this task
1890 * @throws RejectedExecutionException always.
1891 */
1892 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1893 throw new RejectedExecutionException();
1894 }
1895 }
1896
1897 /**
1898 * A handler for rejected tasks that silently discards the
1899 * rejected task.
1900 */
1901 public static class DiscardPolicy implements RejectedExecutionHandler {
1902 /**
1903 * Creates a {@code DiscardPolicy}.
1904 */
1905 public DiscardPolicy() { }
1906
1907 /**
1908 * Does nothing, which has the effect of discarding task r.
1909 *
1910 * @param r the runnable task requested to be executed
1911 * @param e the executor attempting to execute this task
1912 */
1913 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1914 }
1915 }
1916
1917 /**
1918 * A handler for rejected tasks that discards the oldest unhandled
1919 * request and then retries {@code execute}, unless the executor
1920 * is shut down, in which case the task is discarded.
1921 */
1922 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
1923 /**
1924 * Creates a {@code DiscardOldestPolicy}.
1925 */
1926 public DiscardOldestPolicy() { }
1927
1928 /**
1929 * Obtains and ignores the next task that the executor
1930 * would otherwise execute, if one is immediately available,
1931 * and then retries execution of task r, unless the executor
1932 * is shut down, in which case task r is instead discarded.
1933 *
1934 * @param r the runnable task requested to be executed
1935 * @param e the executor attempting to execute this task
1936 */
1937 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1938 if (!e.isShutdown()) {
1939 e.getQueue().poll();
1940 e.execute(r);
1941 }
1942 }
1943 }
1944 }