root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.132
Committed: Thu Jun 21 04:06:50 2012 UTC by jsr166
Branch: MAIN
Changes since 1.131: +1 -1 lines
Log Message:
move variable to nested scope

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/publicdomain/zero/1.0/
5 */
6
7 package java.util.concurrent;
8 import java.util.concurrent.locks.AbstractQueuedSynchronizer;
9 import java.util.concurrent.locks.Condition;
10 import java.util.concurrent.locks.ReentrantLock;
11 import java.util.concurrent.atomic.AtomicInteger;
12 import java.util.*;
13
14 /**
15 * An {@link ExecutorService} that executes each submitted task using
16 * one of possibly several pooled threads, normally configured
17 * using {@link Executors} factory methods.
18 *
19 * <p>Thread pools address two different problems: they usually
20 * provide improved performance when executing large numbers of
21 * asynchronous tasks, due to reduced per-task invocation overhead,
22 * and they provide a means of bounding and managing the resources,
23 * including threads, consumed when executing a collection of tasks.
24 * Each {@code ThreadPoolExecutor} also maintains some basic
25 * statistics, such as the number of completed tasks.
26 *
27 * <p>To be useful across a wide range of contexts, this class
28 * provides many adjustable parameters and extensibility
29 * hooks. However, programmers are urged to use the more convenient
30 * {@link Executors} factory methods {@link
31 * Executors#newCachedThreadPool} (unbounded thread pool, with
32 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
33 * (fixed size thread pool) and {@link
34 * Executors#newSingleThreadExecutor} (single background thread), that
35 * preconfigure settings for the most common usage
36 * scenarios. Otherwise, use the following guide when manually
37 * configuring and tuning this class:
38 *
39 * <dl>
40 *
41 * <dt>Core and maximum pool sizes</dt>
42 *
43 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
44 * pool size (see {@link #getPoolSize})
45 * according to the bounds set by
46 * corePoolSize (see {@link #getCorePoolSize}) and
47 * maximumPoolSize (see {@link #getMaximumPoolSize}).
48 *
49 * When a new task is submitted in method {@link #execute}, and fewer
50 * than corePoolSize threads are running, a new thread is created to
51 * handle the request, even if other worker threads are idle. If
52 * there are more than corePoolSize but less than maximumPoolSize
53 * threads running, a new thread will be created only if the queue is
54 * full. By setting corePoolSize and maximumPoolSize the same, you
55 * create a fixed-size thread pool. By setting maximumPoolSize to an
56 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
57 * allow the pool to accommodate an arbitrary number of concurrent
58 * tasks. Most typically, core and maximum pool sizes are set only
59 * upon construction, but they may also be changed dynamically using
60 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
61 *
62 * <dt>On-demand construction</dt>
63 *
64 * <dd> By default, even core threads are initially created and
65 * started only when new tasks arrive, but this can be overridden
66 * dynamically using method {@link #prestartCoreThread} or {@link
67 * #prestartAllCoreThreads}. You probably want to prestart threads if
68 * you construct the pool with a non-empty queue. </dd>
69 *
70 * <dt>Creating new threads</dt>
71 *
72 * <dd>New threads are created using a {@link ThreadFactory}. If not
73 * otherwise specified, an {@link Executors#defaultThreadFactory} is
74 * used, which creates threads that are all in the same {@link
75 * ThreadGroup}, with the same {@code NORM_PRIORITY} priority and
76 * non-daemon status. By supplying a different ThreadFactory, you can
77 * alter the thread's name, thread group, priority, daemon status,
78 * etc. If a {@code ThreadFactory} fails to create a thread when asked
79 * by returning null from {@code newThread}, the executor will
80 * continue, but might not be able to execute any tasks. Threads
81 * should possess the "modifyThread" {@code RuntimePermission}. If
82 * worker threads or other threads using the pool do not possess this
83 * permission, service may be degraded: configuration changes may not
84 * take effect in a timely manner, and a shutdown pool may remain in a
85 * state in which termination is possible but not completed.</dd>
86 *
87 * <dt>Keep-alive times</dt>
88 *
89 * <dd>If the pool currently has more than corePoolSize threads,
90 * excess threads will be terminated if they have been idle for more
91 * than the keepAliveTime (see {@link #getKeepAliveTime}). This
92 * provides a means of reducing resource consumption when the pool is
93 * not being actively used. If the pool becomes more active later, new
94 * threads will be constructed. This parameter can also be changed
95 * dynamically using method {@link #setKeepAliveTime}. Using a value
96 * of {@code Long.MAX_VALUE} {@link TimeUnit#NANOSECONDS} effectively
97 * disables idle threads from ever terminating prior to shut down. By
98 * default, the keep-alive policy applies only when there are more
99 * than corePoolSize threads. But method {@link
100 * #allowCoreThreadTimeOut(boolean)} can be used to apply this
101 * time-out policy to core threads as well, so long as the
102 * keepAliveTime value is non-zero. </dd>
103 *
104 * <dt>Queuing</dt>
105 *
106 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
107 * submitted tasks. The use of this queue interacts with pool sizing:
108 *
109 * <ul>
110 *
111 * <li> If fewer than corePoolSize threads are running, the Executor
112 * always prefers adding a new thread
113 * rather than queuing.</li>
114 *
115 * <li> If corePoolSize or more threads are running, the Executor
116 * always prefers queuing a request rather than adding a new
117 * thread.</li>
118 *
119 * <li> If a request cannot be queued, a new thread is created unless
120 * this would exceed maximumPoolSize, in which case, the task will be
121 * rejected.</li>
122 *
123 * </ul>
124 *
125 * There are three general strategies for queuing:
126 * <ol>
127 *
128 * <li> <em> Direct handoffs.</em> A good default choice for a work
129 * queue is a {@link SynchronousQueue} that hands off tasks to threads
130 * without otherwise holding them. Here, an attempt to queue a task
131 * will fail if no threads are immediately available to run it, so a
132 * new thread will be constructed. This policy avoids lockups when
133 * handling sets of requests that might have internal dependencies.
134 * Direct handoffs generally require unbounded maximumPoolSizes to
135 * avoid rejection of newly submitted tasks. This in turn admits the
136 * possibility of unbounded thread growth when commands continue to
137 * arrive on average faster than they can be processed. </li>
138 *
139 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
140 * example a {@link LinkedBlockingQueue} without a predefined
141 * capacity) will cause new tasks to wait in the queue when all
142 * corePoolSize threads are busy. Thus, no more than corePoolSize
143 * threads will ever be created. (And the value of the maximumPoolSize
144 * therefore doesn't have any effect.) This may be appropriate when
145 * each task is completely independent of others, so tasks cannot
146 * affect each other's execution; for example, in a web server.
147 * While this style of queuing can be useful in smoothing out
148 * transient bursts of requests, it admits the possibility of
149 * unbounded work queue growth when commands continue to arrive on
150 * average faster than they can be processed. </li>
151 *
152 * <li><em>Bounded queues.</em> A bounded queue (for example, an
153 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
154 * used with finite maximumPoolSizes, but can be more difficult to
155 * tune and control. Queue sizes and maximum pool sizes may be traded
156 * off for each other: Using large queues and small pools minimizes
157 * CPU usage, OS resources, and context-switching overhead, but can
158 * lead to artificially low throughput. If tasks frequently block (for
159 * example if they are I/O bound), a system may be able to schedule
160 * time for more threads than you otherwise allow. Use of small queues
161 * generally requires larger pool sizes, which keeps CPUs busier but
162 * may encounter unacceptable scheduling overhead, which also
163 * decreases throughput. </li>
164 *
165 * </ol>
166 *
167 * </dd>
168 *
169 * <dt>Rejected tasks</dt>
170 *
171 * <dd> New tasks submitted in method {@link #execute} will be
172 * <em>rejected</em> when the Executor has been shut down, and also
173 * when the Executor uses finite bounds for both maximum threads and
174 * work queue capacity, and is saturated. In either case, the {@code
175 * execute} method invokes the {@link
176 * RejectedExecutionHandler#rejectedExecution} method of its {@link
177 * RejectedExecutionHandler}. Four predefined handler policies are
178 * provided:
179 *
180 * <ol>
181 *
182 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
183 * handler throws a runtime {@link RejectedExecutionException} upon
184 * rejection. </li>
185 *
186 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
187 * that invokes {@code execute} itself runs the task. This provides a
188 * simple feedback control mechanism that will slow down the rate that
189 * new tasks are submitted. </li>
190 *
191 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
192 * cannot be executed is simply dropped. </li>
193 *
194 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
195 * executor is not shut down, the task at the head of the work queue
196 * is dropped, and then execution is retried (which can fail again,
197 * causing this to be repeated). </li>
198 *
199 * </ol>
200 *
201 * It is possible to define and use other kinds of {@link
202 * RejectedExecutionHandler} classes. Doing so requires some care
203 * especially when policies are designed to work only under particular
204 * capacity or queuing policies. </dd>
205 *
206 * <dt>Hook methods</dt>
207 *
208 * <dd>This class provides {@code protected} overridable {@link
209 * #beforeExecute} and {@link #afterExecute} methods that are called
210 * before and after execution of each task. These can be used to
211 * manipulate the execution environment; for example, reinitializing
212 * ThreadLocals, gathering statistics, or adding log
213 * entries. Additionally, method {@link #terminated} can be overridden
214 * to perform any special processing that needs to be done once the
215 * Executor has fully terminated.
216 *
217 * <p>If hook or callback methods throw exceptions, internal worker
218 * threads may in turn fail and abruptly terminate.</dd>
219 *
220 * <dt>Queue maintenance</dt>
221 *
222 * <dd> Method {@link #getQueue} allows access to the work queue for
223 * purposes of monitoring and debugging. Use of this method for any
224 * other purpose is strongly discouraged. Two supplied methods,
225 * {@link #remove} and {@link #purge} are available to assist in
226 * storage reclamation when large numbers of queued tasks become
227 * cancelled.</dd>
228 *
229 * <dt>Finalization</dt>
230 *
231 * <dd> A pool that is no longer referenced in a program <em>AND</em>
232 * has no remaining threads will be {@code shutdown} automatically. If
233 * you would like to ensure that unreferenced pools are reclaimed even
234 * if users forget to call {@link #shutdown}, then you must arrange
235 * that unused threads eventually die, by setting appropriate
236 * keep-alive times, using a lower bound of zero core threads and/or
237 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
238 *
239 * </dl>
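 *
 * <p>For illustration only, the settings above might be combined roughly
 * as follows; the sizes, keep-alive and queue choice here are arbitrary
 * assumptions, not recommendations:
 *
 * <pre> {@code
 * BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(100);
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4,                        // corePoolSize
 *     8,                        // maximumPoolSize
 *     60L, TimeUnit.SECONDS,    // keepAliveTime for threads beyond the core
 *     queue,                    // bounded work queue
 *     Executors.defaultThreadFactory(),
 *     new ThreadPoolExecutor.CallerRunsPolicy()); // throttle callers when saturated
 * pool.allowCoreThreadTimeOut(true); // also let idle core threads expire
 * }</pre>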
240 *
241 * <p> <b>Extension example</b>. Most extensions of this class
242 * override one or more of the protected hook methods. For example,
243 * here is a subclass that adds a simple pause/resume feature:
244 *
245 * <pre> {@code
246 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
247 * private boolean isPaused;
248 * private ReentrantLock pauseLock = new ReentrantLock();
249 * private Condition unpaused = pauseLock.newCondition();
250 *
251 * public PausableThreadPoolExecutor(...) { super(...); }
252 *
253 * protected void beforeExecute(Thread t, Runnable r) {
254 * super.beforeExecute(t, r);
255 * pauseLock.lock();
256 * try {
257 * while (isPaused) unpaused.await();
258 * } catch (InterruptedException ie) {
259 * t.interrupt();
260 * } finally {
261 * pauseLock.unlock();
262 * }
263 * }
264 *
265 * public void pause() {
266 * pauseLock.lock();
267 * try {
268 * isPaused = true;
269 * } finally {
270 * pauseLock.unlock();
271 * }
272 * }
273 *
274 * public void resume() {
275 * pauseLock.lock();
276 * try {
277 * isPaused = false;
278 * unpaused.signalAll();
279 * } finally {
280 * pauseLock.unlock();
281 * }
282 * }
283 * }}</pre>
284 *
285 * @since 1.5
286 * @author Doug Lea
287 */
288 public class ThreadPoolExecutor extends AbstractExecutorService {
289 /**
290 * The main pool control state, ctl, is an atomic integer packing
291 * two conceptual fields
292 * workerCount, indicating the effective number of threads
293 * runState, indicating whether running, shutting down etc
294 *
295 * In order to pack them into one int, we limit workerCount to
296 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
297 * billion) otherwise representable. If this is ever an issue in
298 * the future, the variable can be changed to be an AtomicLong,
299 * and the shift/mask constants below adjusted. But until the need
300 * arises, this code is a bit faster and simpler using an int.
301 *
302 * The workerCount is the number of workers that have been
303 * permitted to start and not permitted to stop. The value may be
304 * transiently different from the actual number of live threads,
305 * for example when a ThreadFactory fails to create a thread when
306 * asked, and when exiting threads are still performing
307 * bookkeeping before terminating. The user-visible pool size is
308 * reported as the current size of the workers set.
309 *
310 * The runState provides the main lifecycle control, taking on values:
311 *
312 * RUNNING: Accept new tasks and process queued tasks
313 * SHUTDOWN: Don't accept new tasks, but process queued tasks
314 * STOP: Don't accept new tasks, don't process queued tasks,
315 * and interrupt in-progress tasks
316 * TIDYING: All tasks have terminated, workerCount is zero,
317 * the thread transitioning to state TIDYING
318 * will run the terminated() hook method
319 * TERMINATED: terminated() has completed
320 *
321 * The numerical order among these values matters, to allow
322 * ordered comparisons. The runState monotonically increases over
323 * time, but need not hit each state. The transitions are:
324 *
325 * RUNNING -> SHUTDOWN
326 * On invocation of shutdown(), perhaps implicitly in finalize()
327 * (RUNNING or SHUTDOWN) -> STOP
328 * On invocation of shutdownNow()
329 * SHUTDOWN -> TIDYING
330 * When both queue and pool are empty
331 * STOP -> TIDYING
332 * When pool is empty
333 * TIDYING -> TERMINATED
334 * When the terminated() hook method has completed
335 *
336 * Threads waiting in awaitTermination() will return when the
337 * state reaches TERMINATED.
338 *
339 * Detecting the transition from SHUTDOWN to TIDYING is less
340 * straightforward than you'd like because the queue may become
341 * empty after non-empty and vice versa during SHUTDOWN state, but
342 * we can only terminate if, after seeing that it is empty, we see
343 * that workerCount is 0 (which sometimes entails a recheck -- see
344 * below).
345 */
346 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
347 private static final int COUNT_BITS = Integer.SIZE - 3;
348 private static final int CAPACITY = (1 << COUNT_BITS) - 1;
349
350 // runState is stored in the high-order bits
351 private static final int RUNNING = -1 << COUNT_BITS;
352 private static final int SHUTDOWN = 0 << COUNT_BITS;
353 private static final int STOP = 1 << COUNT_BITS;
354 private static final int TIDYING = 2 << COUNT_BITS;
355 private static final int TERMINATED = 3 << COUNT_BITS;
356
357 // Packing and unpacking ctl
358 private static int runStateOf(int c) { return c & ~CAPACITY; }
359 private static int workerCountOf(int c) { return c & CAPACITY; }
360 private static int ctlOf(int rs, int wc) { return rs | wc; }
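
    // Worked example of the packing above: with COUNT_BITS = 29,
    // CAPACITY = (1 << 29) - 1 = 0x1fffffff and RUNNING = -1 << 29 = 0xe0000000,
    // ctlOf(RUNNING, 5) == 0xe0000005, runStateOf(0xe0000005) == RUNNING,
    // and workerCountOf(0xe0000005) == 5.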
361
362 /*
363 * Bit field accessors that don't require unpacking ctl.
364 * These depend on the bit layout and on workerCount being never negative.
365 */
366
367 private static boolean runStateLessThan(int c, int s) {
368 return c < s;
369 }
370
371 private static boolean runStateAtLeast(int c, int s) {
372 return c >= s;
373 }
374
375 private static boolean isRunning(int c) {
376 return c < SHUTDOWN;
377 }
378
379 /**
380 * Attempt to CAS-increment the workerCount field of ctl.
381 */
382 private boolean compareAndIncrementWorkerCount(int expect) {
383 return ctl.compareAndSet(expect, expect + 1);
384 }
385
386 /**
387 * Attempt to CAS-decrement the workerCount field of ctl.
388 */
389 private boolean compareAndDecrementWorkerCount(int expect) {
390 return ctl.compareAndSet(expect, expect - 1);
391 }
392
393 /**
394 * Decrements the workerCount field of ctl. This is called only on
395 * abrupt termination of a thread (see processWorkerExit). Other
396 * decrements are performed within getTask.
397 */
398 private void decrementWorkerCount() {
399 do {} while (! compareAndDecrementWorkerCount(ctl.get()));
400 }
401
402 /**
403 * The queue used for holding tasks and handing off to worker
404 * threads. We do not require that workQueue.poll() returning
405 * null necessarily means that workQueue.isEmpty(), so we rely
406 * solely on isEmpty to see if the queue is empty (which we must
407 * do for example when deciding whether to transition from
408 * SHUTDOWN to TIDYING). This accommodates special-purpose
409 * queues such as DelayQueues for which poll() is allowed to
410 * return null even if it may later return non-null when delays
411 * expire.
412 */
413 private final BlockingQueue<Runnable> workQueue;
414
415 /**
416 * Lock held on access to workers set and related bookkeeping.
417 * While we could use a concurrent set of some sort, it turns out
418 * to be generally preferable to use a lock. Among the reasons is
419 * that this serializes interruptIdleWorkers, which avoids
420 * unnecessary interrupt storms, especially during shutdown.
421 * Otherwise exiting threads would concurrently interrupt those
422 * that have not yet interrupted. It also simplifies some of the
423 * associated statistics bookkeeping of largestPoolSize etc. We
424 * also hold mainLock on shutdown and shutdownNow, for the sake of
425 * ensuring workers set is stable while separately checking
426 * permission to interrupt and actually interrupting.
427 */
428 private final ReentrantLock mainLock = new ReentrantLock();
429
430 /**
431 * Set containing all worker threads in pool. Accessed only when
432 * holding mainLock.
433 */
434 private final HashSet<Worker> workers = new HashSet<Worker>();
435
436 /**
437 * Wait condition to support awaitTermination
438 */
439 private final Condition termination = mainLock.newCondition();
440
441 /**
442 * Tracks largest attained pool size. Accessed only under
443 * mainLock.
444 */
445 private int largestPoolSize;
446
447 /**
448 * Counter for completed tasks. Updated only on termination of
449 * worker threads. Accessed only under mainLock.
450 */
451 private long completedTaskCount;
452
453 /*
454 * All user control parameters are declared as volatiles so that
455 * ongoing actions are based on freshest values, but without need
456 * for locking, since no internal invariants depend on them
457 * changing synchronously with respect to other actions.
458 */
459
460 /**
461 * Factory for new threads. All threads are created using this
462 * factory (via method addWorker). All callers must be prepared
463 * for addWorker to fail, which may reflect a system or user's
464 * policy limiting the number of threads. Even though it is not
465 * treated as an error, failure to create threads may result in
466 * new tasks being rejected or existing ones remaining stuck in
467 * the queue.
468 *
469 * We go further and preserve pool invariants even in the face of
470 * errors such as OutOfMemoryError, that might be thrown while
471 * trying to create threads. Such errors are rather common due to
472 * the need to allocate a native stack in Thread#start, and users
473 * will want to perform clean pool shutdown to clean up. There
474 * will likely be enough memory available for the cleanup code to
475 * complete without encountering yet another OutOfMemoryError.
476 */
477 private volatile ThreadFactory threadFactory;
478
479 /**
480 * Handler called when saturated or shutdown in execute.
481 */
482 private volatile RejectedExecutionHandler handler;
483
484 /**
485 * Timeout in nanoseconds for idle threads waiting for work.
486 * Threads use this timeout when there are more than corePoolSize
487 * present or if allowCoreThreadTimeOut. Otherwise they wait
488 * forever for new work.
489 */
490 private volatile long keepAliveTime;
491
492 /**
493 * If false (default), core threads stay alive even when idle.
494 * If true, core threads use keepAliveTime to time out waiting
495 * for work.
496 */
497 private volatile boolean allowCoreThreadTimeOut;
498
499 /**
500 * Core pool size is the minimum number of workers to keep alive
501 * (and not allow to time out etc) unless allowCoreThreadTimeOut
502 * is set, in which case the minimum is zero.
503 */
504 private volatile int corePoolSize;
505
506 /**
507 * Maximum pool size. Note that the actual maximum is internally
508 * bounded by CAPACITY.
509 */
510 private volatile int maximumPoolSize;
511
512 /**
513 * The default rejected execution handler
514 */
515 private static final RejectedExecutionHandler defaultHandler =
516 new AbortPolicy();
517
518 /**
519 * Permission required for callers of shutdown and shutdownNow.
520 * We additionally require (see checkShutdownAccess) that callers
521 * have permission to actually interrupt threads in the worker set
522 * (as governed by Thread.interrupt, which relies on
523 * ThreadGroup.checkAccess, which in turn relies on
524 * SecurityManager.checkAccess). Shutdowns are attempted only if
525 * these checks pass.
526 *
527 * All actual invocations of Thread.interrupt (see
528 * interruptIdleWorkers and interruptWorkers) ignore
529 * SecurityExceptions, meaning that the attempted interrupts
530 * silently fail. In the case of shutdown, they should not fail
531 * unless the SecurityManager has inconsistent policies, sometimes
532 * allowing access to a thread and sometimes not. In such cases,
533 * failure to actually interrupt threads may disable or delay full
534 * termination. Other uses of interruptIdleWorkers are advisory,
535 * and failure to actually interrupt will merely delay response to
536 * configuration changes so is not handled exceptionally.
537 */
538 private static final RuntimePermission shutdownPerm =
539 new RuntimePermission("modifyThread");
540
541 /**
542 * Class Worker mainly maintains interrupt control state for
543 * threads running tasks, along with other minor bookkeeping.
544 * This class opportunistically extends AbstractQueuedSynchronizer
545 * to simplify acquiring and releasing a lock surrounding each
546 * task execution. This protects against interrupts that are
547 * intended to wake up a worker thread waiting for a task from
548 * instead interrupting a task being run. We implement a simple
549 * non-reentrant mutual exclusion lock rather than use
550 * ReentrantLock because we do not want worker tasks to be able to
551 * reacquire the lock when they invoke pool control methods like
552 * setCorePoolSize. Additionally, to suppress interrupts until
553 * the thread actually starts running tasks, we initialize lock
554 * state to a negative value, and clear it upon start (in
555 * runWorker).
556 */
557 private final class Worker
558 extends AbstractQueuedSynchronizer
559 implements Runnable
560 {
561 /**
562 * This class will never be serialized, but we provide a
563 * serialVersionUID to suppress a javac warning.
564 */
565 private static final long serialVersionUID = 6138294804551838833L;
566
567 /** Thread this worker is running in. Null if factory fails. */
568 final Thread thread;
569 /** Initial task to run. Possibly null. */
570 Runnable firstTask;
571 /** Per-thread task counter */
572 volatile long completedTasks;
573
574 /**
575 * Creates with given first task and thread from ThreadFactory.
576 * @param firstTask the first task (null if none)
577 */
578 Worker(Runnable firstTask) {
579 setState(-1); // inhibit interrupts until runWorker
580 this.firstTask = firstTask;
581 this.thread = getThreadFactory().newThread(this);
582 }
583
584 /** Delegates main run loop to outer runWorker */
585 public void run() {
586 runWorker(this);
587 }
588
589 // Lock methods
590 //
591 // The value 0 represents the unlocked state.
592 // The value 1 represents the locked state.
593
594 protected boolean isHeldExclusively() {
595 return getState() != 0;
596 }
597
598 protected boolean tryAcquire(int unused) {
599 if (compareAndSetState(0, 1)) {
600 setExclusiveOwnerThread(Thread.currentThread());
601 return true;
602 }
603 return false;
604 }
605
606 protected boolean tryRelease(int unused) {
607 setExclusiveOwnerThread(null);
608 setState(0);
609 return true;
610 }
611
612 public void lock() { acquire(1); }
613 public boolean tryLock() { return tryAcquire(1); }
614 public void unlock() { release(1); }
615 public boolean isLocked() { return isHeldExclusively(); }
616
617 void interruptIfStarted() {
618 Thread t;
619 if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
620 try {
621 t.interrupt();
622 } catch (SecurityException ignore) {
623 }
624 }
625 }
626 }
627
628 /*
629 * Methods for setting control state
630 */
631
632 /**
633 * Transitions runState to given target, or leaves it alone if
634 * already at least the given target.
635 *
636 * @param targetState the desired state, either SHUTDOWN or STOP
637 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
638 */
639 private void advanceRunState(int targetState) {
640 for (;;) {
641 int c = ctl.get();
642 if (runStateAtLeast(c, targetState) ||
643 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
644 break;
645 }
646 }
647
648 /**
649 * Transitions to TERMINATED state if either (SHUTDOWN and pool
650 * and queue empty) or (STOP and pool empty). If otherwise
651 * eligible to terminate but workerCount is nonzero, interrupts an
652 * idle worker to ensure that shutdown signals propagate. This
653 * method must be called following any action that might make
654 * termination possible -- reducing worker count or removing tasks
655 * from the queue during shutdown. The method is non-private to
656 * allow access from ScheduledThreadPoolExecutor.
657 */
658 final void tryTerminate() {
659 for (;;) {
660 int c = ctl.get();
661 if (isRunning(c) ||
662 runStateAtLeast(c, TIDYING) ||
663 (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
664 return;
665 if (workerCountOf(c) != 0) { // Eligible to terminate
666 interruptIdleWorkers(ONLY_ONE);
667 return;
668 }
669
670 final ReentrantLock mainLock = this.mainLock;
671 mainLock.lock();
672 try {
673 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
674 try {
675 terminated();
676 } finally {
677 ctl.set(ctlOf(TERMINATED, 0));
678 termination.signalAll();
679 }
680 return;
681 }
682 } finally {
683 mainLock.unlock();
684 }
685 // else retry on failed CAS
686 }
687 }
688
689 /*
690 * Methods for controlling interrupts to worker threads.
691 */
692
693 /**
694 * If there is a security manager, makes sure caller has
695 * permission to shut down threads in general (see shutdownPerm).
696 * If this passes, additionally makes sure the caller is allowed
697 * to interrupt each worker thread. This might not be true even if
698 * first check passed, if the SecurityManager treats some threads
699 * specially.
700 */
701 private void checkShutdownAccess() {
702 SecurityManager security = System.getSecurityManager();
703 if (security != null) {
704 security.checkPermission(shutdownPerm);
705 final ReentrantLock mainLock = this.mainLock;
706 mainLock.lock();
707 try {
708 for (Worker w : workers)
709 security.checkAccess(w.thread);
710 } finally {
711 mainLock.unlock();
712 }
713 }
714 }
715
716 /**
717 * Interrupts all threads, even if active. Ignores SecurityExceptions
718 * (in which case some threads may remain uninterrupted).
719 */
720 private void interruptWorkers() {
721 final ReentrantLock mainLock = this.mainLock;
722 mainLock.lock();
723 try {
724 for (Worker w : workers)
725 w.interruptIfStarted();
726 } finally {
727 mainLock.unlock();
728 }
729 }
730
731 /**
732 * Interrupts threads that might be waiting for tasks (as
733 * indicated by not being locked) so they can check for
734 * termination or configuration changes. Ignores
735 * SecurityExceptions (in which case some threads may remain
736 * uninterrupted).
737 *
738 * @param onlyOne If true, interrupt at most one worker. This is
739 * called only from tryTerminate when termination is otherwise
740 * enabled but there are still other workers. In this case, at
741 * most one waiting worker is interrupted to propagate shutdown
742 * signals in case all threads are currently waiting.
743 * Interrupting any arbitrary thread ensures that newly arriving
744 * workers since shutdown began will also eventually exit.
745 * To guarantee eventual termination, it suffices to always
746 * interrupt only one idle worker, but shutdown() interrupts all
747 * idle workers so that redundant workers exit promptly, not
748 * waiting for a straggler task to finish.
749 */
750 private void interruptIdleWorkers(boolean onlyOne) {
751 final ReentrantLock mainLock = this.mainLock;
752 mainLock.lock();
753 try {
754 for (Worker w : workers) {
755 Thread t = w.thread;
756 if (!t.isInterrupted() && w.tryLock()) {
757 try {
758 t.interrupt();
759 } catch (SecurityException ignore) {
760 } finally {
761 w.unlock();
762 }
763 }
764 if (onlyOne)
765 break;
766 }
767 } finally {
768 mainLock.unlock();
769 }
770 }
771
772 /**
773 * Common form of interruptIdleWorkers, to avoid having to
774 * remember what the boolean argument means.
775 */
776 private void interruptIdleWorkers() {
777 interruptIdleWorkers(false);
778 }
779
780 private static final boolean ONLY_ONE = true;
781
782 /*
783 * Misc utilities, most of which are also exported to
784 * ScheduledThreadPoolExecutor
785 */
786
787 /**
788 * Invokes the rejected execution handler for the given command.
789 * Package-protected for use by ScheduledThreadPoolExecutor.
790 */
791 final void reject(Runnable command) {
792 handler.rejectedExecution(command, this);
793 }
794
795 /**
796 * Performs any further cleanup following run state transition on
797 * invocation of shutdown. A no-op here, but used by
798 * ScheduledThreadPoolExecutor to cancel delayed tasks.
799 */
800 void onShutdown() {
801 }
802
803 /**
804 * State check needed by ScheduledThreadPoolExecutor to
805 * enable running tasks during shutdown.
806 *
807 * @param shutdownOK true if should return true if SHUTDOWN
808 */
809 final boolean isRunningOrShutdown(boolean shutdownOK) {
810 int rs = runStateOf(ctl.get());
811 return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
812 }
813
814 /**
815 * Drains the task queue into a new list, normally using
816 * drainTo. But if the queue is a DelayQueue or any other kind of
817 * queue for which poll or drainTo may fail to remove some
818 * elements, it deletes them one by one.
819 */
820 private List<Runnable> drainQueue() {
821 BlockingQueue<Runnable> q = workQueue;
822 List<Runnable> taskList = new ArrayList<Runnable>();
823 q.drainTo(taskList);
824 if (!q.isEmpty()) {
825 for (Runnable r : q.toArray(new Runnable[0])) {
826 if (q.remove(r))
827 taskList.add(r);
828 }
829 }
830 return taskList;
831 }
832
833 /*
834 * Methods for creating, running and cleaning up after workers
835 */
836
837 /**
838 * Checks if a new worker can be added with respect to current
839 * pool state and the given bound (either core or maximum). If so,
840 * the worker count is adjusted accordingly, and, if possible, a
841 * new worker is created and started, running firstTask as its
842 * first task. This method returns false if the pool is stopped or
843 * eligible to shut down. It also returns false if the thread
844 * factory fails to create a thread when asked. If the thread
845 * creation fails, either due to the thread factory returning
846 * null, or due to an exception (typically OutOfMemoryError in
847 * Thread#start), we roll back cleanly.
848 *
849 * @param firstTask the task the new thread should run first (or
850 * null if none). Workers are created with an initial first task
851 * (in method execute()) to bypass queuing when there are fewer
852 * than corePoolSize threads (in which case we always start one),
853 * or when the queue is full (in which case we must bypass queue).
854 * Initially idle threads are usually created via
855 * prestartCoreThread or to replace other dying workers.
856 *
857 * @param core if true use corePoolSize as bound, else
858 * maximumPoolSize. (A boolean indicator is used here rather than a
859 * value to ensure reads of fresh values after checking other pool
860 * state).
861 * @return true if successful
862 */
863 private boolean addWorker(Runnable firstTask, boolean core) {
864 retry:
865 for (;;) {
866 int c = ctl.get();
867 int rs = runStateOf(c);
868
869 // Check if queue empty only if necessary.
870 if (rs >= SHUTDOWN &&
871 ! (rs == SHUTDOWN &&
872 firstTask == null &&
873 ! workQueue.isEmpty()))
874 return false;
875
876 for (;;) {
877 int wc = workerCountOf(c);
878 if (wc >= CAPACITY ||
879 wc >= (core ? corePoolSize : maximumPoolSize))
880 return false;
881 if (compareAndIncrementWorkerCount(c))
882 break retry;
883 c = ctl.get(); // Re-read ctl
884 if (runStateOf(c) != rs)
885 continue retry;
886 // else CAS failed due to workerCount change; retry inner loop
887 }
888 }
889
890 boolean workerStarted = false;
891 boolean workerAdded = false;
892 Worker w = null;
893 try {
894 w = new Worker(firstTask);
895 final Thread t = w.thread;
896 if (t != null) {
897 final ReentrantLock mainLock = this.mainLock;
898 mainLock.lock();
899 try {
900 // Recheck while holding lock.
901 // Back out on ThreadFactory failure or if
902 // shut down before lock acquired.
903 int c = ctl.get();
904 int rs = runStateOf(c);
905
906 if (rs < SHUTDOWN ||
907 (rs == SHUTDOWN && firstTask == null)) {
908 if (t.isAlive()) // precheck that t is startable
909 throw new IllegalThreadStateException();
910 workers.add(w);
911 int s = workers.size();
912 if (s > largestPoolSize)
913 largestPoolSize = s;
914 workerAdded = true;
915 }
916 } finally {
917 mainLock.unlock();
918 }
919 if (workerAdded) {
920 t.start();
921 workerStarted = true;
922 }
923 }
924 } finally {
925 if (! workerStarted)
926 addWorkerFailed(w);
927 }
928 return workerStarted;
929 }
930
931 /**
932 * Rolls back the worker thread creation.
933 * - removes worker from workers, if present
934 * - decrements worker count
935 * - rechecks for termination, in case the existence of this
936 * worker was holding up termination
937 */
938 private void addWorkerFailed(Worker w) {
939 final ReentrantLock mainLock = this.mainLock;
940 mainLock.lock();
941 try {
942 if (w != null)
943 workers.remove(w);
944 decrementWorkerCount();
945 tryTerminate();
946 } finally {
947 mainLock.unlock();
948 }
949 }
950
951 /**
952 * Performs cleanup and bookkeeping for a dying worker. Called
953 * only from worker threads. Unless completedAbruptly is set,
954 * assumes that workerCount has already been adjusted to account
955 * for exit. This method removes thread from worker set, and
956 * possibly terminates the pool or replaces the worker if either
957 * it exited due to user task exception or if fewer than
958 * corePoolSize workers are running or queue is non-empty but
959 * there are no workers.
960 *
961 * @param w the worker
962 * @param completedAbruptly if the worker died due to user exception
963 */
964 private void processWorkerExit(Worker w, boolean completedAbruptly) {
965 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
966 decrementWorkerCount();
967
968 final ReentrantLock mainLock = this.mainLock;
969 mainLock.lock();
970 try {
971 completedTaskCount += w.completedTasks;
972 workers.remove(w);
973 } finally {
974 mainLock.unlock();
975 }
976
977 tryTerminate();
978
979 int c = ctl.get();
980 if (runStateLessThan(c, STOP)) {
981 if (!completedAbruptly) {
982 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
983 if (min == 0 && ! workQueue.isEmpty())
984 min = 1;
985 if (workerCountOf(c) >= min)
986 return; // replacement not needed
987 }
988 addWorker(null, false);
989 }
990 }
991
992 /**
993 * Performs blocking or timed wait for a task, depending on
994 * current configuration settings, or returns null if this worker
995 * must exit because of any of:
996 * 1. There are more than maximumPoolSize workers (due to
997 * a call to setMaximumPoolSize).
998 * 2. The pool is stopped.
999 * 3. The pool is shutdown and the queue is empty.
1000 * 4. This worker timed out waiting for a task, and timed-out
1001 * workers are subject to termination (that is,
1002 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
1003 * both before and after the timed wait.
1004 *
1005 * @return task, or null if the worker must exit, in which case
1006 * workerCount is decremented
1007 */
1008 private Runnable getTask() {
1009 boolean timedOut = false; // Did the last poll() time out?
1010
1011 retry:
1012 for (;;) {
1013 int c = ctl.get();
1014 int rs = runStateOf(c);
1015
1016 // Check if queue empty only if necessary.
1017 if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
1018 decrementWorkerCount();
1019 return null;
1020 }
1021
1022 boolean timed; // Are workers subject to culling?
1023
1024 for (;;) {
1025 int wc = workerCountOf(c);
1026 timed = allowCoreThreadTimeOut || wc > corePoolSize;
1027
1028 if (wc <= maximumPoolSize && ! (timedOut && timed))
1029 break;
1030 if (compareAndDecrementWorkerCount(c))
1031 return null;
1032 c = ctl.get(); // Re-read ctl
1033 if (runStateOf(c) != rs)
1034 continue retry;
1035 // else CAS failed due to workerCount change; retry inner loop
1036 }
1037
1038 try {
1039 Runnable r = timed ?
1040 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
1041 workQueue.take();
1042 if (r != null)
1043 return r;
1044 timedOut = true;
1045 } catch (InterruptedException retry) {
1046 timedOut = false;
1047 }
1048 }
1049 }
1050
1051 /**
1052 * Main worker run loop. Repeatedly gets tasks from queue and
1053 * executes them, while coping with a number of issues:
1054 *
1055 * 1. We may start out with an initial task, in which case we
1056 * don't need to get the first one. Otherwise, as long as pool is
1057 * running, we get tasks from getTask. If it returns null then the
1058 * worker exits due to changed pool state or configuration
1059 * parameters. Other exits result from exception throws in
1060 * external code, in which case completedAbruptly holds, which
1061 * usually leads processWorkerExit to replace this thread.
1062 *
1063 * 2. Before running any task, the lock is acquired to prevent
1064 * other pool interrupts while the task is executing, and the
1065 * worker's interrupt status is checked and cleared (inline, below) to
1066 * ensure that unless the pool is stopping, this thread does not have its interrupt set.
1067 *
1068 * 3. Each task run is preceded by a call to beforeExecute, which
1069 * might throw an exception, in which case we cause thread to die
1070 * (breaking loop with completedAbruptly true) without processing
1071 * the task.
1072 *
1073 * 4. Assuming beforeExecute completes normally, we run the task,
1074 * gathering any of its thrown exceptions to send to
1075 * afterExecute. We separately handle RuntimeException, Error
1076 * (both of which the specs guarantee that we trap) and arbitrary
1077 * Throwables. Because we cannot rethrow Throwables within
1078 * Runnable.run, we wrap them within Errors on the way out (to the
1079 * thread's UncaughtExceptionHandler). Any thrown exception also
1080 * conservatively causes thread to die.
1081 *
1082 * 5. After task.run completes, we call afterExecute, which may
1083 * also throw an exception, which will also cause thread to
1084 * die. According to JLS Sec 14.20, this exception is the one that
1085 * will be in effect even if task.run throws.
1086 *
1087 * The net effect of the exception mechanics is that afterExecute
1088 * and the thread's UncaughtExceptionHandler have as accurate
1089 * information as we can provide about any problems encountered by
1090 * user code.
1091 *
1092 * @param w the worker
1093 */
1094 final void runWorker(Worker w) {
1095 Thread wt = Thread.currentThread();
1096 Runnable task = w.firstTask;
1097 w.firstTask = null;
1098 w.unlock(); // allow interrupts
1099 boolean completedAbruptly = true;
1100 try {
1101 while (task != null || (task = getTask()) != null) {
1102 w.lock();
1103 // If pool is stopping, ensure thread is interrupted;
1104 // if not, ensure thread is not interrupted. This
1105 // requires a recheck in second case to deal with
1106 // shutdownNow race while clearing interrupt
1107 if ((runStateAtLeast(ctl.get(), STOP) ||
1108 (Thread.interrupted() &&
1109 runStateAtLeast(ctl.get(), STOP))) &&
1110 !wt.isInterrupted())
1111 wt.interrupt();
1112 try {
1113 beforeExecute(wt, task);
1114 Throwable thrown = null;
1115 try {
1116 task.run();
1117 } catch (RuntimeException x) {
1118 thrown = x; throw x;
1119 } catch (Error x) {
1120 thrown = x; throw x;
1121 } catch (Throwable x) {
1122 thrown = x; throw new Error(x);
1123 } finally {
1124 afterExecute(task, thrown);
1125 }
1126 } finally {
1127 task = null;
1128 w.completedTasks++;
1129 w.unlock();
1130 }
1131 }
1132 completedAbruptly = false;
1133 } finally {
1134 processWorkerExit(w, completedAbruptly);
1135 }
1136 }
1137
1138 // Public constructors and methods
1139
1140 /**
1141 * Creates a new {@code ThreadPoolExecutor} with the given initial
1142 * parameters and default thread factory and rejected execution handler.
1143 * It may be more convenient to use one of the {@link Executors} factory
1144 * methods instead of this general purpose constructor.
1145 *
1146 * @param corePoolSize the number of threads to keep in the pool, even
1147 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1148 * @param maximumPoolSize the maximum number of threads to allow in the
1149 * pool
1150 * @param keepAliveTime when the number of threads is greater than
1151 * the core, this is the maximum time that excess idle threads
1152 * will wait for new tasks before terminating.
1153 * @param unit the time unit for the {@code keepAliveTime} argument
1154 * @param workQueue the queue to use for holding tasks before they are
1155 * executed. This queue will hold only the {@code Runnable}
1156 * tasks submitted by the {@code execute} method.
1157 * @throws IllegalArgumentException if one of the following holds:<br>
1158 * {@code corePoolSize < 0}<br>
1159 * {@code keepAliveTime < 0}<br>
1160 * {@code maximumPoolSize <= 0}<br>
1161 * {@code maximumPoolSize < corePoolSize}
1162 * @throws NullPointerException if {@code workQueue} is null
1163 */
1164 public ThreadPoolExecutor(int corePoolSize,
1165 int maximumPoolSize,
1166 long keepAliveTime,
1167 TimeUnit unit,
1168 BlockingQueue<Runnable> workQueue) {
1169 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1170 Executors.defaultThreadFactory(), defaultHandler);
1171 }
1172
1173 /**
1174 * Creates a new {@code ThreadPoolExecutor} with the given initial
1175 * parameters and default rejected execution handler.
1176 *
1177 * @param corePoolSize the number of threads to keep in the pool, even
1178 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1179 * @param maximumPoolSize the maximum number of threads to allow in the
1180 * pool
1181 * @param keepAliveTime when the number of threads is greater than
1182 * the core, this is the maximum time that excess idle threads
1183 * will wait for new tasks before terminating.
1184 * @param unit the time unit for the {@code keepAliveTime} argument
1185 * @param workQueue the queue to use for holding tasks before they are
1186 * executed. This queue will hold only the {@code Runnable}
1187 * tasks submitted by the {@code execute} method.
1188 * @param threadFactory the factory to use when the executor
1189 * creates a new thread
1190 * @throws IllegalArgumentException if one of the following holds:<br>
1191 * {@code corePoolSize < 0}<br>
1192 * {@code keepAliveTime < 0}<br>
1193 * {@code maximumPoolSize <= 0}<br>
1194 * {@code maximumPoolSize < corePoolSize}
1195 * @throws NullPointerException if {@code workQueue}
1196 * or {@code threadFactory} is null
1197 */
1198 public ThreadPoolExecutor(int corePoolSize,
1199 int maximumPoolSize,
1200 long keepAliveTime,
1201 TimeUnit unit,
1202 BlockingQueue<Runnable> workQueue,
1203 ThreadFactory threadFactory) {
1204 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1205 threadFactory, defaultHandler);
1206 }
1207
1208 /**
1209 * Creates a new {@code ThreadPoolExecutor} with the given initial
1210 * parameters and default thread factory.
1211 *
1212 * @param corePoolSize the number of threads to keep in the pool, even
1213 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1214 * @param maximumPoolSize the maximum number of threads to allow in the
1215 * pool
1216 * @param keepAliveTime when the number of threads is greater than
1217 * the core, this is the maximum time that excess idle threads
1218 * will wait for new tasks before terminating.
1219 * @param unit the time unit for the {@code keepAliveTime} argument
1220 * @param workQueue the queue to use for holding tasks before they are
1221 * executed. This queue will hold only the {@code Runnable}
1222 * tasks submitted by the {@code execute} method.
1223 * @param handler the handler to use when execution is blocked
1224 * because the thread bounds and queue capacities are reached
1225 * @throws IllegalArgumentException if one of the following holds:<br>
1226 * {@code corePoolSize < 0}<br>
1227 * {@code keepAliveTime < 0}<br>
1228 * {@code maximumPoolSize <= 0}<br>
1229 * {@code maximumPoolSize < corePoolSize}
1230 * @throws NullPointerException if {@code workQueue}
1231 * or {@code handler} is null
1232 */
1233 public ThreadPoolExecutor(int corePoolSize,
1234 int maximumPoolSize,
1235 long keepAliveTime,
1236 TimeUnit unit,
1237 BlockingQueue<Runnable> workQueue,
1238 RejectedExecutionHandler handler) {
1239 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1240 Executors.defaultThreadFactory(), handler);
1241 }
1242
1243 /**
1244 * Creates a new {@code ThreadPoolExecutor} with the given initial
1245 * parameters.
1246 *
1247 * @param corePoolSize the number of threads to keep in the pool, even
1248 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1249 * @param maximumPoolSize the maximum number of threads to allow in the
1250 * pool
1251 * @param keepAliveTime when the number of threads is greater than
1252 * the core, this is the maximum time that excess idle threads
1253 * will wait for new tasks before terminating.
1254 * @param unit the time unit for the {@code keepAliveTime} argument
1255 * @param workQueue the queue to use for holding tasks before they are
1256 * executed. This queue will hold only the {@code Runnable}
1257 * tasks submitted by the {@code execute} method.
1258 * @param threadFactory the factory to use when the executor
1259 * creates a new thread
1260 * @param handler the handler to use when execution is blocked
1261 * because the thread bounds and queue capacities are reached
1262 * @throws IllegalArgumentException if one of the following holds:<br>
1263 * {@code corePoolSize < 0}<br>
1264 * {@code keepAliveTime < 0}<br>
1265 * {@code maximumPoolSize <= 0}<br>
1266 * {@code maximumPoolSize < corePoolSize}
1267 * @throws NullPointerException if {@code workQueue}
1268 * or {@code threadFactory} or {@code handler} is null
1269 */
1270 public ThreadPoolExecutor(int corePoolSize,
1271 int maximumPoolSize,
1272 long keepAliveTime,
1273 TimeUnit unit,
1274 BlockingQueue<Runnable> workQueue,
1275 ThreadFactory threadFactory,
1276 RejectedExecutionHandler handler) {
1277 if (corePoolSize < 0 ||
1278 maximumPoolSize <= 0 ||
1279 maximumPoolSize < corePoolSize ||
1280 keepAliveTime < 0)
1281 throw new IllegalArgumentException();
1282 if (workQueue == null || threadFactory == null || handler == null)
1283 throw new NullPointerException();
1284 this.corePoolSize = corePoolSize;
1285 this.maximumPoolSize = maximumPoolSize;
1286 this.workQueue = workQueue;
1287 this.keepAliveTime = unit.toNanos(keepAliveTime);
1288 this.threadFactory = threadFactory;
1289 this.handler = handler;
1290 }
1291
1292 /**
1293 * Executes the given task sometime in the future. The task
1294 * may execute in a new thread or in an existing pooled thread.
1295 *
1296 * If the task cannot be submitted for execution, either because this
1297 * executor has been shutdown or because its capacity has been reached,
1298 * the task is handled by the current {@code RejectedExecutionHandler}.
1299 *
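 * <p>A minimal usage sketch, assuming a {@code ThreadPoolExecutor}
 * referenced as {@code pool} (the task body is a placeholder):
 *
 * <pre> {@code
 * try {
 *   pool.execute(new Runnable() {
 *     public void run() { System.out.println("task ran"); }
 *   });
 * } catch (RejectedExecutionException ex) {
 *   // shut down or saturated under the default AbortPolicy
 * }}</pre>
 *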
1300 * @param command the task to execute
1301 * @throws RejectedExecutionException at discretion of
1302 * {@code RejectedExecutionHandler}, if the task
1303 * cannot be accepted for execution
1304 * @throws NullPointerException if {@code command} is null
1305 */
1306 public void execute(Runnable command) {
1307 if (command == null)
1308 throw new NullPointerException();
1309 /*
1310 * Proceed in 3 steps:
1311 *
1312 * 1. If fewer than corePoolSize threads are running, try to
1313 * start a new thread with the given command as its first
1314 * task. The call to addWorker atomically checks runState and
1315 * workerCount, and so prevents false alarms that would add
1316 * threads when it shouldn't, by returning false.
1317 *
1318 * 2. If a task can be successfully queued, then we still need
1319 * to double-check whether we should have added a thread
1320 * (because existing ones died since last checking) or that
1321 * the pool shut down since entry into this method. So we
1322 * recheck state and if necessary roll back the enqueuing if
1323 * stopped, or start a new thread if there are none.
1324 *
1325 * 3. If we cannot queue task, then we try to add a new
1326 * thread. If it fails, we know we are shut down or saturated
1327 * and so reject the task.
1328 */
1329 int c = ctl.get();
1330 if (workerCountOf(c) < corePoolSize) {
1331 if (addWorker(command, true))
1332 return;
1333 c = ctl.get();
1334 }
1335 if (isRunning(c) && workQueue.offer(command)) {
1336 int recheck = ctl.get();
1337 if (! isRunning(recheck) && remove(command))
1338 reject(command);
1339 else if (workerCountOf(recheck) == 0)
1340 addWorker(null, false);
1341 }
1342 else if (!addWorker(command, false))
1343 reject(command);
1344 }
1345
1346 /**
1347 * Initiates an orderly shutdown in which previously submitted
1348 * tasks are executed, but no new tasks will be accepted.
1349 * Invocation has no additional effect if already shut down.
1350 *
1351 * <p>This method does not wait for previously submitted tasks to
1352 * complete execution. Use {@link #awaitTermination awaitTermination}
1353 * to do that.
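 *
 * <p>One common shutdown sequence, sketched here for illustration
 * (the timeouts are arbitrary assumptions, and {@code pool} stands for
 * this executor):
 *
 * <pre> {@code
 * pool.shutdown();                // stop accepting new tasks
 * try {
 *   if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
 *     pool.shutdownNow();         // cancel lingering tasks
 *     pool.awaitTermination(60, TimeUnit.SECONDS);
 *   }
 * } catch (InterruptedException ie) {
 *   pool.shutdownNow();
 *   Thread.currentThread().interrupt();
 * }}</pre>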
1354 *
1355 * @throws SecurityException {@inheritDoc}
1356 */
1357 public void shutdown() {
1358 final ReentrantLock mainLock = this.mainLock;
1359 mainLock.lock();
1360 try {
1361 checkShutdownAccess();
1362 advanceRunState(SHUTDOWN);
1363 interruptIdleWorkers();
1364 onShutdown(); // hook for ScheduledThreadPoolExecutor
1365 } finally {
1366 mainLock.unlock();
1367 }
1368 tryTerminate();
1369 }
1370
1371 /**
1372 * Attempts to stop all actively executing tasks, halts the
1373 * processing of waiting tasks, and returns a list of the tasks
1374 * that were awaiting execution. These tasks are drained (removed)
1375 * from the task queue upon return from this method.
1376 *
1377 * <p>This method does not wait for actively executing tasks to
1378 * terminate. Use {@link #awaitTermination awaitTermination} to
1379 * do that.
1380 *
1381 * <p>There are no guarantees beyond best-effort attempts to stop
1382 * processing actively executing tasks. This implementation
1383 * cancels tasks via {@link Thread#interrupt}, so any task that
1384 * fails to respond to interrupts may never terminate.
1385 *
1386 * @throws SecurityException {@inheritDoc}
1387 */
1388 public List<Runnable> shutdownNow() {
1389 List<Runnable> tasks;
1390 final ReentrantLock mainLock = this.mainLock;
1391 mainLock.lock();
1392 try {
1393 checkShutdownAccess();
1394 advanceRunState(STOP);
1395 interruptWorkers();
1396 tasks = drainQueue();
1397 } finally {
1398 mainLock.unlock();
1399 }
1400 tryTerminate();
1401 return tasks;
1402 }
1403
1404 public boolean isShutdown() {
1405 return ! isRunning(ctl.get());
1406 }
1407
1408 /**
1409 * Returns true if this executor is in the process of terminating
1410 * after {@link #shutdown} or {@link #shutdownNow} but has not
1411 * completely terminated. This method may be useful for
1412 * debugging. A return of {@code true} reported a sufficient
1413 * period after shutdown may indicate that submitted tasks have
1414 * ignored or suppressed interruption, causing this executor not
1415 * to properly terminate.
1416 *
1417 * @return true if terminating but not yet terminated
1418 */
1419 public boolean isTerminating() {
1420 int c = ctl.get();
1421 return ! isRunning(c) && runStateLessThan(c, TERMINATED);
1422 }
1423
1424 public boolean isTerminated() {
1425 return runStateAtLeast(ctl.get(), TERMINATED);
1426 }
1427
1428 public boolean awaitTermination(long timeout, TimeUnit unit)
1429 throws InterruptedException {
1430 long nanos = unit.toNanos(timeout);
1431 final ReentrantLock mainLock = this.mainLock;
1432 mainLock.lock();
1433 try {
1434 for (;;) {
1435 if (runStateAtLeast(ctl.get(), TERMINATED))
1436 return true;
1437 if (nanos <= 0)
1438 return false;
1439 nanos = termination.awaitNanos(nanos);
1440 }
1441 } finally {
1442 mainLock.unlock();
1443 }
1444 }
1445
1446 /**
1447 * Invokes {@code shutdown} when this executor is no longer
1448 * referenced and it has no threads.
1449 */
1450 protected void finalize() {
1451 shutdown();
1452 }
1453
1454 /**
1455 * Sets the thread factory used to create new threads.
1456 *
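 * <p>For illustration, a hypothetical factory that only renames worker
 * threads (the name prefix is an assumption) might look like:
 *
 * <pre> {@code
 * pool.setThreadFactory(new ThreadFactory() {
 *   private final AtomicInteger seq = new AtomicInteger();
 *   public Thread newThread(Runnable r) {
 *     return new Thread(r, "worker-" + seq.incrementAndGet());
 *   }
 * });}</pre>
 *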
1457 * @param threadFactory the new thread factory
1458 * @throws NullPointerException if threadFactory is null
1459 * @see #getThreadFactory
1460 */
1461 public void setThreadFactory(ThreadFactory threadFactory) {
1462 if (threadFactory == null)
1463 throw new NullPointerException();
1464 this.threadFactory = threadFactory;
1465 }
1466
1467 /**
1468 * Returns the thread factory used to create new threads.
1469 *
1470 * @return the current thread factory
1471 * @see #setThreadFactory
1472 */
1473 public ThreadFactory getThreadFactory() {
1474 return threadFactory;
1475 }
1476
1477 /**
1478 * Sets a new handler for unexecutable tasks.
1479 *
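 * <p>For illustration only, a handler that simply counts and drops
 * rejected work might be sketched as follows ({@code rejectedCount} is
 * an assumed {@code AtomicLong}):
 *
 * <pre> {@code
 * pool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
 *   public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
 *     rejectedCount.incrementAndGet();
 *   }
 * });}</pre>
 *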
1480 * @param handler the new handler
1481 * @throws NullPointerException if handler is null
1482 * @see #getRejectedExecutionHandler
1483 */
1484 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1485 if (handler == null)
1486 throw new NullPointerException();
1487 this.handler = handler;
1488 }
1489
1490 /**
1491 * Returns the current handler for unexecutable tasks.
1492 *
1493 * @return the current handler
1494 * @see #setRejectedExecutionHandler
1495 */
1496 public RejectedExecutionHandler getRejectedExecutionHandler() {
1497 return handler;
1498 }
1499
1500 /**
1501 * Sets the core number of threads. This overrides any value set
1502 * in the constructor. If the new value is smaller than the
1503 * current value, excess existing threads will be terminated when
1504 * they next become idle. If larger, new threads will, if needed,
1505 * be started to execute any queued tasks.
1506 *
1507 * @param corePoolSize the new core size
1508 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1509 * @see #getCorePoolSize
1510 */
1511 public void setCorePoolSize(int corePoolSize) {
1512 if (corePoolSize < 0)
1513 throw new IllegalArgumentException();
1514 int delta = corePoolSize - this.corePoolSize;
1515 this.corePoolSize = corePoolSize;
1516 if (workerCountOf(ctl.get()) > corePoolSize)
1517 interruptIdleWorkers();
1518 else if (delta > 0) {
1519 // We don't really know how many new threads are "needed".
1520 // As a heuristic, prestart enough new workers (up to new
1521 // core size) to handle the current number of tasks in
1522 // queue, but stop if queue becomes empty while doing so.
1523 int k = Math.min(delta, workQueue.size());
1524 while (k-- > 0 && addWorker(null, true)) {
1525 if (workQueue.isEmpty())
1526 break;
1527 }
1528 }
1529 }
1530
1531 /**
1532 * Returns the core number of threads.
1533 *
1534 * @return the core number of threads
1535 * @see #setCorePoolSize
1536 */
1537 public int getCorePoolSize() {
1538 return corePoolSize;
1539 }
1540
1541 /**
1542 * Starts a core thread, causing it to idly wait for work. This
1543 * overrides the default policy of starting core threads only when
1544 * new tasks are executed. This method will return {@code false}
1545 * if all core threads have already been started.
1546 *
1547 * @return {@code true} if a thread was started
1548 */
1549 public boolean prestartCoreThread() {
1550 return workerCountOf(ctl.get()) < corePoolSize &&
1551 addWorker(null, true);
1552 }
1553
1554 /**
1555 * Same as prestartCoreThread except it arranges that at least one
1556 * thread is started even if corePoolSize is 0.
1557 */
1558 void ensurePrestart() {
1559 int wc = workerCountOf(ctl.get());
1560 if (wc < corePoolSize)
1561 addWorker(null, true);
1562 else if (wc == 0)
1563 addWorker(null, false);
1564 }
1565
1566 /**
1567 * Starts all core threads, causing them to idly wait for work. This
1568 * overrides the default policy of starting core threads only when
1569 * new tasks are executed.
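     *
     * <p>For example (a sketch; {@code pool} is an assumed executor
     * reference):
     *
     * <pre> {@code
     * int started = pool.prestartAllCoreThreads(); // at most getCorePoolSize() threads
     * }</pre>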
1570 *
1571 * @return the number of threads started
1572 */
1573 public int prestartAllCoreThreads() {
1574 int n = 0;
1575 while (addWorker(null, true))
1576 ++n;
1577 return n;
1578 }
1579
1580 /**
1581 * Returns true if this pool allows core threads to time out and
1582 * terminate if no tasks arrive within the keepAlive time, being
1583 * replaced if needed when new tasks arrive. When true, the same
1584 * keep-alive policy applying to non-core threads applies also to
1585 * core threads. When false (the default), core threads are never
1586 * terminated due to lack of incoming tasks.
1587 *
1588 * @return {@code true} if core threads are allowed to time out,
1589 * else {@code false}
1590 *
1591 * @since 1.6
1592 */
1593 public boolean allowsCoreThreadTimeOut() {
1594 return allowCoreThreadTimeOut;
1595 }
1596
1597 /**
1598 * Sets the policy governing whether core threads may time out and
1599 * terminate if no tasks arrive within the keep-alive time, being
1600 * replaced if needed when new tasks arrive. When false, core
1601 * threads are never terminated due to lack of incoming
1602 * tasks. When true, the same keep-alive policy applying to
1603 * non-core threads applies also to core threads. To avoid
1604 * continual thread replacement, the keep-alive time must be
1605 * greater than zero when setting {@code true}. This method
1606 * should in general be called before the pool is actively used.
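     *
     * <p>For example (a sketch; {@code pool} is an assumed executor
     * reference, and the 30-second keep-alive is an arbitrary choice):
     *
     * <pre> {@code
     * pool.setKeepAliveTime(30, TimeUnit.SECONDS); // must be positive first
     * pool.allowCoreThreadTimeOut(true);           // idle core threads may now exit
     * }</pre>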
1607 *
1608 * @param value {@code true} if core threads should time out, else {@code false}
1609 * @throws IllegalArgumentException if value is {@code true}
1610 * and the current keep-alive time is not greater than zero
1611 *
1612 * @since 1.6
1613 */
1614 public void allowCoreThreadTimeOut(boolean value) {
1615 if (value && keepAliveTime <= 0)
1616 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1617 if (value != allowCoreThreadTimeOut) {
1618 allowCoreThreadTimeOut = value;
1619 if (value)
1620 interruptIdleWorkers();
1621 }
1622 }
1623
1624 /**
1625 * Sets the maximum allowed number of threads. This overrides any
1626 * value set in the constructor. If the new value is smaller than
1627 * the current value, excess existing threads will be
1628 * terminated when they next become idle.
1629 *
1630 * @param maximumPoolSize the new maximum
1631 * @throws IllegalArgumentException if the new maximum is
1632 * less than or equal to zero, or
1633 * less than the {@linkplain #getCorePoolSize core pool size}
1634 * @see #getMaximumPoolSize
1635 */
1636 public void setMaximumPoolSize(int maximumPoolSize) {
1637 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1638 throw new IllegalArgumentException();
1639 this.maximumPoolSize = maximumPoolSize;
1640 if (workerCountOf(ctl.get()) > maximumPoolSize)
1641 interruptIdleWorkers();
1642 }
1643
1644 /**
1645 * Returns the maximum allowed number of threads.
1646 *
1647 * @return the maximum allowed number of threads
1648 * @see #setMaximumPoolSize
1649 */
1650 public int getMaximumPoolSize() {
1651 return maximumPoolSize;
1652 }
1653
1654 /**
1655 * Sets the time limit for which threads may remain idle before
1656 * being terminated. If there are more than the core number of
1657 * threads currently in the pool, after waiting this amount of
1658 * time without processing a task, excess threads will be
1659 * terminated. This overrides any value set in the constructor.
1660 *
1661 * @param time the time to wait. A time value of zero will cause
1662 * excess threads to terminate immediately after executing tasks.
1663 * @param unit the time unit of the {@code time} argument
1664 * @throws IllegalArgumentException if {@code time} is less than zero or
1665 * if {@code time} is zero and {@code allowsCoreThreadTimeOut}
1666 * @see #getKeepAliveTime
1667 */
1668 public void setKeepAliveTime(long time, TimeUnit unit) {
1669 if (time < 0)
1670 throw new IllegalArgumentException();
1671 if (time == 0 && allowsCoreThreadTimeOut())
1672 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1673 long keepAliveTime = unit.toNanos(time);
1674 long delta = keepAliveTime - this.keepAliveTime;
1675 this.keepAliveTime = keepAliveTime;
1676 if (delta < 0)
1677 interruptIdleWorkers();
1678 }
1679
1680 /**
1681 * Returns the thread keep-alive time, which is the amount of time
1682 * that threads in excess of the core pool size may remain
1683 * idle before being terminated.
1684 *
1685 * @param unit the desired time unit of the result
1686 * @return the time limit
1687 * @see #setKeepAliveTime
1688 */
1689 public long getKeepAliveTime(TimeUnit unit) {
1690 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1691 }
1692
1693 /* User-level queue utilities */
1694
1695 /**
1696 * Returns the task queue used by this executor. Access to the
1697 * task queue is intended primarily for debugging and monitoring.
1698 * This queue may be in active use. Retrieving the task queue
1699 * does not prevent queued tasks from executing.
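     *
     * <p>For example, a monitoring snapshot (illustrative only; {@code pool}
     * is an assumed executor reference):
     *
     * <pre> {@code
     * int backlog = pool.getQueue().size(); // approximate, for monitoring only
     * }</pre>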
1700 *
1701 * @return the task queue
1702 */
1703 public BlockingQueue<Runnable> getQueue() {
1704 return workQueue;
1705 }
1706
1707 /**
1708 * Removes this task from the executor's internal queue if it is
1709 * present, thus causing it not to be run if it has not already
1710 * started.
1711 *
1712 * <p> This method may be useful as one part of a cancellation
1713 * scheme. It may fail to remove tasks that have been converted
1714 * into other forms before being placed on the internal queue. For
1715 * example, a task entered using {@code submit} might be
1716 * converted into a form that maintains {@code Future} status.
1717 * However, in such cases, method {@link #purge} may be used to
1718 * remove those Futures that have been cancelled.
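     *
     * <p>For example (a sketch; {@code pool} and {@code task} are assumed
     * variables):
     *
     * <pre> {@code
     * Future<?> f = pool.submit(task);
     * pool.remove(task); // likely fails: the queue holds a wrapping Future, not task
     * f.cancel(false);   // cancel the Future instead ...
     * pool.purge();      // ... then reclaim cancelled entries
     * }</pre>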
1719 *
1720 * @param task the task to remove
1721 * @return {@code true} if the task was removed
1722 */
1723 public boolean remove(Runnable task) {
1724 boolean removed = workQueue.remove(task);
1725 tryTerminate(); // In case SHUTDOWN and now empty
1726 return removed;
1727 }
1728
1729 /**
1730 * Tries to remove from the work queue all {@link Future}
1731 * tasks that have been cancelled. This method can be useful as a
1732 * storage reclamation operation that has no other impact on
1733 * functionality. Cancelled tasks are never executed, but may
1734 * accumulate in work queues until worker threads can actively
1735 * remove them. Invoking this method instead tries to remove them now.
1736 * However, this method may fail to remove tasks in
1737 * the presence of interference by other threads.
1738 */
1739 public void purge() {
1740 final BlockingQueue<Runnable> q = workQueue;
1741 try {
1742 Iterator<Runnable> it = q.iterator();
1743 while (it.hasNext()) {
1744 Runnable r = it.next();
1745 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1746 it.remove();
1747 }
1748 } catch (ConcurrentModificationException fallThrough) {
1749 // Take slow path if we encounter interference during traversal.
1750 // Make copy for traversal and call remove for cancelled entries.
1751 // The slow path is more likely to be O(N*N).
1752 for (Object r : q.toArray())
1753 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1754 q.remove(r);
1755 }
1756
1757 tryTerminate(); // In case SHUTDOWN and now empty
1758 }
1759
1760 /* Statistics */
1761
1762 /**
1763 * Returns the current number of threads in the pool.
1764 *
1765 * @return the number of threads
1766 */
1767 public int getPoolSize() {
1768 final ReentrantLock mainLock = this.mainLock;
1769 mainLock.lock();
1770 try {
1771 // Remove rare and surprising possibility of
1772 // isTerminated() && getPoolSize() > 0
1773 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1774 : workers.size();
1775 } finally {
1776 mainLock.unlock();
1777 }
1778 }
1779
1780 /**
1781 * Returns the approximate number of threads that are actively
1782 * executing tasks.
1783 *
1784 * @return the number of threads
1785 */
1786 public int getActiveCount() {
1787 final ReentrantLock mainLock = this.mainLock;
1788 mainLock.lock();
1789 try {
1790 int n = 0;
1791 for (Worker w : workers)
1792 if (w.isLocked())
1793 ++n;
1794 return n;
1795 } finally {
1796 mainLock.unlock();
1797 }
1798 }
1799
1800 /**
1801 * Returns the largest number of threads that have ever
1802 * simultaneously been in the pool.
1803 *
1804 * @return the number of threads
1805 */
1806 public int getLargestPoolSize() {
1807 final ReentrantLock mainLock = this.mainLock;
1808 mainLock.lock();
1809 try {
1810 return largestPoolSize;
1811 } finally {
1812 mainLock.unlock();
1813 }
1814 }
1815
1816 /**
1817 * Returns the approximate total number of tasks that have ever been
1818 * scheduled for execution. Because the states of tasks and
1819 * threads may change dynamically during computation, the returned
1820 * value is only an approximation.
1821 *
1822 * @return the number of tasks
1823 */
1824 public long getTaskCount() {
1825 final ReentrantLock mainLock = this.mainLock;
1826 mainLock.lock();
1827 try {
1828 long n = completedTaskCount;
1829 for (Worker w : workers) {
1830 n += w.completedTasks;
1831 if (w.isLocked())
1832 ++n;
1833 }
1834 return n + workQueue.size();
1835 } finally {
1836 mainLock.unlock();
1837 }
1838 }
1839
1840 /**
1841 * Returns the approximate total number of tasks that have
1842 * completed execution. Because the states of tasks and threads
1843 * may change dynamically during computation, the returned value
1844 * is only an approximation, but one that does not ever decrease
1845 * across successive calls.
1846 *
1847 * @return the number of tasks
1848 */
1849 public long getCompletedTaskCount() {
1850 final ReentrantLock mainLock = this.mainLock;
1851 mainLock.lock();
1852 try {
1853 long n = completedTaskCount;
1854 for (Worker w : workers)
1855 n += w.completedTasks;
1856 return n;
1857 } finally {
1858 mainLock.unlock();
1859 }
1860 }
1861
1862 /**
1863 * Returns a string identifying this pool, as well as its state,
1864 * including indications of run state and estimated worker and
1865 * task counts.
1866 *
1867 * @return a string identifying this pool, as well as its state
1868 */
1869 public String toString() {
1870 long ncompleted;
1871 int nworkers, nactive;
1872 final ReentrantLock mainLock = this.mainLock;
1873 mainLock.lock();
1874 try {
1875 ncompleted = completedTaskCount;
1876 nactive = 0;
1877 nworkers = workers.size();
1878 for (Worker w : workers) {
1879 ncompleted += w.completedTasks;
1880 if (w.isLocked())
1881 ++nactive;
1882 }
1883 } finally {
1884 mainLock.unlock();
1885 }
1886 int c = ctl.get();
1887 String rs = (runStateLessThan(c, SHUTDOWN) ? "Running" :
1888 (runStateAtLeast(c, TERMINATED) ? "Terminated" :
1889 "Shutting down"));
1890 return super.toString() +
1891 "[" + rs +
1892 ", pool size = " + nworkers +
1893 ", active threads = " + nactive +
1894 ", queued tasks = " + workQueue.size() +
1895 ", completed tasks = " + ncompleted +
1896 "]";
1897 }
1898
1899 /* Extension hooks */
1900
1901 /**
1902 * Method invoked prior to executing the given Runnable in the
1903 * given thread. This method is invoked by thread {@code t} that
1904 * will execute task {@code r}, and may be used to re-initialize
1905 * ThreadLocals, or to perform logging.
1906 *
1907 * <p>This implementation does nothing, but may be customized in
1908 * subclasses. Note: To properly nest multiple overridings, subclasses
1909 * should generally invoke {@code super.beforeExecute} at the end of
1910 * this method.
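     *
     * <p>An illustrative override (the class name is an assumption, and the
     * constructors are elided):
     *
     * <pre> {@code
     * class LoggingExecutor extends ThreadPoolExecutor {
     *   // ... constructors forwarding to super ...
     *   protected void beforeExecute(Thread t, Runnable r) {
     *     System.out.println(t.getName() + " about to run " + r);
     *     super.beforeExecute(t, r); // invoked last, to nest properly
     *   }
     * }}</pre>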
1911 *
1912 * @param t the thread that will run task {@code r}
1913 * @param r the task that will be executed
1914 */
1915 protected void beforeExecute(Thread t, Runnable r) { }
1916
1917 /**
1918 * Method invoked upon completion of execution of the given Runnable.
1919 * This method is invoked by the thread that executed the task. If
1920 * non-null, the Throwable is the uncaught {@code RuntimeException}
1921 * or {@code Error} that caused execution to terminate abruptly.
1922 *
1923 * <p>This implementation does nothing, but may be customized in
1924 * subclasses. Note: To properly nest multiple overridings, subclasses
1925 * should generally invoke {@code super.afterExecute} at the
1926 * beginning of this method.
1927 *
1928 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1929 * {@link FutureTask}) either explicitly or via methods such as
1930 * {@code submit}, these task objects catch and maintain
1931 * computational exceptions, and so they do not cause abrupt
1932 * termination, and the internal exceptions are <em>not</em>
1933 * passed to this method. If you would like to trap both kinds of
1934 * failures in this method, you can further probe for such cases,
1935 * as in this sample subclass that prints either the direct cause
1936 * or the underlying exception if a task has been aborted:
1937 *
1938 * <pre> {@code
1939 * class ExtendedExecutor extends ThreadPoolExecutor {
1940 * // ...
1941 * protected void afterExecute(Runnable r, Throwable t) {
1942 * super.afterExecute(r, t);
1943 * if (t == null && r instanceof Future<?>) {
1944 * try {
1945 * Object result = ((Future<?>) r).get();
1946 * } catch (CancellationException ce) {
1947 * t = ce;
1948 * } catch (ExecutionException ee) {
1949 * t = ee.getCause();
1950 * } catch (InterruptedException ie) {
1951 * Thread.currentThread().interrupt(); // ignore/reset
1952 * }
1953 * }
1954 * if (t != null)
1955 * System.out.println(t);
1956 * }
1957 * }}</pre>
1958 *
1959 * @param r the runnable that has completed
1960 * @param t the exception that caused termination, or null if
1961 * execution completed normally
1962 */
1963 protected void afterExecute(Runnable r, Throwable t) { }
1964
1965 /**
1966 * Method invoked when the Executor has terminated. Default
1967 * implementation does nothing. Note: To properly nest multiple
1968 * overridings, subclasses should generally invoke
1969 * {@code super.terminated} within this method.
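     *
     * <p>An illustrative override (sketch only; the class name is an
     * assumption, and the constructors are elided):
     *
     * <pre> {@code
     * class NotifyingExecutor extends ThreadPoolExecutor {
     *   // ...
     *   protected void terminated() {
     *     super.terminated();
     *     System.out.println("pool fully terminated");
     *   }
     * }}</pre>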
1970 */
1971 protected void terminated() { }
1972
1973 /* Predefined RejectedExecutionHandlers */
1974
1975 /**
1976 * A handler for rejected tasks that runs the rejected task
1977 * directly in the calling thread of the {@code execute} method,
1978 * unless the executor has been shut down, in which case the task
1979 * is discarded.
1980 */
1981 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1982 /**
1983 * Creates a {@code CallerRunsPolicy}.
1984 */
1985 public CallerRunsPolicy() { }
1986
1987 /**
1988 * Executes task r in the caller's thread, unless the executor
1989 * has been shut down, in which case the task is discarded.
1990 *
1991 * @param r the runnable task requested to be executed
1992 * @param e the executor attempting to execute this task
1993 */
1994 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1995 if (!e.isShutdown()) {
1996 r.run();
1997 }
1998 }
1999 }
2000
2001 /**
2002 * A handler for rejected tasks that throws a
2003 * {@code RejectedExecutionException}.
2004 */
2005 public static class AbortPolicy implements RejectedExecutionHandler {
2006 /**
2007 * Creates an {@code AbortPolicy}.
2008 */
2009 public AbortPolicy() { }
2010
2011 /**
2012 * Always throws RejectedExecutionException.
2013 *
2014 * @param r the runnable task requested to be executed
2015 * @param e the executor attempting to execute this task
2016 * @throws RejectedExecutionException always.
2017 */
2018 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2019 throw new RejectedExecutionException("Task " + r.toString() +
2020 " rejected from " +
2021 e.toString());
2022 }
2023 }
2024
2025 /**
2026 * A handler for rejected tasks that silently discards the
2027 * rejected task.
2028 */
2029 public static class DiscardPolicy implements RejectedExecutionHandler {
2030 /**
2031 * Creates a {@code DiscardPolicy}.
2032 */
2033 public DiscardPolicy() { }
2034
2035 /**
2036 * Does nothing, which has the effect of discarding task r.
2037 *
2038 * @param r the runnable task requested to be executed
2039 * @param e the executor attempting to execute this task
2040 */
2041 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2042 }
2043 }
2044
2045 /**
2046 * A handler for rejected tasks that discards the oldest unhandled
2047 * request and then retries {@code execute}, unless the executor
2048 * is shut down, in which case the task is discarded.
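     *
     * <p>For example (an illustrative configuration; the pool and queue
     * sizes shown are arbitrary assumptions):
     *
     * <pre> {@code
     * ThreadPoolExecutor pool = new ThreadPoolExecutor(
     *     1, 1, 0L, TimeUnit.MILLISECONDS,
     *     new ArrayBlockingQueue<Runnable>(10),
     *     new ThreadPoolExecutor.DiscardOldestPolicy());
     * }</pre>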
2049 */
2050 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
2051 /**
2052 * Creates a {@code DiscardOldestPolicy}.
2053 */
2054 public DiscardOldestPolicy() { }
2055
2056 /**
2057 * Obtains and ignores the next task that the executor
2058 * would otherwise execute, if one is immediately available,
2059 * and then retries execution of task r, unless the executor
2060 * is shut down, in which case task r is instead discarded.
2061 *
2062 * @param r the runnable task requested to be executed
2063 * @param e the executor attempting to execute this task
2064 */
2065 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2066 if (!e.isShutdown()) {
2067 e.getQueue().poll();
2068 e.execute(r);
2069 }
2070 }
2071 }
2072 }