root/jsr166/jsr166/src/jdk7/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.13
Committed: Mon Jul 22 18:31:19 2013 UTC (10 years, 10 months ago) by jsr166
Branch: MAIN
Changes since 1.12: +1 -1 lines
Log Message:
javadoc style

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/publicdomain/zero/1.0/
5 */
6
7 package java.util.concurrent;
8 import java.util.concurrent.locks.AbstractQueuedSynchronizer;
9 import java.util.concurrent.locks.Condition;
10 import java.util.concurrent.locks.ReentrantLock;
11 import java.util.concurrent.atomic.AtomicInteger;
12 import java.util.*;
13
14 /**
15 * An {@link ExecutorService} that executes each submitted task using
16 * one of possibly several pooled threads, normally configured
17 * using {@link Executors} factory methods.
18 *
19 * <p>Thread pools address two different problems: they usually
20 * provide improved performance when executing large numbers of
21 * asynchronous tasks, due to reduced per-task invocation overhead,
22 * and they provide a means of bounding and managing the resources,
23 * including threads, consumed when executing a collection of tasks.
24 * Each {@code ThreadPoolExecutor} also maintains some basic
25 * statistics, such as the number of completed tasks.
26 *
27 * <p>To be useful across a wide range of contexts, this class
28 * provides many adjustable parameters and extensibility
29 * hooks. However, programmers are urged to use the more convenient
30 * {@link Executors} factory methods {@link
31 * Executors#newCachedThreadPool} (unbounded thread pool, with
32 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
33 * (fixed size thread pool) and {@link
34 * Executors#newSingleThreadExecutor} (single background thread), that
35 * preconfigure settings for the most common usage
36 * scenarios. Otherwise, use the following guide when manually
37 * configuring and tuning this class:
38 *
39 * <dl>
40 *
41 * <dt>Core and maximum pool sizes</dt>
42 *
43 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
44 * pool size (see {@link #getPoolSize})
45 * according to the bounds set by
46 * corePoolSize (see {@link #getCorePoolSize}) and
47 * maximumPoolSize (see {@link #getMaximumPoolSize}).
48 *
49 * When a new task is submitted in method {@link #execute(Runnable)},
50 * and fewer than corePoolSize threads are running, a new thread is
51 * created to handle the request, even if other worker threads are
52 * idle. If there are more than corePoolSize but less than
53 * maximumPoolSize threads running, a new thread will be created only
54 * if the queue is full. By setting corePoolSize and maximumPoolSize
55 * the same, you create a fixed-size thread pool. By setting
56 * maximumPoolSize to an essentially unbounded value such as {@code
57 * Integer.MAX_VALUE}, you allow the pool to accommodate an arbitrary
58 * number of concurrent tasks. Most typically, core and maximum pool
59 * sizes are set only upon construction, but they may also be changed
60 * dynamically using {@link #setCorePoolSize} and {@link
61 * #setMaximumPoolSize}. </dd>
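*
* <p>For example (the specific values here are illustrative assumptions,
* not recommendations), a pool that keeps two core threads but may grow
* to four under load could be configured, and later adjusted, as follows:
*
* <pre> {@code
* ThreadPoolExecutor pool = new ThreadPoolExecutor(
*     2, 4, 60L, TimeUnit.SECONDS,
*     new ArrayBlockingQueue<Runnable>(100));
* pool.setMaximumPoolSize(8); // bounds may also be changed dynamically
* }</pre>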
62 *
63 * <dt>On-demand construction</dt>
64 *
65 * <dd>By default, even core threads are initially created and
66 * started only when new tasks arrive, but this can be overridden
67 * dynamically using method {@link #prestartCoreThread} or {@link
68 * #prestartAllCoreThreads}. You probably want to prestart threads if
69 * you construct the pool with a non-empty queue. </dd>
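*
* <p>For instance (a sketch, assuming {@code pool} was constructed as in
* the example above with tasks already sitting in its queue):
*
* <pre> {@code
* int started = pool.prestartAllCoreThreads(); // start all core threads now
* }</pre>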
70 *
71 * <dt>Creating new threads</dt>
72 *
73 * <dd>New threads are created using a {@link ThreadFactory}. If not
74 * otherwise specified, a {@link Executors#defaultThreadFactory} is
75 * used, which creates threads that are all in the same {@link
76 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
77 * non-daemon status. By supplying a different ThreadFactory, you can
78 * alter the thread's name, thread group, priority, daemon status,
79 * etc. If a {@code ThreadFactory} fails to create a thread when asked
80 * by returning null from {@code newThread}, the executor will
81 * continue, but might not be able to execute any tasks. Threads
82 * should possess the "modifyThread" {@code RuntimePermission}. If
83 * worker threads or other threads using the pool do not possess this
84 * permission, service may be degraded: configuration changes may not
85 * take effect in a timely manner, and a shutdown pool may remain in a
86 * state in which termination is possible but not completed.</dd>
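*
* <p>As an illustration (a sketch only, with an arbitrarily chosen naming
* scheme), a factory that names its threads and marks them as daemons
* might look like:
*
* <pre> {@code
* ThreadFactory namedDaemonFactory = new ThreadFactory() {
*   private final AtomicInteger count = new AtomicInteger();
*   public Thread newThread(Runnable r) {
*     Thread t = Executors.defaultThreadFactory().newThread(r);
*     t.setName("pool-worker-" + count.incrementAndGet());
*     t.setDaemon(true);
*     return t;
*   }
* };}</pre>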
87 *
88 * <dt>Keep-alive times</dt>
89 *
90 * <dd>If the pool currently has more than corePoolSize threads,
91 * excess threads will be terminated if they have been idle for more
92 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
93 * This provides a means of reducing resource consumption when the
94 * pool is not being actively used. If the pool becomes more active
95 * later, new threads will be constructed. This parameter can also be
96 * changed dynamically using method {@link #setKeepAliveTime(long,
97 * TimeUnit)}. Using a value of {@code Long.MAX_VALUE} {@link
98 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever
99 * terminating prior to shut down. By default, the keep-alive policy
100 * applies only when there are more than corePoolSize threads. But
101 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to
102 * apply this time-out policy to core threads as well, so long as the
103 * keepAliveTime value is non-zero. </dd>
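*
* <p>For example (illustrative values, with {@code pool} as above):
*
* <pre> {@code
* pool.setKeepAliveTime(30L, TimeUnit.SECONDS);
* pool.allowCoreThreadTimeOut(true); // idle core threads may now terminate
* }</pre>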
104 *
105 * <dt>Queuing</dt>
106 *
107 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
108 * submitted tasks. The use of this queue interacts with pool sizing:
109 *
110 * <ul>
111 *
112 * <li> If fewer than corePoolSize threads are running, the Executor
113 * always prefers adding a new thread
114 * rather than queuing.</li>
115 *
116 * <li> If corePoolSize or more threads are running, the Executor
117 * always prefers queuing a request rather than adding a new
118 * thread.</li>
119 *
120 * <li> If a request cannot be queued, a new thread is created unless
121 * this would exceed maximumPoolSize, in which case, the task will be
122 * rejected.</li>
123 *
124 * </ul>
125 *
126 * There are three general strategies for queuing:
127 * <ol>
128 *
129 * <li> <em> Direct handoffs.</em> A good default choice for a work
130 * queue is a {@link SynchronousQueue} that hands off tasks to threads
131 * without otherwise holding them. Here, an attempt to queue a task
132 * will fail if no threads are immediately available to run it, so a
133 * new thread will be constructed. This policy avoids lockups when
134 * handling sets of requests that might have internal dependencies.
135 * Direct handoffs generally require unbounded maximumPoolSizes to
136 * avoid rejection of newly submitted tasks. This in turn admits the
137 * possibility of unbounded thread growth when commands continue to
138 * arrive on average faster than they can be processed. </li>
139 *
140 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
141 * example a {@link LinkedBlockingQueue} without a predefined
142 * capacity) will cause new tasks to wait in the queue when all
143 * corePoolSize threads are busy. Thus, no more than corePoolSize
144 * threads will ever be created. (And the value of the maximumPoolSize
145 * therefore doesn't have any effect.) This may be appropriate when
146 * each task is completely independent of others, so tasks cannot
147 * affect each other's execution; for example, in a web page server.
148 * While this style of queuing can be useful in smoothing out
149 * transient bursts of requests, it admits the possibility of
150 * unbounded work queue growth when commands continue to arrive on
151 * average faster than they can be processed. </li>
152 *
153 * <li><em>Bounded queues.</em> A bounded queue (for example, an
154 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
155 * used with finite maximumPoolSizes, but can be more difficult to
156 * tune and control. Queue sizes and maximum pool sizes may be traded
157 * off for each other: Using large queues and small pools minimizes
158 * CPU usage, OS resources, and context-switching overhead, but can
159 * lead to artificially low throughput. If tasks frequently block (for
160 * example if they are I/O bound), a system may be able to schedule
161 * time for more threads than you otherwise allow. Use of small queues
162 * generally requires larger pool sizes, which keeps CPUs busier but
163 * may encounter unacceptable scheduling overhead, which also
164 * decreases throughput. </li>
165 *
166 * </ol>
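*
* <p>To make the first two strategies concrete (the parameter values below
* are illustrative assumptions only):
*
* <pre> {@code
* // Direct handoff: threads are created, up to the maximum, as tasks arrive
* Executor handoff = new ThreadPoolExecutor(
*     0, 64, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
*
* // Unbounded queue: never more than corePoolSize threads are created
* Executor queued = new ThreadPoolExecutor(
*     4, 4, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
* }</pre>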
167 *
168 * </dd>
169 *
170 * <dt>Rejected tasks</dt>
171 *
172 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be
173 * <em>rejected</em> when the Executor has been shut down, and also when
174 * the Executor uses finite bounds for both maximum threads and work queue
175 * capacity, and is saturated. In either case, the {@code execute} method
176 * invokes the {@link
177 * RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
178 * method of its {@link RejectedExecutionHandler}. Four predefined handler
179 * policies are provided:
180 *
181 * <ol>
182 *
183 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
184 * handler throws a runtime {@link RejectedExecutionException} upon
185 * rejection. </li>
186 *
187 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
188 * that invokes {@code execute} itself runs the task. This provides a
189 * simple feedback control mechanism that will slow down the rate that
190 * new tasks are submitted. </li>
191 *
192 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
193 * cannot be executed is simply dropped. </li>
194 *
195 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
196 * executor is not shut down, the task at the head of the work queue
197 * is dropped, and then execution is retried (which can fail again,
198 * causing this to be repeated.) </li>
199 *
200 * </ol>
201 *
202 * It is possible to define and use other kinds of {@link
203 * RejectedExecutionHandler} classes. Doing so requires some care
204 * especially when policies are designed to work only under particular
205 * capacity or queuing policies. </dd>
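*
* <p>As one sketch of such a custom policy (not one of the predefined
* handlers), a handler could make the submitting thread block until queue
* space becomes available:
*
* <pre> {@code
* RejectedExecutionHandler blockingPolicy = new RejectedExecutionHandler() {
*   public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
*     if (e.isShutdown())
*       throw new RejectedExecutionException("executor has been shut down");
*     try {
*       e.getQueue().put(r); // wait for space rather than failing
*     } catch (InterruptedException ie) {
*       Thread.currentThread().interrupt();
*       throw new RejectedExecutionException(ie);
*     }
*   }
* };}</pre>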
206 *
207 * <dt>Hook methods</dt>
208 *
209 * <dd>This class provides {@code protected} overridable
210 * {@link #beforeExecute(Thread, Runnable)} and
211 * {@link #afterExecute(Runnable, Throwable)} methods that are called
212 * before and after execution of each task. These can be used to
213 * manipulate the execution environment; for example, reinitializing
214 * ThreadLocals, gathering statistics, or adding log entries.
215 * Additionally, method {@link #terminated} can be overridden to perform
216 * any special processing that needs to be done once the Executor has
217 * fully terminated.
218 *
219 * <p>If hook or callback methods throw exceptions, internal worker
220 * threads may in turn fail and abruptly terminate.</dd>
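*
* <p>Besides the pause/resume extension shown below, a minimal sketch of an
* {@code afterExecute} override that logs task failures might be:
*
* <pre> {@code
* class LoggingThreadPoolExecutor extends ThreadPoolExecutor {
*   LoggingThreadPoolExecutor(int core, int max, long keepAlive, TimeUnit unit,
*                             BlockingQueue<Runnable> queue) {
*     super(core, max, keepAlive, unit, queue);
*   }
*   protected void afterExecute(Runnable r, Throwable t) {
*     super.afterExecute(r, t);
*     if (t != null)
*       System.err.println("Task " + r + " failed: " + t);
*   }
* }}</pre>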
221 *
222 * <dt>Queue maintenance</dt>
223 *
224 * <dd>Method {@link #getQueue()} allows access to the work queue
225 * for purposes of monitoring and debugging. Use of this method for
226 * any other purpose is strongly discouraged. Two supplied methods,
227 * {@link #remove(Runnable)} and {@link #purge} are available to
228 * assist in storage reclamation when large numbers of queued tasks
229 * become cancelled.</dd>
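*
* <p>For example (a sketch, where {@code future} stands for the {@link
* Future} of a previously submitted task):
*
* <pre> {@code
* future.cancel(false); // cancelled tasks may linger in the work queue
* pool.purge();         // removes cancelled Futures, reclaiming their storage
* }</pre>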
230 *
231 * <dt>Finalization</dt>
232 *
233 * <dd>A pool that is no longer referenced in a program <em>AND</em>
234 * has no remaining threads will be {@code shutdown} automatically. If
235 * you would like to ensure that unreferenced pools are reclaimed even
236 * if users forget to call {@link #shutdown}, then you must arrange
237 * that unused threads eventually die, by setting appropriate
238 * keep-alive times, using a lower bound of zero core threads and/or
239 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
240 *
241 * </dl>
242 *
243 * <p><b>Extension example</b>. Most extensions of this class
244 * override one or more of the protected hook methods. For example,
245 * here is a subclass that adds a simple pause/resume feature:
246 *
247 * <pre> {@code
248 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
249 *   private boolean isPaused;
250 *   private ReentrantLock pauseLock = new ReentrantLock();
251 *   private Condition unpaused = pauseLock.newCondition();
252 *
253 *   public PausableThreadPoolExecutor(...) { super(...); }
254 *
255 *   protected void beforeExecute(Thread t, Runnable r) {
256 *     super.beforeExecute(t, r);
257 *     pauseLock.lock();
258 *     try {
259 *       while (isPaused) unpaused.await();
260 *     } catch (InterruptedException ie) {
261 *       t.interrupt();
262 *     } finally {
263 *       pauseLock.unlock();
264 *     }
265 *   }
266 *
267 *   public void pause() {
268 *     pauseLock.lock();
269 *     try {
270 *       isPaused = true;
271 *     } finally {
272 *       pauseLock.unlock();
273 *     }
274 *   }
275 *
276 *   public void resume() {
277 *     pauseLock.lock();
278 *     try {
279 *       isPaused = false;
280 *       unpaused.signalAll();
281 *     } finally {
282 *       pauseLock.unlock();
283 *     }
284 *   }
285 * }}</pre>
286 *
287 * @since 1.5
288 * @author Doug Lea
289 */
290 public class ThreadPoolExecutor extends AbstractExecutorService {
291 /**
292 * The main pool control state, ctl, is an atomic integer packing
293 * two conceptual fields
294 * workerCount, indicating the effective number of threads
295 * runState, indicating whether running, shutting down etc
296 *
297 * In order to pack them into one int, we limit workerCount to
298 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
299 * billion) otherwise representable. If this is ever an issue in
300 * the future, the variable can be changed to be an AtomicLong,
301 * and the shift/mask constants below adjusted. But until the need
302 * arises, this code is a bit faster and simpler using an int.
303 *
304 * The workerCount is the number of workers that have been
305 * permitted to start and not permitted to stop. The value may be
306 * transiently different from the actual number of live threads,
307 * for example when a ThreadFactory fails to create a thread when
308 * asked, and when exiting threads are still performing
309 * bookkeeping before terminating. The user-visible pool size is
310 * reported as the current size of the workers set.
311 *
312 * The runState provides the main lifecycle control, taking on values:
313 *
314 * RUNNING: Accept new tasks and process queued tasks
315 * SHUTDOWN: Don't accept new tasks, but process queued tasks
316 * STOP: Don't accept new tasks, don't process queued tasks,
317 * and interrupt in-progress tasks
318 * TIDYING: All tasks have terminated, workerCount is zero,
319 * the thread transitioning to state TIDYING
320 * will run the terminated() hook method
321 * TERMINATED: terminated() has completed
322 *
323 * The numerical order among these values matters, to allow
324 * ordered comparisons. The runState monotonically increases over
325 * time, but need not hit each state. The transitions are:
326 *
327 * RUNNING -> SHUTDOWN
328 * On invocation of shutdown(), perhaps implicitly in finalize()
329 * (RUNNING or SHUTDOWN) -> STOP
330 * On invocation of shutdownNow()
331 * SHUTDOWN -> TIDYING
332 * When both queue and pool are empty
333 * STOP -> TIDYING
334 * When pool is empty
335 * TIDYING -> TERMINATED
336 * When the terminated() hook method has completed
337 *
338 * Threads waiting in awaitTermination() will return when the
339 * state reaches TERMINATED.
340 *
341 * Detecting the transition from SHUTDOWN to TIDYING is less
342 * straightforward than you'd like because the queue may become
343 * empty after non-empty and vice versa during SHUTDOWN state, but
344 * we can only terminate if, after seeing that it is empty, we see
345 * that workerCount is 0 (which sometimes entails a recheck -- see
346 * below).
347 */
348 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
349 private static final int COUNT_BITS = Integer.SIZE - 3;
350 private static final int CAPACITY = (1 << COUNT_BITS) - 1;
351
352 // runState is stored in the high-order bits
353 private static final int RUNNING = -1 << COUNT_BITS;
354 private static final int SHUTDOWN = 0 << COUNT_BITS;
355 private static final int STOP = 1 << COUNT_BITS;
356 private static final int TIDYING = 2 << COUNT_BITS;
357 private static final int TERMINATED = 3 << COUNT_BITS;
358
359 // Packing and unpacking ctl
360 private static int runStateOf(int c) { return c & ~CAPACITY; }
361 private static int workerCountOf(int c) { return c & CAPACITY; }
362 private static int ctlOf(int rs, int wc) { return rs | wc; }
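// Illustrative example of this packing, using the 29 count bits above:
//   ctlOf(RUNNING, 5)         == 0xe0000005
//   runStateOf(0xe0000005)    == RUNNING (0xe0000000)
//   workerCountOf(0xe0000005) == 5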
363
364 /*
365 * Bit field accessors that don't require unpacking ctl.
366 * These depend on the bit layout and on workerCount being never negative.
367 */
368
369 private static boolean runStateLessThan(int c, int s) {
370 return c < s;
371 }
372
373 private static boolean runStateAtLeast(int c, int s) {
374 return c >= s;
375 }
376
377 private static boolean isRunning(int c) {
378 return c < SHUTDOWN;
379 }
380
381 /**
382 * Attempts to CAS-increment the workerCount field of ctl.
383 */
384 private boolean compareAndIncrementWorkerCount(int expect) {
385 return ctl.compareAndSet(expect, expect + 1);
386 }
387
388 /**
389 * Attempts to CAS-decrement the workerCount field of ctl.
390 */
391 private boolean compareAndDecrementWorkerCount(int expect) {
392 return ctl.compareAndSet(expect, expect - 1);
393 }
394
395 /**
396 * Decrements the workerCount field of ctl. This is called only on
397 * abrupt termination of a thread (see processWorkerExit). Other
398 * decrements are performed within getTask.
399 */
400 private void decrementWorkerCount() {
401 do {} while (! compareAndDecrementWorkerCount(ctl.get()));
402 }
403
404 /**
405 * The queue used for holding tasks and handing off to worker
406 * threads. We do not require that workQueue.poll() returning
407 * null necessarily means that workQueue.isEmpty(), so we rely
408 * solely on isEmpty to see if the queue is empty (which we must
409 * do for example when deciding whether to transition from
410 * SHUTDOWN to TIDYING). This accommodates special-purpose
411 * queues such as DelayQueues for which poll() is allowed to
412 * return null even if it may later return non-null when delays
413 * expire.
414 */
415 private final BlockingQueue<Runnable> workQueue;
416
417 /**
418 * Lock held on access to workers set and related bookkeeping.
419 * While we could use a concurrent set of some sort, it turns out
420 * to be generally preferable to use a lock. Among the reasons is
421 * that this serializes interruptIdleWorkers, which avoids
422 * unnecessary interrupt storms, especially during shutdown.
423 * Otherwise exiting threads would concurrently interrupt those
424 * that have not yet interrupted. It also simplifies some of the
425 * associated statistics bookkeeping of largestPoolSize etc. We
426 * also hold mainLock on shutdown and shutdownNow, for the sake of
427 * ensuring workers set is stable while separately checking
428 * permission to interrupt and actually interrupting.
429 */
430 private final ReentrantLock mainLock = new ReentrantLock();
431
432 /**
433 * Set containing all worker threads in pool. Accessed only when
434 * holding mainLock.
435 */
436 private final HashSet<Worker> workers = new HashSet<Worker>();
437
438 /**
439 * Wait condition to support awaitTermination
440 */
441 private final Condition termination = mainLock.newCondition();
442
443 /**
444 * Tracks largest attained pool size. Accessed only under
445 * mainLock.
446 */
447 private int largestPoolSize;
448
449 /**
450 * Counter for completed tasks. Updated only on termination of
451 * worker threads. Accessed only under mainLock.
452 */
453 private long completedTaskCount;
454
455 /*
456 * All user control parameters are declared as volatiles so that
457 * ongoing actions are based on freshest values, but without need
458 * for locking, since no internal invariants depend on them
459 * changing synchronously with respect to other actions.
460 */
461
462 /**
463 * Factory for new threads. All threads are created using this
464 * factory (via method addWorker). All callers must be prepared
465 * for addWorker to fail, which may reflect a system or user's
466 * policy limiting the number of threads. Even though it is not
467 * treated as an error, failure to create threads may result in
468 * new tasks being rejected or existing ones remaining stuck in
469 * the queue.
470 *
471 * We go further and preserve pool invariants even in the face of
472 * errors such as OutOfMemoryError, that might be thrown while
473 * trying to create threads. Such errors are rather common due to
474 * the need to allocate a native stack in Thread.start, and users
475 * will want to perform clean pool shutdown to clean up. There
476 * will likely be enough memory available for the cleanup code to
477 * complete without encountering yet another OutOfMemoryError.
478 */
479 private volatile ThreadFactory threadFactory;
480
481 /**
482 * Handler called when saturated or shutdown in execute.
483 */
484 private volatile RejectedExecutionHandler handler;
485
486 /**
487 * Timeout in nanoseconds for idle threads waiting for work.
488 * Threads use this timeout when there are more than corePoolSize
489 * present or if allowCoreThreadTimeOut. Otherwise they wait
490 * forever for new work.
491 */
492 private volatile long keepAliveTime;
493
494 /**
495 * If false (default), core threads stay alive even when idle.
496 * If true, core threads use keepAliveTime to time out waiting
497 * for work.
498 */
499 private volatile boolean allowCoreThreadTimeOut;
500
501 /**
502 * Core pool size is the minimum number of workers to keep alive
503 * (and not allow to time out etc) unless allowCoreThreadTimeOut
504 * is set, in which case the minimum is zero.
505 */
506 private volatile int corePoolSize;
507
508 /**
509 * Maximum pool size. Note that the actual maximum is internally
510 * bounded by CAPACITY.
511 */
512 private volatile int maximumPoolSize;
513
514 /**
515 * The default rejected execution handler
516 */
517 private static final RejectedExecutionHandler defaultHandler =
518 new AbortPolicy();
519
520 /**
521 * Permission required for callers of shutdown and shutdownNow.
522 * We additionally require (see checkShutdownAccess) that callers
523 * have permission to actually interrupt threads in the worker set
524 * (as governed by Thread.interrupt, which relies on
525 * ThreadGroup.checkAccess, which in turn relies on
526 * SecurityManager.checkAccess). Shutdowns are attempted only if
527 * these checks pass.
528 *
529 * All actual invocations of Thread.interrupt (see
530 * interruptIdleWorkers and interruptWorkers) ignore
531 * SecurityExceptions, meaning that the attempted interrupts
532 * silently fail. In the case of shutdown, they should not fail
533 * unless the SecurityManager has inconsistent policies, sometimes
534 * allowing access to a thread and sometimes not. In such cases,
535 * failure to actually interrupt threads may disable or delay full
536 * termination. Other uses of interruptIdleWorkers are advisory,
537 * and failure to actually interrupt will merely delay response to
538 * configuration changes so is not handled exceptionally.
539 */
540 private static final RuntimePermission shutdownPerm =
541 new RuntimePermission("modifyThread");
542
543 /**
544 * Class Worker mainly maintains interrupt control state for
545 * threads running tasks, along with other minor bookkeeping.
546 * This class opportunistically extends AbstractQueuedSynchronizer
547 * to simplify acquiring and releasing a lock surrounding each
548 * task execution. This protects against interrupts that are
549 * intended to wake up a worker thread waiting for a task from
550 * instead interrupting a task being run. We implement a simple
551 * non-reentrant mutual exclusion lock rather than use
552 * ReentrantLock because we do not want worker tasks to be able to
553 * reacquire the lock when they invoke pool control methods like
554 * setCorePoolSize. Additionally, to suppress interrupts until
555 * the thread actually starts running tasks, we initialize lock
556 * state to a negative value, and clear it upon start (in
557 * runWorker).
558 */
559 private final class Worker
560 extends AbstractQueuedSynchronizer
561 implements Runnable
562 {
563 /**
564 * This class will never be serialized, but we provide a
565 * serialVersionUID to suppress a javac warning.
566 */
567 private static final long serialVersionUID = 6138294804551838833L;
568
569 /** Thread this worker is running in. Null if factory fails. */
570 final Thread thread;
571 /** Initial task to run. Possibly null. */
572 Runnable firstTask;
573 /** Per-thread task counter */
574 volatile long completedTasks;
575
576 /**
577 * Creates with given first task and thread from ThreadFactory.
578 * @param firstTask the first task (null if none)
579 */
580 Worker(Runnable firstTask) {
581 setState(-1); // inhibit interrupts until runWorker
582 this.firstTask = firstTask;
583 this.thread = getThreadFactory().newThread(this);
584 }
585
586 /** Delegates main run loop to outer runWorker. */
587 public void run() {
588 runWorker(this);
589 }
590
591 // Lock methods
592 //
593 // The value 0 represents the unlocked state.
594 // The value 1 represents the locked state.
595
596 protected boolean isHeldExclusively() {
597 return getState() != 0;
598 }
599
600 protected boolean tryAcquire(int unused) {
601 if (compareAndSetState(0, 1)) {
602 setExclusiveOwnerThread(Thread.currentThread());
603 return true;
604 }
605 return false;
606 }
607
608 protected boolean tryRelease(int unused) {
609 setExclusiveOwnerThread(null);
610 setState(0);
611 return true;
612 }
613
614 public void lock() { acquire(1); }
615 public boolean tryLock() { return tryAcquire(1); }
616 public void unlock() { release(1); }
617 public boolean isLocked() { return isHeldExclusively(); }
618
619 void interruptIfStarted() {
620 Thread t;
621 if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
622 try {
623 t.interrupt();
624 } catch (SecurityException ignore) {
625 }
626 }
627 }
628 }
629
630 /*
631 * Methods for setting control state
632 */
633
634 /**
635 * Transitions runState to given target, or leaves it alone if
636 * already at least the given target.
637 *
638 * @param targetState the desired state, either SHUTDOWN or STOP
639 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
640 */
641 private void advanceRunState(int targetState) {
642 for (;;) {
643 int c = ctl.get();
644 if (runStateAtLeast(c, targetState) ||
645 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
646 break;
647 }
648 }
649
650 /**
651 * Transitions to TERMINATED state if either (SHUTDOWN and pool
652 * and queue empty) or (STOP and pool empty). If otherwise
653 * eligible to terminate but workerCount is nonzero, interrupts an
654 * idle worker to ensure that shutdown signals propagate. This
655 * method must be called following any action that might make
656 * termination possible -- reducing worker count or removing tasks
657 * from the queue during shutdown. The method is non-private to
658 * allow access from ScheduledThreadPoolExecutor.
659 */
660 final void tryTerminate() {
661 for (;;) {
662 int c = ctl.get();
663 if (isRunning(c) ||
664 runStateAtLeast(c, TIDYING) ||
665 (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
666 return;
667 if (workerCountOf(c) != 0) { // Eligible to terminate
668 interruptIdleWorkers(ONLY_ONE);
669 return;
670 }
671
672 final ReentrantLock mainLock = this.mainLock;
673 mainLock.lock();
674 try {
675 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
676 try {
677 terminated();
678 } finally {
679 ctl.set(ctlOf(TERMINATED, 0));
680 termination.signalAll();
681 }
682 return;
683 }
684 } finally {
685 mainLock.unlock();
686 }
687 // else retry on failed CAS
688 }
689 }
690
691 /*
692 * Methods for controlling interrupts to worker threads.
693 */
694
695 /**
696 * If there is a security manager, makes sure caller has
697 * permission to shut down threads in general (see shutdownPerm).
698 * If this passes, additionally makes sure the caller is allowed
699 * to interrupt each worker thread. This might not be true even if
700 * first check passed, if the SecurityManager treats some threads
701 * specially.
702 */
703 private void checkShutdownAccess() {
704 SecurityManager security = System.getSecurityManager();
705 if (security != null) {
706 security.checkPermission(shutdownPerm);
707 final ReentrantLock mainLock = this.mainLock;
708 mainLock.lock();
709 try {
710 for (Worker w : workers)
711 security.checkAccess(w.thread);
712 } finally {
713 mainLock.unlock();
714 }
715 }
716 }
717
718 /**
719 * Interrupts all threads, even if active. Ignores SecurityExceptions
720 * (in which case some threads may remain uninterrupted).
721 */
722 private void interruptWorkers() {
723 final ReentrantLock mainLock = this.mainLock;
724 mainLock.lock();
725 try {
726 for (Worker w : workers)
727 w.interruptIfStarted();
728 } finally {
729 mainLock.unlock();
730 }
731 }
732
733 /**
734 * Interrupts threads that might be waiting for tasks (as
735 * indicated by not being locked) so they can check for
736 * termination or configuration changes. Ignores
737 * SecurityExceptions (in which case some threads may remain
738 * uninterrupted).
739 *
740 * @param onlyOne If true, interrupt at most one worker. This is
741 * called only from tryTerminate when termination is otherwise
742 * enabled but there are still other workers. In this case, at
743 * most one waiting worker is interrupted to propagate shutdown
744 * signals in case all threads are currently waiting.
745 * Interrupting any arbitrary thread ensures that newly arriving
746 * workers since shutdown began will also eventually exit.
747 * To guarantee eventual termination, it suffices to always
748 * interrupt only one idle worker, but shutdown() interrupts all
749 * idle workers so that redundant workers exit promptly, not
750 * waiting for a straggler task to finish.
751 */
752 private void interruptIdleWorkers(boolean onlyOne) {
753 final ReentrantLock mainLock = this.mainLock;
754 mainLock.lock();
755 try {
756 for (Worker w : workers) {
757 Thread t = w.thread;
758 if (!t.isInterrupted() && w.tryLock()) {
759 try {
760 t.interrupt();
761 } catch (SecurityException ignore) {
762 } finally {
763 w.unlock();
764 }
765 }
766 if (onlyOne)
767 break;
768 }
769 } finally {
770 mainLock.unlock();
771 }
772 }
773
774 /**
775 * Common form of interruptIdleWorkers, to avoid having to
776 * remember what the boolean argument means.
777 */
778 private void interruptIdleWorkers() {
779 interruptIdleWorkers(false);
780 }
781
782 private static final boolean ONLY_ONE = true;
783
784 /*
785 * Misc utilities, most of which are also exported to
786 * ScheduledThreadPoolExecutor
787 */
788
789 /**
790 * Invokes the rejected execution handler for the given command.
791 * Package-protected for use by ScheduledThreadPoolExecutor.
792 */
793 final void reject(Runnable command) {
794 handler.rejectedExecution(command, this);
795 }
796
797 /**
798 * Performs any further cleanup following run state transition on
799 * invocation of shutdown. A no-op here, but used by
800 * ScheduledThreadPoolExecutor to cancel delayed tasks.
801 */
802 void onShutdown() {
803 }
804
805 /**
806 * State check needed by ScheduledThreadPoolExecutor to
807 * enable running tasks during shutdown.
808 *
809 * @param shutdownOK true if this should return true when the run state is SHUTDOWN
810 */
811 final boolean isRunningOrShutdown(boolean shutdownOK) {
812 int rs = runStateOf(ctl.get());
813 return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
814 }
815
816 /**
817 * Drains the task queue into a new list, normally using
818 * drainTo. But if the queue is a DelayQueue or any other kind of
819 * queue for which poll or drainTo may fail to remove some
820 * elements, it deletes them one by one.
821 */
822 private List<Runnable> drainQueue() {
823 BlockingQueue<Runnable> q = workQueue;
824 ArrayList<Runnable> taskList = new ArrayList<Runnable>();
825 q.drainTo(taskList);
826 if (!q.isEmpty()) {
827 for (Runnable r : q.toArray(new Runnable[0])) {
828 if (q.remove(r))
829 taskList.add(r);
830 }
831 }
832 return taskList;
833 }
834
835 /*
836 * Methods for creating, running and cleaning up after workers
837 */
838
839 /**
840 * Checks if a new worker can be added with respect to current
841 * pool state and the given bound (either core or maximum). If so,
842 * the worker count is adjusted accordingly, and, if possible, a
843 * new worker is created and started, running firstTask as its
844 * first task. This method returns false if the pool is stopped or
845 * eligible to shut down. It also returns false if the thread
846 * factory fails to create a thread when asked. If the thread
847 * creation fails, either due to the thread factory returning
848 * null, or due to an exception (typically OutOfMemoryError in
849 * Thread.start()), we roll back cleanly.
850 *
851 * @param firstTask the task the new thread should run first (or
852 * null if none). Workers are created with an initial first task
853 * (in method execute()) to bypass queuing when there are fewer
854 * than corePoolSize threads (in which case we always start one),
855 * or when the queue is full (in which case we must bypass queue).
856 * Initially idle threads are usually created via
857 * prestartCoreThread or to replace other dying workers.
858 *
859 * @param core if true use corePoolSize as bound, else
860 * maximumPoolSize. (A boolean indicator is used here rather than a
861 * value to ensure reads of fresh values after checking other pool
862 * state).
863 * @return true if successful
864 */
865 private boolean addWorker(Runnable firstTask, boolean core) {
866 retry:
867 for (;;) {
868 int c = ctl.get();
869 int rs = runStateOf(c);
870
871 // Check if queue empty only if necessary.
872 if (rs >= SHUTDOWN &&
873 ! (rs == SHUTDOWN &&
874 firstTask == null &&
875 ! workQueue.isEmpty()))
876 return false;
877
878 for (;;) {
879 int wc = workerCountOf(c);
880 if (wc >= CAPACITY ||
881 wc >= (core ? corePoolSize : maximumPoolSize))
882 return false;
883 if (compareAndIncrementWorkerCount(c))
884 break retry;
885 c = ctl.get(); // Re-read ctl
886 if (runStateOf(c) != rs)
887 continue retry;
888 // else CAS failed due to workerCount change; retry inner loop
889 }
890 }
891
892 boolean workerStarted = false;
893 boolean workerAdded = false;
894 Worker w = null;
895 try {
896 w = new Worker(firstTask);
897 final Thread t = w.thread;
898 if (t != null) {
899 final ReentrantLock mainLock = this.mainLock;
900 mainLock.lock();
901 try {
902 // Recheck while holding lock.
903 // Back out on ThreadFactory failure or if
904 // shut down before lock acquired.
905 int rs = runStateOf(ctl.get());
906
907 if (rs < SHUTDOWN ||
908 (rs == SHUTDOWN && firstTask == null)) {
909 if (t.isAlive()) // precheck that t is startable
910 throw new IllegalThreadStateException();
911 workers.add(w);
912 int s = workers.size();
913 if (s > largestPoolSize)
914 largestPoolSize = s;
915 workerAdded = true;
916 }
917 } finally {
918 mainLock.unlock();
919 }
920 if (workerAdded) {
921 t.start();
922 workerStarted = true;
923 }
924 }
925 } finally {
926 if (! workerStarted)
927 addWorkerFailed(w);
928 }
929 return workerStarted;
930 }
931
932 /**
933 * Rolls back the worker thread creation.
934 * - removes worker from workers, if present
935 * - decrements worker count
936 * - rechecks for termination, in case the existence of this
937 * worker was holding up termination
938 */
939 private void addWorkerFailed(Worker w) {
940 final ReentrantLock mainLock = this.mainLock;
941 mainLock.lock();
942 try {
943 if (w != null)
944 workers.remove(w);
945 decrementWorkerCount();
946 tryTerminate();
947 } finally {
948 mainLock.unlock();
949 }
950 }
951
952 /**
953 * Performs cleanup and bookkeeping for a dying worker. Called
954 * only from worker threads. Unless completedAbruptly is set,
955 * assumes that workerCount has already been adjusted to account
956 * for exit. This method removes thread from worker set, and
957 * possibly terminates the pool or replaces the worker if either
958 * it exited due to user task exception or if fewer than
959 * corePoolSize workers are running or queue is non-empty but
960 * there are no workers.
961 *
962 * @param w the worker
963 * @param completedAbruptly if the worker died due to user exception
964 */
965 private void processWorkerExit(Worker w, boolean completedAbruptly) {
966 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
967 decrementWorkerCount();
968
969 final ReentrantLock mainLock = this.mainLock;
970 mainLock.lock();
971 try {
972 completedTaskCount += w.completedTasks;
973 workers.remove(w);
974 } finally {
975 mainLock.unlock();
976 }
977
978 tryTerminate();
979
980 int c = ctl.get();
981 if (runStateLessThan(c, STOP)) {
982 if (!completedAbruptly) {
983 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
984 if (min == 0 && ! workQueue.isEmpty())
985 min = 1;
986 if (workerCountOf(c) >= min)
987 return; // replacement not needed
988 }
989 addWorker(null, false);
990 }
991 }
992
993 /**
994 * Performs blocking or timed wait for a task, depending on
995 * current configuration settings, or returns null if this worker
996 * must exit because of any of:
997 * 1. There are more than maximumPoolSize workers (due to
998 * a call to setMaximumPoolSize).
999 * 2. The pool is stopped.
1000 * 3. The pool is shutdown and the queue is empty.
1001 * 4. This worker timed out waiting for a task, and timed-out
1002 * workers are subject to termination (that is,
1003 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
1004 * both before and after the timed wait, and if the queue is
1005 * non-empty, this worker is not the last thread in the pool.
1006 *
1007 * @return task, or null if the worker must exit, in which case
1008 * workerCount is decremented
1009 */
1010 private Runnable getTask() {
1011 boolean timedOut = false; // Did the last poll() time out?
1012
1013 for (;;) {
1014 int c = ctl.get();
1015 int rs = runStateOf(c);
1016
1017 // Check if queue empty only if necessary.
1018 if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
1019 decrementWorkerCount();
1020 return null;
1021 }
1022
1023 int wc = workerCountOf(c);
1024
1025 // Are workers subject to culling?
1026 boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
1027
1028 if ((wc > maximumPoolSize || (timed && timedOut))
1029 && (wc > 1 || workQueue.isEmpty())) {
1030 if (compareAndDecrementWorkerCount(c))
1031 return null;
1032 continue;
1033 }
1034
1035 try {
1036 Runnable r = timed ?
1037 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
1038 workQueue.take();
1039 if (r != null)
1040 return r;
1041 timedOut = true;
1042 } catch (InterruptedException retry) {
1043 timedOut = false;
1044 }
1045 }
1046 }
1047
1048 /**
1049 * Main worker run loop. Repeatedly gets tasks from queue and
1050 * executes them, while coping with a number of issues:
1051 *
1052 * 1. We may start out with an initial task, in which case we
1053 * don't need to get the first one. Otherwise, as long as pool is
1054 * running, we get tasks from getTask. If it returns null then the
1055 * worker exits due to changed pool state or configuration
1056 * parameters. Other exits result from exception throws in
1057 * external code, in which case completedAbruptly holds, which
1058 * usually leads processWorkerExit to replace this thread.
1059 *
1060 * 2. Before running any task, the lock is acquired to prevent
1061 * other pool interrupts while the task is executing, and then we
1062 * ensure that unless pool is stopping, this thread does not have
1063 * its interrupt set.
1064 *
1065 * 3. Each task run is preceded by a call to beforeExecute, which
1066 * might throw an exception, in which case we cause thread to die
1067 * (breaking loop with completedAbruptly true) without processing
1068 * the task.
1069 *
1070 * 4. Assuming beforeExecute completes normally, we run the task,
1071 * gathering any of its thrown exceptions to send to afterExecute.
1072 * We separately handle RuntimeException, Error (both of which the
1073 * specs guarantee that we trap) and arbitrary Throwables.
1074 * Because we cannot rethrow Throwables within Runnable.run, we
1075 * wrap them within Errors on the way out (to the thread's
1076 * UncaughtExceptionHandler). Any thrown exception also
1077 * conservatively causes thread to die.
1078 *
1079 * 5. After task.run completes, we call afterExecute, which may
1080 * also throw an exception, which will also cause thread to
1081 * die. According to JLS Sec 14.20, this exception is the one that
1082 * will be in effect even if task.run throws.
1083 *
1084 * The net effect of the exception mechanics is that afterExecute
1085 * and the thread's UncaughtExceptionHandler have as accurate
1086 * information as we can provide about any problems encountered by
1087 * user code.
1088 *
1089 * @param w the worker
1090 */
1091 final void runWorker(Worker w) {
1092 Thread wt = Thread.currentThread();
1093 Runnable task = w.firstTask;
1094 w.firstTask = null;
1095 w.unlock(); // allow interrupts
1096 boolean completedAbruptly = true;
1097 try {
1098 while (task != null || (task = getTask()) != null) {
1099 w.lock();
1100 // If pool is stopping, ensure thread is interrupted;
1101 // if not, ensure thread is not interrupted. This
1102 // requires a recheck in second case to deal with
1103 // shutdownNow race while clearing interrupt
1104 if ((runStateAtLeast(ctl.get(), STOP) ||
1105 (Thread.interrupted() &&
1106 runStateAtLeast(ctl.get(), STOP))) &&
1107 !wt.isInterrupted())
1108 wt.interrupt();
1109 try {
1110 beforeExecute(wt, task);
1111 Throwable thrown = null;
1112 try {
1113 task.run();
1114 } catch (RuntimeException x) {
1115 thrown = x; throw x;
1116 } catch (Error x) {
1117 thrown = x; throw x;
1118 } catch (Throwable x) {
1119 thrown = x; throw new Error(x);
1120 } finally {
1121 afterExecute(task, thrown);
1122 }
1123 } finally {
1124 task = null;
1125 w.completedTasks++;
1126 w.unlock();
1127 }
1128 }
1129 completedAbruptly = false;
1130 } finally {
1131 processWorkerExit(w, completedAbruptly);
1132 }
1133 }
1134
1135 // Public constructors and methods
1136
1137 /**
1138 * Creates a new {@code ThreadPoolExecutor} with the given initial
1139 * parameters and default thread factory and rejected execution handler.
1140 * It may be more convenient to use one of the {@link Executors} factory
1141 * methods instead of this general purpose constructor.
1142 *
1143 * @param corePoolSize the number of threads to keep in the pool, even
1144 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1145 * @param maximumPoolSize the maximum number of threads to allow in the
1146 * pool
1147 * @param keepAliveTime when the number of threads is greater than
1148 * the core, this is the maximum time that excess idle threads
1149 * will wait for new tasks before terminating.
1150 * @param unit the time unit for the {@code keepAliveTime} argument
1151 * @param workQueue the queue to use for holding tasks before they are
1152 * executed. This queue will hold only the {@code Runnable}
1153 * tasks submitted by the {@code execute} method.
1154 * @throws IllegalArgumentException if one of the following holds:<br>
1155 * {@code corePoolSize < 0}<br>
1156 * {@code keepAliveTime < 0}<br>
1157 * {@code maximumPoolSize <= 0}<br>
1158 * {@code maximumPoolSize < corePoolSize}
1159 * @throws NullPointerException if {@code workQueue} is null
1160 */
1161 public ThreadPoolExecutor(int corePoolSize,
1162 int maximumPoolSize,
1163 long keepAliveTime,
1164 TimeUnit unit,
1165 BlockingQueue<Runnable> workQueue) {
1166 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1167 Executors.defaultThreadFactory(), defaultHandler);
1168 }
1169
1170 /**
1171 * Creates a new {@code ThreadPoolExecutor} with the given initial
1172 * parameters and default rejected execution handler.
1173 *
1174 * @param corePoolSize the number of threads to keep in the pool, even
1175 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1176 * @param maximumPoolSize the maximum number of threads to allow in the
1177 * pool
1178 * @param keepAliveTime when the number of threads is greater than
1179 * the core, this is the maximum time that excess idle threads
1180 * will wait for new tasks before terminating.
1181 * @param unit the time unit for the {@code keepAliveTime} argument
1182 * @param workQueue the queue to use for holding tasks before they are
1183 * executed. This queue will hold only the {@code Runnable}
1184 * tasks submitted by the {@code execute} method.
1185 * @param threadFactory the factory to use when the executor
1186 * creates a new thread
1187 * @throws IllegalArgumentException if one of the following holds:<br>
1188 * {@code corePoolSize < 0}<br>
1189 * {@code keepAliveTime < 0}<br>
1190 * {@code maximumPoolSize <= 0}<br>
1191 * {@code maximumPoolSize < corePoolSize}
1192 * @throws NullPointerException if {@code workQueue}
1193 * or {@code threadFactory} is null
1194 */
1195 public ThreadPoolExecutor(int corePoolSize,
1196 int maximumPoolSize,
1197 long keepAliveTime,
1198 TimeUnit unit,
1199 BlockingQueue<Runnable> workQueue,
1200 ThreadFactory threadFactory) {
1201 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1202 threadFactory, defaultHandler);
1203 }
1204
1205 /**
1206 * Creates a new {@code ThreadPoolExecutor} with the given initial
1207 * parameters and default thread factory.
1208 *
1209 * @param corePoolSize the number of threads to keep in the pool, even
1210 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1211 * @param maximumPoolSize the maximum number of threads to allow in the
1212 * pool
1213 * @param keepAliveTime when the number of threads is greater than
1214 * the core, this is the maximum time that excess idle threads
1215 * will wait for new tasks before terminating.
1216 * @param unit the time unit for the {@code keepAliveTime} argument
1217 * @param workQueue the queue to use for holding tasks before they are
1218 * executed. This queue will hold only the {@code Runnable}
1219 * tasks submitted by the {@code execute} method.
1220 * @param handler the handler to use when execution is blocked
1221 * because the thread bounds and queue capacities are reached
1222 * @throws IllegalArgumentException if one of the following holds:<br>
1223 * {@code corePoolSize < 0}<br>
1224 * {@code keepAliveTime < 0}<br>
1225 * {@code maximumPoolSize <= 0}<br>
1226 * {@code maximumPoolSize < corePoolSize}
1227 * @throws NullPointerException if {@code workQueue}
1228 * or {@code handler} is null
1229 */
1230 public ThreadPoolExecutor(int corePoolSize,
1231 int maximumPoolSize,
1232 long keepAliveTime,
1233 TimeUnit unit,
1234 BlockingQueue<Runnable> workQueue,
1235 RejectedExecutionHandler handler) {
1236 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1237 Executors.defaultThreadFactory(), handler);
1238 }
1239
1240 /**
1241 * Creates a new {@code ThreadPoolExecutor} with the given initial
1242 * parameters.
1243 *
1244 * @param corePoolSize the number of threads to keep in the pool, even
1245 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1246 * @param maximumPoolSize the maximum number of threads to allow in the
1247 * pool
1248 * @param keepAliveTime when the number of threads is greater than
1249 * the core, this is the maximum time that excess idle threads
1250 * will wait for new tasks before terminating.
1251 * @param unit the time unit for the {@code keepAliveTime} argument
1252 * @param workQueue the queue to use for holding tasks before they are
1253 * executed. This queue will hold only the {@code Runnable}
1254 * tasks submitted by the {@code execute} method.
1255 * @param threadFactory the factory to use when the executor
1256 * creates a new thread
1257 * @param handler the handler to use when execution is blocked
1258 * because the thread bounds and queue capacities are reached
1259 * @throws IllegalArgumentException if one of the following holds:<br>
1260 * {@code corePoolSize < 0}<br>
1261 * {@code keepAliveTime < 0}<br>
1262 * {@code maximumPoolSize <= 0}<br>
1263 * {@code maximumPoolSize < corePoolSize}
1264 * @throws NullPointerException if {@code workQueue}
1265 * or {@code threadFactory} or {@code handler} is null
1266 */
1267 public ThreadPoolExecutor(int corePoolSize,
1268 int maximumPoolSize,
1269 long keepAliveTime,
1270 TimeUnit unit,
1271 BlockingQueue<Runnable> workQueue,
1272 ThreadFactory threadFactory,
1273 RejectedExecutionHandler handler) {
1274 if (corePoolSize < 0 ||
1275 maximumPoolSize <= 0 ||
1276 maximumPoolSize < corePoolSize ||
1277 keepAliveTime < 0)
1278 throw new IllegalArgumentException();
1279 if (workQueue == null || threadFactory == null || handler == null)
1280 throw new NullPointerException();
1281 this.corePoolSize = corePoolSize;
1282 this.maximumPoolSize = maximumPoolSize;
1283 this.workQueue = workQueue;
1284 this.keepAliveTime = unit.toNanos(keepAliveTime);
1285 this.threadFactory = threadFactory;
1286 this.handler = handler;
1287 }
1288
1289 /**
1290 * Executes the given task sometime in the future. The task
1291 * may execute in a new thread or in an existing pooled thread.
1292 *
1293 * If the task cannot be submitted for execution, either because this
1294 * executor has been shutdown or because its capacity has been reached,
1295 * the task is handled by the current {@code RejectedExecutionHandler}.
1296 *
1297 * @param command the task to execute
1298 * @throws RejectedExecutionException at discretion of
1299 * {@code RejectedExecutionHandler}, if the task
1300 * cannot be accepted for execution
1301 * @throws NullPointerException if {@code command} is null
1302 */
1303 public void execute(Runnable command) {
1304 if (command == null)
1305 throw new NullPointerException();
1306 /*
1307 * Proceed in 3 steps:
1308 *
1309 * 1. If fewer than corePoolSize threads are running, try to
1310 * start a new thread with the given command as its first
1311 * task. The call to addWorker atomically checks runState and
1312 * workerCount, and so prevents false alarms that would add
1313 * threads when it shouldn't, by returning false.
1314 *
1315 * 2. If a task can be successfully queued, then we still need
1316 * to double-check whether we should have added a thread
1317 * (because existing ones died since last checking) or that
1318 * the pool shut down since entry into this method. So we
1319 * recheck state and if necessary roll back the enqueuing if
1320 * stopped, or start a new thread if there are none.
1321 *
1322 * 3. If we cannot queue task, then we try to add a new
1323 * thread. If it fails, we know we are shut down or saturated
1324 * and so reject the task.
1325 */
1326 int c = ctl.get();
1327 if (workerCountOf(c) < corePoolSize) {
1328 if (addWorker(command, true))
1329 return;
1330 c = ctl.get();
1331 }
1332 if (isRunning(c) && workQueue.offer(command)) {
1333 int recheck = ctl.get();
1334 if (! isRunning(recheck) && remove(command))
1335 reject(command);
1336 else if (workerCountOf(recheck) == 0)
1337 addWorker(null, false);
1338 }
1339 else if (!addWorker(command, false))
1340 reject(command);
1341 }
1342
1343 /**
1344 * Initiates an orderly shutdown in which previously submitted
1345 * tasks are executed, but no new tasks will be accepted.
1346 * Invocation has no additional effect if already shut down.
1347 *
1348 * <p>This method does not wait for previously submitted tasks to
1349 * complete execution. Use {@link #awaitTermination awaitTermination}
1350 * to do that.
1351 *
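* <p>A common shutdown idiom (a sketch, not something mandated by this
* class) pairs {@code shutdown} with a bounded wait and a fallback to
* {@code shutdownNow}:
*
* <pre> {@code
* pool.shutdown(); // stop accepting new tasks
* try {
*   if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
*     pool.shutdownNow(); // cancel lingering tasks
*     pool.awaitTermination(60, TimeUnit.SECONDS);
*   }
* } catch (InterruptedException ie) {
*   pool.shutdownNow();
*   Thread.currentThread().interrupt();
* }}</pre>
*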
1352 * @throws SecurityException {@inheritDoc}
1353 */
1354 public void shutdown() {
1355 final ReentrantLock mainLock = this.mainLock;
1356 mainLock.lock();
1357 try {
1358 checkShutdownAccess();
1359 advanceRunState(SHUTDOWN);
1360 interruptIdleWorkers();
1361 onShutdown(); // hook for ScheduledThreadPoolExecutor
1362 } finally {
1363 mainLock.unlock();
1364 }
1365 tryTerminate();
1366 }
1367
1368 /**
1369 * Attempts to stop all actively executing tasks, halts the
1370 * processing of waiting tasks, and returns a list of the tasks
1371 * that were awaiting execution. These tasks are drained (removed)
1372 * from the task queue upon return from this method.
1373 *
1374 * <p>This method does not wait for actively executing tasks to
1375 * terminate. Use {@link #awaitTermination awaitTermination} to
1376 * do that.
1377 *
1378 * <p>There are no guarantees beyond best-effort attempts to stop
1379 * processing actively executing tasks. This implementation
1380 * cancels tasks via {@link Thread#interrupt}, so any task that
1381 * fails to respond to interrupts may never terminate.
1382 *
1383 * @throws SecurityException {@inheritDoc}
1384 */
1385 public List<Runnable> shutdownNow() {
1386 List<Runnable> tasks;
1387 final ReentrantLock mainLock = this.mainLock;
1388 mainLock.lock();
1389 try {
1390 checkShutdownAccess();
1391 advanceRunState(STOP);
1392 interruptWorkers();
1393 tasks = drainQueue();
1394 } finally {
1395 mainLock.unlock();
1396 }
1397 tryTerminate();
1398 return tasks;
1399 }
1400
1401 public boolean isShutdown() {
1402 return ! isRunning(ctl.get());
1403 }
1404
1405 /**
1406 * Returns true if this executor is in the process of terminating
1407 * after {@link #shutdown} or {@link #shutdownNow} but has not
1408 * completely terminated. This method may be useful for
1409 * debugging. A return of {@code true} reported a sufficient
1410 * period after shutdown may indicate that submitted tasks have
1411 * ignored or suppressed interruption, causing this executor not
1412 * to properly terminate.
1413 *
1414 * @return {@code true} if terminating but not yet terminated
1415 */
1416 public boolean isTerminating() {
1417 int c = ctl.get();
1418 return ! isRunning(c) && runStateLessThan(c, TERMINATED);
1419 }
1420
1421 public boolean isTerminated() {
1422 return runStateAtLeast(ctl.get(), TERMINATED);
1423 }
1424
1425 public boolean awaitTermination(long timeout, TimeUnit unit)
1426 throws InterruptedException {
1427 long nanos = unit.toNanos(timeout);
1428 final ReentrantLock mainLock = this.mainLock;
1429 mainLock.lock();
1430 try {
1431 for (;;) {
1432 if (runStateAtLeast(ctl.get(), TERMINATED))
1433 return true;
1434 if (nanos <= 0)
1435 return false;
1436 nanos = termination.awaitNanos(nanos);
1437 }
1438 } finally {
1439 mainLock.unlock();
1440 }
1441 }
1442
1443 /**
1444 * Invokes {@code shutdown} when this executor is no longer
1445 * referenced and it has no threads.
1446 */
1447 protected void finalize() {
1448 shutdown();
1449 }
1450
1451 /**
1452 * Sets the thread factory used to create new threads.
1453 *
1454 * @param threadFactory the new thread factory
1455 * @throws NullPointerException if threadFactory is null
1456 * @see #getThreadFactory
1457 */
1458 public void setThreadFactory(ThreadFactory threadFactory) {
1459 if (threadFactory == null)
1460 throw new NullPointerException();
1461 this.threadFactory = threadFactory;
1462 }
1463
1464 /**
1465 * Returns the thread factory used to create new threads.
1466 *
1467 * @return the current thread factory
1468 * @see #setThreadFactory(ThreadFactory)
1469 */
1470 public ThreadFactory getThreadFactory() {
1471 return threadFactory;
1472 }
1473
1474 /**
1475 * Sets a new handler for unexecutable tasks.
1476 *
1477 * @param handler the new handler
1478 * @throws NullPointerException if handler is null
1479 * @see #getRejectedExecutionHandler
1480 */
1481 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1482 if (handler == null)
1483 throw new NullPointerException();
1484 this.handler = handler;
1485 }
1486
1487 /**
1488 * Returns the current handler for unexecutable tasks.
1489 *
1490 * @return the current handler
1491 * @see #setRejectedExecutionHandler(RejectedExecutionHandler)
1492 */
1493 public RejectedExecutionHandler getRejectedExecutionHandler() {
1494 return handler;
1495 }
1496
1497 /**
1498 * Sets the core number of threads. This overrides any value set
1499 * in the constructor. If the new value is smaller than the
1500 * current value, excess existing threads will be terminated when
1501 * they next become idle. If larger, new threads will, if needed,
1502 * be started to execute any queued tasks.
1503 *
1504 * @param corePoolSize the new core size
1505 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1506 * @see #getCorePoolSize
1507 */
1508 public void setCorePoolSize(int corePoolSize) {
1509 if (corePoolSize < 0)
1510 throw new IllegalArgumentException();
1511 int delta = corePoolSize - this.corePoolSize;
1512 this.corePoolSize = corePoolSize;
1513 if (workerCountOf(ctl.get()) > corePoolSize)
1514 interruptIdleWorkers();
1515 else if (delta > 0) {
1516 // We don't really know how many new threads are "needed".
1517 // As a heuristic, prestart enough new workers (up to new
1518 // core size) to handle the current number of tasks in
1519 // queue, but stop if queue becomes empty while doing so.
1520 int k = Math.min(delta, workQueue.size());
1521 while (k-- > 0 && addWorker(null, true)) {
1522 if (workQueue.isEmpty())
1523 break;
1524 }
1525 }
1526 }
1527
1528 /**
1529 * Returns the core number of threads.
1530 *
1531 * @return the core number of threads
1532 * @see #setCorePoolSize
1533 */
1534 public int getCorePoolSize() {
1535 return corePoolSize;
1536 }
1537
1538 /**
1539 * Starts a core thread, causing it to idly wait for work. This
1540 * overrides the default policy of starting core threads only when
1541 * new tasks are executed. This method will return {@code false}
1542 * if all core threads have already been started.
1543 *
1544 * @return {@code true} if a thread was started
1545 */
1546 public boolean prestartCoreThread() {
1547 return workerCountOf(ctl.get()) < corePoolSize &&
1548 addWorker(null, true);
1549 }
1550
1551 /**
1552 * Same as prestartCoreThread except that it arranges that at least one
1553 * thread is started even if corePoolSize is 0.
1554 */
1555 void ensurePrestart() {
1556 int wc = workerCountOf(ctl.get());
1557 if (wc < corePoolSize)
1558 addWorker(null, true);
1559 else if (wc == 0)
1560 addWorker(null, false);
1561 }
1562
1563 /**
1564 * Starts all core threads, causing them to idly wait for work. This
1565 * overrides the default policy of starting core threads only when
1566 * new tasks are executed.
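     *
     * <p>For example (illustrative only), a fixed-size pool can be warmed
     * up before any tasks are submitted:
     *
     * <pre> {@code
     * ThreadPoolExecutor pool =
     *     (ThreadPoolExecutor) Executors.newFixedThreadPool(8);
     * int started = pool.prestartAllCoreThreads(); // typically 8}</pre>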
1567 *
1568 * @return the number of threads started
1569 */
1570 public int prestartAllCoreThreads() {
1571 int n = 0;
1572 while (addWorker(null, true))
1573 ++n;
1574 return n;
1575 }
1576
1577 /**
1578 * Returns true if this pool allows core threads to time out and
1579 * terminate if no tasks arrive within the keep-alive time, being
1580 * replaced if needed when new tasks arrive. When true, the same
1581 * keep-alive policy applying to non-core threads applies also to
1582 * core threads. When false (the default), core threads are never
1583 * terminated due to lack of incoming tasks.
1584 *
1585 * @return {@code true} if core threads are allowed to time out,
1586 * else {@code false}
1587 *
1588 * @since 1.6
1589 */
1590 public boolean allowsCoreThreadTimeOut() {
1591 return allowCoreThreadTimeOut;
1592 }
1593
1594 /**
1595 * Sets the policy governing whether core threads may time out and
1596 * terminate if no tasks arrive within the keep-alive time, being
1597 * replaced if needed when new tasks arrive. When false, core
1598 * threads are never terminated due to lack of incoming
1599 * tasks. When true, the same keep-alive policy applying to
1600 * non-core threads applies also to core threads. To avoid
1601 * continual thread replacement, the keep-alive time must be
1602 * greater than zero when setting {@code true}. This method
1603 * should in general be called before the pool is actively used.
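     *
     * <p>For example (an illustrative sketch; the 60-second value is
     * arbitrary), a fixed-size pool can be allowed to shrink to zero
     * threads when idle, provided a positive keep-alive time is set first:
     *
     * <pre> {@code
     * ThreadPoolExecutor pool =
     *     (ThreadPoolExecutor) Executors.newFixedThreadPool(4);
     * pool.setKeepAliveTime(60, TimeUnit.SECONDS); // must be positive first
     * pool.allowCoreThreadTimeOut(true);           // idle core threads may now exit}</pre>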
1604 *
1605 * @param value {@code true} if should time out, else {@code false}
1606 * @throws IllegalArgumentException if value is {@code true}
1607 * and the current keep-alive time is not greater than zero
1608 *
1609 * @since 1.6
1610 */
1611 public void allowCoreThreadTimeOut(boolean value) {
1612 if (value && keepAliveTime <= 0)
1613 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1614 if (value != allowCoreThreadTimeOut) {
1615 allowCoreThreadTimeOut = value;
1616 if (value)
1617 interruptIdleWorkers();
1618 }
1619 }
1620
1621 /**
1622 * Sets the maximum allowed number of threads. This overrides any
1623 * value set in the constructor. If the new value is smaller than
1624 * the current value, excess existing threads will be
1625 * terminated when they next become idle.
1626 *
1627 * @param maximumPoolSize the new maximum
1628 * @throws IllegalArgumentException if the new maximum is
1629 * less than or equal to zero, or
1630 * less than the {@linkplain #getCorePoolSize core pool size}
1631 * @see #getMaximumPoolSize
1632 */
1633 public void setMaximumPoolSize(int maximumPoolSize) {
1634 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1635 throw new IllegalArgumentException();
1636 this.maximumPoolSize = maximumPoolSize;
1637 if (workerCountOf(ctl.get()) > maximumPoolSize)
1638 interruptIdleWorkers();
1639 }
1640
1641 /**
1642 * Returns the maximum allowed number of threads.
1643 *
1644 * @return the maximum allowed number of threads
1645 * @see #setMaximumPoolSize
1646 */
1647 public int getMaximumPoolSize() {
1648 return maximumPoolSize;
1649 }
1650
1651 /**
1652 * Sets the time limit for which threads may remain idle before
1653 * being terminated. If there are more than the core number of
1654 * threads currently in the pool, after waiting this amount of
1655 * time without processing a task, excess threads will be
1656 * terminated. This overrides any value set in the constructor.
1657 *
1658 * @param time the time to wait. A time value of zero will cause
1659 * excess threads to terminate immediately after executing tasks.
1660 * @param unit the time unit of the {@code time} argument
1661 * @throws IllegalArgumentException if {@code time} is less than zero or
1662 * if {@code time} is zero and {@code allowsCoreThreadTimeOut} is enabled
1663 * @see #getKeepAliveTime(TimeUnit)
1664 */
1665 public void setKeepAliveTime(long time, TimeUnit unit) {
1666 if (time < 0)
1667 throw new IllegalArgumentException();
1668 if (time == 0 && allowsCoreThreadTimeOut())
1669 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1670 long keepAliveTime = unit.toNanos(time);
1671 long delta = keepAliveTime - this.keepAliveTime;
1672 this.keepAliveTime = keepAliveTime;
1673 if (delta < 0)
1674 interruptIdleWorkers();
1675 }
1676
1677 /**
1678 * Returns the thread keep-alive time, which is the amount of time
1679 * that threads in excess of the core pool size may remain
1680 * idle before being terminated.
1681 *
1682 * @param unit the desired time unit of the result
1683 * @return the time limit
1684 * @see #setKeepAliveTime(long, TimeUnit)
1685 */
1686 public long getKeepAliveTime(TimeUnit unit) {
1687 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1688 }
1689
1690 /* User-level queue utilities */
1691
1692 /**
1693 * Returns the task queue used by this executor. Access to the
1694 * task queue is intended primarily for debugging and monitoring.
1695 * This queue may be in active use. Retrieving the task queue
1696 * does not prevent queued tasks from executing.
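     *
     * <p>For example (illustrative; {@code pool} denotes an existing
     * {@code ThreadPoolExecutor}), the queue depth can be sampled
     * alongside the other statistics methods of this class:
     *
     * <pre> {@code
     * int queued  = pool.getQueue().size();
     * int active  = pool.getActiveCount();
     * long done   = (int) pool.getCompletedTaskCount();}</pre>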
1697 *
1698 * @return the task queue
1699 */
1700 public BlockingQueue<Runnable> getQueue() {
1701 return workQueue;
1702 }
1703
1704 /**
1705 * Removes this task from the executor's internal queue if it is
1706 * present, thus causing it not to be run if it has not already
1707 * started.
1708 *
1709 * <p>This method may be useful as one part of a cancellation
1710 * scheme. It may fail to remove tasks that have been converted
1711 * into other forms before being placed on the internal queue. For
1712 * example, a task entered using {@code submit} might be
1713 * converted into a form that maintains {@code Future} status.
1714 * However, in such cases, method {@link #purge} may be used to
1715 * remove those Futures that have been cancelled.
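     *
     * <p>As an illustrative sketch (the method name and parameters are
     * arbitrary), a directly executed task can be removed as-is, while a
     * task submitted via {@code submit} is better cancelled through its
     * {@code Future} and later reclaimed with {@link #purge}:
     *
     * <pre> {@code
     * void cancellationExamples(ThreadPoolExecutor pool, Runnable task) {
     *   pool.execute(task);
     *   pool.remove(task);    // true if still queued and not yet started
     *
     *   Future<?> f = pool.submit(task);
     *   f.cancel(false);      // remove(task) would not find the wrapper
     *   pool.purge();         // drops cancelled Futures still in the queue
     * }}</pre>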
1716 *
1717 * @param task the task to remove
1718 * @return {@code true} if the task was removed
1719 */
1720 public boolean remove(Runnable task) {
1721 boolean removed = workQueue.remove(task);
1722 tryTerminate(); // In case SHUTDOWN and now empty
1723 return removed;
1724 }
1725
1726 /**
1727 * Tries to remove from the work queue all {@link Future}
1728 * tasks that have been cancelled. This method can be useful as a
1729 * storage reclamation operation that has no other impact on
1730 * functionality. Cancelled tasks are never executed, but may
1731 * accumulate in work queues until worker threads can actively
1732 * remove them. Invoking this method instead tries to remove them now.
1733 * However, this method may fail to remove tasks in
1734 * the presence of interference by other threads.
1735 */
1736 public void purge() {
1737 final BlockingQueue<Runnable> q = workQueue;
1738 try {
1739 Iterator<Runnable> it = q.iterator();
1740 while (it.hasNext()) {
1741 Runnable r = it.next();
1742 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1743 it.remove();
1744 }
1745 } catch (ConcurrentModificationException fallThrough) {
1746 // Take slow path if we encounter interference during traversal.
1747 // Make copy for traversal and call remove for cancelled entries.
1748 // The slow path is more likely to be O(N*N).
1749 for (Object r : q.toArray())
1750 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1751 q.remove(r);
1752 }
1753
1754 tryTerminate(); // In case SHUTDOWN and now empty
1755 }
1756
1757 /* Statistics */
1758
1759 /**
1760 * Returns the current number of threads in the pool.
1761 *
1762 * @return the number of threads
1763 */
1764 public int getPoolSize() {
1765 final ReentrantLock mainLock = this.mainLock;
1766 mainLock.lock();
1767 try {
1768 // Remove rare and surprising possibility of
1769 // isTerminated() && getPoolSize() > 0
1770 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1771 : workers.size();
1772 } finally {
1773 mainLock.unlock();
1774 }
1775 }
1776
1777 /**
1778 * Returns the approximate number of threads that are actively
1779 * executing tasks.
1780 *
1781 * @return the number of threads
1782 */
1783 public int getActiveCount() {
1784 final ReentrantLock mainLock = this.mainLock;
1785 mainLock.lock();
1786 try {
1787 int n = 0;
1788 for (Worker w : workers)
1789 if (w.isLocked())
1790 ++n;
1791 return n;
1792 } finally {
1793 mainLock.unlock();
1794 }
1795 }
1796
1797 /**
1798 * Returns the largest number of threads that have ever
1799 * simultaneously been in the pool.
1800 *
1801 * @return the number of threads
1802 */
1803 public int getLargestPoolSize() {
1804 final ReentrantLock mainLock = this.mainLock;
1805 mainLock.lock();
1806 try {
1807 return largestPoolSize;
1808 } finally {
1809 mainLock.unlock();
1810 }
1811 }
1812
1813 /**
1814 * Returns the approximate total number of tasks that have ever been
1815 * scheduled for execution. Because the states of tasks and
1816 * threads may change dynamically during computation, the returned
1817 * value is only an approximation.
1818 *
1819 * @return the number of tasks
1820 */
1821 public long getTaskCount() {
1822 final ReentrantLock mainLock = this.mainLock;
1823 mainLock.lock();
1824 try {
1825 long n = completedTaskCount;
1826 for (Worker w : workers) {
1827 n += w.completedTasks;
1828 if (w.isLocked())
1829 ++n;
1830 }
1831 return n + workQueue.size();
1832 } finally {
1833 mainLock.unlock();
1834 }
1835 }
1836
1837 /**
1838 * Returns the approximate total number of tasks that have
1839 * completed execution. Because the states of tasks and threads
1840 * may change dynamically during computation, the returned value
1841 * is only an approximation, but one that does not ever decrease
1842 * across successive calls.
1843 *
1844 * @return the number of tasks
1845 */
1846 public long getCompletedTaskCount() {
1847 final ReentrantLock mainLock = this.mainLock;
1848 mainLock.lock();
1849 try {
1850 long n = completedTaskCount;
1851 for (Worker w : workers)
1852 n += w.completedTasks;
1853 return n;
1854 } finally {
1855 mainLock.unlock();
1856 }
1857 }
1858
1859 /**
1860 * Returns a string identifying this pool, as well as its state,
1861 * including indications of run state and estimated worker and
1862 * task counts.
1863 *
1864 * @return a string identifying this pool, as well as its state
1865 */
1866 public String toString() {
1867 long ncompleted;
1868 int nworkers, nactive;
1869 final ReentrantLock mainLock = this.mainLock;
1870 mainLock.lock();
1871 try {
1872 ncompleted = completedTaskCount;
1873 nactive = 0;
1874 nworkers = workers.size();
1875 for (Worker w : workers) {
1876 ncompleted += w.completedTasks;
1877 if (w.isLocked())
1878 ++nactive;
1879 }
1880 } finally {
1881 mainLock.unlock();
1882 }
1883 int c = ctl.get();
1884 String rs = (runStateLessThan(c, SHUTDOWN) ? "Running" :
1885 (runStateAtLeast(c, TERMINATED) ? "Terminated" :
1886 "Shutting down"));
1887 return super.toString() +
1888 "[" + rs +
1889 ", pool size = " + nworkers +
1890 ", active threads = " + nactive +
1891 ", queued tasks = " + workQueue.size() +
1892 ", completed tasks = " + ncompleted +
1893 "]";
1894 }
1895
1896 /* Extension hooks */
1897
1898 /**
1899 * Method invoked prior to executing the given Runnable in the
1900 * given thread. This method is invoked by thread {@code t} that
1901 * will execute task {@code r}, and may be used to re-initialize
1902 * ThreadLocals, or to perform logging.
1903 *
1904 * <p>This implementation does nothing, but may be customized in
1905 * subclasses. Note: To properly nest multiple overridings, subclasses
1906 * should generally invoke {@code super.beforeExecute} at the end of
1907 * this method.
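     *
     * <p>For example (an illustrative sketch; the class name and the use
     * of {@code System.out} are arbitrary), a subclass might log the
     * thread chosen for each task:
     *
     * <pre> {@code
     * class LoggingExecutor extends ThreadPoolExecutor {
     *   // ... constructors ...
     *   protected void beforeExecute(Thread t, Runnable r) {
     *     System.out.println(t.getName() + " will run " + r);
     *     super.beforeExecute(t, r); // invoke super last, as noted above
     *   }
     * }}</pre>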
1908 *
1909 * @param t the thread that will run task {@code r}
1910 * @param r the task that will be executed
1911 */
1912 protected void beforeExecute(Thread t, Runnable r) { }
1913
1914 /**
1915 * Method invoked upon completion of execution of the given Runnable.
1916 * This method is invoked by the thread that executed the task. If
1917 * non-null, the Throwable is the uncaught {@code RuntimeException}
1918 * or {@code Error} that caused execution to terminate abruptly.
1919 *
1920 * <p>This implementation does nothing, but may be customized in
1921 * subclasses. Note: To properly nest multiple overridings, subclasses
1922 * should generally invoke {@code super.afterExecute} at the
1923 * beginning of this method.
1924 *
1925 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1926 * {@link FutureTask}) either explicitly or via methods such as
1927 * {@code submit}, these task objects catch and maintain
1928 * computational exceptions, and so they do not cause abrupt
1929 * termination, and the internal exceptions are <em>not</em>
1930 * passed to this method. If you would like to trap both kinds of
1931 * failures in this method, you can further probe for such cases,
1932 * as in this sample subclass that prints either the direct cause
1933 * or the underlying exception if a task has been aborted:
1934 *
1935 * <pre> {@code
1936 * class ExtendedExecutor extends ThreadPoolExecutor {
1937 * // ...
1938 * protected void afterExecute(Runnable r, Throwable t) {
1939 * super.afterExecute(r, t);
1940 * if (t == null && r instanceof Future<?>) {
1941 * try {
1942 * Object result = ((Future<?>) r).get();
1943 * } catch (CancellationException ce) {
1944 * t = ce;
1945 * } catch (ExecutionException ee) {
1946 * t = ee.getCause();
1947 * } catch (InterruptedException ie) {
1948 * Thread.currentThread().interrupt(); // ignore/reset
1949 * }
1950 * }
1951 * if (t != null)
1952 * System.out.println(t);
1953 * }
1954 * }}</pre>
1955 *
1956 * @param r the runnable that has completed
1957 * @param t the exception that caused termination, or null if
1958 * execution completed normally
1959 */
1960 protected void afterExecute(Runnable r, Throwable t) { }
1961
1962 /**
1963 * Method invoked when the Executor has terminated. Default
1964 * implementation does nothing. Note: To properly nest multiple
1965 * overridings, subclasses should generally invoke
1966 * {@code super.terminated} within this method.
1967 */
1968 protected void terminated() { }
1969
1970 /* Predefined RejectedExecutionHandlers */
1971
1972 /**
1973 * A handler for rejected tasks that runs the rejected task
1974 * directly in the calling thread of the {@code execute} method,
1975 * unless the executor has been shut down, in which case the task
1976 * is discarded.
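     *
     * <p>For example (the pool parameters are illustrative only), this
     * policy provides a simple feedback mechanism that slows submitters
     * down instead of discarding work when the queue is full:
     *
     * <pre> {@code
     * ThreadPoolExecutor pool = new ThreadPoolExecutor(
     *     2, 4, 60, TimeUnit.SECONDS,
     *     new ArrayBlockingQueue<Runnable>(100),
     *     new ThreadPoolExecutor.CallerRunsPolicy());}</pre>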
1977 */
1978 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1979 /**
1980 * Creates a {@code CallerRunsPolicy}.
1981 */
1982 public CallerRunsPolicy() { }
1983
1984 /**
1985 * Executes task r in the caller's thread, unless the executor
1986 * has been shut down, in which case the task is discarded.
1987 *
1988 * @param r the runnable task requested to be executed
1989 * @param e the executor attempting to execute this task
1990 */
1991 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1992 if (!e.isShutdown()) {
1993 r.run();
1994 }
1995 }
1996 }
1997
1998 /**
1999 * A handler for rejected tasks that throws a
2000 * {@code RejectedExecutionException}.
2001 */
2002 public static class AbortPolicy implements RejectedExecutionHandler {
2003 /**
2004 * Creates an {@code AbortPolicy}.
2005 */
2006 public AbortPolicy() { }
2007
2008 /**
2009 * Always throws RejectedExecutionException.
2010 *
2011 * @param r the runnable task requested to be executed
2012 * @param e the executor attempting to execute this task
2013 * @throws RejectedExecutionException always
2014 */
2015 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2016 throw new RejectedExecutionException("Task " + r.toString() +
2017 " rejected from " +
2018 e.toString());
2019 }
2020 }
2021
2022 /**
2023 * A handler for rejected tasks that silently discards the
2024 * rejected task.
2025 */
2026 public static class DiscardPolicy implements RejectedExecutionHandler {
2027 /**
2028 * Creates a {@code DiscardPolicy}.
2029 */
2030 public DiscardPolicy() { }
2031
2032 /**
2033 * Does nothing, which has the effect of discarding task r.
2034 *
2035 * @param r the runnable task requested to be executed
2036 * @param e the executor attempting to execute this task
2037 */
2038 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2039 }
2040 }
2041
2042 /**
2043 * A handler for rejected tasks that discards the oldest unhandled
2044 * request and then retries {@code execute}, unless the executor
2045 * is shut down, in which case the task is discarded.
2046 */
2047 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
2048 /**
2049 * Creates a {@code DiscardOldestPolicy}.
2050 */
2051 public DiscardOldestPolicy() { }
2052
2053 /**
2054 * Obtains and ignores the next task that the executor
2055 * would otherwise execute, if one is immediately available,
2056 * and then retries execution of task r, unless the executor
2057 * is shut down, in which case task r is instead discarded.
2058 *
2059 * @param r the runnable task requested to be executed
2060 * @param e the executor attempting to execute this task
2061 */
2062 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2063 if (!e.isShutdown()) {
2064 e.getQueue().poll();
2065 e.execute(r);
2066 }
2067 }
2068 }
2069 }