root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.156
Committed: Wed Dec 3 21:55:44 2014 UTC by jsr166
Branch: MAIN
Changes since 1.155: +6 -1 lines
Log Message:
never use wildcard imports

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/publicdomain/zero/1.0/
5 */
6
7 package java.util.concurrent;
8
9 import java.util.concurrent.locks.AbstractQueuedSynchronizer;
10 import java.util.concurrent.locks.Condition;
11 import java.util.concurrent.locks.ReentrantLock;
12 import java.util.concurrent.atomic.AtomicInteger;
13 import java.util.ArrayList;
14 import java.util.ConcurrentModificationException;
15 import java.util.HashSet;
16 import java.util.Iterator;
17 import java.util.List;
18
19 /**
20 * An {@link ExecutorService} that executes each submitted task using
21 * one of possibly several pooled threads, normally configured
22 * using {@link Executors} factory methods.
23 *
24 * <p>Thread pools address two different problems: they usually
25 * provide improved performance when executing large numbers of
26 * asynchronous tasks, due to reduced per-task invocation overhead,
27 * and they provide a means of bounding and managing the resources,
28 * including threads, consumed when executing a collection of tasks.
29 * Each {@code ThreadPoolExecutor} also maintains some basic
30 * statistics, such as the number of completed tasks.
31 *
32 * <p>To be useful across a wide range of contexts, this class
33 * provides many adjustable parameters and extensibility
34 * hooks. However, programmers are urged to use the more convenient
35 * {@link Executors} factory methods {@link
36 * Executors#newCachedThreadPool} (unbounded thread pool, with
37 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
38 * (fixed size thread pool) and {@link
39 * Executors#newSingleThreadExecutor} (single background thread), that
40 * preconfigure settings for the most common usage
41 * scenarios. Otherwise, use the following guide (and the configuration
42 * sketch that follows it) when manually configuring and tuning this class:
43 *
44 * <dl>
45 *
46 * <dt>Core and maximum pool sizes</dt>
47 *
48 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
49 * pool size (see {@link #getPoolSize})
50 * according to the bounds set by
51 * corePoolSize (see {@link #getCorePoolSize}) and
52 * maximumPoolSize (see {@link #getMaximumPoolSize}).
53 *
54 * When a new task is submitted in method {@link #execute(Runnable)},
55 * and fewer than corePoolSize threads are running, a new thread is
56 * created to handle the request, even if other worker threads are
57 * idle. If corePoolSize or more but fewer than maximumPoolSize
58 * threads are running, a new thread will be created only
59 * if the queue is full. By setting corePoolSize and maximumPoolSize
60 * the same, you create a fixed-size thread pool. By setting
61 * maximumPoolSize to an essentially unbounded value such as {@code
62 * Integer.MAX_VALUE}, you allow the pool to accommodate an arbitrary
63 * number of concurrent tasks. Most typically, core and maximum pool
64 * sizes are set only upon construction, but they may also be changed
65 * dynamically using {@link #setCorePoolSize} and {@link
66 * #setMaximumPoolSize}. </dd>
67 *
68 * <dt>On-demand construction</dt>
69 *
70 * <dd>By default, even core threads are initially created and
71 * started only when new tasks arrive, but this can be overridden
72 * dynamically using method {@link #prestartCoreThread} or {@link
73 * #prestartAllCoreThreads}. You probably want to prestart threads if
74 * you construct the pool with a non-empty queue. </dd>
75 *
76 * <dt>Creating new threads</dt>
77 *
78 * <dd>New threads are created using a {@link ThreadFactory}. If not
79 * otherwise specified, an {@link Executors#defaultThreadFactory} is
80 * used, which creates threads that are all in the same {@link
81 * ThreadGroup} and have the same {@code NORM_PRIORITY} priority and
82 * non-daemon status. By supplying a different ThreadFactory, you can
83 * alter the thread's name, thread group, priority, daemon status,
84 * etc. If a {@code ThreadFactory} fails to create a thread when asked
85 * by returning null from {@code newThread}, the executor will
86 * continue, but might not be able to execute any tasks. Threads
87 * should possess the "modifyThread" {@code RuntimePermission}. If
88 * worker threads or other threads using the pool do not possess this
89 * permission, service may be degraded: configuration changes may not
90 * take effect in a timely manner, and a shutdown pool may remain in a
91 * state in which termination is possible but not completed.</dd>
92 *
93 * <dt>Keep-alive times</dt>
94 *
95 * <dd>If the pool currently has more than corePoolSize threads,
96 * excess threads will be terminated if they have been idle for more
97 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
98 * This provides a means of reducing resource consumption when the
99 * pool is not being actively used. If the pool becomes more active
100 * later, new threads will be constructed. This parameter can also be
101 * changed dynamically using method {@link #setKeepAliveTime(long,
102 * TimeUnit)}. Using a value of {@code Long.MAX_VALUE} {@link
103 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever
104 * terminating prior to shut down. By default, the keep-alive policy
105 * applies only when there are more than corePoolSize threads, but
106 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to
107 * apply this time-out policy to core threads as well, so long as the
108 * keepAliveTime value is non-zero. </dd>
109 *
110 * <dt>Queuing</dt>
111 *
112 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
113 * submitted tasks. The use of this queue interacts with pool sizing:
114 *
115 * <ul>
116 *
117 * <li> If fewer than corePoolSize threads are running, the Executor
118 * always prefers adding a new thread
119 * rather than queuing.</li>
120 *
121 * <li> If corePoolSize or more threads are running, the Executor
122 * always prefers queuing a request rather than adding a new
123 * thread.</li>
124 *
125 * <li> If a request cannot be queued, a new thread is created unless
126 * this would exceed maximumPoolSize, in which case, the task will be
127 * rejected.</li>
128 *
129 * </ul>
130 *
131 * There are three general strategies for queuing:
132 * <ol>
133 *
134 * <li> <em> Direct handoffs.</em> A good default choice for a work
135 * queue is a {@link SynchronousQueue} that hands off tasks to threads
136 * without otherwise holding them. Here, an attempt to queue a task
137 * will fail if no threads are immediately available to run it, so a
138 * new thread will be constructed. This policy avoids lockups when
139 * handling sets of requests that might have internal dependencies.
140 * Direct handoffs generally require unbounded maximumPoolSizes to
141 * avoid rejection of new submitted tasks. This in turn admits the
142 * possibility of unbounded thread growth when commands continue to
143 * arrive on average faster than they can be processed. </li>
144 *
145 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
146 * example a {@link LinkedBlockingQueue} without a predefined
147 * capacity) will cause new tasks to wait in the queue when all
148 * corePoolSize threads are busy. Thus, no more than corePoolSize
149 * threads will ever be created. (And the value of the maximumPoolSize
150 * therefore doesn't have any effect.) This may be appropriate when
151 * each task is completely independent of others, so tasks cannot
152 * affect each other's execution; for example, in a web page server.
153 * While this style of queuing can be useful in smoothing out
154 * transient bursts of requests, it admits the possibility of
155 * unbounded work queue growth when commands continue to arrive on
156 * average faster than they can be processed. </li>
157 *
158 * <li><em>Bounded queues.</em> A bounded queue (for example, an
159 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
160 * used with finite maximumPoolSizes, but can be more difficult to
161 * tune and control. Queue sizes and maximum pool sizes may be traded
162 * off for each other: Using large queues and small pools minimizes
163 * CPU usage, OS resources, and context-switching overhead, but can
164 * lead to artificially low throughput. If tasks frequently block (for
165 * example if they are I/O bound), a system may be able to schedule
166 * time for more threads than you otherwise allow. Use of small queues
167 * generally requires larger pool sizes, which keeps CPUs busier but
168 * may encounter unacceptable scheduling overhead, which also
169 * decreases throughput. </li>
170 *
171 * </ol>
172 *
173 * </dd>
174 *
175 * <dt>Rejected tasks</dt>
176 *
177 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be
178 * <em>rejected</em> when the Executor has been shut down, and also when
179 * the Executor uses finite bounds for both maximum threads and work queue
180 * capacity, and is saturated. In either case, the {@code execute} method
181 * invokes the {@link
182 * RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
183 * method of its {@link RejectedExecutionHandler}. Four predefined handler
184 * policies are provided:
185 *
186 * <ol>
187 *
188 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
189 * handler throws a runtime {@link RejectedExecutionException} upon
190 * rejection. </li>
191 *
192 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
193 * that invokes {@code execute} itself runs the task. This provides a
194 * simple feedback control mechanism that will slow down the rate that
195 * new tasks are submitted. </li>
196 *
197 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
198 * cannot be executed is simply dropped. </li>
199 *
200 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
201 * executor is not shut down, the task at the head of the work queue
202 * is dropped, and then execution is retried (which can fail again,
203 * causing this to be repeated). </li>
204 *
205 * </ol>
206 *
207 * It is possible to define and use other kinds of {@link
208 * RejectedExecutionHandler} classes. Doing so requires some care
209 * especially when policies are designed to work only under particular
210 * capacity or queuing policies; see the sketch after the extension example below. </dd>
211 *
212 * <dt>Hook methods</dt>
213 *
214 * <dd>This class provides {@code protected} overridable
215 * {@link #beforeExecute(Thread, Runnable)} and
216 * {@link #afterExecute(Runnable, Throwable)} methods that are called
217 * before and after execution of each task. These can be used to
218 * manipulate the execution environment; for example, reinitializing
219 * ThreadLocals, gathering statistics, or adding log entries.
220 * Additionally, method {@link #terminated} can be overridden to perform
221 * any special processing that needs to be done once the Executor has
222 * fully terminated.
223 *
224 * <p>If hook or callback methods throw exceptions, internal worker
225 * threads may in turn fail and abruptly terminate.</dd>
226 *
227 * <dt>Queue maintenance</dt>
228 *
229 * <dd>Method {@link #getQueue()} allows access to the work queue
230 * for purposes of monitoring and debugging. Use of this method for
231 * any other purpose is strongly discouraged. Two supplied methods,
232 * {@link #remove(Runnable)} and {@link #purge} are available to
233 * assist in storage reclamation when large numbers of queued tasks
234 * become cancelled.</dd>
235 *
236 * <dt>Finalization</dt>
237 *
238 * <dd>A pool that is no longer referenced in a program <em>AND</em>
239 * has no remaining threads will be {@code shutdown} automatically. If
240 * you would like to ensure that unreferenced pools are reclaimed even
241 * if users forget to call {@link #shutdown}, then you must arrange
242 * that unused threads eventually die, by setting appropriate
243 * keep-alive times, using a lower bound of zero core threads and/or
244 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
245 *
246 * </dl>
247 *
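 * <p><b>Configuration example</b>. The parameter values below are
 * illustrative assumptions, not recommendations; this is a minimal sketch
 * of manually configuring a pool per the guide above, with a bounded work
 * queue, a finite maximum size, and the {@code CallerRunsPolicy} handler
 * to provide feedback under saturation:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4,                                      // corePoolSize
 *     8,                                      // maximumPoolSize
 *     60L, TimeUnit.SECONDS,                  // keepAliveTime for excess threads
 *     new ArrayBlockingQueue<Runnable>(100),  // bounded work queue
 *     Executors.defaultThreadFactory(),       // or a custom ThreadFactory
 *     new ThreadPoolExecutor.CallerRunsPolicy());
 * pool.prestartAllCoreThreads();              // optional: start core threads eagerly
 * pool.allowCoreThreadTimeOut(true);          // optional: let idle core threads time out
 * }</pre>
 *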
248 * <p><b>Extension example</b>. Most extensions of this class
249 * override one or more of the protected hook methods. For example,
250 * here is a subclass that adds a simple pause/resume feature:
251 *
252 * <pre> {@code
253 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
254 * private boolean isPaused;
255 * private ReentrantLock pauseLock = new ReentrantLock();
256 * private Condition unpaused = pauseLock.newCondition();
257 *
258 * public PausableThreadPoolExecutor(...) { super(...); }
259 *
260 * protected void beforeExecute(Thread t, Runnable r) {
261 * super.beforeExecute(t, r);
262 * pauseLock.lock();
263 * try {
264 * while (isPaused) unpaused.await();
265 * } catch (InterruptedException ie) {
266 * t.interrupt();
267 * } finally {
268 * pauseLock.unlock();
269 * }
270 * }
271 *
272 * public void pause() {
273 * pauseLock.lock();
274 * try {
275 * isPaused = true;
276 * } finally {
277 * pauseLock.unlock();
278 * }
279 * }
280 *
281 * public void resume() {
282 * pauseLock.lock();
283 * try {
284 * isPaused = false;
285 * unpaused.signalAll();
286 * } finally {
287 * pauseLock.unlock();
288 * }
289 * }
290 * }}</pre>
291 *
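 * <p>As noted under "Rejected tasks" above, handlers other than the four
 * supplied policies may be defined. A minimal sketch (an illustration only,
 * not a supplied policy; note that it enqueues directly, bypassing the
 * rechecks performed by {@code execute}) that blocks the submitting thread
 * until queue space becomes available:
 *
 * <pre> {@code
 * class BlockingSubmissionPolicy implements RejectedExecutionHandler {
 *   public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
 *     if (executor.isShutdown())
 *       throw new RejectedExecutionException("Executor is shut down");
 *     try {
 *       executor.getQueue().put(r);  // block until the queue can accept the task
 *     } catch (InterruptedException e) {
 *       Thread.currentThread().interrupt();
 *       throw new RejectedExecutionException("Interrupted while enqueuing", e);
 *     }
 *   }
 * }}</pre>
 *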
292 * @since 1.5
293 * @author Doug Lea
294 */
295 public class ThreadPoolExecutor extends AbstractExecutorService {
296 /**
297 * The main pool control state, ctl, is an atomic integer packing
298 * two conceptual fields
299 * workerCount, indicating the effective number of threads
300 * runState, indicating whether running, shutting down etc
301 *
302 * In order to pack them into one int, we limit workerCount to
303 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
304 * billion) otherwise representable. If this is ever an issue in
305 * the future, the variable can be changed to be an AtomicLong,
306 * and the shift/mask constants below adjusted. But until the need
307 * arises, this code is a bit faster and simpler using an int.
308 *
309 * The workerCount is the number of workers that have been
310 * permitted to start and not permitted to stop. The value may be
311 * transiently different from the actual number of live threads,
312 * for example when a ThreadFactory fails to create a thread when
313 * asked, and when exiting threads are still performing
314 * bookkeeping before terminating. The user-visible pool size is
315 * reported as the current size of the workers set.
316 *
317 * The runState provides the main lifecycle control, taking on values:
318 *
319 * RUNNING: Accept new tasks and process queued tasks
320 * SHUTDOWN: Don't accept new tasks, but process queued tasks
321 * STOP: Don't accept new tasks, don't process queued tasks,
322 * and interrupt in-progress tasks
323 * TIDYING: All tasks have terminated, workerCount is zero,
324 * the thread transitioning to state TIDYING
325 * will run the terminated() hook method
326 * TERMINATED: terminated() has completed
327 *
328 * The numerical order among these values matters, to allow
329 * ordered comparisons. The runState monotonically increases over
330 * time, but need not hit each state. The transitions are:
331 *
332 * RUNNING -> SHUTDOWN
333 * On invocation of shutdown(), perhaps implicitly in finalize()
334 * (RUNNING or SHUTDOWN) -> STOP
335 * On invocation of shutdownNow()
336 * SHUTDOWN -> TIDYING
337 * When both queue and pool are empty
338 * STOP -> TIDYING
339 * When pool is empty
340 * TIDYING -> TERMINATED
341 * When the terminated() hook method has completed
342 *
343 * Threads waiting in awaitTermination() will return when the
344 * state reaches TERMINATED.
345 *
346 * Detecting the transition from SHUTDOWN to TIDYING is less
347 * straightforward than you'd like because the queue may become
348 * empty after non-empty and vice versa during SHUTDOWN state, but
349 * we can only terminate if, after seeing that it is empty, we see
350 * that workerCount is 0 (which sometimes entails a recheck -- see
351 * below).
352 */
353 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
354 private static final int COUNT_BITS = Integer.SIZE - 3;
355 private static final int CAPACITY = (1 << COUNT_BITS) - 1;
356
357 // runState is stored in the high-order bits
358 private static final int RUNNING = -1 << COUNT_BITS;
359 private static final int SHUTDOWN = 0 << COUNT_BITS;
360 private static final int STOP = 1 << COUNT_BITS;
361 private static final int TIDYING = 2 << COUNT_BITS;
362 private static final int TERMINATED = 3 << COUNT_BITS;
363
364 // Packing and unpacking ctl
365 private static int runStateOf(int c) { return c & ~CAPACITY; }
366 private static int workerCountOf(int c) { return c & CAPACITY; }
367 private static int ctlOf(int rs, int wc) { return rs | wc; }
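    // Illustrative values (COUNT_BITS == 29), derived from the constants above:
    //   CAPACITY   = 0x1FFFFFFF  (low 29 bits hold workerCount)
    //   RUNNING    = 0xE0000000  (high 3 bits = 111)
    //   SHUTDOWN   = 0x00000000, STOP = 0x20000000
    //   TIDYING    = 0x40000000, TERMINATED = 0x60000000
    //   e.g. ctlOf(RUNNING, 3) == 0xE0000003, and workerCountOf of that is 3.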
368
369 /*
370 * Bit field accessors that don't require unpacking ctl.
371 * These depend on the bit layout and on workerCount being never negative.
372 */
373
374 private static boolean runStateLessThan(int c, int s) {
375 return c < s;
376 }
377
378 private static boolean runStateAtLeast(int c, int s) {
379 return c >= s;
380 }
381
382 private static boolean isRunning(int c) {
383 return c < SHUTDOWN;
384 }
385
386 /**
387 * Attempts to CAS-increment the workerCount field of ctl.
388 */
389 private boolean compareAndIncrementWorkerCount(int expect) {
390 return ctl.compareAndSet(expect, expect + 1);
391 }
392
393 /**
394 * Attempts to CAS-decrement the workerCount field of ctl.
395 */
396 private boolean compareAndDecrementWorkerCount(int expect) {
397 return ctl.compareAndSet(expect, expect - 1);
398 }
399
400 /**
401 * Decrements the workerCount field of ctl. This is called only on
402 * abrupt termination of a thread (see processWorkerExit). Other
403 * decrements are performed within getTask.
404 */
405 private void decrementWorkerCount() {
406 do {} while (! compareAndDecrementWorkerCount(ctl.get()));
407 }
408
409 /**
410 * The queue used for holding tasks and handing off to worker
411 * threads. We do not require that workQueue.poll() returning
412 * null necessarily means that workQueue.isEmpty(), so we rely
413 * solely on isEmpty to see if the queue is empty (which we must
414 * do for example when deciding whether to transition from
415 * SHUTDOWN to TIDYING). This accommodates special-purpose
416 * queues such as DelayQueues for which poll() is allowed to
417 * return null even if it may later return non-null when delays
418 * expire.
419 */
420 private final BlockingQueue<Runnable> workQueue;
421
422 /**
423 * Lock held on access to workers set and related bookkeeping.
424 * While we could use a concurrent set of some sort, it turns out
425 * to be generally preferable to use a lock. Among the reasons is
426 * that this serializes interruptIdleWorkers, which avoids
427 * unnecessary interrupt storms, especially during shutdown.
428 * Otherwise exiting threads would concurrently interrupt those
429 * that have not yet interrupted. It also simplifies some of the
430 * associated statistics bookkeeping of largestPoolSize etc. We
431 * also hold mainLock on shutdown and shutdownNow, for the sake of
432 * ensuring workers set is stable while separately checking
433 * permission to interrupt and actually interrupting.
434 */
435 private final ReentrantLock mainLock = new ReentrantLock();
436
437 /**
438 * Set containing all worker threads in pool. Accessed only when
439 * holding mainLock.
440 */
441 private final HashSet<Worker> workers = new HashSet<>();
442
443 /**
444 * Wait condition to support awaitTermination
445 */
446 private final Condition termination = mainLock.newCondition();
447
448 /**
449 * Tracks largest attained pool size. Accessed only under
450 * mainLock.
451 */
452 private int largestPoolSize;
453
454 /**
455 * Counter for completed tasks. Updated only on termination of
456 * worker threads. Accessed only under mainLock.
457 */
458 private long completedTaskCount;
459
460 /*
461 * All user control parameters are declared as volatiles so that
462 * ongoing actions are based on freshest values, but without need
463 * for locking, since no internal invariants depend on them
464 * changing synchronously with respect to other actions.
465 */
466
467 /**
468 * Factory for new threads. All threads are created using this
469 * factory (via method addWorker). All callers must be prepared
470 * for addWorker to fail, which may reflect a system or user's
471 * policy limiting the number of threads. Even though it is not
472 * treated as an error, failure to create threads may result in
473 * new tasks being rejected or existing ones remaining stuck in
474 * the queue.
475 *
476 * We go further and preserve pool invariants even in the face of
477 * errors such as OutOfMemoryError, that might be thrown while
478 * trying to create threads. Such errors are rather common due to
479 * the need to allocate a native stack in Thread.start, and users
480 * will want to perform clean pool shutdown to clean up. There
481 * will likely be enough memory available for the cleanup code to
482 * complete without encountering yet another OutOfMemoryError.
483 */
484 private volatile ThreadFactory threadFactory;
485
486 /**
487 * Handler called when saturated or shutdown in execute.
488 */
489 private volatile RejectedExecutionHandler handler;
490
491 /**
492 * Timeout in nanoseconds for idle threads waiting for work.
493 * Threads use this timeout when there are more than corePoolSize
494 * threads present or if allowCoreThreadTimeOut is set. Otherwise they wait
495 * forever for new work.
496 */
497 private volatile long keepAliveTime;
498
499 /**
500 * If false (default), core threads stay alive even when idle.
501 * If true, core threads use keepAliveTime to time out waiting
502 * for work.
503 */
504 private volatile boolean allowCoreThreadTimeOut;
505
506 /**
507 * Core pool size is the minimum number of workers to keep alive
508 * (and not allow to time out etc) unless allowCoreThreadTimeOut
509 * is set, in which case the minimum is zero.
510 */
511 private volatile int corePoolSize;
512
513 /**
514 * Maximum pool size. Note that the actual maximum is internally
515 * bounded by CAPACITY.
516 */
517 private volatile int maximumPoolSize;
518
519 /**
520 * The default rejected execution handler
521 */
522 private static final RejectedExecutionHandler defaultHandler =
523 new AbortPolicy();
524
525 /**
526 * Permission required for callers of shutdown and shutdownNow.
527 * We additionally require (see checkShutdownAccess) that callers
528 * have permission to actually interrupt threads in the worker set
529 * (as governed by Thread.interrupt, which relies on
530 * ThreadGroup.checkAccess, which in turn relies on
531 * SecurityManager.checkAccess). Shutdowns are attempted only if
532 * these checks pass.
533 *
534 * All actual invocations of Thread.interrupt (see
535 * interruptIdleWorkers and interruptWorkers) ignore
536 * SecurityExceptions, meaning that the attempted interrupts
537 * silently fail. In the case of shutdown, they should not fail
538 * unless the SecurityManager has inconsistent policies, sometimes
539 * allowing access to a thread and sometimes not. In such cases,
540 * failure to actually interrupt threads may disable or delay full
541 * termination. Other uses of interruptIdleWorkers are advisory,
542 * and failure to actually interrupt will merely delay response to
543 * configuration changes so is not handled exceptionally.
544 */
545 private static final RuntimePermission shutdownPerm =
546 new RuntimePermission("modifyThread");
547
548 /**
549 * Class Worker mainly maintains interrupt control state for
550 * threads running tasks, along with other minor bookkeeping.
551 * This class opportunistically extends AbstractQueuedSynchronizer
552 * to simplify acquiring and releasing a lock surrounding each
553 * task execution. This protects against interrupts that are
554 * intended to wake up a worker thread waiting for a task from
555 * instead interrupting a task being run. We implement a simple
556 * non-reentrant mutual exclusion lock rather than use
557 * ReentrantLock because we do not want worker tasks to be able to
558 * reacquire the lock when they invoke pool control methods like
559 * setCorePoolSize. Additionally, to suppress interrupts until
560 * the thread actually starts running tasks, we initialize lock
561 * state to a negative value, and clear it upon start (in
562 * runWorker).
563 */
564 private final class Worker
565 extends AbstractQueuedSynchronizer
566 implements Runnable
567 {
568 /**
569 * This class will never be serialized, but we provide a
570 * serialVersionUID to suppress a javac warning.
571 */
572 private static final long serialVersionUID = 6138294804551838833L;
573
574 /** Thread this worker is running in. Null if factory fails. */
575 final Thread thread;
576 /** Initial task to run. Possibly null. */
577 Runnable firstTask;
578 /** Per-thread task counter */
579 volatile long completedTasks;
580
581 /**
582 * Creates with given first task and thread from ThreadFactory.
583 * @param firstTask the first task (null if none)
584 */
585 Worker(Runnable firstTask) {
586 setState(-1); // inhibit interrupts until runWorker
587 this.firstTask = firstTask;
588 this.thread = getThreadFactory().newThread(this);
589 }
590
591 /** Delegates main run loop to outer runWorker. */
592 public void run() {
593 runWorker(this);
594 }
595
596 // Lock methods
597 //
598 // The value 0 represents the unlocked state.
599 // The value 1 represents the locked state.
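        // The value -1, set in the constructor, marks a worker whose thread
        // has not yet started running tasks; interruptIfStarted skips such
        // workers until runWorker calls unlock() to reset the state to 0.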
600
601 protected boolean isHeldExclusively() {
602 return getState() != 0;
603 }
604
605 protected boolean tryAcquire(int unused) {
606 if (compareAndSetState(0, 1)) {
607 setExclusiveOwnerThread(Thread.currentThread());
608 return true;
609 }
610 return false;
611 }
612
613 protected boolean tryRelease(int unused) {
614 setExclusiveOwnerThread(null);
615 setState(0);
616 return true;
617 }
618
619 public void lock() { acquire(1); }
620 public boolean tryLock() { return tryAcquire(1); }
621 public void unlock() { release(1); }
622 public boolean isLocked() { return isHeldExclusively(); }
623
624 void interruptIfStarted() {
625 Thread t;
626 if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
627 try {
628 t.interrupt();
629 } catch (SecurityException ignore) {
630 }
631 }
632 }
633 }
634
635 /*
636 * Methods for setting control state
637 */
638
639 /**
640 * Transitions runState to given target, or leaves it alone if
641 * already at least the given target.
642 *
643 * @param targetState the desired state, either SHUTDOWN or STOP
644 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
645 */
646 private void advanceRunState(int targetState) {
647 // assert targetState == SHUTDOWN || targetState == STOP;
648 for (;;) {
649 int c = ctl.get();
650 if (runStateAtLeast(c, targetState) ||
651 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
652 break;
653 }
654 }
655
656 /**
657 * Transitions to TERMINATED state if either (SHUTDOWN and pool
658 * and queue empty) or (STOP and pool empty). If otherwise
659 * eligible to terminate but workerCount is nonzero, interrupts an
660 * idle worker to ensure that shutdown signals propagate. This
661 * method must be called following any action that might make
662 * termination possible -- reducing worker count or removing tasks
663 * from the queue during shutdown. The method is non-private to
664 * allow access from ScheduledThreadPoolExecutor.
665 */
666 final void tryTerminate() {
667 for (;;) {
668 int c = ctl.get();
669 if (isRunning(c) ||
670 runStateAtLeast(c, TIDYING) ||
671 (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
672 return;
673 if (workerCountOf(c) != 0) { // Eligible to terminate
674 interruptIdleWorkers(ONLY_ONE);
675 return;
676 }
677
678 final ReentrantLock mainLock = this.mainLock;
679 mainLock.lock();
680 try {
681 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
682 try {
683 terminated();
684 } finally {
685 ctl.set(ctlOf(TERMINATED, 0));
686 termination.signalAll();
687 }
688 return;
689 }
690 } finally {
691 mainLock.unlock();
692 }
693 // else retry on failed CAS
694 }
695 }
696
697 /*
698 * Methods for controlling interrupts to worker threads.
699 */
700
701 /**
702 * If there is a security manager, makes sure caller has
703 * permission to shut down threads in general (see shutdownPerm).
704 * If this passes, additionally makes sure the caller is allowed
705 * to interrupt each worker thread. This might not be true even if
706 * first check passed, if the SecurityManager treats some threads
707 * specially.
708 */
709 private void checkShutdownAccess() {
710 SecurityManager security = System.getSecurityManager();
711 if (security != null) {
712 security.checkPermission(shutdownPerm);
713 final ReentrantLock mainLock = this.mainLock;
714 mainLock.lock();
715 try {
716 for (Worker w : workers)
717 security.checkAccess(w.thread);
718 } finally {
719 mainLock.unlock();
720 }
721 }
722 }
723
724 /**
725 * Interrupts all threads, even if active. Ignores SecurityExceptions
726 * (in which case some threads may remain uninterrupted).
727 */
728 private void interruptWorkers() {
729 final ReentrantLock mainLock = this.mainLock;
730 mainLock.lock();
731 try {
732 for (Worker w : workers)
733 w.interruptIfStarted();
734 } finally {
735 mainLock.unlock();
736 }
737 }
738
739 /**
740 * Interrupts threads that might be waiting for tasks (as
741 * indicated by not being locked) so they can check for
742 * termination or configuration changes. Ignores
743 * SecurityExceptions (in which case some threads may remain
744 * uninterrupted).
745 *
746 * @param onlyOne If true, interrupt at most one worker. This is
747 * called only from tryTerminate when termination is otherwise
748 * enabled but there are still other workers. In this case, at
749 * most one waiting worker is interrupted to propagate shutdown
750 * signals in case all threads are currently waiting.
751 * Interrupting any arbitrary thread ensures that newly arriving
752 * workers since shutdown began will also eventually exit.
753 * To guarantee eventual termination, it suffices to always
754 * interrupt only one idle worker, but shutdown() interrupts all
755 * idle workers so that redundant workers exit promptly, not
756 * waiting for a straggler task to finish.
757 */
758 private void interruptIdleWorkers(boolean onlyOne) {
759 final ReentrantLock mainLock = this.mainLock;
760 mainLock.lock();
761 try {
762 for (Worker w : workers) {
763 Thread t = w.thread;
764 if (!t.isInterrupted() && w.tryLock()) {
765 try {
766 t.interrupt();
767 } catch (SecurityException ignore) {
768 } finally {
769 w.unlock();
770 }
771 }
772 if (onlyOne)
773 break;
774 }
775 } finally {
776 mainLock.unlock();
777 }
778 }
779
780 /**
781 * Common form of interruptIdleWorkers, to avoid having to
782 * remember what the boolean argument means.
783 */
784 private void interruptIdleWorkers() {
785 interruptIdleWorkers(false);
786 }
787
788 private static final boolean ONLY_ONE = true;
789
790 /*
791 * Misc utilities, most of which are also exported to
792 * ScheduledThreadPoolExecutor
793 */
794
795 /**
796 * Invokes the rejected execution handler for the given command.
797 * Package-protected for use by ScheduledThreadPoolExecutor.
798 */
799 final void reject(Runnable command) {
800 handler.rejectedExecution(command, this);
801 }
802
803 /**
804 * Performs any further cleanup following run state transition on
805 * invocation of shutdown. A no-op here, but used by
806 * ScheduledThreadPoolExecutor to cancel delayed tasks.
807 */
808 void onShutdown() {
809 }
810
811 /**
812 * State check needed by ScheduledThreadPoolExecutor to
813 * enable running tasks during shutdown.
814 *
815 * @param shutdownOK true if this should return true when state is SHUTDOWN
816 */
817 final boolean isRunningOrShutdown(boolean shutdownOK) {
818 int rs = runStateOf(ctl.get());
819 return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
820 }
821
822 /**
823 * Drains the task queue into a new list, normally using
824 * drainTo. But if the queue is a DelayQueue or any other kind of
825 * queue for which poll or drainTo may fail to remove some
826 * elements, it deletes them one by one.
827 */
828 private List<Runnable> drainQueue() {
829 BlockingQueue<Runnable> q = workQueue;
830 ArrayList<Runnable> taskList = new ArrayList<>();
831 q.drainTo(taskList);
832 if (!q.isEmpty()) {
833 for (Runnable r : q.toArray(new Runnable[0])) {
834 if (q.remove(r))
835 taskList.add(r);
836 }
837 }
838 return taskList;
839 }
840
841 /*
842 * Methods for creating, running and cleaning up after workers
843 */
844
845 /**
846 * Checks if a new worker can be added with respect to current
847 * pool state and the given bound (either core or maximum). If so,
848 * the worker count is adjusted accordingly, and, if possible, a
849 * new worker is created and started, running firstTask as its
850 * first task. This method returns false if the pool is stopped or
851 * eligible to shut down. It also returns false if the thread
852 * factory fails to create a thread when asked. If the thread
853 * creation fails, either due to the thread factory returning
854 * null, or due to an exception (typically OutOfMemoryError in
855 * Thread.start()), we roll back cleanly.
856 *
857 * @param firstTask the task the new thread should run first (or
858 * null if none). Workers are created with an initial first task
859 * (in method execute()) to bypass queuing when there are fewer
860 * than corePoolSize threads (in which case we always start one),
861 * or when the queue is full (in which case we must bypass the queue).
862 * Initially idle threads are usually created via
863 * prestartCoreThread or to replace other dying workers.
864 *
865 * @param core if true use corePoolSize as bound, else
866 * maximumPoolSize. (A boolean indicator is used here rather than a
867 * value to ensure reads of fresh values after checking other pool
868 * state).
869 * @return true if successful
870 */
871 private boolean addWorker(Runnable firstTask, boolean core) {
872 retry:
873 for (;;) {
874 int c = ctl.get();
875 int rs = runStateOf(c);
876
877 // Check if queue empty only if necessary.
878 if (rs >= SHUTDOWN &&
879 ! (rs == SHUTDOWN &&
880 firstTask == null &&
881 ! workQueue.isEmpty()))
882 return false;
883
884 for (;;) {
885 int wc = workerCountOf(c);
886 if (wc >= CAPACITY ||
887 wc >= (core ? corePoolSize : maximumPoolSize))
888 return false;
889 if (compareAndIncrementWorkerCount(c))
890 break retry;
891 c = ctl.get(); // Re-read ctl
892 if (runStateOf(c) != rs)
893 continue retry;
894 // else CAS failed due to workerCount change; retry inner loop
895 }
896 }
897
898 boolean workerStarted = false;
899 boolean workerAdded = false;
900 Worker w = null;
901 try {
902 w = new Worker(firstTask);
903 final Thread t = w.thread;
904 if (t != null) {
905 final ReentrantLock mainLock = this.mainLock;
906 mainLock.lock();
907 try {
908 // Recheck while holding lock.
909 // Back out on ThreadFactory failure or if
910 // shut down before lock acquired.
911 int rs = runStateOf(ctl.get());
912
913 if (rs < SHUTDOWN ||
914 (rs == SHUTDOWN && firstTask == null)) {
915 if (t.isAlive()) // precheck that t is startable
916 throw new IllegalThreadStateException();
917 workers.add(w);
918 int s = workers.size();
919 if (s > largestPoolSize)
920 largestPoolSize = s;
921 workerAdded = true;
922 }
923 } finally {
924 mainLock.unlock();
925 }
926 if (workerAdded) {
927 t.start();
928 workerStarted = true;
929 }
930 }
931 } finally {
932 if (! workerStarted)
933 addWorkerFailed(w);
934 }
935 return workerStarted;
936 }
937
938 /**
939 * Rolls back the worker thread creation.
940 * - removes worker from workers, if present
941 * - decrements worker count
942 * - rechecks for termination, in case the existence of this
943 * worker was holding up termination
944 */
945 private void addWorkerFailed(Worker w) {
946 final ReentrantLock mainLock = this.mainLock;
947 mainLock.lock();
948 try {
949 if (w != null)
950 workers.remove(w);
951 decrementWorkerCount();
952 tryTerminate();
953 } finally {
954 mainLock.unlock();
955 }
956 }
957
958 /**
959 * Performs cleanup and bookkeeping for a dying worker. Called
960 * only from worker threads. Unless completedAbruptly is set,
961 * assumes that workerCount has already been adjusted to account
962 * for exit. This method removes thread from worker set, and
963 * possibly terminates the pool or replaces the worker if either
964 * it exited due to user task exception or if fewer than
965 * corePoolSize workers are running or queue is non-empty but
966 * there are no workers.
967 *
968 * @param w the worker
969 * @param completedAbruptly if the worker died due to user exception
970 */
971 private void processWorkerExit(Worker w, boolean completedAbruptly) {
972 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
973 decrementWorkerCount();
974
975 final ReentrantLock mainLock = this.mainLock;
976 mainLock.lock();
977 try {
978 completedTaskCount += w.completedTasks;
979 workers.remove(w);
980 } finally {
981 mainLock.unlock();
982 }
983
984 tryTerminate();
985
986 int c = ctl.get();
987 if (runStateLessThan(c, STOP)) {
988 if (!completedAbruptly) {
989 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
990 if (min == 0 && ! workQueue.isEmpty())
991 min = 1;
992 if (workerCountOf(c) >= min)
993 return; // replacement not needed
994 }
995 addWorker(null, false);
996 }
997 }
998
999 /**
1000 * Performs blocking or timed wait for a task, depending on
1001 * current configuration settings, or returns null if this worker
1002 * must exit because of any of:
1003 * 1. There are more than maximumPoolSize workers (due to
1004 * a call to setMaximumPoolSize).
1005 * 2. The pool is stopped.
1006 * 3. The pool is shutdown and the queue is empty.
1007 * 4. This worker timed out waiting for a task, and timed-out
1008 * workers are subject to termination (that is,
1009 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
1010 * both before and after the timed wait, and if the queue is
1011 * non-empty, this worker is not the last thread in the pool.
1012 *
1013 * @return task, or null if the worker must exit, in which case
1014 * workerCount is decremented
1015 */
1016 private Runnable getTask() {
1017 boolean timedOut = false; // Did the last poll() time out?
1018
1019 for (;;) {
1020 int c = ctl.get();
1021 int rs = runStateOf(c);
1022
1023 // Check if queue empty only if necessary.
1024 if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
1025 decrementWorkerCount();
1026 return null;
1027 }
1028
1029 int wc = workerCountOf(c);
1030
1031 // Are workers subject to culling?
1032 boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
1033
1034 if ((wc > maximumPoolSize || (timed && timedOut))
1035 && (wc > 1 || workQueue.isEmpty())) {
1036 if (compareAndDecrementWorkerCount(c))
1037 return null;
1038 continue;
1039 }
1040
1041 try {
1042 Runnable r = timed ?
1043 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
1044 workQueue.take();
1045 if (r != null)
1046 return r;
1047 timedOut = true;
1048 } catch (InterruptedException retry) {
1049 timedOut = false;
1050 }
1051 }
1052 }
1053
1054 /**
1055 * Main worker run loop. Repeatedly gets tasks from queue and
1056 * executes them, while coping with a number of issues:
1057 *
1058 * 1. We may start out with an initial task, in which case we
1059 * don't need to get the first one. Otherwise, as long as pool is
1060 * running, we get tasks from getTask. If it returns null then the
1061 * worker exits due to changed pool state or configuration
1062 * parameters. Other exits result from exception throws in
1063 * external code, in which case completedAbruptly holds, which
1064 * usually leads processWorkerExit to replace this thread.
1065 *
1066 * 2. Before running any task, the lock is acquired to prevent
1067 * other pool interrupts while the task is executing, and then we
1068 * ensure that unless pool is stopping, this thread does not have
1069 * its interrupt set.
1070 *
1071 * 3. Each task run is preceded by a call to beforeExecute, which
1072 * might throw an exception, in which case we cause thread to die
1073 * (breaking loop with completedAbruptly true) without processing
1074 * the task.
1075 *
1076 * 4. Assuming beforeExecute completes normally, we run the task,
1077 * gathering any of its thrown exceptions to send to afterExecute.
1078 * We separately handle RuntimeException, Error (both of which the
1079 * specs guarantee that we trap) and arbitrary Throwables.
1080 * Because we cannot rethrow Throwables within Runnable.run, we
1081 * wrap them within Errors on the way out (to the thread's
1082 * UncaughtExceptionHandler). Any thrown exception also
1083 * conservatively causes thread to die.
1084 *
1085 * 5. After task.run completes, we call afterExecute, which may
1086 * also throw an exception, which will also cause thread to
1087 * die. According to JLS Sec 14.20, this exception is the one that
1088 * will be in effect even if task.run throws.
1089 *
1090 * The net effect of the exception mechanics is that afterExecute
1091 * and the thread's UncaughtExceptionHandler have as accurate
1092 * information as we can provide about any problems encountered by
1093 * user code.
1094 *
1095 * @param w the worker
1096 */
1097 final void runWorker(Worker w) {
1098 Thread wt = Thread.currentThread();
1099 Runnable task = w.firstTask;
1100 w.firstTask = null;
1101 w.unlock(); // allow interrupts
1102 boolean completedAbruptly = true;
1103 try {
1104 while (task != null || (task = getTask()) != null) {
1105 w.lock();
1106 // If pool is stopping, ensure thread is interrupted;
1107 // if not, ensure thread is not interrupted. This
1108 // requires a recheck in second case to deal with
1109 // shutdownNow race while clearing interrupt
1110 if ((runStateAtLeast(ctl.get(), STOP) ||
1111 (Thread.interrupted() &&
1112 runStateAtLeast(ctl.get(), STOP))) &&
1113 !wt.isInterrupted())
1114 wt.interrupt();
1115 try {
1116 beforeExecute(wt, task);
1117 Throwable thrown = null;
1118 try {
1119 task.run();
1120 } catch (RuntimeException x) {
1121 thrown = x; throw x;
1122 } catch (Error x) {
1123 thrown = x; throw x;
1124 } catch (Throwable x) {
1125 thrown = x; throw new Error(x);
1126 } finally {
1127 afterExecute(task, thrown);
1128 }
1129 } finally {
1130 task = null;
1131 w.completedTasks++;
1132 w.unlock();
1133 }
1134 }
1135 completedAbruptly = false;
1136 } finally {
1137 processWorkerExit(w, completedAbruptly);
1138 }
1139 }
1140
1141 // Public constructors and methods
1142
1143 /**
1144 * Creates a new {@code ThreadPoolExecutor} with the given initial
1145 * parameters and default thread factory and rejected execution handler.
1146 * It may be more convenient to use one of the {@link Executors} factory
1147 * methods instead of this general purpose constructor.
1148 *
1149 * @param corePoolSize the number of threads to keep in the pool, even
1150 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1151 * @param maximumPoolSize the maximum number of threads to allow in the
1152 * pool
1153 * @param keepAliveTime when the number of threads is greater than
1154 * the core, this is the maximum time that excess idle threads
1155 * will wait for new tasks before terminating.
1156 * @param unit the time unit for the {@code keepAliveTime} argument
1157 * @param workQueue the queue to use for holding tasks before they are
1158 * executed. This queue will hold only the {@code Runnable}
1159 * tasks submitted by the {@code execute} method.
1160 * @throws IllegalArgumentException if one of the following holds:<br>
1161 * {@code corePoolSize < 0}<br>
1162 * {@code keepAliveTime < 0}<br>
1163 * {@code maximumPoolSize <= 0}<br>
1164 * {@code maximumPoolSize < corePoolSize}
1165 * @throws NullPointerException if {@code workQueue} is null
1166 */
1167 public ThreadPoolExecutor(int corePoolSize,
1168 int maximumPoolSize,
1169 long keepAliveTime,
1170 TimeUnit unit,
1171 BlockingQueue<Runnable> workQueue) {
1172 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1173 Executors.defaultThreadFactory(), defaultHandler);
1174 }
1175
1176 /**
1177 * Creates a new {@code ThreadPoolExecutor} with the given initial
1178 * parameters and default rejected execution handler.
1179 *
1180 * @param corePoolSize the number of threads to keep in the pool, even
1181 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1182 * @param maximumPoolSize the maximum number of threads to allow in the
1183 * pool
1184 * @param keepAliveTime when the number of threads is greater than
1185 * the core, this is the maximum time that excess idle threads
1186 * will wait for new tasks before terminating.
1187 * @param unit the time unit for the {@code keepAliveTime} argument
1188 * @param workQueue the queue to use for holding tasks before they are
1189 * executed. This queue will hold only the {@code Runnable}
1190 * tasks submitted by the {@code execute} method.
1191 * @param threadFactory the factory to use when the executor
1192 * creates a new thread
1193 * @throws IllegalArgumentException if one of the following holds:<br>
1194 * {@code corePoolSize < 0}<br>
1195 * {@code keepAliveTime < 0}<br>
1196 * {@code maximumPoolSize <= 0}<br>
1197 * {@code maximumPoolSize < corePoolSize}
1198 * @throws NullPointerException if {@code workQueue}
1199 * or {@code threadFactory} is null
1200 */
1201 public ThreadPoolExecutor(int corePoolSize,
1202 int maximumPoolSize,
1203 long keepAliveTime,
1204 TimeUnit unit,
1205 BlockingQueue<Runnable> workQueue,
1206 ThreadFactory threadFactory) {
1207 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1208 threadFactory, defaultHandler);
1209 }
1210
1211 /**
1212 * Creates a new {@code ThreadPoolExecutor} with the given initial
1213 * parameters and default thread factory.
1214 *
1215 * @param corePoolSize the number of threads to keep in the pool, even
1216 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1217 * @param maximumPoolSize the maximum number of threads to allow in the
1218 * pool
1219 * @param keepAliveTime when the number of threads is greater than
1220 * the core, this is the maximum time that excess idle threads
1221 * will wait for new tasks before terminating.
1222 * @param unit the time unit for the {@code keepAliveTime} argument
1223 * @param workQueue the queue to use for holding tasks before they are
1224 * executed. This queue will hold only the {@code Runnable}
1225 * tasks submitted by the {@code execute} method.
1226 * @param handler the handler to use when execution is blocked
1227 * because the thread bounds and queue capacities are reached
1228 * @throws IllegalArgumentException if one of the following holds:<br>
1229 * {@code corePoolSize < 0}<br>
1230 * {@code keepAliveTime < 0}<br>
1231 * {@code maximumPoolSize <= 0}<br>
1232 * {@code maximumPoolSize < corePoolSize}
1233 * @throws NullPointerException if {@code workQueue}
1234 * or {@code handler} is null
1235 */
1236 public ThreadPoolExecutor(int corePoolSize,
1237 int maximumPoolSize,
1238 long keepAliveTime,
1239 TimeUnit unit,
1240 BlockingQueue<Runnable> workQueue,
1241 RejectedExecutionHandler handler) {
1242 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1243 Executors.defaultThreadFactory(), handler);
1244 }
1245
1246 /**
1247 * Creates a new {@code ThreadPoolExecutor} with the given initial
1248 * parameters.
1249 *
1250 * @param corePoolSize the number of threads to keep in the pool, even
1251 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1252 * @param maximumPoolSize the maximum number of threads to allow in the
1253 * pool
1254 * @param keepAliveTime when the number of threads is greater than
1255 * the core, this is the maximum time that excess idle threads
1256 * will wait for new tasks before terminating.
1257 * @param unit the time unit for the {@code keepAliveTime} argument
1258 * @param workQueue the queue to use for holding tasks before they are
1259 * executed. This queue will hold only the {@code Runnable}
1260 * tasks submitted by the {@code execute} method.
1261 * @param threadFactory the factory to use when the executor
1262 * creates a new thread
1263 * @param handler the handler to use when execution is blocked
1264 * because the thread bounds and queue capacities are reached
1265 * @throws IllegalArgumentException if one of the following holds:<br>
1266 * {@code corePoolSize < 0}<br>
1267 * {@code keepAliveTime < 0}<br>
1268 * {@code maximumPoolSize <= 0}<br>
1269 * {@code maximumPoolSize < corePoolSize}
1270 * @throws NullPointerException if {@code workQueue}
1271 * or {@code threadFactory} or {@code handler} is null
1272 */
1273 public ThreadPoolExecutor(int corePoolSize,
1274 int maximumPoolSize,
1275 long keepAliveTime,
1276 TimeUnit unit,
1277 BlockingQueue<Runnable> workQueue,
1278 ThreadFactory threadFactory,
1279 RejectedExecutionHandler handler) {
1280 if (corePoolSize < 0 ||
1281 maximumPoolSize <= 0 ||
1282 maximumPoolSize < corePoolSize ||
1283 keepAliveTime < 0)
1284 throw new IllegalArgumentException();
1285 if (workQueue == null || threadFactory == null || handler == null)
1286 throw new NullPointerException();
1287 this.corePoolSize = corePoolSize;
1288 this.maximumPoolSize = maximumPoolSize;
1289 this.workQueue = workQueue;
1290 this.keepAliveTime = unit.toNanos(keepAliveTime);
1291 this.threadFactory = threadFactory;
1292 this.handler = handler;
1293 }
1294
1295 /**
1296 * Executes the given task sometime in the future. The task
1297 * may execute in a new thread or in an existing pooled thread.
1298 *
1299 * If the task cannot be submitted for execution, either because this
1300 * executor has been shutdown or because its capacity has been reached,
1301 * the task is handled by the current {@code RejectedExecutionHandler}.
1302 *
1303 * @param command the task to execute
1304 * @throws RejectedExecutionException at discretion of
1305 * {@code RejectedExecutionHandler}, if the task
1306 * cannot be accepted for execution
1307 * @throws NullPointerException if {@code command} is null
1308 */
1309 public void execute(Runnable command) {
1310 if (command == null)
1311 throw new NullPointerException();
1312 /*
1313 * Proceed in 3 steps:
1314 *
1315 * 1. If fewer than corePoolSize threads are running, try to
1316 * start a new thread with the given command as its first
1317 * task. The call to addWorker atomically checks runState and
1318 * workerCount, and so prevents false alarms that would add
1319 * threads when it shouldn't, by returning false.
1320 *
1321 * 2. If a task can be successfully queued, then we still need
1322 * to double-check whether we should have added a thread
1323 * (because existing ones died since last checking) or that
1324 * the pool shut down since entry into this method. So we
1325 * recheck state and if necessary roll back the enqueuing if
1326 * stopped, or start a new thread if there are none.
1327 *
1328 * 3. If we cannot queue task, then we try to add a new
1329 * thread. If it fails, we know we are shut down or saturated
1330 * and so reject the task.
1331 */
1332 int c = ctl.get();
1333 if (workerCountOf(c) < corePoolSize) {
1334 if (addWorker(command, true))
1335 return;
1336 c = ctl.get();
1337 }
1338 if (isRunning(c) && workQueue.offer(command)) {
1339 int recheck = ctl.get();
1340 if (! isRunning(recheck) && remove(command))
1341 reject(command);
1342 else if (workerCountOf(recheck) == 0)
1343 addWorker(null, false);
1344 }
1345 else if (!addWorker(command, false))
1346 reject(command);
1347 }
1348
1349 /**
1350 * Initiates an orderly shutdown in which previously submitted
1351 * tasks are executed, but no new tasks will be accepted.
1352 * Invocation has no additional effect if already shut down.
1353 *
1354 * <p>This method does not wait for previously submitted tasks to
1355 * complete execution. Use {@link #awaitTermination awaitTermination}
1356 * to do that.
1357 *
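 * <p>One common pattern (a sketch only; the 60-second timeouts are
 * illustrative) combines {@code shutdown}, {@link #awaitTermination
 * awaitTermination} and {@link #shutdownNow shutdownNow}:
 *
 * <pre> {@code
 * void shutdownAndAwaitTermination(ExecutorService pool) {
 *   pool.shutdown(); // Disable new tasks from being submitted
 *   try {
 *     // Wait a while for existing tasks to terminate
 *     if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
 *       pool.shutdownNow(); // Cancel currently executing tasks
 *       // Wait a while for tasks to respond to being cancelled
 *       if (!pool.awaitTermination(60, TimeUnit.SECONDS))
 *         System.err.println("Pool did not terminate");
 *     }
 *   } catch (InterruptedException ie) {
 *     // (Re-)Cancel if current thread also interrupted
 *     pool.shutdownNow();
 *     // Preserve interrupt status
 *     Thread.currentThread().interrupt();
 *   }
 * }}</pre>
 *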
1358 * @throws SecurityException {@inheritDoc}
1359 */
1360 public void shutdown() {
1361 final ReentrantLock mainLock = this.mainLock;
1362 mainLock.lock();
1363 try {
1364 checkShutdownAccess();
1365 advanceRunState(SHUTDOWN);
1366 interruptIdleWorkers();
1367 onShutdown(); // hook for ScheduledThreadPoolExecutor
1368 } finally {
1369 mainLock.unlock();
1370 }
1371 tryTerminate();
1372 }
1373
1374 /**
1375 * Attempts to stop all actively executing tasks, halts the
1376 * processing of waiting tasks, and returns a list of the tasks
1377 * that were awaiting execution. These tasks are drained (removed)
1378 * from the task queue upon return from this method.
1379 *
1380 * <p>This method does not wait for actively executing tasks to
1381 * terminate. Use {@link #awaitTermination awaitTermination} to
1382 * do that.
1383 *
1384 * <p>There are no guarantees beyond best-effort attempts to stop
1385 * processing actively executing tasks. This implementation
1386 * cancels tasks via {@link Thread#interrupt}, so any task that
1387 * fails to respond to interrupts may never terminate.
1388 *
1389 * @throws SecurityException {@inheritDoc}
1390 */
1391 public List<Runnable> shutdownNow() {
1392 List<Runnable> tasks;
1393 final ReentrantLock mainLock = this.mainLock;
1394 mainLock.lock();
1395 try {
1396 checkShutdownAccess();
1397 advanceRunState(STOP);
1398 interruptWorkers();
1399 tasks = drainQueue();
1400 } finally {
1401 mainLock.unlock();
1402 }
1403 tryTerminate();
1404 return tasks;
1405 }
1406
1407 public boolean isShutdown() {
1408 return ! isRunning(ctl.get());
1409 }
1410
1411 /**
1412 * Returns true if this executor is in the process of terminating
1413 * after {@link #shutdown} or {@link #shutdownNow} but has not
1414 * completely terminated. This method may be useful for
1415 * debugging. A return of {@code true} reported a sufficient
1416 * period after shutdown may indicate that submitted tasks have
1417 * ignored or suppressed interruption, causing this executor not
1418 * to properly terminate.
1419 *
1420 * @return {@code true} if terminating but not yet terminated
1421 */
1422 public boolean isTerminating() {
1423 int c = ctl.get();
1424 return ! isRunning(c) && runStateLessThan(c, TERMINATED);
1425 }
1426
1427 public boolean isTerminated() {
1428 return runStateAtLeast(ctl.get(), TERMINATED);
1429 }
1430
1431 public boolean awaitTermination(long timeout, TimeUnit unit)
1432 throws InterruptedException {
1433 long nanos = unit.toNanos(timeout);
1434 final ReentrantLock mainLock = this.mainLock;
1435 mainLock.lock();
1436 try {
1437 for (;;) {
1438 if (runStateAtLeast(ctl.get(), TERMINATED))
1439 return true;
1440 if (nanos <= 0)
1441 return false;
1442 nanos = termination.awaitNanos(nanos);
1443 }
1444 } finally {
1445 mainLock.unlock();
1446 }
1447 }
1448
1449 /**
1450 * Invokes {@code shutdown} when this executor is no longer
1451 * referenced and it has no threads.
1452 */
1453 protected void finalize() {
1454 shutdown();
1455 }
1456
1457 /**
1458 * Sets the thread factory used to create new threads.
1459 *
1460 * @param threadFactory the new thread factory
1461 * @throws NullPointerException if threadFactory is null
1462 * @see #getThreadFactory
1463 */
1464 public void setThreadFactory(ThreadFactory threadFactory) {
1465 if (threadFactory == null)
1466 throw new NullPointerException();
1467 this.threadFactory = threadFactory;
1468 }
1469
1470 /**
1471 * Returns the thread factory used to create new threads.
1472 *
1473 * @return the current thread factory
1474 * @see #setThreadFactory(ThreadFactory)
1475 */
1476 public ThreadFactory getThreadFactory() {
1477 return threadFactory;
1478 }
1479
1480 /**
1481 * Sets a new handler for unexecutable tasks.
1482 *
1483 * @param handler the new handler
1484 * @throws NullPointerException if handler is null
1485 * @see #getRejectedExecutionHandler
1486 */
1487 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1488 if (handler == null)
1489 throw new NullPointerException();
1490 this.handler = handler;
1491 }
1492
1493 /**
1494 * Returns the current handler for unexecutable tasks.
1495 *
1496 * @return the current handler
1497 * @see #setRejectedExecutionHandler(RejectedExecutionHandler)
1498 */
1499 public RejectedExecutionHandler getRejectedExecutionHandler() {
1500 return handler;
1501 }
1502
1503 /**
1504 * Sets the core number of threads. This overrides any value set
1505 * in the constructor. If the new value is smaller than the
1506 * current value, excess existing threads will be terminated when
1507 * they next become idle. If larger, new threads will, if needed,
1508 * be started to execute any queued tasks.
1509 *
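* <p>For example, to grow a pool dynamically while respecting the
* constraint that the core size may not exceed the maximum size (an
* illustrative sketch; {@code pool} is hypothetical):
*
* <pre> {@code
* pool.setMaximumPoolSize(16); // raise the ceiling first
* pool.setCorePoolSize(8);     // then raise the core size}</pre>
*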
1510 * @param corePoolSize the new core size
1511 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1512 * or {@code corePoolSize} is greater than the {@linkplain
1513 * #getMaximumPoolSize() maximum pool size}
1514 * @see #getCorePoolSize
1515 */
1516 public void setCorePoolSize(int corePoolSize) {
1517 if (corePoolSize < 0 || maximumPoolSize < corePoolSize)
1518 throw new IllegalArgumentException();
1519 int delta = corePoolSize - this.corePoolSize;
1520 this.corePoolSize = corePoolSize;
1521 if (workerCountOf(ctl.get()) > corePoolSize)
1522 interruptIdleWorkers();
1523 else if (delta > 0) {
1524 // We don't really know how many new threads are "needed".
1525 // As a heuristic, prestart enough new workers (up to new
1526 // core size) to handle the current number of tasks in
1527 // queue, but stop if queue becomes empty while doing so.
1528 int k = Math.min(delta, workQueue.size());
1529 while (k-- > 0 && addWorker(null, true)) {
1530 if (workQueue.isEmpty())
1531 break;
1532 }
1533 }
1534 }
1535
1536 /**
1537 * Returns the core number of threads.
1538 *
1539 * @return the core number of threads
1540 * @see #setCorePoolSize
1541 */
1542 public int getCorePoolSize() {
1543 return corePoolSize;
1544 }
1545
1546 /**
1547 * Starts a core thread, causing it to idly wait for work. This
1548 * overrides the default policy of starting core threads only when
1549 * new tasks are executed. This method will return {@code false}
1550 * if all core threads have already been started.
1551 *
1552 * @return {@code true} if a thread was started
1553 */
1554 public boolean prestartCoreThread() {
1555 return workerCountOf(ctl.get()) < corePoolSize &&
1556 addWorker(null, true);
1557 }
1558
1559 /**
1560 * Same as prestartCoreThread except that it arranges for at least
1561 * one thread to be started even if corePoolSize is 0.
1562 */
1563 void ensurePrestart() {
1564 int wc = workerCountOf(ctl.get());
1565 if (wc < corePoolSize)
1566 addWorker(null, true);
1567 else if (wc == 0)
1568 addWorker(null, false);
1569 }
1570
1571 /**
1572 * Starts all core threads, causing them to idly wait for work. This
1573 * overrides the default policy of starting core threads only when
1574 * new tasks are executed.
1575 *
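* <p>For example, a service might prestart its workers at initialization
* so that early tasks do not pay thread-creation cost (an illustrative
* sketch; the pool parameters are arbitrary):
*
* <pre> {@code
* ThreadPoolExecutor pool = new ThreadPoolExecutor(
*     4, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
* int started = pool.prestartAllCoreThreads(); // typically 4 here}</pre>
*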
1576 * @return the number of threads started
1577 */
1578 public int prestartAllCoreThreads() {
1579 int n = 0;
1580 while (addWorker(null, true))
1581 ++n;
1582 return n;
1583 }
1584
1585 /**
1586 * Returns true if this pool allows core threads to time out and
1587 * terminate if no tasks arrive within the keep-alive time, being
1588 * replaced if needed when new tasks arrive. When true, the same
1589 * keep-alive policy applying to non-core threads applies also to
1590 * core threads. When false (the default), core threads are never
1591 * terminated due to lack of incoming tasks.
1592 *
1593 * @return {@code true} if core threads are allowed to time out,
1594 * else {@code false}
1595 *
1596 * @since 1.6
1597 */
1598 public boolean allowsCoreThreadTimeOut() {
1599 return allowCoreThreadTimeOut;
1600 }
1601
1602 /**
1603 * Sets the policy governing whether core threads may time out and
1604 * terminate if no tasks arrive within the keep-alive time, being
1605 * replaced if needed when new tasks arrive. When false, core
1606 * threads are never terminated due to lack of incoming
1607 * tasks. When true, the same keep-alive policy applying to
1608 * non-core threads applies also to core threads. To avoid
1609 * continual thread replacement, the keep-alive time must be
1610 * greater than zero when setting {@code true}. This method
1611 * should in general be called before the pool is actively used.
1612 *
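* <p>For example, to let an otherwise fixed-size pool release all of its
* threads after a quiet period (an illustrative sketch; {@code pool} is
* hypothetical):
*
* <pre> {@code
* pool.setKeepAliveTime(60, TimeUnit.SECONDS); // must be positive first
* pool.allowCoreThreadTimeOut(true);}</pre>
*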
1613 * @param value {@code true} if should time out, else {@code false}
1614 * @throws IllegalArgumentException if value is {@code true}
1615 * and the current keep-alive time is not greater than zero
1616 *
1617 * @since 1.6
1618 */
1619 public void allowCoreThreadTimeOut(boolean value) {
1620 if (value && keepAliveTime <= 0)
1621 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1622 if (value != allowCoreThreadTimeOut) {
1623 allowCoreThreadTimeOut = value;
1624 if (value)
1625 interruptIdleWorkers();
1626 }
1627 }
1628
1629 /**
1630 * Sets the maximum allowed number of threads. This overrides any
1631 * value set in the constructor. If the new value is smaller than
1632 * the current value, excess existing threads will be
1633 * terminated when they next become idle.
1634 *
1635 * @param maximumPoolSize the new maximum
1636 * @throws IllegalArgumentException if the new maximum is
1637 * less than or equal to zero, or
1638 * less than the {@linkplain #getCorePoolSize core pool size}
1639 * @see #getMaximumPoolSize
1640 */
1641 public void setMaximumPoolSize(int maximumPoolSize) {
1642 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1643 throw new IllegalArgumentException();
1644 this.maximumPoolSize = maximumPoolSize;
1645 if (workerCountOf(ctl.get()) > maximumPoolSize)
1646 interruptIdleWorkers();
1647 }
1648
1649 /**
1650 * Returns the maximum allowed number of threads.
1651 *
1652 * @return the maximum allowed number of threads
1653 * @see #setMaximumPoolSize
1654 */
1655 public int getMaximumPoolSize() {
1656 return maximumPoolSize;
1657 }
1658
1659 /**
1660 * Sets the thread keep-alive time, which is the amount of time
1661 * that threads may remain idle before being terminated.
1662 * Threads that wait this amount of time without processing a
1663 * task will be terminated if there are more than the core
1664 * number of threads currently in the pool, or if this pool
1665 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1666 * This overrides any value set in the constructor.
1667 *
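* <p>For example (an illustrative sketch; {@code pool} is hypothetical):
*
* <pre> {@code
* pool.setKeepAliveTime(30, TimeUnit.SECONDS);
* long secs = pool.getKeepAliveTime(TimeUnit.SECONDS); // 30}</pre>
*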
1668 * @param time the time to wait. A time value of zero will cause
1669 * excess threads to terminate immediately after executing tasks.
1670 * @param unit the time unit of the {@code time} argument
1671 * @throws IllegalArgumentException if {@code time} is less than zero or
1672 * if {@code time} is zero and {@code allowsCoreThreadTimeOut} is true
1673 * @see #getKeepAliveTime(TimeUnit)
1674 */
1675 public void setKeepAliveTime(long time, TimeUnit unit) {
1676 if (time < 0)
1677 throw new IllegalArgumentException();
1678 if (time == 0 && allowsCoreThreadTimeOut())
1679 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1680 long keepAliveTime = unit.toNanos(time);
1681 long delta = keepAliveTime - this.keepAliveTime;
1682 this.keepAliveTime = keepAliveTime;
1683 if (delta < 0)
1684 interruptIdleWorkers();
1685 }
1686
1687 /**
1688 * Returns the thread keep-alive time, which is the amount of time
1689 * that threads may remain idle before being terminated.
1690 * Threads that wait this amount of time without processing a
1691 * task will be terminated if there are more than the core
1692 * number of threads currently in the pool, or if this pool
1693 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1694 *
1695 * @param unit the desired time unit of the result
1696 * @return the time limit
1697 * @see #setKeepAliveTime(long, TimeUnit)
1698 */
1699 public long getKeepAliveTime(TimeUnit unit) {
1700 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1701 }
1702
1703 /* User-level queue utilities */
1704
1705 /**
1706 * Returns the task queue used by this executor. Access to the
1707 * task queue is intended primarily for debugging and monitoring.
1708 * This queue may be in active use. Retrieving the task queue
1709 * does not prevent queued tasks from executing.
1710 *
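* <p>For example, a monitoring task might sample the backlog (an
* illustrative sketch; {@code pool} is hypothetical and the threshold
* is arbitrary):
*
* <pre> {@code
* int backlog = pool.getQueue().size();
* if (backlog > 1000)
*   System.err.println("queue backlog: " + backlog);}</pre>
*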
1711 * @return the task queue
1712 */
1713 public BlockingQueue<Runnable> getQueue() {
1714 return workQueue;
1715 }
1716
1717 /**
1718 * Removes this task from the executor's internal queue if it is
1719 * present, thus causing it not to be run if it has not already
1720 * started.
1721 *
1722 * <p>This method may be useful as one part of a cancellation
1723 * scheme. It may fail to remove tasks that have been converted
1724 * into other forms before being placed on the internal queue.
1725 * For example, a task entered using {@code submit} might be
1726 * converted into a form that maintains {@code Future} status.
1727 * However, in such cases, method {@link #purge} may be used to
1728 * remove those Futures that have been cancelled.
1729 *
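* <p>For example, removal is reliable only for the exact object handed to
* {@code execute} (an illustrative sketch; {@code pool} and
* {@code ReportTask} are hypothetical):
*
* <pre> {@code
* Runnable report = new ReportTask();
* pool.execute(report);
* boolean removed = pool.remove(report); // same object that was queued}</pre>
*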
1730 * @param task the task to remove
1731 * @return {@code true} if the task was removed
1732 */
1733 public boolean remove(Runnable task) {
1734 boolean removed = workQueue.remove(task);
1735 tryTerminate(); // In case SHUTDOWN and now empty
1736 return removed;
1737 }
1738
1739 /**
1740 * Tries to remove from the work queue all {@link Future}
1741 * tasks that have been cancelled. This method can be useful as a
1742 * storage reclamation operation that has no other impact on
1743 * functionality. Cancelled tasks are never executed, but may
1744 * accumulate in work queues until worker threads can actively
1745 * remove them. Invoking this method instead tries to remove them now.
1746 * However, this method may fail to remove tasks in
1747 * the presence of interference by other threads.
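*
* <p>For example, after cancelling outstanding work (an illustrative
* sketch; {@code pool} and {@code futures} are hypothetical):
*
* <pre> {@code
* for (Future<?> f : futures)
*   f.cancel(false);
* pool.purge(); // reclaim queue space held by the cancelled tasks}</pre>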
1748 */
1749 public void purge() {
1750 final BlockingQueue<Runnable> q = workQueue;
1751 try {
1752 Iterator<Runnable> it = q.iterator();
1753 while (it.hasNext()) {
1754 Runnable r = it.next();
1755 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1756 it.remove();
1757 }
1758 } catch (ConcurrentModificationException fallThrough) {
1759 // Take slow path if we encounter interference during traversal.
1760 // Make copy for traversal and call remove for cancelled entries.
1761 // The slow path is more likely to be O(N*N).
1762 for (Object r : q.toArray())
1763 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1764 q.remove(r);
1765 }
1766
1767 tryTerminate(); // In case SHUTDOWN and now empty
1768 }
1769
1770 /* Statistics */
1771
1772 /**
1773 * Returns the current number of threads in the pool.
1774 *
1775 * @return the number of threads
1776 */
1777 public int getPoolSize() {
1778 final ReentrantLock mainLock = this.mainLock;
1779 mainLock.lock();
1780 try {
1781 // Remove rare and surprising possibility of
1782 // isTerminated() && getPoolSize() > 0
1783 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1784 : workers.size();
1785 } finally {
1786 mainLock.unlock();
1787 }
1788 }
1789
1790 /**
1791 * Returns the approximate number of threads that are actively
1792 * executing tasks.
1793 *
1794 * @return the number of threads
1795 */
1796 public int getActiveCount() {
1797 final ReentrantLock mainLock = this.mainLock;
1798 mainLock.lock();
1799 try {
1800 int n = 0;
1801 for (Worker w : workers)
1802 if (w.isLocked())
1803 ++n;
1804 return n;
1805 } finally {
1806 mainLock.unlock();
1807 }
1808 }
1809
1810 /**
1811 * Returns the largest number of threads that have ever
1812 * simultaneously been in the pool.
1813 *
1814 * @return the number of threads
1815 */
1816 public int getLargestPoolSize() {
1817 final ReentrantLock mainLock = this.mainLock;
1818 mainLock.lock();
1819 try {
1820 return largestPoolSize;
1821 } finally {
1822 mainLock.unlock();
1823 }
1824 }
1825
1826 /**
1827 * Returns the approximate total number of tasks that have ever been
1828 * scheduled for execution. Because the states of tasks and
1829 * threads may change dynamically during computation, the returned
1830 * value is only an approximation.
1831 *
1832 * @return the number of tasks
1833 */
1834 public long getTaskCount() {
1835 final ReentrantLock mainLock = this.mainLock;
1836 mainLock.lock();
1837 try {
1838 long n = completedTaskCount;
1839 for (Worker w : workers) {
1840 n += w.completedTasks;
1841 if (w.isLocked())
1842 ++n;
1843 }
1844 return n + workQueue.size();
1845 } finally {
1846 mainLock.unlock();
1847 }
1848 }
1849
1850 /**
1851 * Returns the approximate total number of tasks that have
1852 * completed execution. Because the states of tasks and threads
1853 * may change dynamically during computation, the returned value
1854 * is only an approximation, but one that does not ever decrease
1855 * across successive calls.
1856 *
1857 * @return the number of tasks
1858 */
1859 public long getCompletedTaskCount() {
1860 final ReentrantLock mainLock = this.mainLock;
1861 mainLock.lock();
1862 try {
1863 long n = completedTaskCount;
1864 for (Worker w : workers)
1865 n += w.completedTasks;
1866 return n;
1867 } finally {
1868 mainLock.unlock();
1869 }
1870 }
1871
1872 /**
1873 * Returns a string identifying this pool, as well as its state,
1874 * including indications of run state and estimated worker and
1875 * task counts.
1876 *
1877 * @return a string identifying this pool, as well as its state
1878 */
1879 public String toString() {
1880 long ncompleted;
1881 int nworkers, nactive;
1882 final ReentrantLock mainLock = this.mainLock;
1883 mainLock.lock();
1884 try {
1885 ncompleted = completedTaskCount;
1886 nactive = 0;
1887 nworkers = workers.size();
1888 for (Worker w : workers) {
1889 ncompleted += w.completedTasks;
1890 if (w.isLocked())
1891 ++nactive;
1892 }
1893 } finally {
1894 mainLock.unlock();
1895 }
1896 int c = ctl.get();
1897 String runState =
1898 runStateLessThan(c, SHUTDOWN) ? "Running" :
1899 runStateAtLeast(c, TERMINATED) ? "Terminated" :
1900 "Shutting down";
1901 return super.toString() +
1902 "[" + runState +
1903 ", pool size = " + nworkers +
1904 ", active threads = " + nactive +
1905 ", queued tasks = " + workQueue.size() +
1906 ", completed tasks = " + ncompleted +
1907 "]";
1908 }
1909
1910 /* Extension hooks */
1911
1912 /**
1913 * Method invoked prior to executing the given Runnable in the
1914 * given thread. This method is invoked by thread {@code t} that
1915 * will execute task {@code r}, and may be used to re-initialize
1916 * ThreadLocals, or to perform logging.
1917 *
1918 * <p>This implementation does nothing, but may be customized in
1919 * subclasses. Note: To properly nest multiple overridings, subclasses
1920 * should generally invoke {@code super.beforeExecute} at the end of
1921 * this method.
1922 *
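* <p>For example, a subclass might record a per-task start time (an
* illustrative, non-normative sketch; {@code TimingThreadPool} and
* {@code startTime} are hypothetical):
*
* <pre> {@code
* class TimingThreadPool extends ThreadPoolExecutor {
*   private final ThreadLocal<Long> startTime = new ThreadLocal<>();
*   // ... constructors ...
*   protected void beforeExecute(Thread t, Runnable r) {
*     startTime.set(System.nanoTime());
*     super.beforeExecute(t, r);
*   }
* }}</pre>
*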
1923 * @param t the thread that will run task {@code r}
1924 * @param r the task that will be executed
1925 */
1926 protected void beforeExecute(Thread t, Runnable r) { }
1927
1928 /**
1929 * Method invoked upon completion of execution of the given Runnable.
1930 * This method is invoked by the thread that executed the task. If
1931 * non-null, the Throwable is the uncaught {@code RuntimeException}
1932 * or {@code Error} that caused execution to terminate abruptly.
1933 *
1934 * <p>This implementation does nothing, but may be customized in
1935 * subclasses. Note: To properly nest multiple overridings, subclasses
1936 * should generally invoke {@code super.afterExecute} at the
1937 * beginning of this method.
1938 *
1939 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1940 * {@link FutureTask}) either explicitly or via methods such as
1941 * {@code submit}, these task objects catch and maintain
1942 * computational exceptions, and so they do not cause abrupt
1943 * termination, and the internal exceptions are <em>not</em>
1944 * passed to this method. If you would like to trap both kinds of
1945 * failures in this method, you can further probe for such cases,
1946 * as in this sample subclass that prints either the direct cause
1947 * or the underlying exception if a task has been aborted:
1948 *
1949 * <pre> {@code
1950 * class ExtendedExecutor extends ThreadPoolExecutor {
1951 * // ...
1952 * protected void afterExecute(Runnable r, Throwable t) {
1953 * super.afterExecute(r, t);
1954 * if (t == null && r instanceof Future<?>) {
1955 * try {
1956 * Object result = ((Future<?>) r).get();
1957 * } catch (CancellationException ce) {
1958 * t = ce;
1959 * } catch (ExecutionException ee) {
1960 * t = ee.getCause();
1961 * } catch (InterruptedException ie) {
1962 * Thread.currentThread().interrupt(); // ignore/reset
1963 * }
1964 * }
1965 * if (t != null)
1966 * System.out.println(t);
1967 * }
1968 * }}</pre>
1969 *
1970 * @param r the runnable that has completed
1971 * @param t the exception that caused termination, or null if
1972 * execution completed normally
1973 */
1974 protected void afterExecute(Runnable r, Throwable t) { }
1975
1976 /**
1977 * Method invoked when the Executor has terminated. Default
1978 * implementation does nothing. Note: To properly nest multiple
1979 * overridings, subclasses should generally invoke
1980 * {@code super.terminated} within this method.
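*
* <p>For example, a subclass might log or release resources once the pool
* has fully terminated (an illustrative, non-normative sketch):
*
* <pre> {@code
* protected void terminated() {
*   try {
*     System.out.println("pool terminated");
*   } finally {
*     super.terminated();
*   }
* }}</pre>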
1981 */
1982 protected void terminated() { }
1983
1984 /* Predefined RejectedExecutionHandlers */
1985
1986 /**
1987 * A handler for rejected tasks that runs the rejected task
1988 * directly in the calling thread of the {@code execute} method,
1989 * unless the executor has been shut down, in which case the task
1990 * is discarded.
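*
* <p>For example, this policy provides simple feedback control that slows
* the submitting thread down under load (an illustrative sketch; the pool
* parameters are arbitrary):
*
* <pre> {@code
* ThreadPoolExecutor pool = new ThreadPoolExecutor(
*     2, 4, 60L, TimeUnit.SECONDS,
*     new ArrayBlockingQueue<Runnable>(100),
*     new ThreadPoolExecutor.CallerRunsPolicy());}</pre>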
1991 */
1992 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1993 /**
1994 * Creates a {@code CallerRunsPolicy}.
1995 */
1996 public CallerRunsPolicy() { }
1997
1998 /**
1999 * Executes task r in the caller's thread, unless the executor
2000 * has been shut down, in which case the task is discarded.
2001 *
2002 * @param r the runnable task requested to be executed
2003 * @param e the executor attempting to execute this task
2004 */
2005 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2006 if (!e.isShutdown()) {
2007 r.run();
2008 }
2009 }
2010 }
2011
2012 /**
2013 * A handler for rejected tasks that throws a
2014 * {@code RejectedExecutionException}.
2015 */
2016 public static class AbortPolicy implements RejectedExecutionHandler {
2017 /**
2018 * Creates an {@code AbortPolicy}.
2019 */
2020 public AbortPolicy() { }
2021
2022 /**
2023 * Always throws RejectedExecutionException.
2024 *
2025 * @param r the runnable task requested to be executed
2026 * @param e the executor attempting to execute this task
2027 * @throws RejectedExecutionException always
2028 */
2029 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2030 throw new RejectedExecutionException("Task " + r.toString() +
2031 " rejected from " +
2032 e.toString());
2033 }
2034 }
2035
2036 /**
2037 * A handler for rejected tasks that silently discards the
2038 * rejected task.
2039 */
2040 public static class DiscardPolicy implements RejectedExecutionHandler {
2041 /**
2042 * Creates a {@code DiscardPolicy}.
2043 */
2044 public DiscardPolicy() { }
2045
2046 /**
2047 * Does nothing, which has the effect of discarding task r.
2048 *
2049 * @param r the runnable task requested to be executed
2050 * @param e the executor attempting to execute this task
2051 */
2052 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2053 }
2054 }
2055
2056 /**
2057 * A handler for rejected tasks that discards the oldest unhandled
2058 * request and then retries {@code execute}, unless the executor
2059 * is shut down, in which case the task is discarded.
2060 */
2061 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
2062 /**
2063 * Creates a {@code DiscardOldestPolicy}.
2064 */
2065 public DiscardOldestPolicy() { }
2066
2067 /**
2068 * Obtains and ignores the next task that the executor
2069 * would otherwise execute, if one is immediately available,
2070 * and then retries execution of task r, unless the executor
2071 * is shut down, in which case task r is instead discarded.
2072 *
2073 * @param r the runnable task requested to be executed
2074 * @param e the executor attempting to execute this task
2075 */
2076 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2077 if (!e.isShutdown()) {
2078 e.getQueue().poll();
2079 e.execute(r);
2080 }
2081 }
2082 }
2083 }