root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.170
Committed: Wed Mar 22 20:19:55 2017 UTC by jsr166
Branch: MAIN
Changes since 1.169: +16 -9 lines
Log Message:
clarify default rejected execution handler and thread factory

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/publicdomain/zero/1.0/
5 */
6
7 package java.util.concurrent;
8
9 import java.util.ArrayList;
10 import java.util.ConcurrentModificationException;
11 import java.util.HashSet;
12 import java.util.Iterator;
13 import java.util.List;
14 import java.util.concurrent.atomic.AtomicInteger;
15 import java.util.concurrent.locks.AbstractQueuedSynchronizer;
16 import java.util.concurrent.locks.Condition;
17 import java.util.concurrent.locks.ReentrantLock;
18
19 /**
20 * An {@link ExecutorService} that executes each submitted task using
21 * one of possibly several pooled threads, normally configured
22 * using {@link Executors} factory methods.
23 *
24 * <p>Thread pools address two different problems: they usually
25 * provide improved performance when executing large numbers of
26 * asynchronous tasks, due to reduced per-task invocation overhead,
27 * and they provide a means of bounding and managing the resources,
28 * including threads, consumed when executing a collection of tasks.
29 * Each {@code ThreadPoolExecutor} also maintains some basic
30 * statistics, such as the number of completed tasks.
31 *
32 * <p>To be useful across a wide range of contexts, this class
33 * provides many adjustable parameters and extensibility
34 * hooks. However, programmers are urged to use the more convenient
35 * {@link Executors} factory methods {@link
36 * Executors#newCachedThreadPool} (unbounded thread pool, with
37 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
38 * (fixed size thread pool) and {@link
39 * Executors#newSingleThreadExecutor} (single background thread), that
40 * preconfigure settings for the most common usage
41 * scenarios. Otherwise, use the following guide when manually
42 * configuring and tuning this class:
43 *
44 * <dl>
45 *
46 * <dt>Core and maximum pool sizes</dt>
47 *
48 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
49 * pool size (see {@link #getPoolSize})
50 * according to the bounds set by
51 * corePoolSize (see {@link #getCorePoolSize}) and
52 * maximumPoolSize (see {@link #getMaximumPoolSize}).
53 *
54 * When a new task is submitted in method {@link #execute(Runnable)},
55 * if fewer than corePoolSize threads are running, a new thread is
56 * created to handle the request, even if other worker threads are
57 * idle. Else if fewer than maximumPoolSize threads are running, a
58 * new thread will be created to handle the request only if the queue
59 * is full. By setting corePoolSize and maximumPoolSize the same, you
60 * create a fixed-size thread pool. By setting maximumPoolSize to an
61 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
62 * allow the pool to accommodate an arbitrary number of concurrent
63 * tasks. Most typically, core and maximum pool sizes are set only
64 * upon construction, but they may also be changed dynamically using
65 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
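*
* <dd>As an illustration (the sizes and queue capacity here are
* arbitrary, not recommendations), a pool that keeps two threads but may
* grow to four when its bounded queue fills could be configured as:
*
* <pre> {@code
* ThreadPoolExecutor pool = new ThreadPoolExecutor(
*     2,                      // corePoolSize
*     4,                      // maximumPoolSize
*     60L, TimeUnit.SECONDS,  // keepAliveTime for the excess threads
*     new ArrayBlockingQueue<Runnable>(100));
* pool.execute(() -> System.out.println("ran"));}</pre></dd>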
66 *
67 * <dt>On-demand construction</dt>
68 *
69 * <dd>By default, even core threads are initially created and
70 * started only when new tasks arrive, but this can be overridden
71 * dynamically using method {@link #prestartCoreThread} or {@link
72 * #prestartAllCoreThreads}. You probably want to prestart threads if
73 * you construct the pool with a non-empty queue. </dd>
74 *
75 * <dt>Creating new threads</dt>
76 *
77 * <dd>New threads are created using a {@link ThreadFactory}. If not
78 * otherwise specified, an {@link Executors#defaultThreadFactory} is
79 * used, which creates all threads in the same {@link
80 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
81 * non-daemon status. By supplying a different ThreadFactory, you can
82 * alter the thread's name, thread group, priority, daemon status,
83 * etc. If a {@code ThreadFactory} fails to create a thread when asked
84 * by returning null from {@code newThread}, the executor will
85 * continue, but might not be able to execute any tasks. Threads
86 * should possess the "modifyThread" {@code RuntimePermission}. If
87 * worker threads or other threads using the pool do not possess this
88 * permission, service may be degraded: configuration changes may not
89 * take effect in a timely manner, and a shutdown pool may remain in a
90 * state in which termination is possible but not completed.</dd>
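*
* <dd>A hypothetical factory (the naming scheme below is arbitrary) that
* produces named daemon threads might look like this sketch:
*
* <pre> {@code
* ThreadFactory namedDaemonFactory = runnable -> {
*   Thread t = Executors.defaultThreadFactory().newThread(runnable);
*   t.setDaemon(true);
*   t.setName("pool-worker-" + t.getName());
*   return t;
* };
* ExecutorService pool = new ThreadPoolExecutor(
*     1, 1, 0L, TimeUnit.MILLISECONDS,
*     new LinkedBlockingQueue<Runnable>(), namedDaemonFactory);}</pre></dd>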
91 *
92 * <dt>Keep-alive times</dt>
93 *
94 * <dd>If the pool currently has more than corePoolSize threads,
95 * excess threads will be terminated if they have been idle for more
96 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
97 * This provides a means of reducing resource consumption when the
98 * pool is not being actively used. If the pool becomes more active
99 * later, new threads will be constructed. This parameter can also be
100 * changed dynamically using method {@link #setKeepAliveTime(long,
101 * TimeUnit)}. Using a value of {@code Long.MAX_VALUE} {@link
102 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever
103 * terminating prior to shut down. By default, the keep-alive policy
104 * applies only when there are more than corePoolSize threads, but
105 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to
106 * apply this time-out policy to core threads as well, so long as the
107 * keepAliveTime value is non-zero. </dd>
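*
* <dd>For example (illustrative sizes and times), a fixed-size pool
* whose threads all terminate after 30 seconds of idleness, being
* recreated on demand:
*
* <pre> {@code
* ThreadPoolExecutor pool = new ThreadPoolExecutor(
*     8, 8, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
* pool.allowCoreThreadTimeOut(true);}</pre></dd>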
108 *
109 * <dt>Queuing</dt>
110 *
111 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
112 * submitted tasks. The use of this queue interacts with pool sizing:
113 *
114 * <ul>
115 *
116 * <li>If fewer than corePoolSize threads are running, the Executor
117 * always prefers adding a new thread
118 * rather than queuing.
119 *
120 * <li>If corePoolSize or more threads are running, the Executor
121 * always prefers queuing a request rather than adding a new
122 * thread.
123 *
124 * <li>If a request cannot be queued, a new thread is created unless
125 * this would exceed maximumPoolSize, in which case, the task will be
126 * rejected.
127 *
128 * </ul>
129 *
130 * There are three general strategies for queuing:
131 * <ol>
132 *
133 * <li><em> Direct handoffs.</em> A good default choice for a work
134 * queue is a {@link SynchronousQueue} that hands off tasks to threads
135 * without otherwise holding them. Here, an attempt to queue a task
136 * will fail if no threads are immediately available to run it, so a
137 * new thread will be constructed. This policy avoids lockups when
138 * handling sets of requests that might have internal dependencies.
139 * Direct handoffs generally require unbounded maximumPoolSizes to
140 * avoid rejection of newly submitted tasks. This in turn admits the
141 * possibility of unbounded thread growth when commands continue to
142 * arrive on average faster than they can be processed.
143 *
144 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
145 * example a {@link LinkedBlockingQueue} without a predefined
146 * capacity) will cause new tasks to wait in the queue when all
147 * corePoolSize threads are busy. Thus, no more than corePoolSize
148 * threads will ever be created. (And the value of the maximumPoolSize
149 * therefore doesn't have any effect.) This may be appropriate when
150 * each task is completely independent of others, so tasks cannot
151 * affect each other's execution; for example, in a web page server.
152 * While this style of queuing can be useful in smoothing out
153 * transient bursts of requests, it admits the possibility of
154 * unbounded work queue growth when commands continue to arrive on
155 * average faster than they can be processed.
156 *
157 * <li><em>Bounded queues.</em> A bounded queue (for example, an
158 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
159 * used with finite maximumPoolSizes, but can be more difficult to
160 * tune and control. Queue sizes and maximum pool sizes may be traded
161 * off for each other: Using large queues and small pools minimizes
162 * CPU usage, OS resources, and context-switching overhead, but can
163 * lead to artificially low throughput. If tasks frequently block (for
164 * example if they are I/O bound), a system may be able to schedule
165 * time for more threads than you otherwise allow. Use of small queues
166 * generally requires larger pool sizes, which keeps CPUs busier but
167 * may encounter unacceptable scheduling overhead, which also
168 * decreases throughput.
169 *
170 * </ol>
171 *
172 * </dd>
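*
* <dd>For instance, a direct-handoff configuration (shown only as a
* sketch; it mirrors the parameters used by
* {@link Executors#newCachedThreadPool}) would be:
*
* <pre> {@code
* ThreadPoolExecutor handoffPool = new ThreadPoolExecutor(
*     0, Integer.MAX_VALUE,   // effectively unbounded maximum pool size
*     60L, TimeUnit.SECONDS,
*     new SynchronousQueue<Runnable>());}</pre></dd>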
173 *
174 * <dt>Rejected tasks</dt>
175 *
176 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be
177 * <em>rejected</em> when the Executor has been shut down, and also when
178 * the Executor uses finite bounds for both maximum threads and work queue
179 * capacity, and is saturated. In either case, the {@code execute} method
180 * invokes the {@link
181 * RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
182 * method of its {@link RejectedExecutionHandler}. Four predefined handler
183 * policies are provided:
184 *
185 * <ol>
186 *
187 * <li>In the default {@link ThreadPoolExecutor.AbortPolicy}, the handler
188 * throws a runtime {@link RejectedExecutionException} upon rejection.
189 *
190 * <li>In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
191 * that invokes {@code execute} itself runs the task. This provides a
192 * simple feedback control mechanism that will slow down the rate that
193 * new tasks are submitted.
194 *
195 * <li>In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
196 * cannot be executed is simply dropped.
197 *
198 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
199 * executor is not shut down, the task at the head of the work queue
200 * is dropped, and then execution is retried (which can fail again,
201 * causing this to be repeated).
202 *
203 * </ol>
204 *
205 * It is possible to define and use other kinds of {@link
206 * RejectedExecutionHandler} classes. Doing so requires some care
207 * especially when policies are designed to work only under particular
208 * capacity or queuing policies. </dd>
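*
* <dd>For example, given a {@code ThreadPoolExecutor pool} (assumed to
* exist), a caller-runs policy can be installed, or a small custom
* handler supplied; the handler below merely logs and drops, as a
* sketch:
*
* <pre> {@code
* pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
* // or, as a custom handler:
* pool.setRejectedExecutionHandler((task, executor) ->
*     System.err.println("rejected: " + task));}</pre></dd>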
209 *
210 * <dt>Hook methods</dt>
211 *
212 * <dd>This class provides {@code protected} overridable
213 * {@link #beforeExecute(Thread, Runnable)} and
214 * {@link #afterExecute(Runnable, Throwable)} methods that are called
215 * before and after execution of each task. These can be used to
216 * manipulate the execution environment; for example, reinitializing
217 * ThreadLocals, gathering statistics, or adding log entries.
218 * Additionally, method {@link #terminated} can be overridden to perform
219 * any special processing that needs to be done once the Executor has
220 * fully terminated.
221 *
222 * <p>If hook, callback, or BlockingQueue methods throw exceptions,
223 * internal worker threads may in turn fail, abruptly terminate, and
224 * possibly be replaced.</dd>
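*
* <dd>For example, a subclass that logs any {@code Throwable} reported
* to {@code afterExecute} (a minimal sketch) could be written as:
*
* <pre> {@code
* class LoggingThreadPoolExecutor extends ThreadPoolExecutor {
*   LoggingThreadPoolExecutor(int corePoolSize, int maximumPoolSize) {
*     super(corePoolSize, maximumPoolSize,
*           60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
*   }
*   protected void afterExecute(Runnable r, Throwable t) {
*     super.afterExecute(r, t);
*     if (t != null)
*       System.err.println("task failed: " + t);
*   }
* }}</pre></dd>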
225 *
226 * <dt>Queue maintenance</dt>
227 *
228 * <dd>Method {@link #getQueue()} allows access to the work queue
229 * for purposes of monitoring and debugging. Use of this method for
230 * any other purpose is strongly discouraged. Two supplied methods,
231 * {@link #remove(Runnable)} and {@link #purge} are available to
232 * assist in storage reclamation when large numbers of queued tasks
233 * become cancelled.</dd>
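*
* <dd>For example (a sketch; the submitted task here is a no-op), space
* held by a cancelled task can be reclaimed with {@link #purge}:
*
* <pre> {@code
* ThreadPoolExecutor pool = new ThreadPoolExecutor(
*     1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
* Future<?> f = pool.submit(() -> { });
* f.cancel(false);  // the cancelled task's wrapper may remain queued
* pool.purge();}</pre></dd>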
234 *
235 * <dt>Finalization</dt>
236 *
237 * <dd>A pool that is no longer referenced in a program <em>AND</em>
238 * has no remaining threads will be {@code shutdown} automatically. If
239 * you would like to ensure that unreferenced pools are reclaimed even
240 * if users forget to call {@link #shutdown}, then you must arrange
241 * that unused threads eventually die, by setting appropriate
242 * keep-alive times, using a lower bound of zero core threads and/or
243 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
244 *
245 * </dl>
246 *
247 * <p><b>Extension example</b>. Most extensions of this class
248 * override one or more of the protected hook methods. For example,
249 * here is a subclass that adds a simple pause/resume feature:
250 *
251 * <pre> {@code
252 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
253 * private boolean isPaused;
254 * private ReentrantLock pauseLock = new ReentrantLock();
255 * private Condition unpaused = pauseLock.newCondition();
256 *
257 * public PausableThreadPoolExecutor(...) { super(...); }
258 *
259 * protected void beforeExecute(Thread t, Runnable r) {
260 * super.beforeExecute(t, r);
261 * pauseLock.lock();
262 * try {
263 * while (isPaused) unpaused.await();
264 * } catch (InterruptedException ie) {
265 * t.interrupt();
266 * } finally {
267 * pauseLock.unlock();
268 * }
269 * }
270 *
271 * public void pause() {
272 * pauseLock.lock();
273 * try {
274 * isPaused = true;
275 * } finally {
276 * pauseLock.unlock();
277 * }
278 * }
279 *
280 * public void resume() {
281 * pauseLock.lock();
282 * try {
283 * isPaused = false;
284 * unpaused.signalAll();
285 * } finally {
286 * pauseLock.unlock();
287 * }
288 * }
289 * }}</pre>
290 *
291 * @since 1.5
292 * @author Doug Lea
293 */
294 public class ThreadPoolExecutor extends AbstractExecutorService {
295 /**
296 * The main pool control state, ctl, is an atomic integer packing
297 * two conceptual fields
298 * workerCount, indicating the effective number of threads
299 * runState, indicating whether running, shutting down etc
300 *
301 * In order to pack them into one int, we limit workerCount to
302 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
303 * billion) otherwise representable. If this is ever an issue in
304 * the future, the variable can be changed to be an AtomicLong,
305 * and the shift/mask constants below adjusted. But until the need
306 * arises, this code is a bit faster and simpler using an int.
307 *
308 * The workerCount is the number of workers that have been
309 * permitted to start and not permitted to stop. The value may be
310 * transiently different from the actual number of live threads,
311 * for example when a ThreadFactory fails to create a thread when
312 * asked, and when exiting threads are still performing
313 * bookkeeping before terminating. The user-visible pool size is
314 * reported as the current size of the workers set.
315 *
316 * The runState provides the main lifecycle control, taking on values:
317 *
318 * RUNNING: Accept new tasks and process queued tasks
319 * SHUTDOWN: Don't accept new tasks, but process queued tasks
320 * STOP: Don't accept new tasks, don't process queued tasks,
321 * and interrupt in-progress tasks
322 * TIDYING: All tasks have terminated, workerCount is zero,
323 * the thread transitioning to state TIDYING
324 * will run the terminated() hook method
325 * TERMINATED: terminated() has completed
326 *
327 * The numerical order among these values matters, to allow
328 * ordered comparisons. The runState monotonically increases over
329 * time, but need not hit each state. The transitions are:
330 *
331 * RUNNING -> SHUTDOWN
332 * On invocation of shutdown(), perhaps implicitly in finalize()
333 * (RUNNING or SHUTDOWN) -> STOP
334 * On invocation of shutdownNow()
335 * SHUTDOWN -> TIDYING
336 * When both queue and pool are empty
337 * STOP -> TIDYING
338 * When pool is empty
339 * TIDYING -> TERMINATED
340 * When the terminated() hook method has completed
341 *
342 * Threads waiting in awaitTermination() will return when the
343 * state reaches TERMINATED.
344 *
345 * Detecting the transition from SHUTDOWN to TIDYING is less
346 * straightforward than you'd like because the queue may become
347 * empty after non-empty and vice versa during SHUTDOWN state, but
348 * we can only terminate if, after seeing that it is empty, we see
349 * that workerCount is 0 (which sometimes entails a recheck -- see
350 * below).
351 */
352 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
353 private static final int COUNT_BITS = Integer.SIZE - 3;
354 private static final int CAPACITY = (1 << COUNT_BITS) - 1;
355
356 // runState is stored in the high-order bits
357 private static final int RUNNING = -1 << COUNT_BITS;
358 private static final int SHUTDOWN = 0 << COUNT_BITS;
359 private static final int STOP = 1 << COUNT_BITS;
360 private static final int TIDYING = 2 << COUNT_BITS;
361 private static final int TERMINATED = 3 << COUNT_BITS;
362
363 // Packing and unpacking ctl
364 private static int runStateOf(int c) { return c & ~CAPACITY; }
365 private static int workerCountOf(int c) { return c & CAPACITY; }
366 private static int ctlOf(int rs, int wc) { return rs | wc; }
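
// Illustrative decomposition (not used by the implementation): with
// COUNT_BITS == 29, RUNNING == 0xE0000000, so for example
//   ctlOf(RUNNING, 3)         == 0xE0000003
//   workerCountOf(0xE0000003) == 3
//   runStateOf(0xE0000003)    == RUNNING (0xE0000000)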
367
368 /*
369 * Bit field accessors that don't require unpacking ctl.
370 * These depend on the bit layout and on workerCount being never negative.
371 */
372
373 private static boolean runStateLessThan(int c, int s) {
374 return c < s;
375 }
376
377 private static boolean runStateAtLeast(int c, int s) {
378 return c >= s;
379 }
380
381 private static boolean isRunning(int c) {
382 return c < SHUTDOWN;
383 }
384
385 /**
386 * Attempts to CAS-increment the workerCount field of ctl.
387 */
388 private boolean compareAndIncrementWorkerCount(int expect) {
389 return ctl.compareAndSet(expect, expect + 1);
390 }
391
392 /**
393 * Attempts to CAS-decrement the workerCount field of ctl.
394 */
395 private boolean compareAndDecrementWorkerCount(int expect) {
396 return ctl.compareAndSet(expect, expect - 1);
397 }
398
399 /**
400 * Decrements the workerCount field of ctl. This is called only on
401 * abrupt termination of a thread (see processWorkerExit). Other
402 * decrements are performed within getTask.
403 */
404 private void decrementWorkerCount() {
405 do {} while (! compareAndDecrementWorkerCount(ctl.get()));
406 }
407
408 /**
409 * The queue used for holding tasks and handing off to worker
410 * threads. We do not require that workQueue.poll() returning
411 * null necessarily means that workQueue.isEmpty(), so we rely
412 * solely on isEmpty to see if the queue is empty (which we must
413 * do for example when deciding whether to transition from
414 * SHUTDOWN to TIDYING). This accommodates special-purpose
415 * queues such as DelayQueues for which poll() is allowed to
416 * return null even if it may later return non-null when delays
417 * expire.
418 */
419 private final BlockingQueue<Runnable> workQueue;
420
421 /**
422 * Lock held on access to workers set and related bookkeeping.
423 * While we could use a concurrent set of some sort, it turns out
424 * to be generally preferable to use a lock. Among the reasons is
425 * that this serializes interruptIdleWorkers, which avoids
426 * unnecessary interrupt storms, especially during shutdown.
427 * Otherwise exiting threads would concurrently interrupt those
428 * that have not yet interrupted. It also simplifies some of the
429 * associated statistics bookkeeping of largestPoolSize etc. We
430 * also hold mainLock on shutdown and shutdownNow, for the sake of
431 * ensuring workers set is stable while separately checking
432 * permission to interrupt and actually interrupting.
433 */
434 private final ReentrantLock mainLock = new ReentrantLock();
435
436 /**
437 * Set containing all worker threads in pool. Accessed only when
438 * holding mainLock.
439 */
440 private final HashSet<Worker> workers = new HashSet<>();
441
442 /**
443 * Wait condition to support awaitTermination.
444 */
445 private final Condition termination = mainLock.newCondition();
446
447 /**
448 * Tracks largest attained pool size. Accessed only under
449 * mainLock.
450 */
451 private int largestPoolSize;
452
453 /**
454 * Counter for completed tasks. Updated only on termination of
455 * worker threads. Accessed only under mainLock.
456 */
457 private long completedTaskCount;
458
459 /*
460 * All user control parameters are declared as volatiles so that
461 * ongoing actions are based on freshest values, but without need
462 * for locking, since no internal invariants depend on them
463 * changing synchronously with respect to other actions.
464 */
465
466 /**
467 * Factory for new threads. All threads are created using this
468 * factory (via method addWorker). All callers must be prepared
469 * for addWorker to fail, which may reflect a system or user's
470 * policy limiting the number of threads. Even though it is not
471 * treated as an error, failure to create threads may result in
472 * new tasks being rejected or existing ones remaining stuck in
473 * the queue.
474 *
475 * We go further and preserve pool invariants even in the face of
476 * errors such as OutOfMemoryError, that might be thrown while
477 * trying to create threads. Such errors are rather common due to
478 * the need to allocate a native stack in Thread.start, and users
479 * will want to perform clean pool shutdown to clean up. There
480 * will likely be enough memory available for the cleanup code to
481 * complete without encountering yet another OutOfMemoryError.
482 */
483 private volatile ThreadFactory threadFactory;
484
485 /**
486 * Handler called when saturated or shutdown in execute.
487 */
488 private volatile RejectedExecutionHandler handler;
489
490 /**
491 * Timeout in nanoseconds for idle threads waiting for work.
492 * Threads use this timeout when there are more than corePoolSize
493 * present or if allowCoreThreadTimeOut. Otherwise they wait
494 * forever for new work.
495 */
496 private volatile long keepAliveTime;
497
498 /**
499 * If false (default), core threads stay alive even when idle.
500 * If true, core threads use keepAliveTime to time out waiting
501 * for work.
502 */
503 private volatile boolean allowCoreThreadTimeOut;
504
505 /**
506 * Core pool size is the minimum number of workers to keep alive
507 * (and not allow to time out etc) unless allowCoreThreadTimeOut
508 * is set, in which case the minimum is zero.
509 */
510 private volatile int corePoolSize;
511
512 /**
513 * Maximum pool size. Note that the actual maximum is internally
514 * bounded by CAPACITY.
515 */
516 private volatile int maximumPoolSize;
517
518 /**
519 * The default rejected execution handler.
520 */
521 private static final RejectedExecutionHandler defaultHandler =
522 new AbortPolicy();
523
524 /**
525 * Permission required for callers of shutdown and shutdownNow.
526 * We additionally require (see checkShutdownAccess) that callers
527 * have permission to actually interrupt threads in the worker set
528 * (as governed by Thread.interrupt, which relies on
529 * ThreadGroup.checkAccess, which in turn relies on
530 * SecurityManager.checkAccess). Shutdowns are attempted only if
531 * these checks pass.
532 *
533 * All actual invocations of Thread.interrupt (see
534 * interruptIdleWorkers and interruptWorkers) ignore
535 * SecurityExceptions, meaning that the attempted interrupts
536 * silently fail. In the case of shutdown, they should not fail
537 * unless the SecurityManager has inconsistent policies, sometimes
538 * allowing access to a thread and sometimes not. In such cases,
539 * failure to actually interrupt threads may disable or delay full
540 * termination. Other uses of interruptIdleWorkers are advisory,
541 * and failure to actually interrupt will merely delay response to
542 * configuration changes so is not handled exceptionally.
543 */
544 private static final RuntimePermission shutdownPerm =
545 new RuntimePermission("modifyThread");
546
547 /**
548 * Class Worker mainly maintains interrupt control state for
549 * threads running tasks, along with other minor bookkeeping.
550 * This class opportunistically extends AbstractQueuedSynchronizer
551 * to simplify acquiring and releasing a lock surrounding each
552 * task execution. This protects against interrupts that are
553 * intended to wake up a worker thread waiting for a task from
554 * instead interrupting a task being run. We implement a simple
555 * non-reentrant mutual exclusion lock rather than use
556 * ReentrantLock because we do not want worker tasks to be able to
557 * reacquire the lock when they invoke pool control methods like
558 * setCorePoolSize. Additionally, to suppress interrupts until
559 * the thread actually starts running tasks, we initialize lock
560 * state to a negative value, and clear it upon start (in
561 * runWorker).
562 */
563 private final class Worker
564 extends AbstractQueuedSynchronizer
565 implements Runnable
566 {
567 /**
568 * This class will never be serialized, but we provide a
569 * serialVersionUID to suppress a javac warning.
570 */
571 private static final long serialVersionUID = 6138294804551838833L;
572
573 /** Thread this worker is running in. Null if factory fails. */
574 final Thread thread;
575 /** Initial task to run. Possibly null. */
576 Runnable firstTask;
577 /** Per-thread task counter */
578 volatile long completedTasks;
579
580 // TODO: switch to AbstractQueuedLongSynchronizer and move
581 // completedTasks into the lock word.
582
583 /**
584 * Creates with given first task and thread from ThreadFactory.
585 * @param firstTask the first task (null if none)
586 */
587 Worker(Runnable firstTask) {
588 setState(-1); // inhibit interrupts until runWorker
589 this.firstTask = firstTask;
590 this.thread = getThreadFactory().newThread(this);
591 }
592
593 /** Delegates main run loop to outer runWorker. */
594 public void run() {
595 runWorker(this);
596 }
597
598 // Lock methods
599 //
600 // The value 0 represents the unlocked state.
601 // The value 1 represents the locked state.
602
603 protected boolean isHeldExclusively() {
604 return getState() != 0;
605 }
606
607 protected boolean tryAcquire(int unused) {
608 if (compareAndSetState(0, 1)) {
609 setExclusiveOwnerThread(Thread.currentThread());
610 return true;
611 }
612 return false;
613 }
614
615 protected boolean tryRelease(int unused) {
616 setExclusiveOwnerThread(null);
617 setState(0);
618 return true;
619 }
620
621 public void lock() { acquire(1); }
622 public boolean tryLock() { return tryAcquire(1); }
623 public void unlock() { release(1); }
624 public boolean isLocked() { return isHeldExclusively(); }
625
626 void interruptIfStarted() {
627 Thread t;
628 if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
629 try {
630 t.interrupt();
631 } catch (SecurityException ignore) {
632 }
633 }
634 }
635 }
636
637 /*
638 * Methods for setting control state
639 */
640
641 /**
642 * Transitions runState to given target, or leaves it alone if
643 * already at least the given target.
644 *
645 * @param targetState the desired state, either SHUTDOWN or STOP
646 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
647 */
648 private void advanceRunState(int targetState) {
649 // assert targetState == SHUTDOWN || targetState == STOP;
650 for (;;) {
651 int c = ctl.get();
652 if (runStateAtLeast(c, targetState) ||
653 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
654 break;
655 }
656 }
657
658 /**
659 * Transitions to TERMINATED state if either (SHUTDOWN and pool
660 * and queue empty) or (STOP and pool empty). If otherwise
661 * eligible to terminate but workerCount is nonzero, interrupts an
662 * idle worker to ensure that shutdown signals propagate. This
663 * method must be called following any action that might make
664 * termination possible -- reducing worker count or removing tasks
665 * from the queue during shutdown. The method is non-private to
666 * allow access from ScheduledThreadPoolExecutor.
667 */
668 final void tryTerminate() {
669 for (;;) {
670 int c = ctl.get();
671 if (isRunning(c) ||
672 runStateAtLeast(c, TIDYING) ||
673 (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
674 return;
675 if (workerCountOf(c) != 0) { // Eligible to terminate
676 interruptIdleWorkers(ONLY_ONE);
677 return;
678 }
679
680 final ReentrantLock mainLock = this.mainLock;
681 mainLock.lock();
682 try {
683 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
684 try {
685 terminated();
686 } finally {
687 ctl.set(ctlOf(TERMINATED, 0));
688 termination.signalAll();
689 }
690 return;
691 }
692 } finally {
693 mainLock.unlock();
694 }
695 // else retry on failed CAS
696 }
697 }
698
699 /*
700 * Methods for controlling interrupts to worker threads.
701 */
702
703 /**
704 * If there is a security manager, makes sure caller has
705 * permission to shut down threads in general (see shutdownPerm).
706 * If this passes, additionally makes sure the caller is allowed
707 * to interrupt each worker thread. This might not be true even if
708 * first check passed, if the SecurityManager treats some threads
709 * specially.
710 */
711 private void checkShutdownAccess() {
712 SecurityManager security = System.getSecurityManager();
713 if (security != null) {
714 security.checkPermission(shutdownPerm);
715 final ReentrantLock mainLock = this.mainLock;
716 mainLock.lock();
717 try {
718 for (Worker w : workers)
719 security.checkAccess(w.thread);
720 } finally {
721 mainLock.unlock();
722 }
723 }
724 }
725
726 /**
727 * Interrupts all threads, even if active. Ignores SecurityExceptions
728 * (in which case some threads may remain uninterrupted).
729 */
730 private void interruptWorkers() {
731 final ReentrantLock mainLock = this.mainLock;
732 mainLock.lock();
733 try {
734 for (Worker w : workers)
735 w.interruptIfStarted();
736 } finally {
737 mainLock.unlock();
738 }
739 }
740
741 /**
742 * Interrupts threads that might be waiting for tasks (as
743 * indicated by not being locked) so they can check for
744 * termination or configuration changes. Ignores
745 * SecurityExceptions (in which case some threads may remain
746 * uninterrupted).
747 *
748 * @param onlyOne If true, interrupt at most one worker. This is
749 * called only from tryTerminate when termination is otherwise
750 * enabled but there are still other workers. In this case, at
751 * most one waiting worker is interrupted to propagate shutdown
752 * signals in case all threads are currently waiting.
753 * Interrupting any arbitrary thread ensures that newly arriving
754 * workers since shutdown began will also eventually exit.
755 * To guarantee eventual termination, it suffices to always
756 * interrupt only one idle worker, but shutdown() interrupts all
757 * idle workers so that redundant workers exit promptly, not
758 * waiting for a straggler task to finish.
759 */
760 private void interruptIdleWorkers(boolean onlyOne) {
761 final ReentrantLock mainLock = this.mainLock;
762 mainLock.lock();
763 try {
764 for (Worker w : workers) {
765 Thread t = w.thread;
766 if (!t.isInterrupted() && w.tryLock()) {
767 try {
768 t.interrupt();
769 } catch (SecurityException ignore) {
770 } finally {
771 w.unlock();
772 }
773 }
774 if (onlyOne)
775 break;
776 }
777 } finally {
778 mainLock.unlock();
779 }
780 }
781
782 /**
783 * Common form of interruptIdleWorkers, to avoid having to
784 * remember what the boolean argument means.
785 */
786 private void interruptIdleWorkers() {
787 interruptIdleWorkers(false);
788 }
789
790 private static final boolean ONLY_ONE = true;
791
792 /*
793 * Misc utilities, most of which are also exported to
794 * ScheduledThreadPoolExecutor
795 */
796
797 /**
798 * Invokes the rejected execution handler for the given command.
799 * Package-protected for use by ScheduledThreadPoolExecutor.
800 */
801 final void reject(Runnable command) {
802 handler.rejectedExecution(command, this);
803 }
804
805 /**
806 * Performs any further cleanup following run state transition on
807 * invocation of shutdown. A no-op here, but used by
808 * ScheduledThreadPoolExecutor to cancel delayed tasks.
809 */
810 void onShutdown() {
811 }
812
813 /**
814 * State check needed by ScheduledThreadPoolExecutor to
815 * enable running tasks during shutdown.
816 *
817 * @param shutdownOK true if this method should return true when the state is SHUTDOWN
818 */
819 final boolean isRunningOrShutdown(boolean shutdownOK) {
820 int rs = runStateOf(ctl.get());
821 return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
822 }
823
824 /**
825 * Drains the task queue into a new list, normally using
826 * drainTo. But if the queue is a DelayQueue or any other kind of
827 * queue for which poll or drainTo may fail to remove some
828 * elements, it deletes them one by one.
829 */
830 private List<Runnable> drainQueue() {
831 BlockingQueue<Runnable> q = workQueue;
832 ArrayList<Runnable> taskList = new ArrayList<>();
833 q.drainTo(taskList);
834 if (!q.isEmpty()) {
835 for (Runnable r : q.toArray(new Runnable[0])) {
836 if (q.remove(r))
837 taskList.add(r);
838 }
839 }
840 return taskList;
841 }
842
843 /*
844 * Methods for creating, running and cleaning up after workers
845 */
846
847 /**
848 * Checks if a new worker can be added with respect to current
849 * pool state and the given bound (either core or maximum). If so,
850 * the worker count is adjusted accordingly, and, if possible, a
851 * new worker is created and started, running firstTask as its
852 * first task. This method returns false if the pool is stopped or
853 * eligible to shut down. It also returns false if the thread
854 * factory fails to create a thread when asked. If the thread
855 * creation fails, either due to the thread factory returning
856 * null, or due to an exception (typically OutOfMemoryError in
857 * Thread.start()), we roll back cleanly.
858 *
859 * @param firstTask the task the new thread should run first (or
860 * null if none). Workers are created with an initial first task
861 * (in method execute()) to bypass queuing when there are fewer
862 * than corePoolSize threads (in which case we always start one),
863 * or when the queue is full (in which case we must bypass the queue).
864 * Initially idle threads are usually created via
865 * prestartCoreThread or to replace other dying workers.
866 *
867 * @param core if true use corePoolSize as bound, else
868 * maximumPoolSize. (A boolean indicator is used here rather than a
869 * value to ensure reads of fresh values after checking other pool
870 * state).
871 * @return true if successful
872 */
873 private boolean addWorker(Runnable firstTask, boolean core) {
874 retry:
875 for (;;) {
876 int c = ctl.get();
877 int rs = runStateOf(c);
878
879 // Check if queue empty only if necessary.
880 if (rs >= SHUTDOWN &&
881 ! (rs == SHUTDOWN &&
882 firstTask == null &&
883 ! workQueue.isEmpty()))
884 return false;
885
886 for (;;) {
887 int wc = workerCountOf(c);
888 if (wc >= CAPACITY ||
889 wc >= (core ? corePoolSize : maximumPoolSize))
890 return false;
891 if (compareAndIncrementWorkerCount(c))
892 break retry;
893 c = ctl.get(); // Re-read ctl
894 if (runStateOf(c) != rs)
895 continue retry;
896 // else CAS failed due to workerCount change; retry inner loop
897 }
898 }
899
900 boolean workerStarted = false;
901 boolean workerAdded = false;
902 Worker w = null;
903 try {
904 w = new Worker(firstTask);
905 final Thread t = w.thread;
906 if (t != null) {
907 final ReentrantLock mainLock = this.mainLock;
908 mainLock.lock();
909 try {
910 // Recheck while holding lock.
911 // Back out on ThreadFactory failure or if
912 // shut down before lock acquired.
913 int rs = runStateOf(ctl.get());
914
915 if (rs < SHUTDOWN ||
916 (rs == SHUTDOWN && firstTask == null)) {
917 if (t.isAlive()) // precheck that t is startable
918 throw new IllegalThreadStateException();
919 workers.add(w);
920 int s = workers.size();
921 if (s > largestPoolSize)
922 largestPoolSize = s;
923 workerAdded = true;
924 }
925 } finally {
926 mainLock.unlock();
927 }
928 if (workerAdded) {
929 t.start();
930 workerStarted = true;
931 }
932 }
933 } finally {
934 if (! workerStarted)
935 addWorkerFailed(w);
936 }
937 return workerStarted;
938 }
939
940 /**
941 * Rolls back the worker thread creation.
942 * - removes worker from workers, if present
943 * - decrements worker count
944 * - rechecks for termination, in case the existence of this
945 * worker was holding up termination
946 */
947 private void addWorkerFailed(Worker w) {
948 final ReentrantLock mainLock = this.mainLock;
949 mainLock.lock();
950 try {
951 if (w != null)
952 workers.remove(w);
953 decrementWorkerCount();
954 tryTerminate();
955 } finally {
956 mainLock.unlock();
957 }
958 }
959
960 /**
961 * Performs cleanup and bookkeeping for a dying worker. Called
962 * only from worker threads. Unless completedAbruptly is set,
963 * assumes that workerCount has already been adjusted to account
964 * for exit. This method removes thread from worker set, and
965 * possibly terminates the pool or replaces the worker if either
966 * it exited due to user task exception or if fewer than
967 * corePoolSize workers are running or queue is non-empty but
968 * there are no workers.
969 *
970 * @param w the worker
971 * @param completedAbruptly if the worker died due to user exception
972 */
973 private void processWorkerExit(Worker w, boolean completedAbruptly) {
974 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
975 decrementWorkerCount();
976
977 final ReentrantLock mainLock = this.mainLock;
978 mainLock.lock();
979 try {
980 completedTaskCount += w.completedTasks;
981 workers.remove(w);
982 } finally {
983 mainLock.unlock();
984 }
985
986 tryTerminate();
987
988 int c = ctl.get();
989 if (runStateLessThan(c, STOP)) {
990 if (!completedAbruptly) {
991 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
992 if (min == 0 && ! workQueue.isEmpty())
993 min = 1;
994 if (workerCountOf(c) >= min)
995 return; // replacement not needed
996 }
997 addWorker(null, false);
998 }
999 }
1000
1001 /**
1002 * Performs blocking or timed wait for a task, depending on
1003 * current configuration settings, or returns null if this worker
1004 * must exit because of any of:
1005 * 1. There are more than maximumPoolSize workers (due to
1006 * a call to setMaximumPoolSize).
1007 * 2. The pool is stopped.
1008 * 3. The pool is shutdown and the queue is empty.
1009 * 4. This worker timed out waiting for a task, and timed-out
1010 * workers are subject to termination (that is,
1011 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
1012 * both before and after the timed wait, and if the queue is
1013 * non-empty, this worker is not the last thread in the pool.
1014 *
1015 * @return task, or null if the worker must exit, in which case
1016 * workerCount is decremented
1017 */
1018 private Runnable getTask() {
1019 boolean timedOut = false; // Did the last poll() time out?
1020
1021 for (;;) {
1022 int c = ctl.get();
1023 int rs = runStateOf(c);
1024
1025 // Check if queue empty only if necessary.
1026 if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
1027 decrementWorkerCount();
1028 return null;
1029 }
1030
1031 int wc = workerCountOf(c);
1032
1033 // Are workers subject to culling?
1034 boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
1035
1036 if ((wc > maximumPoolSize || (timed && timedOut))
1037 && (wc > 1 || workQueue.isEmpty())) {
1038 if (compareAndDecrementWorkerCount(c))
1039 return null;
1040 continue;
1041 }
1042
1043 try {
1044 Runnable r = timed ?
1045 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
1046 workQueue.take();
1047 if (r != null)
1048 return r;
1049 timedOut = true;
1050 } catch (InterruptedException retry) {
1051 timedOut = false;
1052 }
1053 }
1054 }
1055
1056 /**
1057 * Main worker run loop. Repeatedly gets tasks from queue and
1058 * executes them, while coping with a number of issues:
1059 *
1060 * 1. We may start out with an initial task, in which case we
1061 * don't need to get the first one. Otherwise, as long as pool is
1062 * running, we get tasks from getTask. If it returns null then the
1063 * worker exits due to changed pool state or configuration
1064 * parameters. Other exits result from exception throws in
1065 * external code, in which case completedAbruptly holds, which
1066 * usually leads processWorkerExit to replace this thread.
1067 *
1068 * 2. Before running any task, the lock is acquired to prevent
1069 * other pool interrupts while the task is executing, and then we
1070 * ensure that unless pool is stopping, this thread does not have
1071 * its interrupt set.
1072 *
1073 * 3. Each task run is preceded by a call to beforeExecute, which
1074 * might throw an exception, in which case we cause thread to die
1075 * (breaking loop with completedAbruptly true) without processing
1076 * the task.
1077 *
1078 * 4. Assuming beforeExecute completes normally, we run the task,
1079 * gathering any of its thrown exceptions to send to afterExecute.
1080 * We separately handle RuntimeException, Error (both of which the
1081 * specs guarantee that we trap) and arbitrary Throwables.
1082 * Because we cannot rethrow Throwables within Runnable.run, we
1083 * wrap them within Errors on the way out (to the thread's
1084 * UncaughtExceptionHandler). Any thrown exception also
1085 * conservatively causes thread to die.
1086 *
1087 * 5. After task.run completes, we call afterExecute, which may
1088 * also throw an exception, which will also cause thread to
1089 * die. According to JLS Sec 14.20, this exception is the one that
1090 * will be in effect even if task.run throws.
1091 *
1092 * The net effect of the exception mechanics is that afterExecute
1093 * and the thread's UncaughtExceptionHandler have as accurate
1094 * information as we can provide about any problems encountered by
1095 * user code.
1096 *
1097 * @param w the worker
1098 */
1099 final void runWorker(Worker w) {
1100 Thread wt = Thread.currentThread();
1101 Runnable task = w.firstTask;
1102 w.firstTask = null;
1103 w.unlock(); // allow interrupts
1104 boolean completedAbruptly = true;
1105 try {
1106 while (task != null || (task = getTask()) != null) {
1107 w.lock();
1108 // If pool is stopping, ensure thread is interrupted;
1109 // if not, ensure thread is not interrupted. This
1110 // requires a recheck in second case to deal with
1111 // shutdownNow race while clearing interrupt
1112 if ((runStateAtLeast(ctl.get(), STOP) ||
1113 (Thread.interrupted() &&
1114 runStateAtLeast(ctl.get(), STOP))) &&
1115 !wt.isInterrupted())
1116 wt.interrupt();
1117 try {
1118 beforeExecute(wt, task);
1119 Throwable thrown = null;
1120 try {
1121 task.run();
1122 } catch (RuntimeException x) {
1123 thrown = x; throw x;
1124 } catch (Error x) {
1125 thrown = x; throw x;
1126 } catch (Throwable x) {
1127 thrown = x; throw new Error(x);
1128 } finally {
1129 afterExecute(task, thrown);
1130 }
1131 } finally {
1132 task = null;
1133 w.completedTasks++;
1134 w.unlock();
1135 }
1136 }
1137 completedAbruptly = false;
1138 } finally {
1139 processWorkerExit(w, completedAbruptly);
1140 }
1141 }
1142
1143 // Public constructors and methods
1144
1145 /**
1146 * Creates a new {@code ThreadPoolExecutor} with the given initial
1147 * parameters, the default thread factory and the default rejected
1148 * execution handler.
1149 *
1150 * <p>It may be more convenient to use one of the {@link Executors}
1151 * factory methods instead of this general purpose constructor.
1152 *
1153 * @param corePoolSize the number of threads to keep in the pool, even
1154 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1155 * @param maximumPoolSize the maximum number of threads to allow in the
1156 * pool
1157 * @param keepAliveTime when the number of threads is greater than
1158 * the core, this is the maximum time that excess idle threads
1159 * will wait for new tasks before terminating.
1160 * @param unit the time unit for the {@code keepAliveTime} argument
1161 * @param workQueue the queue to use for holding tasks before they are
1162 * executed. This queue will hold only the {@code Runnable}
1163 * tasks submitted by the {@code execute} method.
1164 * @throws IllegalArgumentException if one of the following holds:<br>
1165 * {@code corePoolSize < 0}<br>
1166 * {@code keepAliveTime < 0}<br>
1167 * {@code maximumPoolSize <= 0}<br>
1168 * {@code maximumPoolSize < corePoolSize}
1169 * @throws NullPointerException if {@code workQueue} is null
1170 */
1171 public ThreadPoolExecutor(int corePoolSize,
1172 int maximumPoolSize,
1173 long keepAliveTime,
1174 TimeUnit unit,
1175 BlockingQueue<Runnable> workQueue) {
1176 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1177 Executors.defaultThreadFactory(), defaultHandler);
1178 }
1179
1180 /**
1181 * Creates a new {@code ThreadPoolExecutor} with the given initial
1182 * parameters and
1183 * {@linkplain ThreadPoolExecutor.AbortPolicy
1184 * default rejected execution handler}.
1185 *
1186 * @param corePoolSize the number of threads to keep in the pool, even
1187 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1188 * @param maximumPoolSize the maximum number of threads to allow in the
1189 * pool
1190 * @param keepAliveTime when the number of threads is greater than
1191 * the core, this is the maximum time that excess idle threads
1192 * will wait for new tasks before terminating.
1193 * @param unit the time unit for the {@code keepAliveTime} argument
1194 * @param workQueue the queue to use for holding tasks before they are
1195 * executed. This queue will hold only the {@code Runnable}
1196 * tasks submitted by the {@code execute} method.
1197 * @param threadFactory the factory to use when the executor
1198 * creates a new thread
1199 * @throws IllegalArgumentException if one of the following holds:<br>
1200 * {@code corePoolSize < 0}<br>
1201 * {@code keepAliveTime < 0}<br>
1202 * {@code maximumPoolSize <= 0}<br>
1203 * {@code maximumPoolSize < corePoolSize}
1204 * @throws NullPointerException if {@code workQueue}
1205 * or {@code threadFactory} is null
1206 */
1207 public ThreadPoolExecutor(int corePoolSize,
1208 int maximumPoolSize,
1209 long keepAliveTime,
1210 TimeUnit unit,
1211 BlockingQueue<Runnable> workQueue,
1212 ThreadFactory threadFactory) {
1213 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1214 threadFactory, defaultHandler);
1215 }
1216
1217 /**
1218 * Creates a new {@code ThreadPoolExecutor} with the given initial
1219 * parameters and
1220 * {@linkplain Executors#defaultThreadFactory default thread factory}.
1221 *
1222 * @param corePoolSize the number of threads to keep in the pool, even
1223 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1224 * @param maximumPoolSize the maximum number of threads to allow in the
1225 * pool
1226 * @param keepAliveTime when the number of threads is greater than
1227 * the core, this is the maximum time that excess idle threads
1228 * will wait for new tasks before terminating.
1229 * @param unit the time unit for the {@code keepAliveTime} argument
1230 * @param workQueue the queue to use for holding tasks before they are
1231 * executed. This queue will hold only the {@code Runnable}
1232 * tasks submitted by the {@code execute} method.
1233 * @param handler the handler to use when execution is blocked
1234 * because the thread bounds and queue capacities are reached
1235 * @throws IllegalArgumentException if one of the following holds:<br>
1236 * {@code corePoolSize < 0}<br>
1237 * {@code keepAliveTime < 0}<br>
1238 * {@code maximumPoolSize <= 0}<br>
1239 * {@code maximumPoolSize < corePoolSize}
1240 * @throws NullPointerException if {@code workQueue}
1241 * or {@code handler} is null
1242 */
1243 public ThreadPoolExecutor(int corePoolSize,
1244 int maximumPoolSize,
1245 long keepAliveTime,
1246 TimeUnit unit,
1247 BlockingQueue<Runnable> workQueue,
1248 RejectedExecutionHandler handler) {
1249 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1250 Executors.defaultThreadFactory(), handler);
1251 }
1252
1253 /**
1254 * Creates a new {@code ThreadPoolExecutor} with the given initial
1255 * parameters.
1256 *
1257 * @param corePoolSize the number of threads to keep in the pool, even
1258 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1259 * @param maximumPoolSize the maximum number of threads to allow in the
1260 * pool
1261 * @param keepAliveTime when the number of threads is greater than
1262 * the core, this is the maximum time that excess idle threads
1263 * will wait for new tasks before terminating.
1264 * @param unit the time unit for the {@code keepAliveTime} argument
1265 * @param workQueue the queue to use for holding tasks before they are
1266 * executed. This queue will hold only the {@code Runnable}
1267 * tasks submitted by the {@code execute} method.
1268 * @param threadFactory the factory to use when the executor
1269 * creates a new thread
1270 * @param handler the handler to use when execution is blocked
1271 * because the thread bounds and queue capacities are reached
1272 * @throws IllegalArgumentException if one of the following holds:<br>
1273 * {@code corePoolSize < 0}<br>
1274 * {@code keepAliveTime < 0}<br>
1275 * {@code maximumPoolSize <= 0}<br>
1276 * {@code maximumPoolSize < corePoolSize}
1277 * @throws NullPointerException if {@code workQueue}
1278 * or {@code threadFactory} or {@code handler} is null
1279 */
1280 public ThreadPoolExecutor(int corePoolSize,
1281 int maximumPoolSize,
1282 long keepAliveTime,
1283 TimeUnit unit,
1284 BlockingQueue<Runnable> workQueue,
1285 ThreadFactory threadFactory,
1286 RejectedExecutionHandler handler) {
1287 if (corePoolSize < 0 ||
1288 maximumPoolSize <= 0 ||
1289 maximumPoolSize < corePoolSize ||
1290 keepAliveTime < 0)
1291 throw new IllegalArgumentException();
1292 if (workQueue == null || threadFactory == null || handler == null)
1293 throw new NullPointerException();
1294 this.corePoolSize = corePoolSize;
1295 this.maximumPoolSize = maximumPoolSize;
1296 this.workQueue = workQueue;
1297 this.keepAliveTime = unit.toNanos(keepAliveTime);
1298 this.threadFactory = threadFactory;
1299 this.handler = handler;
1300 }
1301
1302 /**
1303 * Executes the given task sometime in the future. The task
1304 * may execute in a new thread or in an existing pooled thread.
1305 *
1306 * If the task cannot be submitted for execution, either because this
1307 * executor has been shutdown or because its capacity has been reached,
1308 * the task is handled by the current {@code RejectedExecutionHandler}.
1309 *
1310 * @param command the task to execute
1311 * @throws RejectedExecutionException at discretion of
1312 * {@code RejectedExecutionHandler}, if the task
1313 * cannot be accepted for execution
1314 * @throws NullPointerException if {@code command} is null
1315 */
1316 public void execute(Runnable command) {
1317 if (command == null)
1318 throw new NullPointerException();
1319 /*
1320 * Proceed in 3 steps:
1321 *
1322 * 1. If fewer than corePoolSize threads are running, try to
1323 * start a new thread with the given command as its first
1324 * task. The call to addWorker atomically checks runState and
1325 * workerCount, and so prevents false alarms that would add
1326 * threads when it shouldn't, by returning false.
1327 *
1328 * 2. If a task can be successfully queued, then we still need
1329 * to double-check whether we should have added a thread
1330 * (because existing ones died since last checking) or that
1331 * the pool shut down since entry into this method. So we
1332 * recheck state and if necessary roll back the enqueuing if
1333 * stopped, or start a new thread if there are none.
1334 *
1335 * 3. If we cannot queue task, then we try to add a new
1336 * thread. If it fails, we know we are shut down or saturated
1337 * and so reject the task.
1338 */
1339 int c = ctl.get();
1340 if (workerCountOf(c) < corePoolSize) {
1341 if (addWorker(command, true))
1342 return;
1343 c = ctl.get();
1344 }
1345 if (isRunning(c) && workQueue.offer(command)) {
1346 int recheck = ctl.get();
1347 if (! isRunning(recheck) && remove(command))
1348 reject(command);
1349 else if (workerCountOf(recheck) == 0)
1350 addWorker(null, false);
1351 }
1352 else if (!addWorker(command, false))
1353 reject(command);
1354 }
1355
1356 /**
1357 * Initiates an orderly shutdown in which previously submitted
1358 * tasks are executed, but no new tasks will be accepted.
1359 * Invocation has no additional effect if already shut down.
1360 *
1361 * <p>This method does not wait for previously submitted tasks to
1362 * complete execution. Use {@link #awaitTermination awaitTermination}
1363 * to do that.
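*
* <p>A common pattern (an illustrative sketch, not prescribed by this
* class; {@code pool} stands for this executor) is to initiate an
* orderly shutdown, wait a bounded time for termination, and then fall
* back to {@link #shutdownNow}:
*
* <pre> {@code
* pool.shutdown();
* try {
*   if (!pool.awaitTermination(60L, TimeUnit.SECONDS))
*     pool.shutdownNow();
* } catch (InterruptedException e) {
*   pool.shutdownNow();
*   Thread.currentThread().interrupt();
* }}</pre>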
1364 *
1365 * @throws SecurityException {@inheritDoc}
1366 */
1367 public void shutdown() {
1368 final ReentrantLock mainLock = this.mainLock;
1369 mainLock.lock();
1370 try {
1371 checkShutdownAccess();
1372 advanceRunState(SHUTDOWN);
1373 interruptIdleWorkers();
1374 onShutdown(); // hook for ScheduledThreadPoolExecutor
1375 } finally {
1376 mainLock.unlock();
1377 }
1378 tryTerminate();
1379 }
1380
1381 /**
1382 * Attempts to stop all actively executing tasks, halts the
1383 * processing of waiting tasks, and returns a list of the tasks
1384 * that were awaiting execution. These tasks are drained (removed)
1385 * from the task queue upon return from this method.
1386 *
1387 * <p>This method does not wait for actively executing tasks to
1388 * terminate. Use {@link #awaitTermination awaitTermination} to
1389 * do that.
1390 *
1391 * <p>There are no guarantees beyond best-effort attempts to stop
1392 * processing actively executing tasks. This implementation
1393 * interrupts tasks via {@link Thread#interrupt}; any task that
1394 * fails to respond to interrupts may never terminate.
1395 *
1396 * @throws SecurityException {@inheritDoc}
1397 */
1398 public List<Runnable> shutdownNow() {
1399 List<Runnable> tasks;
1400 final ReentrantLock mainLock = this.mainLock;
1401 mainLock.lock();
1402 try {
1403 checkShutdownAccess();
1404 advanceRunState(STOP);
1405 interruptWorkers();
1406 tasks = drainQueue();
1407 } finally {
1408 mainLock.unlock();
1409 }
1410 tryTerminate();
1411 return tasks;
1412 }
1413
1414 public boolean isShutdown() {
1415 return ! isRunning(ctl.get());
1416 }
1417
1418 /**
1419 * Returns true if this executor is in the process of terminating
1420 * after {@link #shutdown} or {@link #shutdownNow} but has not
1421 * completely terminated. This method may be useful for
1422 * debugging. A return of {@code true} reported a sufficient
1423 * period after shutdown may indicate that submitted tasks have
1424 * ignored or suppressed interruption, causing this executor not
1425 * to properly terminate.
1426 *
1427 * @return {@code true} if terminating but not yet terminated
1428 */
1429 public boolean isTerminating() {
1430 int c = ctl.get();
1431 return ! isRunning(c) && runStateLessThan(c, TERMINATED);
1432 }
1433
1434 public boolean isTerminated() {
1435 return runStateAtLeast(ctl.get(), TERMINATED);
1436 }
1437
1438 public boolean awaitTermination(long timeout, TimeUnit unit)
1439 throws InterruptedException {
1440 long nanos = unit.toNanos(timeout);
1441 final ReentrantLock mainLock = this.mainLock;
1442 mainLock.lock();
1443 try {
1444 while (!runStateAtLeast(ctl.get(), TERMINATED)) {
1445 if (nanos <= 0L)
1446 return false;
1447 nanos = termination.awaitNanos(nanos);
1448 }
1449 return true;
1450 } finally {
1451 mainLock.unlock();
1452 }
1453 }
1454
1455 /**
1456 * Invokes {@code shutdown} when this executor is no longer
1457 * referenced and it has no threads.
1458 */
1459 protected void finalize() {
1460 shutdown();
1461 }
1462
1463 /**
1464 * Sets the thread factory used to create new threads.
1465 *
1466 * @param threadFactory the new thread factory
1467 * @throws NullPointerException if threadFactory is null
1468 * @see #getThreadFactory
1469 */
1470 public void setThreadFactory(ThreadFactory threadFactory) {
1471 if (threadFactory == null)
1472 throw new NullPointerException();
1473 this.threadFactory = threadFactory;
1474 }
1475
1476 /**
1477 * Returns the thread factory used to create new threads.
1478 *
1479 * @return the current thread factory
1480 * @see #setThreadFactory(ThreadFactory)
1481 */
1482 public ThreadFactory getThreadFactory() {
1483 return threadFactory;
1484 }
1485
1486 /**
1487 * Sets a new handler for unexecutable tasks.
1488 *
1489 * @param handler the new handler
1490 * @throws NullPointerException if handler is null
1491 * @see #getRejectedExecutionHandler
1492 */
1493 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1494 if (handler == null)
1495 throw new NullPointerException();
1496 this.handler = handler;
1497 }
1498
1499 /**
1500 * Returns the current handler for unexecutable tasks.
1501 *
1502 * @return the current handler
1503 * @see #setRejectedExecutionHandler(RejectedExecutionHandler)
1504 */
1505 public RejectedExecutionHandler getRejectedExecutionHandler() {
1506 return handler;
1507 }
1508
1509 /**
1510 * Sets the core number of threads. This overrides any value set
1511 * in the constructor. If the new value is smaller than the
1512 * current value, excess existing threads will be terminated when
1513 * they next become idle. If larger, new threads will, if needed,
1514 * be started to execute any queued tasks.
1515 *
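 * <p>Because the core size may never exceed the maximum size, code that
 * adjusts both bounds at runtime should order the two setters so that
 * the invariant holds at every step, as in this sketch (the method name
 * {@code resize} is illustrative and assumes
 * {@code 0 <= newCore <= newMax} with {@code newMax > 0}):
 *
 * <pre> {@code
 * void resize(ThreadPoolExecutor pool, int newCore, int newMax) {
 *   if (newMax >= pool.getCorePoolSize()) {
 *     pool.setMaximumPoolSize(newMax);  // grow the ceiling first
 *     pool.setCorePoolSize(newCore);
 *   } else {
 *     pool.setCorePoolSize(newCore);    // shrink the core first
 *     pool.setMaximumPoolSize(newMax);
 *   }
 * }}</pre>
 *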
1516 * @param corePoolSize the new core size
1517 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1518 * or {@code corePoolSize} is greater than the {@linkplain
1519 * #getMaximumPoolSize() maximum pool size}
1520 * @see #getCorePoolSize
1521 */
1522 public void setCorePoolSize(int corePoolSize) {
1523 if (corePoolSize < 0 || maximumPoolSize < corePoolSize)
1524 throw new IllegalArgumentException();
1525 int delta = corePoolSize - this.corePoolSize;
1526 this.corePoolSize = corePoolSize;
1527 if (workerCountOf(ctl.get()) > corePoolSize)
1528 interruptIdleWorkers();
1529 else if (delta > 0) {
1530 // We don't really know how many new threads are "needed".
1531 // As a heuristic, prestart enough new workers (up to new
1532 // core size) to handle the current number of tasks in
1533 // queue, but stop if queue becomes empty while doing so.
1534 int k = Math.min(delta, workQueue.size());
1535 while (k-- > 0 && addWorker(null, true)) {
1536 if (workQueue.isEmpty())
1537 break;
1538 }
1539 }
1540 }
1541
1542 /**
1543 * Returns the core number of threads.
1544 *
1545 * @return the core number of threads
1546 * @see #setCorePoolSize
1547 */
1548 public int getCorePoolSize() {
1549 return corePoolSize;
1550 }
1551
1552 /**
1553 * Starts a core thread, causing it to idly wait for work. This
1554 * overrides the default policy of starting core threads only when
1555 * new tasks are executed. This method will return {@code false}
1556 * if all core threads have already been started.
1557 *
1558 * @return {@code true} if a thread was started
1559 */
1560 public boolean prestartCoreThread() {
1561 return workerCountOf(ctl.get()) < corePoolSize &&
1562 addWorker(null, true);
1563 }
1564
1565 /**
1566 * Same as prestartCoreThread except that it arranges for at least one
1567 * thread to be started even if corePoolSize is 0.
1568 */
1569 void ensurePrestart() {
1570 int wc = workerCountOf(ctl.get());
1571 if (wc < corePoolSize)
1572 addWorker(null, true);
1573 else if (wc == 0)
1574 addWorker(null, false);
1575 }
1576
1577 /**
1578 * Starts all core threads, causing them to idly wait for work. This
1579 * overrides the default policy of starting core threads only when
1580 * new tasks are executed.
1581 *
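 * <p>For example (a sketch), a fixed-size pool whose worker threads are
 * created eagerly at construction time rather than as tasks arrive:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     8, 8, 0L, TimeUnit.MILLISECONDS,
 *     new LinkedBlockingQueue<Runnable>());
 * int started = pool.prestartAllCoreThreads(); // typically 8
 * }</pre>
 *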
1582 * @return the number of threads started
1583 */
1584 public int prestartAllCoreThreads() {
1585 int n = 0;
1586 while (addWorker(null, true))
1587 ++n;
1588 return n;
1589 }
1590
1591 /**
1592 * Returns true if this pool allows core threads to time out and
1593 * terminate if no tasks arrive within the keep-alive time, being
1594 * replaced if needed when new tasks arrive. When true, the same
1595 * keep-alive policy applying to non-core threads applies also to
1596 * core threads. When false (the default), core threads are never
1597 * terminated due to lack of incoming tasks.
1598 *
1599 * @return {@code true} if core threads are allowed to time out,
1600 * else {@code false}
1601 *
1602 * @since 1.6
1603 */
1604 public boolean allowsCoreThreadTimeOut() {
1605 return allowCoreThreadTimeOut;
1606 }
1607
1608 /**
1609 * Sets the policy governing whether core threads may time out and
1610 * terminate if no tasks arrive within the keep-alive time, being
1611 * replaced if needed when new tasks arrive. When false, core
1612 * threads are never terminated due to lack of incoming
1613 * tasks. When true, the same keep-alive policy applying to
1614 * non-core threads applies also to core threads. To avoid
1615 * continual thread replacement, the keep-alive time must be
1616 * greater than zero when setting {@code true}. This method
1617 * should in general be called before the pool is actively used.
1618 *
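 * <p>For example (a sketch), a pool with four core threads that releases
 * all of its threads after 30 seconds without work:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 4, 30L, TimeUnit.SECONDS,
 *     new LinkedBlockingQueue<Runnable>());
 * pool.allowCoreThreadTimeOut(true);
 * }</pre>
 *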
1619 * @param value {@code true} if should time out, else {@code false}
1620 * @throws IllegalArgumentException if value is {@code true}
1621 * and the current keep-alive time is not greater than zero
1622 *
1623 * @since 1.6
1624 */
1625 public void allowCoreThreadTimeOut(boolean value) {
1626 if (value && keepAliveTime <= 0)
1627 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1628 if (value != allowCoreThreadTimeOut) {
1629 allowCoreThreadTimeOut = value;
1630 if (value)
1631 interruptIdleWorkers();
1632 }
1633 }
1634
1635 /**
1636 * Sets the maximum allowed number of threads. This overrides any
1637 * value set in the constructor. If the new value is smaller than
1638 * the current value, excess existing threads will be
1639 * terminated when they next become idle.
1640 *
1641 * @param maximumPoolSize the new maximum
1642 * @throws IllegalArgumentException if the new maximum is
1643 * less than or equal to zero, or
1644 * less than the {@linkplain #getCorePoolSize core pool size}
1645 * @see #getMaximumPoolSize
1646 */
1647 public void setMaximumPoolSize(int maximumPoolSize) {
1648 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1649 throw new IllegalArgumentException();
1650 this.maximumPoolSize = maximumPoolSize;
1651 if (workerCountOf(ctl.get()) > maximumPoolSize)
1652 interruptIdleWorkers();
1653 }
1654
1655 /**
1656 * Returns the maximum allowed number of threads.
1657 *
1658 * @return the maximum allowed number of threads
1659 * @see #setMaximumPoolSize
1660 */
1661 public int getMaximumPoolSize() {
1662 return maximumPoolSize;
1663 }
1664
1665 /**
1666 * Sets the thread keep-alive time, which is the amount of time
1667 * that threads may remain idle before being terminated.
1668 * Threads that wait this amount of time without processing a
1669 * task will be terminated if there are more than the core
1670 * number of threads currently in the pool, or if this pool
1671 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1672 * This overrides any value set in the constructor.
1673 *
1674 * @param time the time to wait. A time value of zero will cause
1675 * excess threads to terminate immediately after executing tasks.
1676 * @param unit the time unit of the {@code time} argument
1677 * @throws IllegalArgumentException if {@code time} is less than zero,
1678 * or if {@code time} is zero and {@code allowsCoreThreadTimeOut} is true
1679 * @see #getKeepAliveTime(TimeUnit)
1680 */
1681 public void setKeepAliveTime(long time, TimeUnit unit) {
1682 if (time < 0)
1683 throw new IllegalArgumentException();
1684 if (time == 0 && allowsCoreThreadTimeOut())
1685 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1686 long keepAliveTime = unit.toNanos(time);
1687 long delta = keepAliveTime - this.keepAliveTime;
1688 this.keepAliveTime = keepAliveTime;
1689 if (delta < 0)
1690 interruptIdleWorkers();
1691 }
1692
1693 /**
1694 * Returns the thread keep-alive time, which is the amount of time
1695 * that threads may remain idle before being terminated.
1696 * Threads that wait this amount of time without processing a
1697 * task will be terminated if there are more than the core
1698 * number of threads currently in the pool, or if this pool
1699 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1700 *
1701 * @param unit the desired time unit of the result
1702 * @return the time limit
1703 * @see #setKeepAliveTime(long, TimeUnit)
1704 */
1705 public long getKeepAliveTime(TimeUnit unit) {
1706 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1707 }
1708
1709 /* User-level queue utilities */
1710
1711 /**
1712 * Returns the task queue used by this executor. Access to the
1713 * task queue is intended primarily for debugging and monitoring.
1714 * This queue may be in active use. Retrieving the task queue
1715 * does not prevent queued tasks from executing.
1716 *
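 * <p>For example, a monitoring sketch that samples pool health
 * ({@code pool} is an existing {@code ThreadPoolExecutor}; all reported
 * values are approximations, as described by the individual accessors):
 *
 * <pre> {@code
 * System.out.printf("threads=%d active=%d queued=%d completed=%d%n",
 *     pool.getPoolSize(), pool.getActiveCount(),
 *     pool.getQueue().size(), pool.getCompletedTaskCount());
 * }</pre>
 *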
1717 * @return the task queue
1718 */
1719 public BlockingQueue<Runnable> getQueue() {
1720 return workQueue;
1721 }
1722
1723 /**
1724 * Removes this task from the executor's internal queue if it is
1725 * present, thus causing it not to be run if it has not already
1726 * started.
1727 *
1728 * <p>This method may be useful as one part of a cancellation
1729 * scheme. It may fail to remove tasks that have been converted
1730 * into other forms before being placed on the internal queue.
1731 * For example, a task entered using {@code submit} might be
1732 * converted into a form that maintains {@code Future} status.
1733 * However, in such cases, method {@link #purge} may be used to
1734 * remove those Futures that have been cancelled.
1735 *
1736 * @param task the task to remove
1737 * @return {@code true} if the task was removed
1738 */
1739 public boolean remove(Runnable task) {
1740 boolean removed = workQueue.remove(task);
1741 tryTerminate(); // In case SHUTDOWN and now empty
1742 return removed;
1743 }
1744
1745 /**
1746 * Tries to remove from the work queue all {@link Future}
1747 * tasks that have been cancelled. This method can be useful as a
1748 * storage reclamation operation, that has no other impact on
1749 * functionality. Cancelled tasks are never executed, but may
1750 * accumulate in work queues until worker threads can actively
1751 * remove them. Invoking this method instead tries to remove them now.
1752 * However, this method may fail to remove tasks in
1753 * the presence of interference by other threads.
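 *
 * <p>For example (a sketch; {@code pool} and {@code futures} are
 * assumed), after cancelling a batch of submitted tasks:
 *
 * <pre> {@code
 * for (Future<?> f : futures)
 *   f.cancel(true);  // cancelled FutureTasks may remain queued
 * pool.purge();      // remove them from the work queue now
 * }</pre>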
1754 */
1755 public void purge() {
1756 final BlockingQueue<Runnable> q = workQueue;
1757 try {
1758 Iterator<Runnable> it = q.iterator();
1759 while (it.hasNext()) {
1760 Runnable r = it.next();
1761 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1762 it.remove();
1763 }
1764 } catch (ConcurrentModificationException fallThrough) {
1765 // Take slow path if we encounter interference during traversal.
1766 // Make copy for traversal and call remove for cancelled entries.
1767 // The slow path is more likely to be O(N*N).
1768 for (Object r : q.toArray())
1769 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1770 q.remove(r);
1771 }
1772
1773 tryTerminate(); // In case SHUTDOWN and now empty
1774 }
1775
1776 /* Statistics */
1777
1778 /**
1779 * Returns the current number of threads in the pool.
1780 *
1781 * @return the number of threads
1782 */
1783 public int getPoolSize() {
1784 final ReentrantLock mainLock = this.mainLock;
1785 mainLock.lock();
1786 try {
1787 // Remove rare and surprising possibility of
1788 // isTerminated() && getPoolSize() > 0
1789 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1790 : workers.size();
1791 } finally {
1792 mainLock.unlock();
1793 }
1794 }
1795
1796 /**
1797 * Returns the approximate number of threads that are actively
1798 * executing tasks.
1799 *
1800 * @return the number of threads
1801 */
1802 public int getActiveCount() {
1803 final ReentrantLock mainLock = this.mainLock;
1804 mainLock.lock();
1805 try {
1806 int n = 0;
1807 for (Worker w : workers)
1808 if (w.isLocked())
1809 ++n;
1810 return n;
1811 } finally {
1812 mainLock.unlock();
1813 }
1814 }
1815
1816 /**
1817 * Returns the largest number of threads that have ever
1818 * simultaneously been in the pool.
1819 *
1820 * @return the number of threads
1821 */
1822 public int getLargestPoolSize() {
1823 final ReentrantLock mainLock = this.mainLock;
1824 mainLock.lock();
1825 try {
1826 return largestPoolSize;
1827 } finally {
1828 mainLock.unlock();
1829 }
1830 }
1831
1832 /**
1833 * Returns the approximate total number of tasks that have ever been
1834 * scheduled for execution. Because the states of tasks and
1835 * threads may change dynamically during computation, the returned
1836 * value is only an approximation.
1837 *
1838 * @return the number of tasks
1839 */
1840 public long getTaskCount() {
1841 final ReentrantLock mainLock = this.mainLock;
1842 mainLock.lock();
1843 try {
1844 long n = completedTaskCount;
1845 for (Worker w : workers) {
1846 n += w.completedTasks;
1847 if (w.isLocked())
1848 ++n;
1849 }
1850 return n + workQueue.size();
1851 } finally {
1852 mainLock.unlock();
1853 }
1854 }
1855
1856 /**
1857 * Returns the approximate total number of tasks that have
1858 * completed execution. Because the states of tasks and threads
1859 * may change dynamically during computation, the returned value
1860 * is only an approximation, but one that does not ever decrease
1861 * across successive calls.
1862 *
1863 * @return the number of tasks
1864 */
1865 public long getCompletedTaskCount() {
1866 final ReentrantLock mainLock = this.mainLock;
1867 mainLock.lock();
1868 try {
1869 long n = completedTaskCount;
1870 for (Worker w : workers)
1871 n += w.completedTasks;
1872 return n;
1873 } finally {
1874 mainLock.unlock();
1875 }
1876 }
1877
1878 /**
1879 * Returns a string identifying this pool, as well as its state,
1880 * including indications of run state and estimated worker and
1881 * task counts.
1882 *
1883 * @return a string identifying this pool, as well as its state
1884 */
1885 public String toString() {
1886 long ncompleted;
1887 int nworkers, nactive;
1888 final ReentrantLock mainLock = this.mainLock;
1889 mainLock.lock();
1890 try {
1891 ncompleted = completedTaskCount;
1892 nactive = 0;
1893 nworkers = workers.size();
1894 for (Worker w : workers) {
1895 ncompleted += w.completedTasks;
1896 if (w.isLocked())
1897 ++nactive;
1898 }
1899 } finally {
1900 mainLock.unlock();
1901 }
1902 int c = ctl.get();
1903 String runState =
1904 runStateLessThan(c, SHUTDOWN) ? "Running" :
1905 runStateAtLeast(c, TERMINATED) ? "Terminated" :
1906 "Shutting down";
1907 return super.toString() +
1908 "[" + runState +
1909 ", pool size = " + nworkers +
1910 ", active threads = " + nactive +
1911 ", queued tasks = " + workQueue.size() +
1912 ", completed tasks = " + ncompleted +
1913 "]";
1914 }
1915
1916 /* Extension hooks */
1917
1918 /**
1919 * Method invoked prior to executing the given Runnable in the
1920 * given thread. This method is invoked by thread {@code t} that
1921 * will execute task {@code r}, and may be used to re-initialize
1922 * ThreadLocals, or to perform logging.
1923 *
1924 * <p>This implementation does nothing, but may be customized in
1925 * subclasses. Note: To properly nest multiple overridings, subclasses
1926 * should generally invoke {@code super.beforeExecute} at the end of
1927 * this method.
1928 *
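 * <p>As an illustrative sketch (the class name {@code TimingThreadPool}
 * and its output format are assumptions of this example), a subclass
 * that measures per-task running time can pair this hook with
 * {@link #afterExecute afterExecute} through a thread-local start time:
 *
 * <pre> {@code
 * class TimingThreadPool extends ThreadPoolExecutor {
 *   private final ThreadLocal<Long> startTime = new ThreadLocal<>();
 *   // constructors ...
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     startTime.set(System.nanoTime());
 *     super.beforeExecute(t, r);
 *   }
 *   protected void afterExecute(Runnable r, Throwable t) {
 *     super.afterExecute(r, t);
 *     long elapsed = System.nanoTime() - startTime.get();
 *     System.out.printf("%s: %d ns%n", r, elapsed);
 *   }
 * }}</pre>
 *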
1929 * @param t the thread that will run task {@code r}
1930 * @param r the task that will be executed
1931 */
1932 protected void beforeExecute(Thread t, Runnable r) { }
1933
1934 /**
1935 * Method invoked upon completion of execution of the given Runnable.
1936 * This method is invoked by the thread that executed the task. If
1937 * non-null, the Throwable is the uncaught {@code RuntimeException}
1938 * or {@code Error} that caused execution to terminate abruptly.
1939 *
1940 * <p>This implementation does nothing, but may be customized in
1941 * subclasses. Note: To properly nest multiple overridings, subclasses
1942 * should generally invoke {@code super.afterExecute} at the
1943 * beginning of this method.
1944 *
1945 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1946 * {@link FutureTask}) either explicitly or via methods such as
1947 * {@code submit}, these task objects catch and maintain
1948 * computational exceptions, and so they do not cause abrupt
1949 * termination, and the internal exceptions are <em>not</em>
1950 * passed to this method. If you would like to trap both kinds of
1951 * failures in this method, you can further probe for such cases,
1952 * as in this sample subclass that prints either the direct cause
1953 * or the underlying exception if a task has been aborted:
1954 *
1955 * <pre> {@code
1956 * class ExtendedExecutor extends ThreadPoolExecutor {
1957 * // ...
1958 * protected void afterExecute(Runnable r, Throwable t) {
1959 * super.afterExecute(r, t);
1960 * if (t == null
1961 * && r instanceof Future<?>
1962 * && ((Future<?>)r).isDone()) {
1963 * try {
1964 * Object result = ((Future<?>) r).get();
1965 * } catch (CancellationException ce) {
1966 * t = ce;
1967 * } catch (ExecutionException ee) {
1968 * t = ee.getCause();
1969 * } catch (InterruptedException ie) {
1970 * // ignore/reset
1971 * Thread.currentThread().interrupt();
1972 * }
1973 * }
1974 * if (t != null)
1975 * System.out.println(t);
1976 * }
1977 * }}</pre>
1978 *
1979 * @param r the runnable that has completed
1980 * @param t the exception that caused termination, or null if
1981 * execution completed normally
1982 */
1983 protected void afterExecute(Runnable r, Throwable t) { }
1984
1985 /**
1986 * Method invoked when the Executor has terminated. Default
1987 * implementation does nothing. Note: To properly nest multiple
1988 * overridings, subclasses should generally invoke
1989 * {@code super.terminated} within this method.
1990 */
1991 protected void terminated() { }
1992
1993 /* Predefined RejectedExecutionHandlers */
1994
1995 /**
1996 * A handler for rejected tasks that runs the rejected task
1997 * directly in the calling thread of the {@code execute} method,
1998 * unless the executor has been shut down, in which case the task
1999 * is discarded.
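 *
 * <p>For example (a sketch), a bounded pool that throttles producers by
 * running overflow tasks in the submitting thread:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2, 4, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue<Runnable>(100),
 *     new ThreadPoolExecutor.CallerRunsPolicy());
 * }</pre>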
2000 */
2001 public static class CallerRunsPolicy implements RejectedExecutionHandler {
2002 /**
2003 * Creates a {@code CallerRunsPolicy}.
2004 */
2005 public CallerRunsPolicy() { }
2006
2007 /**
2008 * Executes task r in the caller's thread, unless the executor
2009 * has been shut down, in which case the task is discarded.
2010 *
2011 * @param r the runnable task requested to be executed
2012 * @param e the executor attempting to execute this task
2013 */
2014 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2015 if (!e.isShutdown()) {
2016 r.run();
2017 }
2018 }
2019 }
2020
2021 /**
2022 * A handler for rejected tasks that throws a
2023 * {@link RejectedExecutionException}.
2024 *
2025 * This is the default handler for {@link ThreadPoolExecutor} and
2026 * {@link ScheduledThreadPoolExecutor}.
2027 */
2028 public static class AbortPolicy implements RejectedExecutionHandler {
2029 /**
2030 * Creates an {@code AbortPolicy}.
2031 */
2032 public AbortPolicy() { }
2033
2034 /**
2035 * Always throws RejectedExecutionException.
2036 *
2037 * @param r the runnable task requested to be executed
2038 * @param e the executor attempting to execute this task
2039 * @throws RejectedExecutionException always
2040 */
2041 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2042 throw new RejectedExecutionException("Task " + r.toString() +
2043 " rejected from " +
2044 e.toString());
2045 }
2046 }
2047
2048 /**
2049 * A handler for rejected tasks that silently discards the
2050 * rejected task.
2051 */
2052 public static class DiscardPolicy implements RejectedExecutionHandler {
2053 /**
2054 * Creates a {@code DiscardPolicy}.
2055 */
2056 public DiscardPolicy() { }
2057
2058 /**
2059 * Does nothing, which has the effect of discarding task r.
2060 *
2061 * @param r the runnable task requested to be executed
2062 * @param e the executor attempting to execute this task
2063 */
2064 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2065 }
2066 }
2067
2068 /**
2069 * A handler for rejected tasks that discards the oldest unhandled
2070 * request and then retries {@code execute}, unless the executor
2071 * is shut down, in which case the task is discarded.
2072 */
2073 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
2074 /**
2075 * Creates a {@code DiscardOldestPolicy}.
2076 */
2077 public DiscardOldestPolicy() { }
2078
2079 /**
2080 * Obtains and ignores the next task that the executor
2081 * would otherwise execute, if one is immediately available,
2082 * and then retries execution of task r, unless the executor
2083 * is shut down, in which case task r is instead discarded.
2084 *
2085 * @param r the runnable task requested to be executed
2086 * @param e the executor attempting to execute this task
2087 */
2088 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2089 if (!e.isShutdown()) {
2090 e.getQueue().poll();
2091 e.execute(r);
2092 }
2093 }
2094 }
2095 }