root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.169
Committed: Sun Feb 19 00:40:50 2017 UTC (7 years, 3 months ago) by jsr166
Branch: MAIN
Changes since 1.168: +10 -11 lines
Log Message:
JDK-8173113: Javadoc for ThreadPoolExecutor is unclear wrt corePoolSize and running threads

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/publicdomain/zero/1.0/
5 */
6
7 package java.util.concurrent;
8
9 import java.util.ArrayList;
10 import java.util.ConcurrentModificationException;
11 import java.util.HashSet;
12 import java.util.Iterator;
13 import java.util.List;
14 import java.util.concurrent.atomic.AtomicInteger;
15 import java.util.concurrent.locks.AbstractQueuedSynchronizer;
16 import java.util.concurrent.locks.Condition;
17 import java.util.concurrent.locks.ReentrantLock;
18
19 /**
20 * An {@link ExecutorService} that executes each submitted task using
21 * one of possibly several pooled threads, normally configured
22 * using {@link Executors} factory methods.
23 *
24 * <p>Thread pools address two different problems: they usually
25 * provide improved performance when executing large numbers of
26 * asynchronous tasks, due to reduced per-task invocation overhead,
27 * and they provide a means of bounding and managing the resources,
28 * including threads, consumed when executing a collection of tasks.
29 * Each {@code ThreadPoolExecutor} also maintains some basic
30 * statistics, such as the number of completed tasks.
31 *
32 * <p>To be useful across a wide range of contexts, this class
33 * provides many adjustable parameters and extensibility
34 * hooks. However, programmers are urged to use the more convenient
35 * {@link Executors} factory methods {@link
36 * Executors#newCachedThreadPool} (unbounded thread pool, with
37 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
38 * (fixed size thread pool) and {@link
39 * Executors#newSingleThreadExecutor} (single background thread), that
40 * preconfigure settings for the most common usage
41 * scenarios. Otherwise, use the following guide when manually
42 * configuring and tuning this class:
43 *
44 * <dl>
45 *
46 * <dt>Core and maximum pool sizes</dt>
47 *
48 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
49 * pool size (see {@link #getPoolSize})
50 * according to the bounds set by
51 * corePoolSize (see {@link #getCorePoolSize}) and
52 * maximumPoolSize (see {@link #getMaximumPoolSize}).
53 *
54 * When a new task is submitted in method {@link #execute(Runnable)},
55 * if fewer than corePoolSize threads are running, a new thread is
56 * created to handle the request, even if other worker threads are
57 * idle. Else if fewer than maximumPoolSize threads are running, a
58 * new thread will be created to handle the request only if the queue
59 * is full. By setting corePoolSize and maximumPoolSize the same, you
60 * create a fixed-size thread pool. By setting maximumPoolSize to an
61 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
62 * allow the pool to accommodate an arbitrary number of concurrent
63 * tasks. Most typically, core and maximum pool sizes are set only
64 * upon construction, but they may also be changed dynamically using
65 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
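 *
 * <p>For example (an illustrative sketch only), a pool with two core
 * threads, at most four threads, and a bounded work queue might be
 * configured and later resized as follows:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2, 4, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue<Runnable>(100));
 * pool.setCorePoolSize(3);     // dynamic adjustment
 * pool.setMaximumPoolSize(8);  // dynamic adjustment}</pre>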
66 *
67 * <dt>On-demand construction</dt>
68 *
69 * <dd>By default, even core threads are initially created and
70 * started only when new tasks arrive, but this can be overridden
71 * dynamically using method {@link #prestartCoreThread} or {@link
72 * #prestartAllCoreThreads}. You probably want to prestart threads if
73 * you construct the pool with a non-empty queue. </dd>
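 *
 * <p>For example (an illustrative sketch, where {@code pendingTasks} is
 * some pre-existing collection of {@code Runnable}s):
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 4, 0L, TimeUnit.MILLISECONDS,
 *     new LinkedBlockingQueue<Runnable>(pendingTasks));
 * pool.prestartAllCoreThreads(); // start workers for the queued tasks now}</pre>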
74 *
75 * <dt>Creating new threads</dt>
76 *
77 * <dd>New threads are created using a {@link ThreadFactory}. If not
78 * otherwise specified, an {@link Executors#defaultThreadFactory} is
79 * used, which creates threads that are all in the same {@link
80 * ThreadGroup}, with the same {@code NORM_PRIORITY} priority and
81 * non-daemon status. By supplying a different ThreadFactory, you can
82 * alter the thread's name, thread group, priority, daemon status,
83 * etc. If a {@code ThreadFactory} fails to create a thread when asked
84 * by returning null from {@code newThread}, the executor will
85 * continue, but might not be able to execute any tasks. Threads
86 * should possess the "modifyThread" {@code RuntimePermission}. If
87 * worker threads or other threads using the pool do not possess this
88 * permission, service may be degraded: configuration changes may not
89 * take effect in a timely manner, and a shutdown pool may remain in a
90 * state in which termination is possible but not completed.</dd>
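 *
 * <p>A minimal sketch of a custom {@code ThreadFactory} that only
 * changes the thread name and daemon status (names here are
 * illustrative):
 *
 * <pre> {@code
 * ThreadFactory namedDaemonFactory = runnable -> {
 *     Thread t = Executors.defaultThreadFactory().newThread(runnable);
 *     t.setName("worker-" + t.getName());
 *     t.setDaemon(true);
 *     return t;
 * };}</pre>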
91 *
92 * <dt>Keep-alive times</dt>
93 *
94 * <dd>If the pool currently has more than corePoolSize threads,
95 * excess threads will be terminated if they have been idle for more
96 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
97 * This provides a means of reducing resource consumption when the
98 * pool is not being actively used. If the pool becomes more active
99 * later, new threads will be constructed. This parameter can also be
100 * changed dynamically using method {@link #setKeepAliveTime(long,
101 * TimeUnit)}. Using a value of {@code Long.MAX_VALUE} {@link
102 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever
103 * terminating prior to shut down. By default, the keep-alive policy
104 * applies only when there are more than corePoolSize threads, but
105 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to
106 * apply this time-out policy to core threads as well, so long as the
107 * keepAliveTime value is non-zero. </dd>
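 *
 * <p>For instance (an illustrative sketch; {@code pool} denotes some
 * {@code ThreadPoolExecutor}), to let even core threads time out after
 * thirty seconds of idleness:
 *
 * <pre> {@code
 * pool.setKeepAliveTime(30L, TimeUnit.SECONDS);
 * pool.allowCoreThreadTimeOut(true);}</pre>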
108 *
109 * <dt>Queuing</dt>
110 *
111 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
112 * submitted tasks. The use of this queue interacts with pool sizing:
113 *
114 * <ul>
115 *
116 * <li>If fewer than corePoolSize threads are running, the Executor
117 * always prefers adding a new thread
118 * rather than queuing.
119 *
120 * <li>If corePoolSize or more threads are running, the Executor
121 * always prefers queuing a request rather than adding a new
122 * thread.
123 *
124 * <li>If a request cannot be queued, a new thread is created unless
125 * this would exceed maximumPoolSize, in which case, the task will be
126 * rejected.
127 *
128 * </ul>
129 *
130 * There are three general strategies for queuing:
131 * <ol>
132 *
133 * <li><em> Direct handoffs.</em> A good default choice for a work
134 * queue is a {@link SynchronousQueue} that hands off tasks to threads
135 * without otherwise holding them. Here, an attempt to queue a task
136 * will fail if no threads are immediately available to run it, so a
137 * new thread will be constructed. This policy avoids lockups when
138 * handling sets of requests that might have internal dependencies.
139 * Direct handoffs generally require unbounded maximumPoolSizes to
140 * avoid rejection of newly submitted tasks. This in turn admits the
141 * possibility of unbounded thread growth when commands continue to
142 * arrive on average faster than they can be processed.
143 *
144 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
145 * example a {@link LinkedBlockingQueue} without a predefined
146 * capacity) will cause new tasks to wait in the queue when all
147 * corePoolSize threads are busy. Thus, no more than corePoolSize
148 * threads will ever be created. (And the value of the maximumPoolSize
149 * therefore doesn't have any effect.) This may be appropriate when
150 * each task is completely independent of others, so tasks cannot
151 * affect each other's execution; for example, in a web page server.
152 * While this style of queuing can be useful in smoothing out
153 * transient bursts of requests, it admits the possibility of
154 * unbounded work queue growth when commands continue to arrive on
155 * average faster than they can be processed.
156 *
157 * <li><em>Bounded queues.</em> A bounded queue (for example, an
158 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
159 * used with finite maximumPoolSizes, but can be more difficult to
160 * tune and control. Queue sizes and maximum pool sizes may be traded
161 * off for each other: Using large queues and small pools minimizes
162 * CPU usage, OS resources, and context-switching overhead, but can
163 * lead to artificially low throughput. If tasks frequently block (for
164 * example if they are I/O bound), a system may be able to schedule
165 * time for more threads than you otherwise allow. Use of small queues
166 * generally requires larger pool sizes, which keeps CPUs busier but
167 * may encounter unacceptable scheduling overhead, which also
168 * decreases throughput.
169 *
170 * </ol>
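 *
 * <p>Purely as an illustrative sketch, the three strategies above
 * correspond to work queue choices such as:
 *
 * <pre> {@code
 * new SynchronousQueue<Runnable>();        // direct handoff
 * new LinkedBlockingQueue<Runnable>();     // unbounded queue
 * new ArrayBlockingQueue<Runnable>(1000);  // bounded queue of capacity 1000}</pre>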
171 *
172 * </dd>
173 *
174 * <dt>Rejected tasks</dt>
175 *
176 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be
177 * <em>rejected</em> when the Executor has been shut down, and also when
178 * the Executor uses finite bounds for both maximum threads and work queue
179 * capacity, and is saturated. In either case, the {@code execute} method
180 * invokes the {@link
181 * RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
182 * method of its {@link RejectedExecutionHandler}. Four predefined handler
183 * policies are provided:
184 *
185 * <ol>
186 *
187 * <li>In the default {@link ThreadPoolExecutor.AbortPolicy}, the
188 * handler throws a runtime {@link RejectedExecutionException} upon
189 * rejection.
190 *
191 * <li>In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
192 * that invokes {@code execute} itself runs the task. This provides a
193 * simple feedback control mechanism that will slow down the rate that
194 * new tasks are submitted.
195 *
196 * <li>In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
197 * cannot be executed is simply dropped.
198 *
199 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
200 * executor is not shut down, the task at the head of the work queue
201 * is dropped, and then execution is retried (which can fail again,
202 * causing this to be repeated).
203 *
204 * </ol>
205 *
206 * It is possible to define and use other kinds of {@link
207 * RejectedExecutionHandler} classes. Doing so requires some care
208 * especially when policies are designed to work only under particular
209 * capacity or queuing policies. </dd>
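 *
 * <p>For example (an illustrative sketch; {@code pool} denotes some
 * {@code ThreadPoolExecutor}), a handler other than the default can be
 * installed with:
 *
 * <pre> {@code
 * pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());}</pre>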
210 *
211 * <dt>Hook methods</dt>
212 *
213 * <dd>This class provides {@code protected} overridable
214 * {@link #beforeExecute(Thread, Runnable)} and
215 * {@link #afterExecute(Runnable, Throwable)} methods that are called
216 * before and after execution of each task. These can be used to
217 * manipulate the execution environment; for example, reinitializing
218 * ThreadLocals, gathering statistics, or adding log entries.
219 * Additionally, method {@link #terminated} can be overridden to perform
220 * any special processing that needs to be done once the Executor has
221 * fully terminated.
222 *
223 * <p>If hook, callback, or BlockingQueue methods throw exceptions,
224 * internal worker threads may in turn fail, abruptly terminate, and
225 * possibly be replaced.</dd>
226 *
227 * <dt>Queue maintenance</dt>
228 *
229 * <dd>Method {@link #getQueue()} allows access to the work queue
230 * for purposes of monitoring and debugging. Use of this method for
231 * any other purpose is strongly discouraged. Two supplied methods,
232 * {@link #remove(Runnable)} and {@link #purge} are available to
233 * assist in storage reclamation when large numbers of queued tasks
234 * become cancelled.</dd>
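 *
 * <p>A minimal monitoring sketch (illustrative only):
 *
 * <pre> {@code
 * int backlog = pool.getQueue().size(); // snapshot of queued tasks
 * pool.purge(); // reclaim space held by cancelled Future tasks}</pre>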
235 *
236 * <dt>Finalization</dt>
237 *
238 * <dd>A pool that is no longer referenced in a program <em>AND</em>
239 * has no remaining threads will be {@code shutdown} automatically. If
240 * you would like to ensure that unreferenced pools are reclaimed even
241 * if users forget to call {@link #shutdown}, then you must arrange
242 * that unused threads eventually die, by setting appropriate
243 * keep-alive times, using a lower bound of zero core threads and/or
244 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
245 *
246 * </dl>
247 *
248 * <p><b>Extension example</b>. Most extensions of this class
249 * override one or more of the protected hook methods. For example,
250 * here is a subclass that adds a simple pause/resume feature:
251 *
252 * <pre> {@code
253 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
254 * private boolean isPaused;
255 * private ReentrantLock pauseLock = new ReentrantLock();
256 * private Condition unpaused = pauseLock.newCondition();
257 *
258 * public PausableThreadPoolExecutor(...) { super(...); }
259 *
260 * protected void beforeExecute(Thread t, Runnable r) {
261 * super.beforeExecute(t, r);
262 * pauseLock.lock();
263 * try {
264 * while (isPaused) unpaused.await();
265 * } catch (InterruptedException ie) {
266 * t.interrupt();
267 * } finally {
268 * pauseLock.unlock();
269 * }
270 * }
271 *
272 * public void pause() {
273 * pauseLock.lock();
274 * try {
275 * isPaused = true;
276 * } finally {
277 * pauseLock.unlock();
278 * }
279 * }
280 *
281 * public void resume() {
282 * pauseLock.lock();
283 * try {
284 * isPaused = false;
285 * unpaused.signalAll();
286 * } finally {
287 * pauseLock.unlock();
288 * }
289 * }
290 * }}</pre>
291 *
292 * @since 1.5
293 * @author Doug Lea
294 */
295 public class ThreadPoolExecutor extends AbstractExecutorService {
296 /**
297 * The main pool control state, ctl, is an atomic integer packing
298 * two conceptual fields:
299 *   workerCount, indicating the effective number of threads
300 *   runState,    indicating whether running, shutting down, etc.
301 *
302 * In order to pack them into one int, we limit workerCount to
303 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
304 * billion) otherwise representable. If this is ever an issue in
305 * the future, the variable can be changed to be an AtomicLong,
306 * and the shift/mask constants below adjusted. But until the need
307 * arises, this code is a bit faster and simpler using an int.
308 *
309 * The workerCount is the number of workers that have been
310 * permitted to start and not permitted to stop. The value may be
311 * transiently different from the actual number of live threads,
312 * for example when a ThreadFactory fails to create a thread when
313 * asked, and when exiting threads are still performing
314 * bookkeeping before terminating. The user-visible pool size is
315 * reported as the current size of the workers set.
316 *
317 * The runState provides the main lifecycle control, taking on values:
318 *
319 * RUNNING: Accept new tasks and process queued tasks
320 * SHUTDOWN: Don't accept new tasks, but process queued tasks
321 * STOP: Don't accept new tasks, don't process queued tasks,
322 * and interrupt in-progress tasks
323 * TIDYING: All tasks have terminated, workerCount is zero,
324 * the thread transitioning to state TIDYING
325 * will run the terminated() hook method
326 * TERMINATED: terminated() has completed
327 *
328 * The numerical order among these values matters, to allow
329 * ordered comparisons. The runState monotonically increases over
330 * time, but need not hit each state. The transitions are:
331 *
332 * RUNNING -> SHUTDOWN
333 * On invocation of shutdown(), perhaps implicitly in finalize()
334 * (RUNNING or SHUTDOWN) -> STOP
335 * On invocation of shutdownNow()
336 * SHUTDOWN -> TIDYING
337 * When both queue and pool are empty
338 * STOP -> TIDYING
339 * When pool is empty
340 * TIDYING -> TERMINATED
341 * When the terminated() hook method has completed
342 *
343 * Threads waiting in awaitTermination() will return when the
344 * state reaches TERMINATED.
345 *
346 * Detecting the transition from SHUTDOWN to TIDYING is less
347 * straightforward than you'd like because the queue may become
348 * empty after non-empty and vice versa during SHUTDOWN state, but
349 * we can only terminate if, after seeing that it is empty, we see
350 * that workerCount is 0 (which sometimes entails a recheck -- see
351 * below).
352 */
353 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
354 private static final int COUNT_BITS = Integer.SIZE - 3;
355 private static final int CAPACITY = (1 << COUNT_BITS) - 1;
356
357 // runState is stored in the high-order bits
358 private static final int RUNNING = -1 << COUNT_BITS;
359 private static final int SHUTDOWN = 0 << COUNT_BITS;
360 private static final int STOP = 1 << COUNT_BITS;
361 private static final int TIDYING = 2 << COUNT_BITS;
362 private static final int TERMINATED = 3 << COUNT_BITS;
363
364 // Packing and unpacking ctl
365 private static int runStateOf(int c) { return c & ~CAPACITY; }
366 private static int workerCountOf(int c) { return c & CAPACITY; }
367 private static int ctlOf(int rs, int wc) { return rs | wc; }
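    // Worked example of the packing above (illustrative comment only),
    // with COUNT_BITS = 29:
    //   int c = ctlOf(RUNNING, 3);   // RUNNING in the high 3 bits, count 3 below
    //   runStateOf(c)    == RUNNING; // c & ~CAPACITY recovers the run state
    //   workerCountOf(c) == 3;       // c & CAPACITY recovers the worker count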
368
369 /*
370 * Bit field accessors that don't require unpacking ctl.
371 * These depend on the bit layout and on workerCount never being negative.
372 */
373
374 private static boolean runStateLessThan(int c, int s) {
375 return c < s;
376 }
377
378 private static boolean runStateAtLeast(int c, int s) {
379 return c >= s;
380 }
381
382 private static boolean isRunning(int c) {
383 return c < SHUTDOWN;
384 }
385
386 /**
387 * Attempts to CAS-increment the workerCount field of ctl.
388 */
389 private boolean compareAndIncrementWorkerCount(int expect) {
390 return ctl.compareAndSet(expect, expect + 1);
391 }
392
393 /**
394 * Attempts to CAS-decrement the workerCount field of ctl.
395 */
396 private boolean compareAndDecrementWorkerCount(int expect) {
397 return ctl.compareAndSet(expect, expect - 1);
398 }
399
400 /**
401 * Decrements the workerCount field of ctl. This is called only on
402 * abrupt termination of a thread (see processWorkerExit). Other
403 * decrements are performed within getTask.
404 */
405 private void decrementWorkerCount() {
406 do {} while (! compareAndDecrementWorkerCount(ctl.get()));
407 }
408
409 /**
410 * The queue used for holding tasks and handing off to worker
411 * threads. We do not require that workQueue.poll() returning
412 * null necessarily means that workQueue.isEmpty(), so we rely
413 * solely on isEmpty to see if the queue is empty (which we must
414 * do for example when deciding whether to transition from
415 * SHUTDOWN to TIDYING). This accommodates special-purpose
416 * queues such as DelayQueues for which poll() is allowed to
417 * return null even if it may later return non-null when delays
418 * expire.
419 */
420 private final BlockingQueue<Runnable> workQueue;
421
422 /**
423 * Lock held on access to workers set and related bookkeeping.
424 * While we could use a concurrent set of some sort, it turns out
425 * to be generally preferable to use a lock. Among the reasons is
426 * that this serializes interruptIdleWorkers, which avoids
427 * unnecessary interrupt storms, especially during shutdown.
428 * Otherwise exiting threads would concurrently interrupt those
429 * that have not yet interrupted. It also simplifies some of the
430 * associated statistics bookkeeping of largestPoolSize etc. We
431 * also hold mainLock on shutdown and shutdownNow, for the sake of
432 * ensuring workers set is stable while separately checking
433 * permission to interrupt and actually interrupting.
434 */
435 private final ReentrantLock mainLock = new ReentrantLock();
436
437 /**
438 * Set containing all worker threads in pool. Accessed only when
439 * holding mainLock.
440 */
441 private final HashSet<Worker> workers = new HashSet<>();
442
443 /**
444 * Wait condition to support awaitTermination.
445 */
446 private final Condition termination = mainLock.newCondition();
447
448 /**
449 * Tracks largest attained pool size. Accessed only under
450 * mainLock.
451 */
452 private int largestPoolSize;
453
454 /**
455 * Counter for completed tasks. Updated only on termination of
456 * worker threads. Accessed only under mainLock.
457 */
458 private long completedTaskCount;
459
460 /*
461 * All user control parameters are declared as volatiles so that
462 * ongoing actions are based on freshest values, but without need
463 * for locking, since no internal invariants depend on them
464 * changing synchronously with respect to other actions.
465 */
466
467 /**
468 * Factory for new threads. All threads are created using this
469 * factory (via method addWorker). All callers must be prepared
470 * for addWorker to fail, which may reflect a system or user's
471 * policy limiting the number of threads. Even though it is not
472 * treated as an error, failure to create threads may result in
473 * new tasks being rejected or existing ones remaining stuck in
474 * the queue.
475 *
476 * We go further and preserve pool invariants even in the face of
477 * errors such as OutOfMemoryError, that might be thrown while
478 * trying to create threads. Such errors are rather common due to
479 * the need to allocate a native stack in Thread.start, and users
480 * will want to perform clean pool shutdown to clean up. There
481 * will likely be enough memory available for the cleanup code to
482 * complete without encountering yet another OutOfMemoryError.
483 */
484 private volatile ThreadFactory threadFactory;
485
486 /**
487 * Handler called when saturated or shutdown in execute.
488 */
489 private volatile RejectedExecutionHandler handler;
490
491 /**
492 * Timeout in nanoseconds for idle threads waiting for work.
493 * Threads use this timeout when there are more than corePoolSize
494 * threads present or if allowCoreThreadTimeOut is set. Otherwise they wait
495 * forever for new work.
496 */
497 private volatile long keepAliveTime;
498
499 /**
500 * If false (default), core threads stay alive even when idle.
501 * If true, core threads use keepAliveTime to time out waiting
502 * for work.
503 */
504 private volatile boolean allowCoreThreadTimeOut;
505
506 /**
507 * Core pool size is the minimum number of workers to keep alive
508 * (and not allow to time out etc) unless allowCoreThreadTimeOut
509 * is set, in which case the minimum is zero.
510 */
511 private volatile int corePoolSize;
512
513 /**
514 * Maximum pool size. Note that the actual maximum is internally
515 * bounded by CAPACITY.
516 */
517 private volatile int maximumPoolSize;
518
519 /**
520 * The default rejected execution handler.
521 */
522 private static final RejectedExecutionHandler defaultHandler =
523 new AbortPolicy();
524
525 /**
526 * Permission required for callers of shutdown and shutdownNow.
527 * We additionally require (see checkShutdownAccess) that callers
528 * have permission to actually interrupt threads in the worker set
529 * (as governed by Thread.interrupt, which relies on
530 * ThreadGroup.checkAccess, which in turn relies on
531 * SecurityManager.checkAccess). Shutdowns are attempted only if
532 * these checks pass.
533 *
534 * All actual invocations of Thread.interrupt (see
535 * interruptIdleWorkers and interruptWorkers) ignore
536 * SecurityExceptions, meaning that the attempted interrupts
537 * silently fail. In the case of shutdown, they should not fail
538 * unless the SecurityManager has inconsistent policies, sometimes
539 * allowing access to a thread and sometimes not. In such cases,
540 * failure to actually interrupt threads may disable or delay full
541 * termination. Other uses of interruptIdleWorkers are advisory,
542 * and failure to actually interrupt will merely delay response to
543 * configuration changes so is not handled exceptionally.
544 */
545 private static final RuntimePermission shutdownPerm =
546 new RuntimePermission("modifyThread");
547
548 /**
549 * Class Worker mainly maintains interrupt control state for
550 * threads running tasks, along with other minor bookkeeping.
551 * This class opportunistically extends AbstractQueuedSynchronizer
552 * to simplify acquiring and releasing a lock surrounding each
553 * task execution. This protects against interrupts that are
554 * intended to wake up a worker thread waiting for a task from
555 * instead interrupting a task being run. We implement a simple
556 * non-reentrant mutual exclusion lock rather than use
557 * ReentrantLock because we do not want worker tasks to be able to
558 * reacquire the lock when they invoke pool control methods like
559 * setCorePoolSize. Additionally, to suppress interrupts until
560 * the thread actually starts running tasks, we initialize lock
561 * state to a negative value, and clear it upon start (in
562 * runWorker).
563 */
564 private final class Worker
565 extends AbstractQueuedSynchronizer
566 implements Runnable
567 {
568 /**
569 * This class will never be serialized, but we provide a
570 * serialVersionUID to suppress a javac warning.
571 */
572 private static final long serialVersionUID = 6138294804551838833L;
573
574 /** Thread this worker is running in. Null if factory fails. */
575 final Thread thread;
576 /** Initial task to run. Possibly null. */
577 Runnable firstTask;
578 /** Per-thread task counter */
579 volatile long completedTasks;
580
581 // TODO: switch to AbstractQueuedLongSynchronizer and move
582 // completedTasks into the lock word.
583
584 /**
585 * Creates with given first task and thread from ThreadFactory.
586 * @param firstTask the first task (null if none)
587 */
588 Worker(Runnable firstTask) {
589 setState(-1); // inhibit interrupts until runWorker
590 this.firstTask = firstTask;
591 this.thread = getThreadFactory().newThread(this);
592 }
593
594 /** Delegates main run loop to outer runWorker. */
595 public void run() {
596 runWorker(this);
597 }
598
599 // Lock methods
600 //
601 // The value 0 represents the unlocked state.
602 // The value 1 represents the locked state.
603
604 protected boolean isHeldExclusively() {
605 return getState() != 0;
606 }
607
608 protected boolean tryAcquire(int unused) {
609 if (compareAndSetState(0, 1)) {
610 setExclusiveOwnerThread(Thread.currentThread());
611 return true;
612 }
613 return false;
614 }
615
616 protected boolean tryRelease(int unused) {
617 setExclusiveOwnerThread(null);
618 setState(0);
619 return true;
620 }
621
622 public void lock() { acquire(1); }
623 public boolean tryLock() { return tryAcquire(1); }
624 public void unlock() { release(1); }
625 public boolean isLocked() { return isHeldExclusively(); }
626
627 void interruptIfStarted() {
628 Thread t;
629 if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
630 try {
631 t.interrupt();
632 } catch (SecurityException ignore) {
633 }
634 }
635 }
636 }
637
638 /*
639 * Methods for setting control state
640 */
641
642 /**
643 * Transitions runState to given target, or leaves it alone if
644 * already at least the given target.
645 *
646 * @param targetState the desired state, either SHUTDOWN or STOP
647 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
648 */
649 private void advanceRunState(int targetState) {
650 // assert targetState == SHUTDOWN || targetState == STOP;
651 for (;;) {
652 int c = ctl.get();
653 if (runStateAtLeast(c, targetState) ||
654 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
655 break;
656 }
657 }
658
659 /**
660 * Transitions to TERMINATED state if either (SHUTDOWN and pool
661 * and queue empty) or (STOP and pool empty). If otherwise
662 * eligible to terminate but workerCount is nonzero, interrupts an
663 * idle worker to ensure that shutdown signals propagate. This
664 * method must be called following any action that might make
665 * termination possible -- reducing worker count or removing tasks
666 * from the queue during shutdown. The method is non-private to
667 * allow access from ScheduledThreadPoolExecutor.
668 */
669 final void tryTerminate() {
670 for (;;) {
671 int c = ctl.get();
672 if (isRunning(c) ||
673 runStateAtLeast(c, TIDYING) ||
674 (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
675 return;
676 if (workerCountOf(c) != 0) { // Eligible to terminate
677 interruptIdleWorkers(ONLY_ONE);
678 return;
679 }
680
681 final ReentrantLock mainLock = this.mainLock;
682 mainLock.lock();
683 try {
684 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
685 try {
686 terminated();
687 } finally {
688 ctl.set(ctlOf(TERMINATED, 0));
689 termination.signalAll();
690 }
691 return;
692 }
693 } finally {
694 mainLock.unlock();
695 }
696 // else retry on failed CAS
697 }
698 }
699
700 /*
701 * Methods for controlling interrupts to worker threads.
702 */
703
704 /**
705 * If there is a security manager, makes sure caller has
706 * permission to shut down threads in general (see shutdownPerm).
707 * If this passes, additionally makes sure the caller is allowed
708 * to interrupt each worker thread. This might not be true even if
709 * first check passed, if the SecurityManager treats some threads
710 * specially.
711 */
712 private void checkShutdownAccess() {
713 SecurityManager security = System.getSecurityManager();
714 if (security != null) {
715 security.checkPermission(shutdownPerm);
716 final ReentrantLock mainLock = this.mainLock;
717 mainLock.lock();
718 try {
719 for (Worker w : workers)
720 security.checkAccess(w.thread);
721 } finally {
722 mainLock.unlock();
723 }
724 }
725 }
726
727 /**
728 * Interrupts all threads, even if active. Ignores SecurityExceptions
729 * (in which case some threads may remain uninterrupted).
730 */
731 private void interruptWorkers() {
732 final ReentrantLock mainLock = this.mainLock;
733 mainLock.lock();
734 try {
735 for (Worker w : workers)
736 w.interruptIfStarted();
737 } finally {
738 mainLock.unlock();
739 }
740 }
741
742 /**
743 * Interrupts threads that might be waiting for tasks (as
744 * indicated by not being locked) so they can check for
745 * termination or configuration changes. Ignores
746 * SecurityExceptions (in which case some threads may remain
747 * uninterrupted).
748 *
749 * @param onlyOne If true, interrupt at most one worker. This is
750 * called only from tryTerminate when termination is otherwise
751 * enabled but there are still other workers. In this case, at
752 * most one waiting worker is interrupted to propagate shutdown
753 * signals in case all threads are currently waiting.
754 * Interrupting an arbitrary thread ensures that workers arriving
755 * after shutdown began will also eventually exit.
756 * To guarantee eventual termination, it suffices to always
757 * interrupt only one idle worker, but shutdown() interrupts all
758 * idle workers so that redundant workers exit promptly, not
759 * waiting for a straggler task to finish.
760 */
761 private void interruptIdleWorkers(boolean onlyOne) {
762 final ReentrantLock mainLock = this.mainLock;
763 mainLock.lock();
764 try {
765 for (Worker w : workers) {
766 Thread t = w.thread;
767 if (!t.isInterrupted() && w.tryLock()) {
768 try {
769 t.interrupt();
770 } catch (SecurityException ignore) {
771 } finally {
772 w.unlock();
773 }
774 }
775 if (onlyOne)
776 break;
777 }
778 } finally {
779 mainLock.unlock();
780 }
781 }
782
783 /**
784 * Common form of interruptIdleWorkers, to avoid having to
785 * remember what the boolean argument means.
786 */
787 private void interruptIdleWorkers() {
788 interruptIdleWorkers(false);
789 }
790
791 private static final boolean ONLY_ONE = true;
792
793 /*
794 * Misc utilities, most of which are also exported to
795 * ScheduledThreadPoolExecutor
796 */
797
798 /**
799 * Invokes the rejected execution handler for the given command.
800 * Package-protected for use by ScheduledThreadPoolExecutor.
801 */
802 final void reject(Runnable command) {
803 handler.rejectedExecution(command, this);
804 }
805
806 /**
807 * Performs any further cleanup following run state transition on
808 * invocation of shutdown. A no-op here, but used by
809 * ScheduledThreadPoolExecutor to cancel delayed tasks.
810 */
811 void onShutdown() {
812 }
813
814 /**
815 * State check needed by ScheduledThreadPoolExecutor to
816 * enable running tasks during shutdown.
817 *
818 * @param shutdownOK true if this method should return true when state is SHUTDOWN
819 */
820 final boolean isRunningOrShutdown(boolean shutdownOK) {
821 int rs = runStateOf(ctl.get());
822 return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
823 }
824
825 /**
826 * Drains the task queue into a new list, normally using
827 * drainTo. But if the queue is a DelayQueue or any other kind of
828 * queue for which poll or drainTo may fail to remove some
829 * elements, it deletes them one by one.
830 */
831 private List<Runnable> drainQueue() {
832 BlockingQueue<Runnable> q = workQueue;
833 ArrayList<Runnable> taskList = new ArrayList<>();
834 q.drainTo(taskList);
835 if (!q.isEmpty()) {
836 for (Runnable r : q.toArray(new Runnable[0])) {
837 if (q.remove(r))
838 taskList.add(r);
839 }
840 }
841 return taskList;
842 }
843
844 /*
845 * Methods for creating, running and cleaning up after workers
846 */
847
848 /**
849 * Checks if a new worker can be added with respect to current
850 * pool state and the given bound (either core or maximum). If so,
851 * the worker count is adjusted accordingly, and, if possible, a
852 * new worker is created and started, running firstTask as its
853 * first task. This method returns false if the pool is stopped or
854 * eligible to shut down. It also returns false if the thread
855 * factory fails to create a thread when asked. If the thread
856 * creation fails, either due to the thread factory returning
857 * null, or due to an exception (typically OutOfMemoryError in
858 * Thread.start()), we roll back cleanly.
859 *
860 * @param firstTask the task the new thread should run first (or
861 * null if none). Workers are created with an initial first task
862 * (in method execute()) to bypass queuing when there are fewer
863 * than corePoolSize threads (in which case we always start one),
864 * or when the queue is full (in which case we must bypass the queue).
865 * Initially idle threads are usually created via
866 * prestartCoreThread or to replace other dying workers.
867 *
868 * @param core if true use corePoolSize as bound, else
869 * maximumPoolSize. (A boolean indicator is used here rather than a
870 * value to ensure reads of fresh values after checking other pool
871 * state).
872 * @return true if successful
873 */
874 private boolean addWorker(Runnable firstTask, boolean core) {
875 retry:
876 for (;;) {
877 int c = ctl.get();
878 int rs = runStateOf(c);
879
880 // Check if queue empty only if necessary.
881 if (rs >= SHUTDOWN &&
882 ! (rs == SHUTDOWN &&
883 firstTask == null &&
884 ! workQueue.isEmpty()))
885 return false;
886
887 for (;;) {
888 int wc = workerCountOf(c);
889 if (wc >= CAPACITY ||
890 wc >= (core ? corePoolSize : maximumPoolSize))
891 return false;
892 if (compareAndIncrementWorkerCount(c))
893 break retry;
894 c = ctl.get(); // Re-read ctl
895 if (runStateOf(c) != rs)
896 continue retry;
897 // else CAS failed due to workerCount change; retry inner loop
898 }
899 }
900
901 boolean workerStarted = false;
902 boolean workerAdded = false;
903 Worker w = null;
904 try {
905 w = new Worker(firstTask);
906 final Thread t = w.thread;
907 if (t != null) {
908 final ReentrantLock mainLock = this.mainLock;
909 mainLock.lock();
910 try {
911 // Recheck while holding lock.
912 // Back out on ThreadFactory failure or if
913 // shut down before lock acquired.
914 int rs = runStateOf(ctl.get());
915
916 if (rs < SHUTDOWN ||
917 (rs == SHUTDOWN && firstTask == null)) {
918 if (t.isAlive()) // precheck that t is startable
919 throw new IllegalThreadStateException();
920 workers.add(w);
921 int s = workers.size();
922 if (s > largestPoolSize)
923 largestPoolSize = s;
924 workerAdded = true;
925 }
926 } finally {
927 mainLock.unlock();
928 }
929 if (workerAdded) {
930 t.start();
931 workerStarted = true;
932 }
933 }
934 } finally {
935 if (! workerStarted)
936 addWorkerFailed(w);
937 }
938 return workerStarted;
939 }
940
941 /**
942 * Rolls back the worker thread creation.
943 * - removes worker from workers, if present
944 * - decrements worker count
945 * - rechecks for termination, in case the existence of this
946 * worker was holding up termination
947 */
948 private void addWorkerFailed(Worker w) {
949 final ReentrantLock mainLock = this.mainLock;
950 mainLock.lock();
951 try {
952 if (w != null)
953 workers.remove(w);
954 decrementWorkerCount();
955 tryTerminate();
956 } finally {
957 mainLock.unlock();
958 }
959 }
960
961 /**
962 * Performs cleanup and bookkeeping for a dying worker. Called
963 * only from worker threads. Unless completedAbruptly is set,
964 * assumes that workerCount has already been adjusted to account
965 * for exit. This method removes thread from worker set, and
966 * possibly terminates the pool or replaces the worker if either
967 * it exited due to a user task exception, or fewer than
968 * corePoolSize workers are running, or the queue is non-empty but
969 * there are no workers.
970 *
971 * @param w the worker
972 * @param completedAbruptly if the worker died due to user exception
973 */
974 private void processWorkerExit(Worker w, boolean completedAbruptly) {
975 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
976 decrementWorkerCount();
977
978 final ReentrantLock mainLock = this.mainLock;
979 mainLock.lock();
980 try {
981 completedTaskCount += w.completedTasks;
982 workers.remove(w);
983 } finally {
984 mainLock.unlock();
985 }
986
987 tryTerminate();
988
989 int c = ctl.get();
990 if (runStateLessThan(c, STOP)) {
991 if (!completedAbruptly) {
992 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
993 if (min == 0 && ! workQueue.isEmpty())
994 min = 1;
995 if (workerCountOf(c) >= min)
996 return; // replacement not needed
997 }
998 addWorker(null, false);
999 }
1000 }
1001
1002 /**
1003 * Performs blocking or timed wait for a task, depending on
1004 * current configuration settings, or returns null if this worker
1005 * must exit because of any of:
1006 * 1. There are more than maximumPoolSize workers (due to
1007 * a call to setMaximumPoolSize).
1008 * 2. The pool is stopped.
1009 * 3. The pool is shutdown and the queue is empty.
1010 * 4. This worker timed out waiting for a task, and timed-out
1011 * workers are subject to termination (that is,
1012 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
1013 * both before and after the timed wait, and if the queue is
1014 * non-empty, this worker is not the last thread in the pool.
1015 *
1016 * @return task, or null if the worker must exit, in which case
1017 * workerCount is decremented
1018 */
1019 private Runnable getTask() {
1020 boolean timedOut = false; // Did the last poll() time out?
1021
1022 for (;;) {
1023 int c = ctl.get();
1024 int rs = runStateOf(c);
1025
1026 // Check if queue empty only if necessary.
1027 if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
1028 decrementWorkerCount();
1029 return null;
1030 }
1031
1032 int wc = workerCountOf(c);
1033
1034 // Are workers subject to culling?
1035 boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
1036
1037 if ((wc > maximumPoolSize || (timed && timedOut))
1038 && (wc > 1 || workQueue.isEmpty())) {
1039 if (compareAndDecrementWorkerCount(c))
1040 return null;
1041 continue;
1042 }
1043
1044 try {
1045 Runnable r = timed ?
1046 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
1047 workQueue.take();
1048 if (r != null)
1049 return r;
1050 timedOut = true;
1051 } catch (InterruptedException retry) {
1052 timedOut = false;
1053 }
1054 }
1055 }
1056
1057 /**
1058 * Main worker run loop. Repeatedly gets tasks from queue and
1059 * executes them, while coping with a number of issues:
1060 *
1061 * 1. We may start out with an initial task, in which case we
1062 * don't need to get the first one. Otherwise, as long as pool is
1063 * running, we get tasks from getTask. If it returns null then the
1064 * worker exits due to changed pool state or configuration
1065 * parameters. Other exits result from exception throws in
1066 * external code, in which case completedAbruptly holds, which
1067 * usually leads processWorkerExit to replace this thread.
1068 *
1069 * 2. Before running any task, the lock is acquired to prevent
1070 * other pool interrupts while the task is executing, and then we
1071 * ensure that unless pool is stopping, this thread does not have
1072 * its interrupt set.
1073 *
1074 * 3. Each task run is preceded by a call to beforeExecute, which
1075 * might throw an exception, in which case we cause thread to die
1076 * (breaking loop with completedAbruptly true) without processing
1077 * the task.
1078 *
1079 * 4. Assuming beforeExecute completes normally, we run the task,
1080 * gathering any of its thrown exceptions to send to afterExecute.
1081 * We separately handle RuntimeException, Error (both of which the
1082 * specs guarantee that we trap) and arbitrary Throwables.
1083 * Because we cannot rethrow Throwables within Runnable.run, we
1084 * wrap them within Errors on the way out (to the thread's
1085 * UncaughtExceptionHandler). Any thrown exception also
1086 * conservatively causes thread to die.
1087 *
1088 * 5. After task.run completes, we call afterExecute, which may
1089 * also throw an exception, which will also cause thread to
1090 * die. According to JLS Sec 14.20, this exception is the one that
1091 * will be in effect even if task.run throws.
1092 *
1093 * The net effect of the exception mechanics is that afterExecute
1094 * and the thread's UncaughtExceptionHandler have as accurate
1095 * information as we can provide about any problems encountered by
1096 * user code.
1097 *
1098 * @param w the worker
1099 */
1100 final void runWorker(Worker w) {
1101 Thread wt = Thread.currentThread();
1102 Runnable task = w.firstTask;
1103 w.firstTask = null;
1104 w.unlock(); // allow interrupts
1105 boolean completedAbruptly = true;
1106 try {
1107 while (task != null || (task = getTask()) != null) {
1108 w.lock();
1109 // If pool is stopping, ensure thread is interrupted;
1110 // if not, ensure thread is not interrupted. This
1111 // requires a recheck in second case to deal with
1112 // shutdownNow race while clearing interrupt
1113 if ((runStateAtLeast(ctl.get(), STOP) ||
1114 (Thread.interrupted() &&
1115 runStateAtLeast(ctl.get(), STOP))) &&
1116 !wt.isInterrupted())
1117 wt.interrupt();
1118 try {
1119 beforeExecute(wt, task);
1120 Throwable thrown = null;
1121 try {
1122 task.run();
1123 } catch (RuntimeException x) {
1124 thrown = x; throw x;
1125 } catch (Error x) {
1126 thrown = x; throw x;
1127 } catch (Throwable x) {
1128 thrown = x; throw new Error(x);
1129 } finally {
1130 afterExecute(task, thrown);
1131 }
1132 } finally {
1133 task = null;
1134 w.completedTasks++;
1135 w.unlock();
1136 }
1137 }
1138 completedAbruptly = false;
1139 } finally {
1140 processWorkerExit(w, completedAbruptly);
1141 }
1142 }
1143
1144 // Public constructors and methods
1145
1146 /**
1147 * Creates a new {@code ThreadPoolExecutor} with the given initial
1148 * parameters and default thread factory and rejected execution handler.
1149 * It may be more convenient to use one of the {@link Executors} factory
1150 * methods instead of this general purpose constructor.
1151 *
1152 * @param corePoolSize the number of threads to keep in the pool, even
1153 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1154 * @param maximumPoolSize the maximum number of threads to allow in the
1155 * pool
1156 * @param keepAliveTime when the number of threads is greater than
1157 * the core, this is the maximum time that excess idle threads
1158 * will wait for new tasks before terminating.
1159 * @param unit the time unit for the {@code keepAliveTime} argument
1160 * @param workQueue the queue to use for holding tasks before they are
1161 * executed. This queue will hold only the {@code Runnable}
1162 * tasks submitted by the {@code execute} method.
1163 * @throws IllegalArgumentException if one of the following holds:<br>
1164 * {@code corePoolSize < 0}<br>
1165 * {@code keepAliveTime < 0}<br>
1166 * {@code maximumPoolSize <= 0}<br>
1167 * {@code maximumPoolSize < corePoolSize}
1168 * @throws NullPointerException if {@code workQueue} is null
1169 */
1170 public ThreadPoolExecutor(int corePoolSize,
1171 int maximumPoolSize,
1172 long keepAliveTime,
1173 TimeUnit unit,
1174 BlockingQueue<Runnable> workQueue) {
1175 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1176 Executors.defaultThreadFactory(), defaultHandler);
1177 }
1178
1179 /**
1180 * Creates a new {@code ThreadPoolExecutor} with the given initial
1181 * parameters and default rejected execution handler.
1182 *
1183 * @param corePoolSize the number of threads to keep in the pool, even
1184 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1185 * @param maximumPoolSize the maximum number of threads to allow in the
1186 * pool
1187 * @param keepAliveTime when the number of threads is greater than
1188 * the core, this is the maximum time that excess idle threads
1189 * will wait for new tasks before terminating.
1190 * @param unit the time unit for the {@code keepAliveTime} argument
1191 * @param workQueue the queue to use for holding tasks before they are
1192 * executed. This queue will hold only the {@code Runnable}
1193 * tasks submitted by the {@code execute} method.
1194 * @param threadFactory the factory to use when the executor
1195 * creates a new thread
1196 * @throws IllegalArgumentException if one of the following holds:<br>
1197 * {@code corePoolSize < 0}<br>
1198 * {@code keepAliveTime < 0}<br>
1199 * {@code maximumPoolSize <= 0}<br>
1200 * {@code maximumPoolSize < corePoolSize}
1201 * @throws NullPointerException if {@code workQueue}
1202 * or {@code threadFactory} is null
1203 */
1204 public ThreadPoolExecutor(int corePoolSize,
1205 int maximumPoolSize,
1206 long keepAliveTime,
1207 TimeUnit unit,
1208 BlockingQueue<Runnable> workQueue,
1209 ThreadFactory threadFactory) {
1210 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1211 threadFactory, defaultHandler);
1212 }
1213
1214 /**
1215 * Creates a new {@code ThreadPoolExecutor} with the given initial
1216 * parameters and default thread factory.
1217 *
1218 * @param corePoolSize the number of threads to keep in the pool, even
1219 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1220 * @param maximumPoolSize the maximum number of threads to allow in the
1221 * pool
1222 * @param keepAliveTime when the number of threads is greater than
1223 * the core, this is the maximum time that excess idle threads
1224 * will wait for new tasks before terminating.
1225 * @param unit the time unit for the {@code keepAliveTime} argument
1226 * @param workQueue the queue to use for holding tasks before they are
1227 * executed. This queue will hold only the {@code Runnable}
1228 * tasks submitted by the {@code execute} method.
1229 * @param handler the handler to use when execution is blocked
1230 * because the thread bounds and queue capacities are reached
1231 * @throws IllegalArgumentException if one of the following holds:<br>
1232 * {@code corePoolSize < 0}<br>
1233 * {@code keepAliveTime < 0}<br>
1234 * {@code maximumPoolSize <= 0}<br>
1235 * {@code maximumPoolSize < corePoolSize}
1236 * @throws NullPointerException if {@code workQueue}
1237 * or {@code handler} is null
1238 */
1239 public ThreadPoolExecutor(int corePoolSize,
1240 int maximumPoolSize,
1241 long keepAliveTime,
1242 TimeUnit unit,
1243 BlockingQueue<Runnable> workQueue,
1244 RejectedExecutionHandler handler) {
1245 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1246 Executors.defaultThreadFactory(), handler);
1247 }
1248
1249 /**
1250 * Creates a new {@code ThreadPoolExecutor} with the given initial
1251 * parameters.
1252 *
1253 * @param corePoolSize the number of threads to keep in the pool, even
1254 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1255 * @param maximumPoolSize the maximum number of threads to allow in the
1256 * pool
1257 * @param keepAliveTime when the number of threads is greater than
1258 * the core, this is the maximum time that excess idle threads
1259 * will wait for new tasks before terminating.
1260 * @param unit the time unit for the {@code keepAliveTime} argument
1261 * @param workQueue the queue to use for holding tasks before they are
1262 * executed. This queue will hold only the {@code Runnable}
1263 * tasks submitted by the {@code execute} method.
1264 * @param threadFactory the factory to use when the executor
1265 * creates a new thread
1266 * @param handler the handler to use when execution is blocked
1267 * because the thread bounds and queue capacities are reached
1268 * @throws IllegalArgumentException if one of the following holds:<br>
1269 * {@code corePoolSize < 0}<br>
1270 * {@code keepAliveTime < 0}<br>
1271 * {@code maximumPoolSize <= 0}<br>
1272 * {@code maximumPoolSize < corePoolSize}
1273 * @throws NullPointerException if {@code workQueue}
1274 * or {@code threadFactory} or {@code handler} is null
1275 */
1276 public ThreadPoolExecutor(int corePoolSize,
1277 int maximumPoolSize,
1278 long keepAliveTime,
1279 TimeUnit unit,
1280 BlockingQueue<Runnable> workQueue,
1281 ThreadFactory threadFactory,
1282 RejectedExecutionHandler handler) {
1283 if (corePoolSize < 0 ||
1284 maximumPoolSize <= 0 ||
1285 maximumPoolSize < corePoolSize ||
1286 keepAliveTime < 0)
1287 throw new IllegalArgumentException();
1288 if (workQueue == null || threadFactory == null || handler == null)
1289 throw new NullPointerException();
1290 this.corePoolSize = corePoolSize;
1291 this.maximumPoolSize = maximumPoolSize;
1292 this.workQueue = workQueue;
1293 this.keepAliveTime = unit.toNanos(keepAliveTime);
1294 this.threadFactory = threadFactory;
1295 this.handler = handler;
1296 }
1297
1298 /**
1299 * Executes the given task sometime in the future. The task
1300 * may execute in a new thread or in an existing pooled thread.
1301 *
1302 * If the task cannot be submitted for execution, either because this
1303 * executor has been shutdown or because its capacity has been reached,
1304 * the task is handled by the current {@code RejectedExecutionHandler}.
1305 *
1306 * @param command the task to execute
1307 * @throws RejectedExecutionException at discretion of
1308 * {@code RejectedExecutionHandler}, if the task
1309 * cannot be accepted for execution
1310 * @throws NullPointerException if {@code command} is null
1311 */
1312 public void execute(Runnable command) {
1313 if (command == null)
1314 throw new NullPointerException();
1315 /*
1316 * Proceed in 3 steps:
1317 *
1318 * 1. If fewer than corePoolSize threads are running, try to
1319 * start a new thread with the given command as its first
1320 * task. The call to addWorker atomically checks runState and
1321 * workerCount, and so prevents false alarms that would add
1322 * threads when it shouldn't, by returning false.
1323 *
1324 * 2. If a task can be successfully queued, then we still need
1325 * to double-check whether we should have added a thread
1326 * (because existing ones died since last checking) or that
1327 * the pool shut down since entry into this method. So we
1328 * recheck state and if necessary roll back the enqueuing if
1329 * stopped, or start a new thread if there are none.
1330 *
1331 * 3. If we cannot queue task, then we try to add a new
1332 * thread. If it fails, we know we are shut down or saturated
1333 * and so reject the task.
1334 */
1335 int c = ctl.get();
1336 if (workerCountOf(c) < corePoolSize) {
1337 if (addWorker(command, true))
1338 return;
1339 c = ctl.get();
1340 }
1341 if (isRunning(c) && workQueue.offer(command)) {
1342 int recheck = ctl.get();
1343 if (! isRunning(recheck) && remove(command))
1344 reject(command);
1345 else if (workerCountOf(recheck) == 0)
1346 addWorker(null, false);
1347 }
1348 else if (!addWorker(command, false))
1349 reject(command);
1350 }
1351
1352 /**
1353 * Initiates an orderly shutdown in which previously submitted
1354 * tasks are executed, but no new tasks will be accepted.
1355 * Invocation has no additional effect if already shut down.
1356 *
1357 * <p>This method does not wait for previously submitted tasks to
1358 * complete execution. Use {@link #awaitTermination awaitTermination}
1359 * to do that.
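 *
 * <p>A typical (illustrative) shutdown sequence combines this method
 * with {@link #awaitTermination awaitTermination} and, as a last
 * resort, {@link #shutdownNow}:
 *
 * <pre> {@code
 * pool.shutdown();                      // stop accepting new tasks
 * try {
 *     if (!pool.awaitTermination(60, TimeUnit.SECONDS))
 *         pool.shutdownNow();           // cancel lingering tasks
 * } catch (InterruptedException e) {
 *     pool.shutdownNow();
 *     Thread.currentThread().interrupt();
 * }}</pre>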
1360 *
1361 * @throws SecurityException {@inheritDoc}
1362 */
1363 public void shutdown() {
1364 final ReentrantLock mainLock = this.mainLock;
1365 mainLock.lock();
1366 try {
1367 checkShutdownAccess();
1368 advanceRunState(SHUTDOWN);
1369 interruptIdleWorkers();
1370 onShutdown(); // hook for ScheduledThreadPoolExecutor
1371 } finally {
1372 mainLock.unlock();
1373 }
1374 tryTerminate();
1375 }
1376
1377 /**
1378 * Attempts to stop all actively executing tasks, halts the
1379 * processing of waiting tasks, and returns a list of the tasks
1380 * that were awaiting execution. These tasks are drained (removed)
1381 * from the task queue upon return from this method.
1382 *
1383 * <p>This method does not wait for actively executing tasks to
1384 * terminate. Use {@link #awaitTermination awaitTermination} to
1385 * do that.
1386 *
1387 * <p>There are no guarantees beyond best-effort attempts to stop
1388 * processing actively executing tasks. This implementation
1389 * interrupts tasks via {@link Thread#interrupt}; any task that
1390 * fails to respond to interrupts may never terminate.
1391 *
1392 * @throws SecurityException {@inheritDoc}
1393 */
1394 public List<Runnable> shutdownNow() {
1395 List<Runnable> tasks;
1396 final ReentrantLock mainLock = this.mainLock;
1397 mainLock.lock();
1398 try {
1399 checkShutdownAccess();
1400 advanceRunState(STOP);
1401 interruptWorkers();
1402 tasks = drainQueue();
1403 } finally {
1404 mainLock.unlock();
1405 }
1406 tryTerminate();
1407 return tasks;
1408 }
1409
1410 public boolean isShutdown() {
1411 return ! isRunning(ctl.get());
1412 }
1413
1414 /**
1415 * Returns true if this executor is in the process of terminating
1416 * after {@link #shutdown} or {@link #shutdownNow} but has not
1417 * completely terminated. This method may be useful for
1418 * debugging. A return of {@code true} reported a sufficient
1419 * period after shutdown may indicate that submitted tasks have
1420 * ignored or suppressed interruption, causing this executor not
1421 * to properly terminate.
1422 *
1423 * @return {@code true} if terminating but not yet terminated
1424 */
1425 public boolean isTerminating() {
1426 int c = ctl.get();
1427 return ! isRunning(c) && runStateLessThan(c, TERMINATED);
1428 }
1429
1430 public boolean isTerminated() {
1431 return runStateAtLeast(ctl.get(), TERMINATED);
1432 }
1433
1434 public boolean awaitTermination(long timeout, TimeUnit unit)
1435 throws InterruptedException {
1436 long nanos = unit.toNanos(timeout);
1437 final ReentrantLock mainLock = this.mainLock;
1438 mainLock.lock();
1439 try {
1440 while (!runStateAtLeast(ctl.get(), TERMINATED)) {
1441 if (nanos <= 0L)
1442 return false;
1443 nanos = termination.awaitNanos(nanos);
1444 }
1445 return true;
1446 } finally {
1447 mainLock.unlock();
1448 }
1449 }
1450
1451 /**
1452 * Invokes {@code shutdown} when this executor is no longer
1453 * referenced and it has no threads.
1454 */
1455 protected void finalize() {
1456 shutdown();
1457 }
1458
1459 /**
1460 * Sets the thread factory used to create new threads.
1461 *
1462 * @param threadFactory the new thread factory
1463 * @throws NullPointerException if threadFactory is null
1464 * @see #getThreadFactory
1465 */
1466 public void setThreadFactory(ThreadFactory threadFactory) {
1467 if (threadFactory == null)
1468 throw new NullPointerException();
1469 this.threadFactory = threadFactory;
1470 }
1471
1472 /**
1473 * Returns the thread factory used to create new threads.
1474 *
1475 * @return the current thread factory
1476 * @see #setThreadFactory(ThreadFactory)
1477 */
1478 public ThreadFactory getThreadFactory() {
1479 return threadFactory;
1480 }
1481
1482 /**
1483 * Sets a new handler for unexecutable tasks.
1484 *
1485 * @param handler the new handler
1486 * @throws NullPointerException if handler is null
1487 * @see #getRejectedExecutionHandler
1488 */
1489 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1490 if (handler == null)
1491 throw new NullPointerException();
1492 this.handler = handler;
1493 }
1494
1495 /**
1496 * Returns the current handler for unexecutable tasks.
1497 *
1498 * @return the current handler
1499 * @see #setRejectedExecutionHandler(RejectedExecutionHandler)
1500 */
1501 public RejectedExecutionHandler getRejectedExecutionHandler() {
1502 return handler;
1503 }
1504
1505 /**
1506 * Sets the core number of threads. This overrides any value set
1507 * in the constructor. If the new value is smaller than the
1508 * current value, excess existing threads will be terminated when
1509 * they next become idle. If larger, new threads will, if needed,
1510 * be started to execute any queued tasks.
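 *
 * <p>For example (an illustrative sketch; the sizes are arbitrary),
 * resizing a live pool, ordering the calls so that the core size
 * never exceeds the maximum:
 *
 * <pre> {@code
 * // growing: raise the maximum before the core size
 * pool.setMaximumPoolSize(32);
 * pool.setCorePoolSize(16);
 * // shrinking: lower the core size before the maximum
 * pool.setCorePoolSize(4);
 * pool.setMaximumPoolSize(8);}</pre>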
1511 *
1512 * @param corePoolSize the new core size
1513 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1514 * or {@code corePoolSize} is greater than the {@linkplain
1515 * #getMaximumPoolSize() maximum pool size}
1516 * @see #getCorePoolSize
1517 */
1518 public void setCorePoolSize(int corePoolSize) {
1519 if (corePoolSize < 0 || maximumPoolSize < corePoolSize)
1520 throw new IllegalArgumentException();
1521 int delta = corePoolSize - this.corePoolSize;
1522 this.corePoolSize = corePoolSize;
1523 if (workerCountOf(ctl.get()) > corePoolSize)
1524 interruptIdleWorkers();
1525 else if (delta > 0) {
1526 // We don't really know how many new threads are "needed".
1527 // As a heuristic, prestart enough new workers (up to new
1528 // core size) to handle the current number of tasks in
1529 // queue, but stop if queue becomes empty while doing so.
1530 int k = Math.min(delta, workQueue.size());
1531 while (k-- > 0 && addWorker(null, true)) {
1532 if (workQueue.isEmpty())
1533 break;
1534 }
1535 }
1536 }
1537
1538 /**
1539 * Returns the core number of threads.
1540 *
1541 * @return the core number of threads
1542 * @see #setCorePoolSize
1543 */
1544 public int getCorePoolSize() {
1545 return corePoolSize;
1546 }
1547
1548 /**
1549 * Starts a core thread, causing it to idly wait for work. This
1550 * overrides the default policy of starting core threads only when
1551 * new tasks are executed. This method will return {@code false}
1552 * if all core threads have already been started.
1553 *
1554 * @return {@code true} if a thread was started
1555 */
1556 public boolean prestartCoreThread() {
1557 return workerCountOf(ctl.get()) < corePoolSize &&
1558 addWorker(null, true);
1559 }
1560
1561 /**
1562 * Same as prestartCoreThread except arranges that at least one
1563 * thread is started even if corePoolSize is 0.
1564 */
1565 void ensurePrestart() {
1566 int wc = workerCountOf(ctl.get());
1567 if (wc < corePoolSize)
1568 addWorker(null, true);
1569 else if (wc == 0)
1570 addWorker(null, false);
1571 }
1572
1573 /**
1574 * Starts all core threads, causing them to idly wait for work. This
1575 * overrides the default policy of starting core threads only when
1576 * new tasks are executed.
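 *
 * <p>For example (a sketch; the configuration is arbitrary), warming
 * up a fixed-size pool before the first tasks arrive:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     8, 8, 0L, TimeUnit.MILLISECONDS,
 *     new LinkedBlockingQueue<Runnable>());
 * int started = pool.prestartAllCoreThreads();  // normally 8 here}</pre>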
1577 *
1578 * @return the number of threads started
1579 */
1580 public int prestartAllCoreThreads() {
1581 int n = 0;
1582 while (addWorker(null, true))
1583 ++n;
1584 return n;
1585 }
1586
1587 /**
1588 * Returns true if this pool allows core threads to time out and
1589 * terminate if no tasks arrive within the keep-alive time, being
1590 * replaced if needed when new tasks arrive. When true, the same
1591 * keep-alive policy applying to non-core threads applies also to
1592 * core threads. When false (the default), core threads are never
1593 * terminated due to lack of incoming tasks.
1594 *
1595 * @return {@code true} if core threads are allowed to time out,
1596 * else {@code false}
1597 *
1598 * @since 1.6
1599 */
1600 public boolean allowsCoreThreadTimeOut() {
1601 return allowCoreThreadTimeOut;
1602 }
1603
1604 /**
1605 * Sets the policy governing whether core threads may time out and
1606 * terminate if no tasks arrive within the keep-alive time, being
1607 * replaced if needed when new tasks arrive. When false, core
1608 * threads are never terminated due to lack of incoming
1609 * tasks. When true, the same keep-alive policy applying to
1610 * non-core threads applies also to core threads. To avoid
1611 * continual thread replacement, the keep-alive time must be
1612 * greater than zero when setting {@code true}. This method
1613 * should in general be called before the pool is actively used.
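 *
 * <p>For example (an illustrative sketch; the timeout is arbitrary),
 * allowing an otherwise fixed-size pool to shrink to zero threads
 * when idle:
 *
 * <pre> {@code
 * pool.setKeepAliveTime(30, TimeUnit.SECONDS);  // must be positive first
 * pool.allowCoreThreadTimeOut(true);}</pre>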
1614 *
1615 * @param value {@code true} if core threads should time out, else {@code false}
1616 * @throws IllegalArgumentException if value is {@code true}
1617 * and the current keep-alive time is not greater than zero
1618 *
1619 * @since 1.6
1620 */
1621 public void allowCoreThreadTimeOut(boolean value) {
1622 if (value && keepAliveTime <= 0)
1623 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1624 if (value != allowCoreThreadTimeOut) {
1625 allowCoreThreadTimeOut = value;
1626 if (value)
1627 interruptIdleWorkers();
1628 }
1629 }
1630
1631 /**
1632 * Sets the maximum allowed number of threads. This overrides any
1633 * value set in the constructor. If the new value is smaller than
1634 * the current value, excess existing threads will be
1635 * terminated when they next become idle.
1636 *
1637 * @param maximumPoolSize the new maximum
1638 * @throws IllegalArgumentException if the new maximum is
1639 * less than or equal to zero, or
1640 * less than the {@linkplain #getCorePoolSize core pool size}
1641 * @see #getMaximumPoolSize
1642 */
1643 public void setMaximumPoolSize(int maximumPoolSize) {
1644 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1645 throw new IllegalArgumentException();
1646 this.maximumPoolSize = maximumPoolSize;
1647 if (workerCountOf(ctl.get()) > maximumPoolSize)
1648 interruptIdleWorkers();
1649 }
1650
1651 /**
1652 * Returns the maximum allowed number of threads.
1653 *
1654 * @return the maximum allowed number of threads
1655 * @see #setMaximumPoolSize
1656 */
1657 public int getMaximumPoolSize() {
1658 return maximumPoolSize;
1659 }
1660
1661 /**
1662 * Sets the thread keep-alive time, which is the amount of time
1663 * that threads may remain idle before being terminated.
1664 * Threads that wait this amount of time without processing a
1665 * task will be terminated if there are more than the core
1666 * number of threads currently in the pool, or if this pool
1667 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1668 * This overrides any value set in the constructor.
1669 *
1670 * @param time the time to wait. A time value of zero will cause
1671 * excess threads to terminate immediately after executing tasks.
1672 * @param unit the time unit of the {@code time} argument
1673 * @throws IllegalArgumentException if {@code time} is less than zero or
1674 * if {@code time} is zero and {@code allowsCoreThreadTimeOut}
1675 * @see #getKeepAliveTime(TimeUnit)
1676 */
1677 public void setKeepAliveTime(long time, TimeUnit unit) {
1678 if (time < 0)
1679 throw new IllegalArgumentException();
1680 if (time == 0 && allowsCoreThreadTimeOut())
1681 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1682 long keepAliveTime = unit.toNanos(time);
1683 long delta = keepAliveTime - this.keepAliveTime;
1684 this.keepAliveTime = keepAliveTime;
1685 if (delta < 0)
1686 interruptIdleWorkers();
1687 }
1688
1689 /**
1690 * Returns the thread keep-alive time, which is the amount of time
1691 * that threads may remain idle before being terminated.
1692 * Threads that wait this amount of time without processing a
1693 * task will be terminated if there are more than the core
1694 * number of threads currently in the pool, or if this pool
1695 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1696 *
1697 * @param unit the desired time unit of the result
1698 * @return the time limit
1699 * @see #setKeepAliveTime(long, TimeUnit)
1700 */
1701 public long getKeepAliveTime(TimeUnit unit) {
1702 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1703 }
1704
1705 /* User-level queue utilities */
1706
1707 /**
1708 * Returns the task queue used by this executor. Access to the
1709 * task queue is intended primarily for debugging and monitoring.
1710 * This queue may be in active use. Retrieving the task queue
1711 * does not prevent queued tasks from executing.
1712 *
1713 * @return the task queue
1714 */
1715 public BlockingQueue<Runnable> getQueue() {
1716 return workQueue;
1717 }
1718
1719 /**
1720 * Removes this task from the executor's internal queue if it is
1721 * present, thus causing it not to be run if it has not already
1722 * started.
1723 *
1724 * <p>This method may be useful as one part of a cancellation
1725 * scheme. It may fail to remove tasks that have been converted
1726 * into other forms before being placed on the internal queue.
1727 * For example, a task entered using {@code submit} might be
1728 * converted into a form that maintains {@code Future} status.
1729 * However, in such cases, method {@link #purge} may be used to
1730 * remove those Futures that have been cancelled.
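 *
 * <p>For example (an illustrative sketch, where {@code task} is a
 * previously constructed {@code Runnable}), cancelling a submitted
 * task and then reclaiming its slot in the queue:
 *
 * <pre> {@code
 * Future<?> f = pool.submit(task);
 * // ...
 * f.cancel(true);  // remove(task) would not find the wrapping Future
 * pool.purge();    // drops cancelled Futures still in the queue}</pre>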
1731 *
1732 * @param task the task to remove
1733 * @return {@code true} if the task was removed
1734 */
1735 public boolean remove(Runnable task) {
1736 boolean removed = workQueue.remove(task);
1737 tryTerminate(); // In case SHUTDOWN and now empty
1738 return removed;
1739 }
1740
1741 /**
1742 * Tries to remove from the work queue all {@link Future}
1743 * tasks that have been cancelled. This method can be useful as a
1744 * storage reclamation operation that has no other impact on
1745 * functionality. Cancelled tasks are never executed, but may
1746 * accumulate in work queues until worker threads can actively
1747 * remove them. Invoking this method instead tries to remove them now.
1748 * However, this method may fail to remove tasks in
1749 * the presence of interference by other threads.
1750 */
1751 public void purge() {
1752 final BlockingQueue<Runnable> q = workQueue;
1753 try {
1754 Iterator<Runnable> it = q.iterator();
1755 while (it.hasNext()) {
1756 Runnable r = it.next();
1757 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1758 it.remove();
1759 }
1760 } catch (ConcurrentModificationException fallThrough) {
1761 // Take slow path if we encounter interference during traversal.
1762 // Make copy for traversal and call remove for cancelled entries.
1763 // The slow path is more likely to be O(N*N).
1764 for (Object r : q.toArray())
1765 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1766 q.remove(r);
1767 }
1768
1769 tryTerminate(); // In case SHUTDOWN and now empty
1770 }
1771
1772 /* Statistics */
1773
1774 /**
1775 * Returns the current number of threads in the pool.
1776 *
1777 * @return the number of threads
1778 */
1779 public int getPoolSize() {
1780 final ReentrantLock mainLock = this.mainLock;
1781 mainLock.lock();
1782 try {
1783 // Remove rare and surprising possibility of
1784 // isTerminated() && getPoolSize() > 0
1785 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1786 : workers.size();
1787 } finally {
1788 mainLock.unlock();
1789 }
1790 }
1791
1792 /**
1793 * Returns the approximate number of threads that are actively
1794 * executing tasks.
1795 *
1796 * @return the number of threads
1797 */
1798 public int getActiveCount() {
1799 final ReentrantLock mainLock = this.mainLock;
1800 mainLock.lock();
1801 try {
1802 int n = 0;
1803 for (Worker w : workers)
1804 if (w.isLocked())
1805 ++n;
1806 return n;
1807 } finally {
1808 mainLock.unlock();
1809 }
1810 }
1811
1812 /**
1813 * Returns the largest number of threads that have ever
1814 * simultaneously been in the pool.
1815 *
1816 * @return the number of threads
1817 */
1818 public int getLargestPoolSize() {
1819 final ReentrantLock mainLock = this.mainLock;
1820 mainLock.lock();
1821 try {
1822 return largestPoolSize;
1823 } finally {
1824 mainLock.unlock();
1825 }
1826 }
1827
1828 /**
1829 * Returns the approximate total number of tasks that have ever been
1830 * scheduled for execution. Because the states of tasks and
1831 * threads may change dynamically during computation, the returned
1832 * value is only an approximation.
1833 *
1834 * @return the number of tasks
1835 */
1836 public long getTaskCount() {
1837 final ReentrantLock mainLock = this.mainLock;
1838 mainLock.lock();
1839 try {
1840 long n = completedTaskCount;
1841 for (Worker w : workers) {
1842 n += w.completedTasks;
1843 if (w.isLocked())
1844 ++n;
1845 }
1846 return n + workQueue.size();
1847 } finally {
1848 mainLock.unlock();
1849 }
1850 }
1851
1852 /**
1853 * Returns the approximate total number of tasks that have
1854 * completed execution. Because the states of tasks and threads
1855 * may change dynamically during computation, the returned value
1856 * is only an approximation, but one that does not ever decrease
1857 * across successive calls.
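 *
 * <p>For example (a sketch only), a periodic monitoring task might
 * sample these statistics:
 *
 * <pre> {@code
 * System.out.printf("size=%d active=%d queued=%d completed=%d%n",
 *     pool.getPoolSize(), pool.getActiveCount(),
 *     pool.getQueue().size(), pool.getCompletedTaskCount());}</pre>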
1858 *
1859 * @return the number of tasks
1860 */
1861 public long getCompletedTaskCount() {
1862 final ReentrantLock mainLock = this.mainLock;
1863 mainLock.lock();
1864 try {
1865 long n = completedTaskCount;
1866 for (Worker w : workers)
1867 n += w.completedTasks;
1868 return n;
1869 } finally {
1870 mainLock.unlock();
1871 }
1872 }
1873
1874 /**
1875 * Returns a string identifying this pool, as well as its state,
1876 * including indications of run state and estimated worker and
1877 * task counts.
1878 *
1879 * @return a string identifying this pool, as well as its state
1880 */
1881 public String toString() {
1882 long ncompleted;
1883 int nworkers, nactive;
1884 final ReentrantLock mainLock = this.mainLock;
1885 mainLock.lock();
1886 try {
1887 ncompleted = completedTaskCount;
1888 nactive = 0;
1889 nworkers = workers.size();
1890 for (Worker w : workers) {
1891 ncompleted += w.completedTasks;
1892 if (w.isLocked())
1893 ++nactive;
1894 }
1895 } finally {
1896 mainLock.unlock();
1897 }
1898 int c = ctl.get();
1899 String runState =
1900 runStateLessThan(c, SHUTDOWN) ? "Running" :
1901 runStateAtLeast(c, TERMINATED) ? "Terminated" :
1902 "Shutting down";
1903 return super.toString() +
1904 "[" + runState +
1905 ", pool size = " + nworkers +
1906 ", active threads = " + nactive +
1907 ", queued tasks = " + workQueue.size() +
1908 ", completed tasks = " + ncompleted +
1909 "]";
1910 }
1911
1912 /* Extension hooks */
1913
1914 /**
1915 * Method invoked prior to executing the given Runnable in the
1916 * given thread. This method is invoked by thread {@code t} that
1917 * will execute task {@code r}, and may be used to re-initialize
1918 * ThreadLocals, or to perform logging.
1919 *
1920 * <p>This implementation does nothing, but may be customized in
1921 * subclasses. Note: To properly nest multiple overridings, subclasses
1922 * should generally invoke {@code super.beforeExecute} at the end of
1923 * this method.
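 *
 * <p>For example (an illustrative sketch), a subclass that records
 * the start of each task:
 *
 * <pre> {@code
 * class LoggingExecutor extends ThreadPoolExecutor {
 *   // ... constructors ...
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     System.out.println(t.getName() + " starting " + r);
 *     super.beforeExecute(t, r);  // invoked last, to nest properly
 *   }
 * }}</pre>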
1924 *
1925 * @param t the thread that will run task {@code r}
1926 * @param r the task that will be executed
1927 */
1928 protected void beforeExecute(Thread t, Runnable r) { }
1929
1930 /**
1931 * Method invoked upon completion of execution of the given Runnable.
1932 * This method is invoked by the thread that executed the task. If
1933 * non-null, the Throwable is the uncaught {@code RuntimeException}
1934 * or {@code Error} that caused execution to terminate abruptly.
1935 *
1936 * <p>This implementation does nothing, but may be customized in
1937 * subclasses. Note: To properly nest multiple overridings, subclasses
1938 * should generally invoke {@code super.afterExecute} at the
1939 * beginning of this method.
1940 *
1941 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1942 * {@link FutureTask}) either explicitly or via methods such as
1943 * {@code submit}, these task objects catch and maintain
1944 * computational exceptions, and so they do not cause abrupt
1945 * termination, and the internal exceptions are <em>not</em>
1946 * passed to this method. If you would like to trap both kinds of
1947 * failures in this method, you can further probe for such cases,
1948 * as in this sample subclass that prints either the direct cause
1949 * or the underlying exception if a task has been aborted:
1950 *
1951 * <pre> {@code
1952 * class ExtendedExecutor extends ThreadPoolExecutor {
1953 * // ...
1954 * protected void afterExecute(Runnable r, Throwable t) {
1955 * super.afterExecute(r, t);
1956 * if (t == null
1957 * && r instanceof Future<?>
1958 * && ((Future<?>)r).isDone()) {
1959 * try {
1960 * Object result = ((Future<?>) r).get();
1961 * } catch (CancellationException ce) {
1962 * t = ce;
1963 * } catch (ExecutionException ee) {
1964 * t = ee.getCause();
1965 * } catch (InterruptedException ie) {
1966 * // ignore/reset
1967 * Thread.currentThread().interrupt();
1968 * }
1969 * }
1970 * if (t != null)
1971 * System.out.println(t);
1972 * }
1973 * }}</pre>
1974 *
1975 * @param r the runnable that has completed
1976 * @param t the exception that caused termination, or null if
1977 * execution completed normally
1978 */
1979 protected void afterExecute(Runnable r, Throwable t) { }
1980
1981 /**
1982 * Method invoked when the Executor has terminated. Default
1983 * implementation does nothing. Note: To properly nest multiple
1984 * overridings, subclasses should generally invoke
1985 * {@code super.terminated} within this method.
1986 */
1987 protected void terminated() { }
1988
1989 /* Predefined RejectedExecutionHandlers */
1990
1991 /**
1992 * A handler for rejected tasks that runs the rejected task
1993 * directly in the calling thread of the {@code execute} method,
1994 * unless the executor has been shut down, in which case the task
1995 * is discarded.
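     *
     * <p>For example (a sketch; the bound and sizes are arbitrary),
     * pairing a bounded queue with this policy yields a simple
     * throttle that slows submitters instead of failing:
     *
     * <pre> {@code
     * ThreadPoolExecutor pool = new ThreadPoolExecutor(
     *     4, 4, 0L, TimeUnit.MILLISECONDS,
     *     new ArrayBlockingQueue<Runnable>(100),
     *     new ThreadPoolExecutor.CallerRunsPolicy());}</pre>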
1996 */
1997 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1998 /**
1999 * Creates a {@code CallerRunsPolicy}.
2000 */
2001 public CallerRunsPolicy() { }
2002
2003 /**
2004 * Executes task r in the caller's thread, unless the executor
2005 * has been shut down, in which case the task is discarded.
2006 *
2007 * @param r the runnable task requested to be executed
2008 * @param e the executor attempting to execute this task
2009 */
2010 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2011 if (!e.isShutdown()) {
2012 r.run();
2013 }
2014 }
2015 }
2016
2017 /**
2018 * A handler for rejected tasks that throws a
2019 * {@code RejectedExecutionException}.
2020 */
2021 public static class AbortPolicy implements RejectedExecutionHandler {
2022 /**
2023 * Creates an {@code AbortPolicy}.
2024 */
2025 public AbortPolicy() { }
2026
2027 /**
2028 * Always throws {@code RejectedExecutionException}.
2029 *
2030 * @param r the runnable task requested to be executed
2031 * @param e the executor attempting to execute this task
2032 * @throws RejectedExecutionException always
2033 */
2034 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2035 throw new RejectedExecutionException("Task " + r.toString() +
2036 " rejected from " +
2037 e.toString());
2038 }
2039 }
2040
2041 /**
2042 * A handler for rejected tasks that silently discards the
2043 * rejected task.
2044 */
2045 public static class DiscardPolicy implements RejectedExecutionHandler {
2046 /**
2047 * Creates a {@code DiscardPolicy}.
2048 */
2049 public DiscardPolicy() { }
2050
2051 /**
2052 * Does nothing, which has the effect of discarding task r.
2053 *
2054 * @param r the runnable task requested to be executed
2055 * @param e the executor attempting to execute this task
2056 */
2057 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2058 }
2059 }
2060
2061 /**
2062 * A handler for rejected tasks that discards the oldest unhandled
2063 * request and then retries {@code execute}, unless the executor
2064 * is shut down, in which case the task is discarded.
2065 */
2066 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
2067 /**
2068 * Creates a {@code DiscardOldestPolicy}.
2069 */
2070 public DiscardOldestPolicy() { }
2071
2072 /**
2073 * Obtains and ignores the next task that the executor
2074 * would otherwise execute, if one is immediately available,
2075 * and then retries execution of task r, unless the executor
2076 * is shut down, in which case task r is instead discarded.
2077 *
2078 * @param r the runnable task requested to be executed
2079 * @param e the executor attempting to execute this task
2080 */
2081 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2082 if (!e.isShutdown()) {
2083 e.getQueue().poll();
2084 e.execute(r);
2085 }
2086 }
2087 }
2088 }