root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.191
Committed: Thu Oct 17 01:51:38 2019 UTC by jsr166
Branch: MAIN
Changes since 1.190: +2 -0 lines
Log Message:
8232230: Suppress warnings on non-serializable non-transient instance fields in java.util.concurrent

File Contents

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/publicdomain/zero/1.0/
5 */
6
7 package java.util.concurrent;
8
9 import java.util.ArrayList;
10 import java.util.ConcurrentModificationException;
11 import java.util.HashSet;
12 import java.util.Iterator;
13 import java.util.List;
14 import java.util.concurrent.atomic.AtomicInteger;
15 import java.util.concurrent.locks.AbstractQueuedSynchronizer;
16 import java.util.concurrent.locks.Condition;
17 import java.util.concurrent.locks.ReentrantLock;
18
19 /**
20 * An {@link ExecutorService} that executes each submitted task using
21 * one of possibly several pooled threads, normally configured
22 * using {@link Executors} factory methods.
23 *
24 * <p>Thread pools address two different problems: they usually
25 * provide improved performance when executing large numbers of
26 * asynchronous tasks, due to reduced per-task invocation overhead,
27 * and they provide a means of bounding and managing the resources,
28 * including threads, consumed when executing a collection of tasks.
29 * Each {@code ThreadPoolExecutor} also maintains some basic
30 * statistics, such as the number of completed tasks.
31 *
32 * <p>To be useful across a wide range of contexts, this class
33 * provides many adjustable parameters and extensibility
34 * hooks. However, programmers are urged to use the more convenient
35 * {@link Executors} factory methods {@link
36 * Executors#newCachedThreadPool} (unbounded thread pool, with
37 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
38 * (fixed size thread pool) and {@link
39 * Executors#newSingleThreadExecutor} (single background thread), that
40 * preconfigure settings for the most common usage
41 * scenarios. Otherwise, use the following guide when manually
42 * configuring and tuning this class:
43 *
44 * <dl>
45 *
46 * <dt>Core and maximum pool sizes</dt>
47 *
48 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
49 * pool size (see {@link #getPoolSize})
50 * according to the bounds set by
51 * corePoolSize (see {@link #getCorePoolSize}) and
52 * maximumPoolSize (see {@link #getMaximumPoolSize}).
53 *
54 * When a new task is submitted in method {@link #execute(Runnable)},
55 * if fewer than corePoolSize threads are running, a new thread is
56 * created to handle the request, even if other worker threads are
57 * idle. Else if fewer than maximumPoolSize threads are running, a
58 * new thread will be created to handle the request only if the queue
59 * is full. By setting corePoolSize and maximumPoolSize the same, you
60 * create a fixed-size thread pool. By setting maximumPoolSize to an
61 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
62 * allow the pool to accommodate an arbitrary number of concurrent
63 * tasks. Most typically, core and maximum pool sizes are set only
64 * upon construction, but they may also be changed dynamically using
65 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
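 *
 * <p>As an illustrative sketch (the sizes and queue capacity are
 * arbitrary, not recommendations), a pool may be constructed with
 * distinct core and maximum sizes and resized later:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2, 4, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue<Runnable>(100));
 * // ... later, grow the pool dynamically (raise the maximum first,
 * // since the core size may never exceed the maximum)
 * pool.setMaximumPoolSize(8);
 * pool.setCorePoolSize(4);}</pre>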
66 *
67 * <dt>On-demand construction</dt>
68 *
69 * <dd>By default, even core threads are initially created and
70 * started only when new tasks arrive, but this can be overridden
71 * dynamically using method {@link #prestartCoreThread} or {@link
72 * #prestartAllCoreThreads}. You probably want to prestart threads if
73 * you construct the pool with a non-empty queue. </dd>
74 *
75 * <dt>Creating new threads</dt>
76 *
77 * <dd>New threads are created using a {@link ThreadFactory}. If not
78 * otherwise specified, a {@link Executors#defaultThreadFactory} is
 79 * used, which creates threads that are all in the same {@link
 80 * ThreadGroup} and have the same {@code NORM_PRIORITY} priority and
81 * non-daemon status. By supplying a different ThreadFactory, you can
82 * alter the thread's name, thread group, priority, daemon status,
83 * etc. If a {@code ThreadFactory} fails to create a thread when asked
84 * by returning null from {@code newThread}, the executor will
85 * continue, but might not be able to execute any tasks. Threads
86 * should possess the "modifyThread" {@code RuntimePermission}. If
87 * worker threads or other threads using the pool do not possess this
88 * permission, service may be degraded: configuration changes may not
89 * take effect in a timely manner, and a shutdown pool may remain in a
90 * state in which termination is possible but not completed.</dd>
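 *
 * <p>A minimal sketch of a custom factory producing daemon threads
 * (the name prefix is purely illustrative):
 *
 * <pre> {@code
 * ThreadFactory daemonFactory = runnable -> {
 *     Thread t = Executors.defaultThreadFactory().newThread(runnable);
 *     t.setDaemon(true);
 *     t.setName("pool-worker-" + t.getName());
 *     return t;
 * };}</pre>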
91 *
92 * <dt>Keep-alive times</dt>
93 *
94 * <dd>If the pool currently has more than corePoolSize threads,
95 * excess threads will be terminated if they have been idle for more
96 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
97 * This provides a means of reducing resource consumption when the
98 * pool is not being actively used. If the pool becomes more active
99 * later, new threads will be constructed. This parameter can also be
100 * changed dynamically using method {@link #setKeepAliveTime(long,
101 * TimeUnit)}. Using a value of {@code Long.MAX_VALUE} {@link
102 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever
 103 * terminating prior to shutdown. By default, the keep-alive policy
104 * applies only when there are more than corePoolSize threads, but
105 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to
106 * apply this time-out policy to core threads as well, so long as the
107 * keepAliveTime value is non-zero. </dd>
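 *
 * <p>For example (a sketch only; {@code pool} stands for any
 * {@code ThreadPoolExecutor} and the timeout value is arbitrary), the
 * keep-alive policy can be adjusted on a live pool:
 *
 * <pre> {@code
 * pool.setKeepAliveTime(30L, TimeUnit.SECONDS);
 * pool.allowCoreThreadTimeOut(true); // idle core threads may now exit}</pre>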
108 *
109 * <dt>Queuing</dt>
110 *
111 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
112 * submitted tasks. The use of this queue interacts with pool sizing:
113 *
114 * <ul>
115 *
116 * <li>If fewer than corePoolSize threads are running, the Executor
117 * always prefers adding a new thread
118 * rather than queuing.
119 *
120 * <li>If corePoolSize or more threads are running, the Executor
121 * always prefers queuing a request rather than adding a new
122 * thread.
123 *
124 * <li>If a request cannot be queued, a new thread is created unless
125 * this would exceed maximumPoolSize, in which case, the task will be
126 * rejected.
127 *
128 * </ul>
129 *
130 * There are three general strategies for queuing:
131 * <ol>
132 *
133 * <li><em> Direct handoffs.</em> A good default choice for a work
134 * queue is a {@link SynchronousQueue} that hands off tasks to threads
135 * without otherwise holding them. Here, an attempt to queue a task
136 * will fail if no threads are immediately available to run it, so a
137 * new thread will be constructed. This policy avoids lockups when
138 * handling sets of requests that might have internal dependencies.
139 * Direct handoffs generally require unbounded maximumPoolSizes to
140 * avoid rejection of new submitted tasks. This in turn admits the
141 * possibility of unbounded thread growth when commands continue to
142 * arrive on average faster than they can be processed.
143 *
144 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
145 * example a {@link LinkedBlockingQueue} without a predefined
146 * capacity) will cause new tasks to wait in the queue when all
147 * corePoolSize threads are busy. Thus, no more than corePoolSize
148 * threads will ever be created. (And the value of the maximumPoolSize
149 * therefore doesn't have any effect.) This may be appropriate when
150 * each task is completely independent of others, so tasks cannot
 151 * affect each other's execution; for example, in a web page server.
152 * While this style of queuing can be useful in smoothing out
153 * transient bursts of requests, it admits the possibility of
154 * unbounded work queue growth when commands continue to arrive on
155 * average faster than they can be processed.
156 *
157 * <li><em>Bounded queues.</em> A bounded queue (for example, an
158 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
159 * used with finite maximumPoolSizes, but can be more difficult to
160 * tune and control. Queue sizes and maximum pool sizes may be traded
161 * off for each other: Using large queues and small pools minimizes
162 * CPU usage, OS resources, and context-switching overhead, but can
163 * lead to artificially low throughput. If tasks frequently block (for
164 * example if they are I/O bound), a system may be able to schedule
165 * time for more threads than you otherwise allow. Use of small queues
166 * generally requires larger pool sizes, which keeps CPUs busier but
167 * may encounter unacceptable scheduling overhead, which also
168 * decreases throughput.
169 *
170 * </ol>
171 *
172 * </dd>
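 *
 * <p>A sketch of the three strategies above (pool sizes and queue
 * capacities are illustrative only):
 *
 * <pre> {@code
 * // 1. Direct handoff: effectively unbounded maximum pool size
 * new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
 *                        new SynchronousQueue<Runnable>());
 *
 * // 2. Unbounded queue: never more than corePoolSize threads
 * new ThreadPoolExecutor(4, 4, 0L, TimeUnit.MILLISECONDS,
 *                        new LinkedBlockingQueue<Runnable>());
 *
 * // 3. Bounded queue: bounded threads and bounded queue
 * new ThreadPoolExecutor(4, 8, 60L, TimeUnit.SECONDS,
 *                        new ArrayBlockingQueue<Runnable>(256));}</pre>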
173 *
174 * <dt>Rejected tasks</dt>
175 *
176 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be
177 * <em>rejected</em> when the Executor has been shut down, and also when
178 * the Executor uses finite bounds for both maximum threads and work queue
179 * capacity, and is saturated. In either case, the {@code execute} method
180 * invokes the {@link
181 * RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
182 * method of its {@link RejectedExecutionHandler}. Four predefined handler
183 * policies are provided:
184 *
185 * <ol>
186 *
187 * <li>In the default {@link ThreadPoolExecutor.AbortPolicy}, the handler
188 * throws a runtime {@link RejectedExecutionException} upon rejection.
189 *
190 * <li>In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
191 * that invokes {@code execute} itself runs the task. This provides a
192 * simple feedback control mechanism that will slow down the rate that
193 * new tasks are submitted.
194 *
195 * <li>In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
196 * cannot be executed is simply dropped.
197 *
198 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
199 * executor is not shut down, the task at the head of the work queue
200 * is dropped, and then execution is retried (which can fail again,
 201 * causing this to be repeated).
202 *
203 * </ol>
204 *
205 * It is possible to define and use other kinds of {@link
206 * RejectedExecutionHandler} classes. Doing so requires some care
207 * especially when policies are designed to work only under particular
208 * capacity or queuing policies. </dd>
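 *
 * <p>For example (a sketch; {@code pool} is any {@code ThreadPoolExecutor}
 * and the {@code System.err} call merely stands in for an
 * application-specific reaction), a predefined or custom handler may be
 * installed after construction:
 *
 * <pre> {@code
 * pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
 * // or a custom handler:
 * pool.setRejectedExecutionHandler((r, executor) -> {
 *     if (!executor.isShutdown())
 *         System.err.println("Rejected task: " + r);
 * });}</pre>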
209 *
210 * <dt>Hook methods</dt>
211 *
212 * <dd>This class provides {@code protected} overridable
213 * {@link #beforeExecute(Thread, Runnable)} and
214 * {@link #afterExecute(Runnable, Throwable)} methods that are called
215 * before and after execution of each task. These can be used to
216 * manipulate the execution environment; for example, reinitializing
217 * ThreadLocals, gathering statistics, or adding log entries.
218 * Additionally, method {@link #terminated} can be overridden to perform
219 * any special processing that needs to be done once the Executor has
220 * fully terminated.
221 *
222 * <p>If hook, callback, or BlockingQueue methods throw exceptions,
223 * internal worker threads may in turn fail, abruptly terminate, and
224 * possibly be replaced.</dd>
225 *
226 * <dt>Queue maintenance</dt>
227 *
228 * <dd>Method {@link #getQueue()} allows access to the work queue
229 * for purposes of monitoring and debugging. Use of this method for
230 * any other purpose is strongly discouraged. Two supplied methods,
231 * {@link #remove(Runnable)} and {@link #purge} are available to
232 * assist in storage reclamation when large numbers of queued tasks
233 * become cancelled.</dd>
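 *
 * <p>A brief sketch (illustrative only; {@code pool} and {@code task}
 * are placeholders) of reclaiming storage after cancelling a queued
 * task:
 *
 * <pre> {@code
 * Future<?> f = pool.submit(task);
 * f.cancel(false); // the cancelled task may still occupy the queue
 * pool.purge();    // removes cancelled Futures from the work queue}</pre>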
234 *
235 * <dt>Reclamation</dt>
236 *
237 * <dd>A pool that is no longer referenced in a program <em>AND</em>
238 * has no remaining threads may be reclaimed (garbage collected)
 239 * without being explicitly shut down. You can configure a pool to
240 * allow all unused threads to eventually die by setting appropriate
241 * keep-alive times, using a lower bound of zero core threads and/or
242 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
243 *
244 * </dl>
245 *
246 * <p><b>Extension example</b>. Most extensions of this class
247 * override one or more of the protected hook methods. For example,
248 * here is a subclass that adds a simple pause/resume feature:
249 *
250 * <pre> {@code
251 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
252 * private boolean isPaused;
253 * private ReentrantLock pauseLock = new ReentrantLock();
254 * private Condition unpaused = pauseLock.newCondition();
255 *
256 * public PausableThreadPoolExecutor(...) { super(...); }
257 *
258 * protected void beforeExecute(Thread t, Runnable r) {
259 * super.beforeExecute(t, r);
260 * pauseLock.lock();
261 * try {
262 * while (isPaused) unpaused.await();
263 * } catch (InterruptedException ie) {
264 * t.interrupt();
265 * } finally {
266 * pauseLock.unlock();
267 * }
268 * }
269 *
270 * public void pause() {
271 * pauseLock.lock();
272 * try {
273 * isPaused = true;
274 * } finally {
275 * pauseLock.unlock();
276 * }
277 * }
278 *
279 * public void resume() {
280 * pauseLock.lock();
281 * try {
282 * isPaused = false;
283 * unpaused.signalAll();
284 * } finally {
285 * pauseLock.unlock();
286 * }
287 * }
288 * }}</pre>
289 *
290 * @since 1.5
291 * @author Doug Lea
292 */
293 public class ThreadPoolExecutor extends AbstractExecutorService {
294 /**
295 * The main pool control state, ctl, is an atomic integer packing
296 * two conceptual fields
297 * workerCount, indicating the effective number of threads
298 * runState, indicating whether running, shutting down etc
299 *
300 * In order to pack them into one int, we limit workerCount to
301 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
302 * billion) otherwise representable. If this is ever an issue in
303 * the future, the variable can be changed to be an AtomicLong,
304 * and the shift/mask constants below adjusted. But until the need
305 * arises, this code is a bit faster and simpler using an int.
306 *
307 * The workerCount is the number of workers that have been
308 * permitted to start and not permitted to stop. The value may be
309 * transiently different from the actual number of live threads,
310 * for example when a ThreadFactory fails to create a thread when
311 * asked, and when exiting threads are still performing
312 * bookkeeping before terminating. The user-visible pool size is
313 * reported as the current size of the workers set.
314 *
315 * The runState provides the main lifecycle control, taking on values:
316 *
317 * RUNNING: Accept new tasks and process queued tasks
318 * SHUTDOWN: Don't accept new tasks, but process queued tasks
319 * STOP: Don't accept new tasks, don't process queued tasks,
320 * and interrupt in-progress tasks
321 * TIDYING: All tasks have terminated, workerCount is zero,
322 * the thread transitioning to state TIDYING
323 * will run the terminated() hook method
324 * TERMINATED: terminated() has completed
325 *
326 * The numerical order among these values matters, to allow
327 * ordered comparisons. The runState monotonically increases over
328 * time, but need not hit each state. The transitions are:
329 *
330 * RUNNING -> SHUTDOWN
331 * On invocation of shutdown()
332 * (RUNNING or SHUTDOWN) -> STOP
333 * On invocation of shutdownNow()
334 * SHUTDOWN -> TIDYING
335 * When both queue and pool are empty
336 * STOP -> TIDYING
337 * When pool is empty
338 * TIDYING -> TERMINATED
339 * When the terminated() hook method has completed
340 *
341 * Threads waiting in awaitTermination() will return when the
342 * state reaches TERMINATED.
343 *
344 * Detecting the transition from SHUTDOWN to TIDYING is less
345 * straightforward than you'd like because the queue may become
346 * empty after non-empty and vice versa during SHUTDOWN state, but
347 * we can only terminate if, after seeing that it is empty, we see
348 * that workerCount is 0 (which sometimes entails a recheck -- see
349 * below).
350 */
351 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
352 private static final int COUNT_BITS = Integer.SIZE - 3;
353 private static final int COUNT_MASK = (1 << COUNT_BITS) - 1;
354
355 // runState is stored in the high-order bits
356 private static final int RUNNING = -1 << COUNT_BITS;
357 private static final int SHUTDOWN = 0 << COUNT_BITS;
358 private static final int STOP = 1 << COUNT_BITS;
359 private static final int TIDYING = 2 << COUNT_BITS;
360 private static final int TERMINATED = 3 << COUNT_BITS;
361
362 // Packing and unpacking ctl
363 private static int runStateOf(int c) { return c & ~COUNT_MASK; }
364 private static int workerCountOf(int c) { return c & COUNT_MASK; }
365 private static int ctlOf(int rs, int wc) { return rs | wc; }
366
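    // A worked example of the packing (values are illustrative): with
    // COUNT_BITS == 29, ctlOf(RUNNING, 3) == (-1 << 29) | 3 == 0xe0000003;
    // then workerCountOf(0xe0000003) == 3 and
    // runStateOf(0xe0000003) == 0xe0000000 == RUNNING,
    // since COUNT_MASK == (1 << 29) - 1 == 0x1fffffff.
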
367 /*
368 * Bit field accessors that don't require unpacking ctl.
369 * These depend on the bit layout and on workerCount being never negative.
370 */
371
372 private static boolean runStateLessThan(int c, int s) {
373 return c < s;
374 }
375
376 private static boolean runStateAtLeast(int c, int s) {
377 return c >= s;
378 }
379
380 private static boolean isRunning(int c) {
381 return c < SHUTDOWN;
382 }
383
384 /**
385 * Attempts to CAS-increment the workerCount field of ctl.
386 */
387 private boolean compareAndIncrementWorkerCount(int expect) {
388 return ctl.compareAndSet(expect, expect + 1);
389 }
390
391 /**
392 * Attempts to CAS-decrement the workerCount field of ctl.
393 */
394 private boolean compareAndDecrementWorkerCount(int expect) {
395 return ctl.compareAndSet(expect, expect - 1);
396 }
397
398 /**
399 * Decrements the workerCount field of ctl. This is called only on
400 * abrupt termination of a thread (see processWorkerExit). Other
401 * decrements are performed within getTask.
402 */
403 private void decrementWorkerCount() {
404 ctl.addAndGet(-1);
405 }
406
407 /**
408 * The queue used for holding tasks and handing off to worker
409 * threads. We do not require that workQueue.poll() returning
 410 * null necessarily means that workQueue.isEmpty(), so we rely
411 * solely on isEmpty to see if the queue is empty (which we must
412 * do for example when deciding whether to transition from
413 * SHUTDOWN to TIDYING). This accommodates special-purpose
414 * queues such as DelayQueues for which poll() is allowed to
415 * return null even if it may later return non-null when delays
416 * expire.
417 */
418 private final BlockingQueue<Runnable> workQueue;
419
420 /**
421 * Lock held on access to workers set and related bookkeeping.
422 * While we could use a concurrent set of some sort, it turns out
423 * to be generally preferable to use a lock. Among the reasons is
424 * that this serializes interruptIdleWorkers, which avoids
425 * unnecessary interrupt storms, especially during shutdown.
426 * Otherwise exiting threads would concurrently interrupt those
427 * that have not yet interrupted. It also simplifies some of the
428 * associated statistics bookkeeping of largestPoolSize etc. We
429 * also hold mainLock on shutdown and shutdownNow, for the sake of
430 * ensuring workers set is stable while separately checking
431 * permission to interrupt and actually interrupting.
432 */
433 private final ReentrantLock mainLock = new ReentrantLock();
434
435 /**
436 * Set containing all worker threads in pool. Accessed only when
437 * holding mainLock.
438 */
439 private final HashSet<Worker> workers = new HashSet<>();
440
441 /**
442 * Wait condition to support awaitTermination.
443 */
444 private final Condition termination = mainLock.newCondition();
445
446 /**
447 * Tracks largest attained pool size. Accessed only under
448 * mainLock.
449 */
450 private int largestPoolSize;
451
452 /**
453 * Counter for completed tasks. Updated only on termination of
454 * worker threads. Accessed only under mainLock.
455 */
456 private long completedTaskCount;
457
458 /*
459 * All user control parameters are declared as volatiles so that
460 * ongoing actions are based on freshest values, but without need
461 * for locking, since no internal invariants depend on them
462 * changing synchronously with respect to other actions.
463 */
464
465 /**
466 * Factory for new threads. All threads are created using this
467 * factory (via method addWorker). All callers must be prepared
468 * for addWorker to fail, which may reflect a system or user's
469 * policy limiting the number of threads. Even though it is not
470 * treated as an error, failure to create threads may result in
471 * new tasks being rejected or existing ones remaining stuck in
472 * the queue.
473 *
474 * We go further and preserve pool invariants even in the face of
475 * errors such as OutOfMemoryError, that might be thrown while
476 * trying to create threads. Such errors are rather common due to
477 * the need to allocate a native stack in Thread.start, and users
478 * will want to perform clean pool shutdown to clean up. There
479 * will likely be enough memory available for the cleanup code to
480 * complete without encountering yet another OutOfMemoryError.
481 */
482 private volatile ThreadFactory threadFactory;
483
484 /**
485 * Handler called when saturated or shutdown in execute.
486 */
487 private volatile RejectedExecutionHandler handler;
488
489 /**
490 * Timeout in nanoseconds for idle threads waiting for work.
491 * Threads use this timeout when there are more than corePoolSize
492 * present or if allowCoreThreadTimeOut. Otherwise they wait
493 * forever for new work.
494 */
495 private volatile long keepAliveTime;
496
497 /**
498 * If false (default), core threads stay alive even when idle.
499 * If true, core threads use keepAliveTime to time out waiting
500 * for work.
501 */
502 private volatile boolean allowCoreThreadTimeOut;
503
504 /**
505 * Core pool size is the minimum number of workers to keep alive
506 * (and not allow to time out etc) unless allowCoreThreadTimeOut
507 * is set, in which case the minimum is zero.
508 *
509 * Since the worker count is actually stored in COUNT_BITS bits,
510 * the effective limit is {@code corePoolSize & COUNT_MASK}.
511 */
512 private volatile int corePoolSize;
513
514 /**
515 * Maximum pool size.
516 *
517 * Since the worker count is actually stored in COUNT_BITS bits,
518 * the effective limit is {@code maximumPoolSize & COUNT_MASK}.
519 */
520 private volatile int maximumPoolSize;
521
522 /**
523 * The default rejected execution handler.
524 */
525 private static final RejectedExecutionHandler defaultHandler =
526 new AbortPolicy();
527
528 /**
529 * Permission required for callers of shutdown and shutdownNow.
530 * We additionally require (see checkShutdownAccess) that callers
531 * have permission to actually interrupt threads in the worker set
532 * (as governed by Thread.interrupt, which relies on
533 * ThreadGroup.checkAccess, which in turn relies on
534 * SecurityManager.checkAccess). Shutdowns are attempted only if
535 * these checks pass.
536 *
537 * All actual invocations of Thread.interrupt (see
538 * interruptIdleWorkers and interruptWorkers) ignore
539 * SecurityExceptions, meaning that the attempted interrupts
540 * silently fail. In the case of shutdown, they should not fail
541 * unless the SecurityManager has inconsistent policies, sometimes
542 * allowing access to a thread and sometimes not. In such cases,
543 * failure to actually interrupt threads may disable or delay full
544 * termination. Other uses of interruptIdleWorkers are advisory,
545 * and failure to actually interrupt will merely delay response to
546 * configuration changes so is not handled exceptionally.
547 */
548 private static final RuntimePermission shutdownPerm =
549 new RuntimePermission("modifyThread");
550
551 /**
552 * Class Worker mainly maintains interrupt control state for
553 * threads running tasks, along with other minor bookkeeping.
554 * This class opportunistically extends AbstractQueuedSynchronizer
555 * to simplify acquiring and releasing a lock surrounding each
556 * task execution. This protects against interrupts that are
557 * intended to wake up a worker thread waiting for a task from
558 * instead interrupting a task being run. We implement a simple
559 * non-reentrant mutual exclusion lock rather than use
560 * ReentrantLock because we do not want worker tasks to be able to
561 * reacquire the lock when they invoke pool control methods like
562 * setCorePoolSize. Additionally, to suppress interrupts until
563 * the thread actually starts running tasks, we initialize lock
564 * state to a negative value, and clear it upon start (in
565 * runWorker).
566 */
567 private final class Worker
568 extends AbstractQueuedSynchronizer
569 implements Runnable
570 {
571 /**
572 * This class will never be serialized, but we provide a
573 * serialVersionUID to suppress a javac warning.
574 */
575 private static final long serialVersionUID = 6138294804551838833L;
576
577 /** Thread this worker is running in. Null if factory fails. */
578 @SuppressWarnings("serial") // Unlikely to be serializable
579 final Thread thread;
580 /** Initial task to run. Possibly null. */
581 @SuppressWarnings("serial") // Not statically typed as Serializable
582 Runnable firstTask;
583 /** Per-thread task counter */
584 volatile long completedTasks;
585
586 // TODO: switch to AbstractQueuedLongSynchronizer and move
587 // completedTasks into the lock word.
588
589 /**
590 * Creates with given first task and thread from ThreadFactory.
591 * @param firstTask the first task (null if none)
592 */
593 Worker(Runnable firstTask) {
594 setState(-1); // inhibit interrupts until runWorker
595 this.firstTask = firstTask;
596 this.thread = getThreadFactory().newThread(this);
597 }
598
599 /** Delegates main run loop to outer runWorker. */
600 public void run() {
601 runWorker(this);
602 }
603
604 // Lock methods
605 //
606 // The value 0 represents the unlocked state.
607 // The value 1 represents the locked state.
608
609 protected boolean isHeldExclusively() {
610 return getState() != 0;
611 }
612
613 protected boolean tryAcquire(int unused) {
614 if (compareAndSetState(0, 1)) {
615 setExclusiveOwnerThread(Thread.currentThread());
616 return true;
617 }
618 return false;
619 }
620
621 protected boolean tryRelease(int unused) {
622 setExclusiveOwnerThread(null);
623 setState(0);
624 return true;
625 }
626
627 public void lock() { acquire(1); }
628 public boolean tryLock() { return tryAcquire(1); }
629 public void unlock() { release(1); }
630 public boolean isLocked() { return isHeldExclusively(); }
631
632 void interruptIfStarted() {
633 Thread t;
634 if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
635 try {
636 t.interrupt();
637 } catch (SecurityException ignore) {
638 }
639 }
640 }
641 }
642
643 /*
644 * Methods for setting control state
645 */
646
647 /**
648 * Transitions runState to given target, or leaves it alone if
649 * already at least the given target.
650 *
651 * @param targetState the desired state, either SHUTDOWN or STOP
652 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
653 */
654 private void advanceRunState(int targetState) {
655 // assert targetState == SHUTDOWN || targetState == STOP;
656 for (;;) {
657 int c = ctl.get();
658 if (runStateAtLeast(c, targetState) ||
659 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
660 break;
661 }
662 }
663
664 /**
665 * Transitions to TERMINATED state if either (SHUTDOWN and pool
666 * and queue empty) or (STOP and pool empty). If otherwise
667 * eligible to terminate but workerCount is nonzero, interrupts an
668 * idle worker to ensure that shutdown signals propagate. This
669 * method must be called following any action that might make
670 * termination possible -- reducing worker count or removing tasks
671 * from the queue during shutdown. The method is non-private to
672 * allow access from ScheduledThreadPoolExecutor.
673 */
674 final void tryTerminate() {
675 for (;;) {
676 int c = ctl.get();
677 if (isRunning(c) ||
678 runStateAtLeast(c, TIDYING) ||
679 (runStateLessThan(c, STOP) && ! workQueue.isEmpty()))
680 return;
681 if (workerCountOf(c) != 0) { // Eligible to terminate
682 interruptIdleWorkers(ONLY_ONE);
683 return;
684 }
685
686 final ReentrantLock mainLock = this.mainLock;
687 mainLock.lock();
688 try {
689 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
690 try {
691 terminated();
692 } finally {
693 ctl.set(ctlOf(TERMINATED, 0));
694 termination.signalAll();
695 }
696 return;
697 }
698 } finally {
699 mainLock.unlock();
700 }
701 // else retry on failed CAS
702 }
703 }
704
705 /*
706 * Methods for controlling interrupts to worker threads.
707 */
708
709 /**
710 * If there is a security manager, makes sure caller has
711 * permission to shut down threads in general (see shutdownPerm).
712 * If this passes, additionally makes sure the caller is allowed
713 * to interrupt each worker thread. This might not be true even if
714 * first check passed, if the SecurityManager treats some threads
715 * specially.
716 */
717 private void checkShutdownAccess() {
718 // assert mainLock.isHeldByCurrentThread();
719 SecurityManager security = System.getSecurityManager();
720 if (security != null) {
721 security.checkPermission(shutdownPerm);
722 for (Worker w : workers)
723 security.checkAccess(w.thread);
724 }
725 }
726
727 /**
728 * Interrupts all threads, even if active. Ignores SecurityExceptions
729 * (in which case some threads may remain uninterrupted).
730 */
731 private void interruptWorkers() {
732 // assert mainLock.isHeldByCurrentThread();
733 for (Worker w : workers)
734 w.interruptIfStarted();
735 }
736
737 /**
738 * Interrupts threads that might be waiting for tasks (as
739 * indicated by not being locked) so they can check for
740 * termination or configuration changes. Ignores
741 * SecurityExceptions (in which case some threads may remain
742 * uninterrupted).
743 *
744 * @param onlyOne If true, interrupt at most one worker. This is
745 * called only from tryTerminate when termination is otherwise
746 * enabled but there are still other workers. In this case, at
747 * most one waiting worker is interrupted to propagate shutdown
748 * signals in case all threads are currently waiting.
749 * Interrupting any arbitrary thread ensures that newly arriving
750 * workers since shutdown began will also eventually exit.
751 * To guarantee eventual termination, it suffices to always
752 * interrupt only one idle worker, but shutdown() interrupts all
753 * idle workers so that redundant workers exit promptly, not
754 * waiting for a straggler task to finish.
755 */
756 private void interruptIdleWorkers(boolean onlyOne) {
757 final ReentrantLock mainLock = this.mainLock;
758 mainLock.lock();
759 try {
760 for (Worker w : workers) {
761 Thread t = w.thread;
762 if (!t.isInterrupted() && w.tryLock()) {
763 try {
764 t.interrupt();
765 } catch (SecurityException ignore) {
766 } finally {
767 w.unlock();
768 }
769 }
770 if (onlyOne)
771 break;
772 }
773 } finally {
774 mainLock.unlock();
775 }
776 }
777
778 /**
779 * Common form of interruptIdleWorkers, to avoid having to
780 * remember what the boolean argument means.
781 */
782 private void interruptIdleWorkers() {
783 interruptIdleWorkers(false);
784 }
785
786 private static final boolean ONLY_ONE = true;
787
788 /*
789 * Misc utilities, most of which are also exported to
790 * ScheduledThreadPoolExecutor
791 */
792
793 /**
794 * Invokes the rejected execution handler for the given command.
795 * Package-protected for use by ScheduledThreadPoolExecutor.
796 */
797 final void reject(Runnable command) {
798 handler.rejectedExecution(command, this);
799 }
800
801 /**
802 * Performs any further cleanup following run state transition on
803 * invocation of shutdown. A no-op here, but used by
804 * ScheduledThreadPoolExecutor to cancel delayed tasks.
805 */
806 void onShutdown() {
807 }
808
809 /**
810 * Drains the task queue into a new list, normally using
811 * drainTo. But if the queue is a DelayQueue or any other kind of
812 * queue for which poll or drainTo may fail to remove some
813 * elements, it deletes them one by one.
814 */
815 private List<Runnable> drainQueue() {
816 BlockingQueue<Runnable> q = workQueue;
817 ArrayList<Runnable> taskList = new ArrayList<>();
818 q.drainTo(taskList);
819 if (!q.isEmpty()) {
820 for (Runnable r : q.toArray(new Runnable[0])) {
821 if (q.remove(r))
822 taskList.add(r);
823 }
824 }
825 return taskList;
826 }
827
828 /*
829 * Methods for creating, running and cleaning up after workers
830 */
831
832 /**
833 * Checks if a new worker can be added with respect to current
834 * pool state and the given bound (either core or maximum). If so,
835 * the worker count is adjusted accordingly, and, if possible, a
836 * new worker is created and started, running firstTask as its
837 * first task. This method returns false if the pool is stopped or
838 * eligible to shut down. It also returns false if the thread
839 * factory fails to create a thread when asked. If the thread
840 * creation fails, either due to the thread factory returning
841 * null, or due to an exception (typically OutOfMemoryError in
842 * Thread.start()), we roll back cleanly.
843 *
844 * @param firstTask the task the new thread should run first (or
845 * null if none). Workers are created with an initial first task
846 * (in method execute()) to bypass queuing when there are fewer
847 * than corePoolSize threads (in which case we always start one),
848 * or when the queue is full (in which case we must bypass queue).
849 * Initially idle threads are usually created via
850 * prestartCoreThread or to replace other dying workers.
851 *
852 * @param core if true use corePoolSize as bound, else
853 * maximumPoolSize. (A boolean indicator is used here rather than a
854 * value to ensure reads of fresh values after checking other pool
855 * state).
856 * @return true if successful
857 */
858 private boolean addWorker(Runnable firstTask, boolean core) {
859 retry:
860 for (int c = ctl.get();;) {
861 // Check if queue empty only if necessary.
862 if (runStateAtLeast(c, SHUTDOWN)
863 && (runStateAtLeast(c, STOP)
864 || firstTask != null
865 || workQueue.isEmpty()))
866 return false;
867
868 for (;;) {
869 if (workerCountOf(c)
870 >= ((core ? corePoolSize : maximumPoolSize) & COUNT_MASK))
871 return false;
872 if (compareAndIncrementWorkerCount(c))
873 break retry;
874 c = ctl.get(); // Re-read ctl
875 if (runStateAtLeast(c, SHUTDOWN))
876 continue retry;
877 // else CAS failed due to workerCount change; retry inner loop
878 }
879 }
880
881 boolean workerStarted = false;
882 boolean workerAdded = false;
883 Worker w = null;
884 try {
885 w = new Worker(firstTask);
886 final Thread t = w.thread;
887 if (t != null) {
888 final ReentrantLock mainLock = this.mainLock;
889 mainLock.lock();
890 try {
891 // Recheck while holding lock.
892 // Back out on ThreadFactory failure or if
893 // shut down before lock acquired.
894 int c = ctl.get();
895
896 if (isRunning(c) ||
897 (runStateLessThan(c, STOP) && firstTask == null)) {
898 if (t.getState() != Thread.State.NEW)
899 throw new IllegalThreadStateException();
900 workers.add(w);
901 workerAdded = true;
902 int s = workers.size();
903 if (s > largestPoolSize)
904 largestPoolSize = s;
905 }
906 } finally {
907 mainLock.unlock();
908 }
909 if (workerAdded) {
910 t.start();
911 workerStarted = true;
912 }
913 }
914 } finally {
915 if (! workerStarted)
916 addWorkerFailed(w);
917 }
918 return workerStarted;
919 }
920
921 /**
922 * Rolls back the worker thread creation.
923 * - removes worker from workers, if present
924 * - decrements worker count
925 * - rechecks for termination, in case the existence of this
926 * worker was holding up termination
927 */
928 private void addWorkerFailed(Worker w) {
929 final ReentrantLock mainLock = this.mainLock;
930 mainLock.lock();
931 try {
932 if (w != null)
933 workers.remove(w);
934 decrementWorkerCount();
935 tryTerminate();
936 } finally {
937 mainLock.unlock();
938 }
939 }
940
941 /**
942 * Performs cleanup and bookkeeping for a dying worker. Called
943 * only from worker threads. Unless completedAbruptly is set,
944 * assumes that workerCount has already been adjusted to account
945 * for exit. This method removes thread from worker set, and
946 * possibly terminates the pool or replaces the worker if either
947 * it exited due to user task exception or if fewer than
948 * corePoolSize workers are running or queue is non-empty but
949 * there are no workers.
950 *
951 * @param w the worker
952 * @param completedAbruptly if the worker died due to user exception
953 */
954 private void processWorkerExit(Worker w, boolean completedAbruptly) {
955 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
956 decrementWorkerCount();
957
958 final ReentrantLock mainLock = this.mainLock;
959 mainLock.lock();
960 try {
961 completedTaskCount += w.completedTasks;
962 workers.remove(w);
963 } finally {
964 mainLock.unlock();
965 }
966
967 tryTerminate();
968
969 int c = ctl.get();
970 if (runStateLessThan(c, STOP)) {
971 if (!completedAbruptly) {
972 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
973 if (min == 0 && ! workQueue.isEmpty())
974 min = 1;
975 if (workerCountOf(c) >= min)
976 return; // replacement not needed
977 }
978 addWorker(null, false);
979 }
980 }
981
982 /**
983 * Performs blocking or timed wait for a task, depending on
984 * current configuration settings, or returns null if this worker
985 * must exit because of any of:
986 * 1. There are more than maximumPoolSize workers (due to
987 * a call to setMaximumPoolSize).
988 * 2. The pool is stopped.
989 * 3. The pool is shutdown and the queue is empty.
990 * 4. This worker timed out waiting for a task, and timed-out
991 * workers are subject to termination (that is,
992 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
993 * both before and after the timed wait, and if the queue is
994 * non-empty, this worker is not the last thread in the pool.
995 *
996 * @return task, or null if the worker must exit, in which case
997 * workerCount is decremented
998 */
999 private Runnable getTask() {
1000 boolean timedOut = false; // Did the last poll() time out?
1001
1002 for (;;) {
1003 int c = ctl.get();
1004
1005 // Check if queue empty only if necessary.
1006 if (runStateAtLeast(c, SHUTDOWN)
1007 && (runStateAtLeast(c, STOP) || workQueue.isEmpty())) {
1008 decrementWorkerCount();
1009 return null;
1010 }
1011
1012 int wc = workerCountOf(c);
1013
1014 // Are workers subject to culling?
1015 boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
1016
1017 if ((wc > maximumPoolSize || (timed && timedOut))
1018 && (wc > 1 || workQueue.isEmpty())) {
1019 if (compareAndDecrementWorkerCount(c))
1020 return null;
1021 continue;
1022 }
1023
1024 try {
1025 Runnable r = timed ?
1026 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
1027 workQueue.take();
1028 if (r != null)
1029 return r;
1030 timedOut = true;
1031 } catch (InterruptedException retry) {
1032 timedOut = false;
1033 }
1034 }
1035 }
1036
1037 /**
1038 * Main worker run loop. Repeatedly gets tasks from queue and
1039 * executes them, while coping with a number of issues:
1040 *
1041 * 1. We may start out with an initial task, in which case we
1042 * don't need to get the first one. Otherwise, as long as pool is
1043 * running, we get tasks from getTask. If it returns null then the
1044 * worker exits due to changed pool state or configuration
1045 * parameters. Other exits result from exception throws in
1046 * external code, in which case completedAbruptly holds, which
1047 * usually leads processWorkerExit to replace this thread.
1048 *
1049 * 2. Before running any task, the lock is acquired to prevent
1050 * other pool interrupts while the task is executing, and then we
1051 * ensure that unless pool is stopping, this thread does not have
1052 * its interrupt set.
1053 *
1054 * 3. Each task run is preceded by a call to beforeExecute, which
1055 * might throw an exception, in which case we cause thread to die
1056 * (breaking loop with completedAbruptly true) without processing
1057 * the task.
1058 *
1059 * 4. Assuming beforeExecute completes normally, we run the task,
1060 * gathering any of its thrown exceptions to send to afterExecute.
1061 * We separately handle RuntimeException, Error (both of which the
1062 * specs guarantee that we trap) and arbitrary Throwables.
1063 * Because we cannot rethrow Throwables within Runnable.run, we
1064 * wrap them within Errors on the way out (to the thread's
1065 * UncaughtExceptionHandler). Any thrown exception also
1066 * conservatively causes thread to die.
1067 *
1068 * 5. After task.run completes, we call afterExecute, which may
1069 * also throw an exception, which will also cause thread to
1070 * die. According to JLS Sec 14.20, this exception is the one that
1071 * will be in effect even if task.run throws.
1072 *
1073 * The net effect of the exception mechanics is that afterExecute
1074 * and the thread's UncaughtExceptionHandler have as accurate
1075 * information as we can provide about any problems encountered by
1076 * user code.
1077 *
1078 * @param w the worker
1079 */
1080 final void runWorker(Worker w) {
1081 Thread wt = Thread.currentThread();
1082 Runnable task = w.firstTask;
1083 w.firstTask = null;
1084 w.unlock(); // allow interrupts
1085 boolean completedAbruptly = true;
1086 try {
1087 while (task != null || (task = getTask()) != null) {
1088 w.lock();
1089 // If pool is stopping, ensure thread is interrupted;
1090 // if not, ensure thread is not interrupted. This
1091 // requires a recheck in second case to deal with
1092 // shutdownNow race while clearing interrupt
1093 if ((runStateAtLeast(ctl.get(), STOP) ||
1094 (Thread.interrupted() &&
1095 runStateAtLeast(ctl.get(), STOP))) &&
1096 !wt.isInterrupted())
1097 wt.interrupt();
1098 try {
1099 beforeExecute(wt, task);
1100 try {
1101 task.run();
1102 afterExecute(task, null);
1103 } catch (Throwable ex) {
1104 afterExecute(task, ex);
1105 throw ex;
1106 }
1107 } finally {
1108 task = null;
1109 w.completedTasks++;
1110 w.unlock();
1111 }
1112 }
1113 completedAbruptly = false;
1114 } finally {
1115 processWorkerExit(w, completedAbruptly);
1116 }
1117 }
1118
1119 // Public constructors and methods
1120
1121 /**
1122 * Creates a new {@code ThreadPoolExecutor} with the given initial
1123 * parameters, the default thread factory and the default rejected
1124 * execution handler.
1125 *
1126 * <p>It may be more convenient to use one of the {@link Executors}
1127 * factory methods instead of this general purpose constructor.
1128 *
1129 * @param corePoolSize the number of threads to keep in the pool, even
1130 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1131 * @param maximumPoolSize the maximum number of threads to allow in the
1132 * pool
1133 * @param keepAliveTime when the number of threads is greater than
1134 * the core, this is the maximum time that excess idle threads
1135 * will wait for new tasks before terminating.
1136 * @param unit the time unit for the {@code keepAliveTime} argument
1137 * @param workQueue the queue to use for holding tasks before they are
1138 * executed. This queue will hold only the {@code Runnable}
1139 * tasks submitted by the {@code execute} method.
1140 * @throws IllegalArgumentException if one of the following holds:<br>
1141 * {@code corePoolSize < 0}<br>
1142 * {@code keepAliveTime < 0}<br>
1143 * {@code maximumPoolSize <= 0}<br>
1144 * {@code maximumPoolSize < corePoolSize}
1145 * @throws NullPointerException if {@code workQueue} is null
1146 */
1147 public ThreadPoolExecutor(int corePoolSize,
1148 int maximumPoolSize,
1149 long keepAliveTime,
1150 TimeUnit unit,
1151 BlockingQueue<Runnable> workQueue) {
1152 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1153 Executors.defaultThreadFactory(), defaultHandler);
1154 }
1155
1156 /**
1157 * Creates a new {@code ThreadPoolExecutor} with the given initial
1158 * parameters and {@linkplain ThreadPoolExecutor.AbortPolicy
1159 * default rejected execution handler}.
1160 *
1161 * @param corePoolSize the number of threads to keep in the pool, even
1162 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1163 * @param maximumPoolSize the maximum number of threads to allow in the
1164 * pool
1165 * @param keepAliveTime when the number of threads is greater than
1166 * the core, this is the maximum time that excess idle threads
1167 * will wait for new tasks before terminating.
1168 * @param unit the time unit for the {@code keepAliveTime} argument
1169 * @param workQueue the queue to use for holding tasks before they are
1170 * executed. This queue will hold only the {@code Runnable}
1171 * tasks submitted by the {@code execute} method.
1172 * @param threadFactory the factory to use when the executor
1173 * creates a new thread
1174 * @throws IllegalArgumentException if one of the following holds:<br>
1175 * {@code corePoolSize < 0}<br>
1176 * {@code keepAliveTime < 0}<br>
1177 * {@code maximumPoolSize <= 0}<br>
1178 * {@code maximumPoolSize < corePoolSize}
1179 * @throws NullPointerException if {@code workQueue}
1180 * or {@code threadFactory} is null
1181 */
1182 public ThreadPoolExecutor(int corePoolSize,
1183 int maximumPoolSize,
1184 long keepAliveTime,
1185 TimeUnit unit,
1186 BlockingQueue<Runnable> workQueue,
1187 ThreadFactory threadFactory) {
1188 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1189 threadFactory, defaultHandler);
1190 }
1191
1192 /**
1193 * Creates a new {@code ThreadPoolExecutor} with the given initial
1194 * parameters and
1195 * {@linkplain Executors#defaultThreadFactory default thread factory}.
1196 *
1197 * @param corePoolSize the number of threads to keep in the pool, even
1198 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1199 * @param maximumPoolSize the maximum number of threads to allow in the
1200 * pool
1201 * @param keepAliveTime when the number of threads is greater than
1202 * the core, this is the maximum time that excess idle threads
1203 * will wait for new tasks before terminating.
1204 * @param unit the time unit for the {@code keepAliveTime} argument
1205 * @param workQueue the queue to use for holding tasks before they are
1206 * executed. This queue will hold only the {@code Runnable}
1207 * tasks submitted by the {@code execute} method.
1208 * @param handler the handler to use when execution is blocked
1209 * because the thread bounds and queue capacities are reached
1210 * @throws IllegalArgumentException if one of the following holds:<br>
1211 * {@code corePoolSize < 0}<br>
1212 * {@code keepAliveTime < 0}<br>
1213 * {@code maximumPoolSize <= 0}<br>
1214 * {@code maximumPoolSize < corePoolSize}
1215 * @throws NullPointerException if {@code workQueue}
1216 * or {@code handler} is null
1217 */
1218 public ThreadPoolExecutor(int corePoolSize,
1219 int maximumPoolSize,
1220 long keepAliveTime,
1221 TimeUnit unit,
1222 BlockingQueue<Runnable> workQueue,
1223 RejectedExecutionHandler handler) {
1224 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1225 Executors.defaultThreadFactory(), handler);
1226 }
1227
1228 /**
1229 * Creates a new {@code ThreadPoolExecutor} with the given initial
1230 * parameters.
1231 *
1232 * @param corePoolSize the number of threads to keep in the pool, even
1233 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1234 * @param maximumPoolSize the maximum number of threads to allow in the
1235 * pool
1236 * @param keepAliveTime when the number of threads is greater than
1237 * the core, this is the maximum time that excess idle threads
1238 * will wait for new tasks before terminating.
1239 * @param unit the time unit for the {@code keepAliveTime} argument
1240 * @param workQueue the queue to use for holding tasks before they are
1241 * executed. This queue will hold only the {@code Runnable}
1242 * tasks submitted by the {@code execute} method.
1243 * @param threadFactory the factory to use when the executor
1244 * creates a new thread
1245 * @param handler the handler to use when execution is blocked
1246 * because the thread bounds and queue capacities are reached
1247 * @throws IllegalArgumentException if one of the following holds:<br>
1248 * {@code corePoolSize < 0}<br>
1249 * {@code keepAliveTime < 0}<br>
1250 * {@code maximumPoolSize <= 0}<br>
1251 * {@code maximumPoolSize < corePoolSize}
1252 * @throws NullPointerException if {@code workQueue}
1253 * or {@code threadFactory} or {@code handler} is null
1254 */
1255 public ThreadPoolExecutor(int corePoolSize,
1256 int maximumPoolSize,
1257 long keepAliveTime,
1258 TimeUnit unit,
1259 BlockingQueue<Runnable> workQueue,
1260 ThreadFactory threadFactory,
1261 RejectedExecutionHandler handler) {
1262 if (corePoolSize < 0 ||
1263 maximumPoolSize <= 0 ||
1264 maximumPoolSize < corePoolSize ||
1265 keepAliveTime < 0)
1266 throw new IllegalArgumentException();
1267 if (workQueue == null || threadFactory == null || handler == null)
1268 throw new NullPointerException();
1269 this.corePoolSize = corePoolSize;
1270 this.maximumPoolSize = maximumPoolSize;
1271 this.workQueue = workQueue;
1272 this.keepAliveTime = unit.toNanos(keepAliveTime);
1273 this.threadFactory = threadFactory;
1274 this.handler = handler;
1275 }
1276
1277 /**
1278 * Executes the given task sometime in the future. The task
1279 * may execute in a new thread or in an existing pooled thread.
1280 *
1281 * If the task cannot be submitted for execution, either because this
1282 * executor has been shutdown or because its capacity has been reached,
1283 * the task is handled by the current {@link RejectedExecutionHandler}.
1284 *
1285 * @param command the task to execute
1286 * @throws RejectedExecutionException at discretion of
1287 * {@code RejectedExecutionHandler}, if the task
1288 * cannot be accepted for execution
1289 * @throws NullPointerException if {@code command} is null
1290 */
1291 public void execute(Runnable command) {
1292 if (command == null)
1293 throw new NullPointerException();
1294 /*
1295 * Proceed in 3 steps:
1296 *
1297 * 1. If fewer than corePoolSize threads are running, try to
1298 * start a new thread with the given command as its first
1299 * task. The call to addWorker atomically checks runState and
1300 * workerCount, and so prevents false alarms that would add
1301 * threads when it shouldn't, by returning false.
1302 *
1303 * 2. If a task can be successfully queued, then we still need
1304 * to double-check whether we should have added a thread
1305 * (because existing ones died since last checking) or that
1306 * the pool shut down since entry into this method. So we
1307 * recheck state and if necessary roll back the enqueuing if
1308 * stopped, or start a new thread if there are none.
1309 *
1310 * 3. If we cannot queue task, then we try to add a new
1311 * thread. If it fails, we know we are shut down or saturated
1312 * and so reject the task.
1313 */
1314 int c = ctl.get();
1315 if (workerCountOf(c) < corePoolSize) {
1316 if (addWorker(command, true))
1317 return;
1318 c = ctl.get();
1319 }
1320 if (isRunning(c) && workQueue.offer(command)) {
1321 int recheck = ctl.get();
1322 if (! isRunning(recheck) && remove(command))
1323 reject(command);
1324 else if (workerCountOf(recheck) == 0)
1325 addWorker(null, false);
1326 }
1327 else if (!addWorker(command, false))
1328 reject(command);
1329 }
1330
1331 /**
1332 * Initiates an orderly shutdown in which previously submitted
1333 * tasks are executed, but no new tasks will be accepted.
1334 * Invocation has no additional effect if already shut down.
1335 *
1336 * <p>This method does not wait for previously submitted tasks to
1337 * complete execution. Use {@link #awaitTermination awaitTermination}
1338 * to do that.
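     *
     * <p>A common (illustrative) shutdown idiom, with interruption
     * handling elided:
     *
     * <pre> {@code
     * pool.shutdown();
     * if (!pool.awaitTermination(60L, TimeUnit.SECONDS))
     *     pool.shutdownNow();}</pre>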
1339 *
1340 * @throws SecurityException {@inheritDoc}
1341 */
1342 public void shutdown() {
1343 final ReentrantLock mainLock = this.mainLock;
1344 mainLock.lock();
1345 try {
1346 checkShutdownAccess();
1347 advanceRunState(SHUTDOWN);
1348 interruptIdleWorkers();
1349 onShutdown(); // hook for ScheduledThreadPoolExecutor
1350 } finally {
1351 mainLock.unlock();
1352 }
1353 tryTerminate();
1354 }
1355
1356 /**
1357 * Attempts to stop all actively executing tasks, halts the
1358 * processing of waiting tasks, and returns a list of the tasks
1359 * that were awaiting execution. These tasks are drained (removed)
1360 * from the task queue upon return from this method.
1361 *
1362 * <p>This method does not wait for actively executing tasks to
1363 * terminate. Use {@link #awaitTermination awaitTermination} to
1364 * do that.
1365 *
1366 * <p>There are no guarantees beyond best-effort attempts to stop
1367 * processing actively executing tasks. This implementation
1368 * interrupts tasks via {@link Thread#interrupt}; any task that
1369 * fails to respond to interrupts may never terminate.
1370 *
1371 * @throws SecurityException {@inheritDoc}
1372 */
1373 public List<Runnable> shutdownNow() {
1374 List<Runnable> tasks;
1375 final ReentrantLock mainLock = this.mainLock;
1376 mainLock.lock();
1377 try {
1378 checkShutdownAccess();
1379 advanceRunState(STOP);
1380 interruptWorkers();
1381 tasks = drainQueue();
1382 } finally {
1383 mainLock.unlock();
1384 }
1385 tryTerminate();
1386 return tasks;
1387 }
1388
1389 public boolean isShutdown() {
1390 return runStateAtLeast(ctl.get(), SHUTDOWN);
1391 }
1392
1393 /** Used by ScheduledThreadPoolExecutor. */
1394 boolean isStopped() {
1395 return runStateAtLeast(ctl.get(), STOP);
1396 }
1397
1398 /**
1399 * Returns true if this executor is in the process of terminating
1400 * after {@link #shutdown} or {@link #shutdownNow} but has not
1401 * completely terminated. This method may be useful for
1402 * debugging. A return of {@code true} reported a sufficient
1403 * period after shutdown may indicate that submitted tasks have
1404 * ignored or suppressed interruption, causing this executor not
1405 * to properly terminate.
1406 *
1407 * @return {@code true} if terminating but not yet terminated
1408 */
1409 public boolean isTerminating() {
1410 int c = ctl.get();
1411 return runStateAtLeast(c, SHUTDOWN) && runStateLessThan(c, TERMINATED);
1412 }
1413
1414 public boolean isTerminated() {
1415 return runStateAtLeast(ctl.get(), TERMINATED);
1416 }
1417
1418 public boolean awaitTermination(long timeout, TimeUnit unit)
1419 throws InterruptedException {
1420 long nanos = unit.toNanos(timeout);
1421 final ReentrantLock mainLock = this.mainLock;
1422 mainLock.lock();
1423 try {
1424 while (runStateLessThan(ctl.get(), TERMINATED)) {
1425 if (nanos <= 0L)
1426 return false;
1427 nanos = termination.awaitNanos(nanos);
1428 }
1429 return true;
1430 } finally {
1431 mainLock.unlock();
1432 }
1433 }
1434
1435 // Override without "throws Throwable" for compatibility with subclasses
1436 // whose finalize method invokes super.finalize() (as is recommended).
1437 // Before JDK 11, finalize() had a non-empty method body.
1438
1439 /**
1440 * @implNote Previous versions of this class had a finalize method
1441 * that shut down this executor, but in this version, finalize
1442 * does nothing.
1443 */
1444 @Deprecated(since="9")
1445 protected void finalize() {}
1446
1447 /**
1448 * Sets the thread factory used to create new threads.
1449 *
1450 * @param threadFactory the new thread factory
1451 * @throws NullPointerException if threadFactory is null
1452 * @see #getThreadFactory
1453 */
1454 public void setThreadFactory(ThreadFactory threadFactory) {
1455 if (threadFactory == null)
1456 throw new NullPointerException();
1457 this.threadFactory = threadFactory;
1458 }
1459
1460 /**
1461 * Returns the thread factory used to create new threads.
1462 *
1463 * @return the current thread factory
1464 * @see #setThreadFactory(ThreadFactory)
1465 */
1466 public ThreadFactory getThreadFactory() {
1467 return threadFactory;
1468 }
1469
1470 /**
1471 * Sets a new handler for unexecutable tasks.
1472 *
1473 * @param handler the new handler
1474 * @throws NullPointerException if handler is null
1475 * @see #getRejectedExecutionHandler
1476 */
1477 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1478 if (handler == null)
1479 throw new NullPointerException();
1480 this.handler = handler;
1481 }
1482
1483 /**
1484 * Returns the current handler for unexecutable tasks.
1485 *
1486 * @return the current handler
1487 * @see #setRejectedExecutionHandler(RejectedExecutionHandler)
1488 */
1489 public RejectedExecutionHandler getRejectedExecutionHandler() {
1490 return handler;
1491 }
1492
1493 /**
1494 * Sets the core number of threads. This overrides any value set
1495 * in the constructor. If the new value is smaller than the
1496 * current value, excess existing threads will be terminated when
1497 * they next become idle. If larger, new threads will, if needed,
1498 * be started to execute any queued tasks.
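 *
 * <p>For example (an illustrative resize; {@code executor} denotes some
 * {@code ThreadPoolExecutor} whose maximum pool size is at least 8):
 * <pre> {@code
 * executor.setCorePoolSize(8); // may prestart workers for queued tasks
 * }</pre>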
1499 *
1500 * @param corePoolSize the new core size
1501 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1502 * or {@code corePoolSize} is greater than the {@linkplain
1503 * #getMaximumPoolSize() maximum pool size}
1504 * @see #getCorePoolSize
1505 */
1506 public void setCorePoolSize(int corePoolSize) {
1507 if (corePoolSize < 0 || maximumPoolSize < corePoolSize)
1508 throw new IllegalArgumentException();
1509 int delta = corePoolSize - this.corePoolSize;
1510 this.corePoolSize = corePoolSize;
1511 if (workerCountOf(ctl.get()) > corePoolSize)
1512 interruptIdleWorkers();
1513 else if (delta > 0) {
1514 // We don't really know how many new threads are "needed".
1515 // As a heuristic, prestart enough new workers (up to new
1516 // core size) to handle the current number of tasks in
1517 // queue, but stop if queue becomes empty while doing so.
1518 int k = Math.min(delta, workQueue.size());
1519 while (k-- > 0 && addWorker(null, true)) {
1520 if (workQueue.isEmpty())
1521 break;
1522 }
1523 }
1524 }
1525
1526 /**
1527 * Returns the core number of threads.
1528 *
1529 * @return the core number of threads
1530 * @see #setCorePoolSize
1531 */
1532 public int getCorePoolSize() {
1533 return corePoolSize;
1534 }
1535
1536 /**
1537 * Starts a core thread, causing it to idly wait for work. This
1538 * overrides the default policy of starting core threads only when
1539 * new tasks are executed. This method will return {@code false}
1540 * if all core threads have already been started.
1541 *
1542 * @return {@code true} if a thread was started
1543 */
1544 public boolean prestartCoreThread() {
1545 return workerCountOf(ctl.get()) < corePoolSize &&
1546 addWorker(null, true);
1547 }
1548
1549 /**
1550 * Same as prestartCoreThread, except that it arranges that at least
1551 * one thread is started even if corePoolSize is 0.
1552 */
1553 void ensurePrestart() {
1554 int wc = workerCountOf(ctl.get());
1555 if (wc < corePoolSize)
1556 addWorker(null, true);
1557 else if (wc == 0)
1558 addWorker(null, false);
1559 }
1560
1561 /**
1562 * Starts all core threads, causing them to idly wait for work. This
1563 * overrides the default policy of starting core threads only when
1564 * new tasks are executed.
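 *
 * <p>For example (an illustrative construction; the sizes and queue
 * are hypothetical):
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
 * int started = pool.prestartAllCoreThreads(); // typically 4 on a fresh pool
 * }</pre>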
1565 *
1566 * @return the number of threads started
1567 */
1568 public int prestartAllCoreThreads() {
1569 int n = 0;
1570 while (addWorker(null, true))
1571 ++n;
1572 return n;
1573 }
1574
1575 /**
1576 * Returns true if this pool allows core threads to time out and
1577 * terminate if no tasks arrive within the keep-alive time, being
1578 * replaced if needed when new tasks arrive. When true, the same
1579 * keep-alive policy applying to non-core threads applies also to
1580 * core threads. When false (the default), core threads are never
1581 * terminated due to lack of incoming tasks.
1582 *
1583 * @return {@code true} if core threads are allowed to time out,
1584 * else {@code false}
1585 *
1586 * @since 1.6
1587 */
1588 public boolean allowsCoreThreadTimeOut() {
1589 return allowCoreThreadTimeOut;
1590 }
1591
1592 /**
1593 * Sets the policy governing whether core threads may time out and
1594 * terminate if no tasks arrive within the keep-alive time, being
1595 * replaced if needed when new tasks arrive. When false, core
1596 * threads are never terminated due to lack of incoming
1597 * tasks. When true, the same keep-alive policy applying to
1598 * non-core threads applies also to core threads. To avoid
1599 * continual thread replacement, the keep-alive time must be
1600 * greater than zero when setting {@code true}. This method
1601 * should in general be called before the pool is actively used.
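 *
 * <p>For example (an illustrative construction with a nonzero
 * keep-alive time; all values are hypothetical):
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
 * pool.allowCoreThreadTimeOut(true); // idle core threads now expire after 60s
 * }</pre>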
1602 *
1603 * @param value {@code true} if should time out, else {@code false}
1604 * @throws IllegalArgumentException if value is {@code true}
1605 * and the current keep-alive time is not greater than zero
1606 *
1607 * @since 1.6
1608 */
1609 public void allowCoreThreadTimeOut(boolean value) {
1610 if (value && keepAliveTime <= 0)
1611 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1612 if (value != allowCoreThreadTimeOut) {
1613 allowCoreThreadTimeOut = value;
1614 if (value)
1615 interruptIdleWorkers();
1616 }
1617 }
1618
1619 /**
1620 * Sets the maximum allowed number of threads. This overrides any
1621 * value set in the constructor. If the new value is smaller than
1622 * the current value, excess existing threads will be
1623 * terminated when they next become idle.
1624 *
1625 * @param maximumPoolSize the new maximum
1626 * @throws IllegalArgumentException if the new maximum is
1627 * less than or equal to zero, or
1628 * less than the {@linkplain #getCorePoolSize core pool size}
1629 * @see #getMaximumPoolSize
1630 */
1631 public void setMaximumPoolSize(int maximumPoolSize) {
1632 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1633 throw new IllegalArgumentException();
1634 this.maximumPoolSize = maximumPoolSize;
1635 if (workerCountOf(ctl.get()) > maximumPoolSize)
1636 interruptIdleWorkers();
1637 }
1638
1639 /**
1640 * Returns the maximum allowed number of threads.
1641 *
1642 * @return the maximum allowed number of threads
1643 * @see #setMaximumPoolSize
1644 */
1645 public int getMaximumPoolSize() {
1646 return maximumPoolSize;
1647 }
1648
1649 /**
1650 * Sets the thread keep-alive time, which is the amount of time
1651 * that threads may remain idle before being terminated.
1652 * Threads that wait this amount of time without processing a
1653 * task will be terminated if there are more than the core
1654 * number of threads currently in the pool, or if this pool
1655 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1656 * This overrides any value set in the constructor.
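 *
 * <p>For example (an illustrative call; {@code executor} denotes some
 * {@code ThreadPoolExecutor}):
 * <pre> {@code
 * executor.setKeepAliveTime(30L, TimeUnit.SECONDS);
 * long secs = executor.getKeepAliveTime(TimeUnit.SECONDS); // 30
 * }</pre>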
1657 *
1658 * @param time the time to wait. A time value of zero will cause
1659 * excess threads to terminate immediately after executing tasks.
1660 * @param unit the time unit of the {@code time} argument
1661 * @throws IllegalArgumentException if {@code time} is less than zero or
1662 * if {@code time} is zero and {@code allowsCoreThreadTimeOut()} is true
1663 * @see #getKeepAliveTime(TimeUnit)
1664 */
1665 public void setKeepAliveTime(long time, TimeUnit unit) {
1666 if (time < 0)
1667 throw new IllegalArgumentException();
1668 if (time == 0 && allowsCoreThreadTimeOut())
1669 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1670 long keepAliveTime = unit.toNanos(time);
1671 long delta = keepAliveTime - this.keepAliveTime;
1672 this.keepAliveTime = keepAliveTime;
1673 if (delta < 0)
1674 interruptIdleWorkers();
1675 }
1676
1677 /**
1678 * Returns the thread keep-alive time, which is the amount of time
1679 * that threads may remain idle before being terminated.
1680 * Threads that wait this amount of time without processing a
1681 * task will be terminated if there are more than the core
1682 * number of threads currently in the pool, or if this pool
1683 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1684 *
1685 * @param unit the desired time unit of the result
1686 * @return the time limit
1687 * @see #setKeepAliveTime(long, TimeUnit)
1688 */
1689 public long getKeepAliveTime(TimeUnit unit) {
1690 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1691 }
1692
1693 /* User-level queue utilities */
1694
1695 /**
1696 * Returns the task queue used by this executor. Access to the
1697 * task queue is intended primarily for debugging and monitoring.
1698 * This queue may be in active use. Retrieving the task queue
1699 * does not prevent queued tasks from executing.
1700 *
1701 * @return the task queue
1702 */
1703 public BlockingQueue<Runnable> getQueue() {
1704 return workQueue;
1705 }
1706
1707 /**
1708 * Removes this task from the executor's internal queue if it is
1709 * present, thus causing it not to be run if it has not already
1710 * started.
1711 *
1712 * <p>This method may be useful as one part of a cancellation
1713 * scheme. It may fail to remove tasks that have been converted
1714 * into other forms before being placed on the internal queue.
1715 * For example, a task entered using {@code submit} might be
1716 * converted into a form that maintains {@code Future} status.
1717 * However, in such cases, method {@link #purge} may be used to
1718 * remove those Futures that have been cancelled.
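 *
 * <p>For example (an illustrative sketch; {@code executor} and
 * {@code task} are hypothetical):
 * <pre> {@code
 * Future<?> f = executor.submit(task);
 * f.cancel(false);
 * executor.remove(task); // likely false: the queue holds the wrapping Future
 * executor.purge();      // reclaims cancelled Futures instead
 * }</pre>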
1719 *
1720 * @param task the task to remove
1721 * @return {@code true} if the task was removed
1722 */
1723 public boolean remove(Runnable task) {
1724 boolean removed = workQueue.remove(task);
1725 tryTerminate(); // In case SHUTDOWN and now empty
1726 return removed;
1727 }
1728
1729 /**
1730 * Tries to remove from the work queue all {@link Future}
1731 * tasks that have been cancelled. This method can be useful as a
1732 * storage reclamation operation, that has no other impact on
1733 * functionality. Cancelled tasks are never executed, but may
1734 * accumulate in work queues until worker threads can actively
1735 * remove them. Invoking this method instead tries to remove them now.
1736 * However, this method may fail to remove tasks in
1737 * the presence of interference by other threads.
1738 */
1739 public void purge() {
1740 final BlockingQueue<Runnable> q = workQueue;
1741 try {
1742 Iterator<Runnable> it = q.iterator();
1743 while (it.hasNext()) {
1744 Runnable r = it.next();
1745 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1746 it.remove();
1747 }
1748 } catch (ConcurrentModificationException fallThrough) {
1749 // Take slow path if we encounter interference during traversal.
1750 // Make copy for traversal and call remove for cancelled entries.
1751 // The slow path is more likely to be O(N*N).
1752 for (Object r : q.toArray())
1753 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1754 q.remove(r);
1755 }
1756
1757 tryTerminate(); // In case SHUTDOWN and now empty
1758 }
1759
1760 /* Statistics */
1761
1762 /**
1763 * Returns the current number of threads in the pool.
1764 *
1765 * @return the number of threads
1766 */
1767 public int getPoolSize() {
1768 final ReentrantLock mainLock = this.mainLock;
1769 mainLock.lock();
1770 try {
1771 // Remove rare and surprising possibility of
1772 // isTerminated() && getPoolSize() > 0
1773 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1774 : workers.size();
1775 } finally {
1776 mainLock.unlock();
1777 }
1778 }
1779
1780 /**
1781 * Returns the approximate number of threads that are actively
1782 * executing tasks.
1783 *
1784 * @return the number of threads
1785 */
1786 public int getActiveCount() {
1787 final ReentrantLock mainLock = this.mainLock;
1788 mainLock.lock();
1789 try {
1790 int n = 0;
1791 for (Worker w : workers)
1792 if (w.isLocked())
1793 ++n;
1794 return n;
1795 } finally {
1796 mainLock.unlock();
1797 }
1798 }
1799
1800 /**
1801 * Returns the largest number of threads that have ever
1802 * simultaneously been in the pool.
1803 *
1804 * @return the number of threads
1805 */
1806 public int getLargestPoolSize() {
1807 final ReentrantLock mainLock = this.mainLock;
1808 mainLock.lock();
1809 try {
1810 return largestPoolSize;
1811 } finally {
1812 mainLock.unlock();
1813 }
1814 }
1815
1816 /**
1817 * Returns the approximate total number of tasks that have ever been
1818 * scheduled for execution. Because the states of tasks and
1819 * threads may change dynamically during computation, the returned
1820 * value is only an approximation.
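 *
 * <p>For example, an illustrative monitoring snapshot ({@code executor}
 * denotes some {@code ThreadPoolExecutor}):
 * <pre> {@code
 * System.out.printf("tasks=%d completed=%d active=%d pool=%d%n",
 *     executor.getTaskCount(), executor.getCompletedTaskCount(),
 *     executor.getActiveCount(), executor.getPoolSize());
 * }</pre>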
1821 *
1822 * @return the number of tasks
1823 */
1824 public long getTaskCount() {
1825 final ReentrantLock mainLock = this.mainLock;
1826 mainLock.lock();
1827 try {
1828 long n = completedTaskCount;
1829 for (Worker w : workers) {
1830 n += w.completedTasks;
1831 if (w.isLocked())
1832 ++n;
1833 }
1834 return n + workQueue.size();
1835 } finally {
1836 mainLock.unlock();
1837 }
1838 }
1839
1840 /**
1841 * Returns the approximate total number of tasks that have
1842 * completed execution. Because the states of tasks and threads
1843 * may change dynamically during computation, the returned value
1844 * is only an approximation, but one that does not ever decrease
1845 * across successive calls.
1846 *
1847 * @return the number of tasks
1848 */
1849 public long getCompletedTaskCount() {
1850 final ReentrantLock mainLock = this.mainLock;
1851 mainLock.lock();
1852 try {
1853 long n = completedTaskCount;
1854 for (Worker w : workers)
1855 n += w.completedTasks;
1856 return n;
1857 } finally {
1858 mainLock.unlock();
1859 }
1860 }
1861
1862 /**
1863 * Returns a string identifying this pool, as well as its state,
1864 * including indications of run state and estimated worker and
1865 * task counts.
1866 *
1867 * @return a string identifying this pool, as well as its state
1868 */
1869 public String toString() {
1870 long ncompleted;
1871 int nworkers, nactive;
1872 final ReentrantLock mainLock = this.mainLock;
1873 mainLock.lock();
1874 try {
1875 ncompleted = completedTaskCount;
1876 nactive = 0;
1877 nworkers = workers.size();
1878 for (Worker w : workers) {
1879 ncompleted += w.completedTasks;
1880 if (w.isLocked())
1881 ++nactive;
1882 }
1883 } finally {
1884 mainLock.unlock();
1885 }
1886 int c = ctl.get();
1887 String runState =
1888 isRunning(c) ? "Running" :
1889 runStateAtLeast(c, TERMINATED) ? "Terminated" :
1890 "Shutting down";
1891 return super.toString() +
1892 "[" + runState +
1893 ", pool size = " + nworkers +
1894 ", active threads = " + nactive +
1895 ", queued tasks = " + workQueue.size() +
1896 ", completed tasks = " + ncompleted +
1897 "]";
1898 }
1899
1900 /* Extension hooks */
1901
1902 /**
1903 * Method invoked prior to executing the given Runnable in the
1904 * given thread. This method is invoked by thread {@code t} that
1905 * will execute task {@code r}, and may be used to re-initialize
1906 * ThreadLocals, or to perform logging.
1907 *
1908 * <p>This implementation does nothing, but may be customized in
1909 * subclasses. Note: To properly nest multiple overridings, subclasses
1910 * should generally invoke {@code super.beforeExecute} at the end of
1911 * this method.
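 *
 * <p>A minimal logging override might look like this (an illustrative
 * sketch; constructors are omitted, as in the example for
 * {@link #afterExecute afterExecute}):
 * <pre> {@code
 * class LoggingExecutor extends ThreadPoolExecutor {
 *   // ... constructors ...
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     System.out.println(t.getName() + " about to run " + r);
 *     super.beforeExecute(t, r);
 *   }
 * }}</pre>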
1912 *
1913 * @param t the thread that will run task {@code r}
1914 * @param r the task that will be executed
1915 */
1916 protected void beforeExecute(Thread t, Runnable r) { }
1917
1918 /**
1919 * Method invoked upon completion of execution of the given Runnable.
1920 * This method is invoked by the thread that executed the task. If
1921 * non-null, the Throwable is the uncaught {@code RuntimeException}
1922 * or {@code Error} that caused execution to terminate abruptly.
1923 *
1924 * <p>This implementation does nothing, but may be customized in
1925 * subclasses. Note: To properly nest multiple overridings, subclasses
1926 * should generally invoke {@code super.afterExecute} at the
1927 * beginning of this method.
1928 *
1929 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1930 * {@link FutureTask}) either explicitly or via methods such as
1931 * {@code submit}, these task objects catch and maintain
1932 * computational exceptions, and so they do not cause abrupt
1933 * termination, and the internal exceptions are <em>not</em>
1934 * passed to this method. If you would like to trap both kinds of
1935 * failures in this method, you can further probe for such cases,
1936 * as in this sample subclass that prints either the direct cause
1937 * or the underlying exception if a task has been aborted:
1938 *
1939 * <pre> {@code
1940 * class ExtendedExecutor extends ThreadPoolExecutor {
1941 * // ...
1942 * protected void afterExecute(Runnable r, Throwable t) {
1943 * super.afterExecute(r, t);
1944 * if (t == null
1945 * && r instanceof Future<?>
1946 * && ((Future<?>)r).isDone()) {
1947 * try {
1948 * Object result = ((Future<?>) r).get();
1949 * } catch (CancellationException ce) {
1950 * t = ce;
1951 * } catch (ExecutionException ee) {
1952 * t = ee.getCause();
1953 * } catch (InterruptedException ie) {
1954 * // ignore/reset
1955 * Thread.currentThread().interrupt();
1956 * }
1957 * }
1958 * if (t != null)
1959 * System.out.println(t);
1960 * }
1961 * }}</pre>
1962 *
1963 * @param r the runnable that has completed
1964 * @param t the exception that caused termination, or null if
1965 * execution completed normally
1966 */
1967 protected void afterExecute(Runnable r, Throwable t) { }
1968
1969 /**
1970 * Method invoked when the Executor has terminated. Default
1971 * implementation does nothing. Note: To properly nest multiple
1972 * overridings, subclasses should generally invoke
1973 * {@code super.terminated} within this method.
1974 */
1975 protected void terminated() { }
1976
1977 /* Predefined RejectedExecutionHandlers */
1978
1979 /**
1980 * A handler for rejected tasks that runs the rejected task
1981 * directly in the calling thread of the {@code execute} method,
1982 * unless the executor has been shut down, in which case the task
1983 * is discarded.
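 *
 * <p>For example, to apply simple backpressure to submitters
 * (an illustrative construction; all sizes are hypothetical):
 * <pre> {@code
 * ExecutorService pool = new ThreadPoolExecutor(
 *     2, 4, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue<Runnable>(100),
 *     new ThreadPoolExecutor.CallerRunsPolicy());
 * }</pre>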
1984 */
1985 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1986 /**
1987 * Creates a {@code CallerRunsPolicy}.
1988 */
1989 public CallerRunsPolicy() { }
1990
1991 /**
1992 * Executes task r in the caller's thread, unless the executor
1993 * has been shut down, in which case the task is discarded.
1994 *
1995 * @param r the runnable task requested to be executed
1996 * @param e the executor attempting to execute this task
1997 */
1998 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1999 if (!e.isShutdown()) {
2000 r.run();
2001 }
2002 }
2003 }
2004
2005 /**
2006 * A handler for rejected tasks that throws a
2007 * {@link RejectedExecutionException}.
2008 *
2009 * This is the default handler for {@link ThreadPoolExecutor} and
2010 * {@link ScheduledThreadPoolExecutor}.
2011 */
2012 public static class AbortPolicy implements RejectedExecutionHandler {
2013 /**
2014 * Creates an {@code AbortPolicy}.
2015 */
2016 public AbortPolicy() { }
2017
2018 /**
2019 * Always throws RejectedExecutionException.
2020 *
2021 * @param r the runnable task requested to be executed
2022 * @param e the executor attempting to execute this task
2023 * @throws RejectedExecutionException always
2024 */
2025 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2026 throw new RejectedExecutionException("Task " + r.toString() +
2027 " rejected from " +
2028 e.toString());
2029 }
2030 }
2031
2032 /**
2033 * A handler for rejected tasks that silently discards the
2034 * rejected task.
2035 */
2036 public static class DiscardPolicy implements RejectedExecutionHandler {
2037 /**
2038 * Creates a {@code DiscardPolicy}.
2039 */
2040 public DiscardPolicy() { }
2041
2042 /**
2043 * Does nothing, which has the effect of discarding task r.
2044 *
2045 * @param r the runnable task requested to be executed
2046 * @param e the executor attempting to execute this task
2047 */
2048 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2049 }
2050 }
2051
2052 /**
2053 * A handler for rejected tasks that discards the oldest unhandled
2054 * request and then retries {@code execute}, unless the executor
2055 * is shut down, in which case the task is discarded.
2056 */
2057 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
2058 /**
2059 * Creates a {@code DiscardOldestPolicy}.
2060 */
2061 public DiscardOldestPolicy() { }
2062
2063 /**
2064 * Obtains and ignores the next task that the executor
2065 * would otherwise execute, if one is immediately available,
2066 * and then retries execution of task r, unless the executor
2067 * is shut down, in which case task r is instead discarded.
2068 *
2069 * @param r the runnable task requested to be executed
2070 * @param e the executor attempting to execute this task
2071 */
2072 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2073 if (!e.isShutdown()) {
2074 e.getQueue().poll();
2075 e.execute(r);
2076 }
2077 }
2078 }
2079 }