root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.195
Committed: Tue Dec 8 20:33:21 2020 UTC by jsr166
Branch: MAIN
Changes since 1.194: +0 -2 lines
Log Message:
rollback inadvertent commit of experiment in 1.194

File Contents

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/publicdomain/zero/1.0/
5 */
6
7 package java.util.concurrent;
8
9 import java.util.ArrayList;
10 import java.util.ConcurrentModificationException;
11 import java.util.HashSet;
12 import java.util.Iterator;
13 import java.util.List;
14 import java.util.concurrent.atomic.AtomicInteger;
15 import java.util.concurrent.locks.AbstractQueuedSynchronizer;
16 import java.util.concurrent.locks.Condition;
17 import java.util.concurrent.locks.ReentrantLock;
18
19 /**
20 * An {@link ExecutorService} that executes each submitted task using
21 * one of possibly several pooled threads, normally configured
22 * using {@link Executors} factory methods.
23 *
24 * <p>Thread pools address two different problems: they usually
25 * provide improved performance when executing large numbers of
26 * asynchronous tasks, due to reduced per-task invocation overhead,
27 * and they provide a means of bounding and managing the resources,
28 * including threads, consumed when executing a collection of tasks.
29 * Each {@code ThreadPoolExecutor} also maintains some basic
30 * statistics, such as the number of completed tasks.
31 *
32 * <p>To be useful across a wide range of contexts, this class
33 * provides many adjustable parameters and extensibility
34 * hooks. However, programmers are urged to use the more convenient
35 * {@link Executors} factory methods {@link
36 * Executors#newCachedThreadPool} (unbounded thread pool, with
37 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
38 * (fixed size thread pool) and {@link
39 * Executors#newSingleThreadExecutor} (single background thread), that
40 * preconfigure settings for the most common usage
41 * scenarios. Otherwise, use the following guide when manually
42 * configuring and tuning this class:
43 *
44 * <dl>
45 *
46 * <dt>Core and maximum pool sizes</dt>
47 *
48 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
49 * pool size (see {@link #getPoolSize})
50 * according to the bounds set by
51 * corePoolSize (see {@link #getCorePoolSize}) and
52 * maximumPoolSize (see {@link #getMaximumPoolSize}).
53 *
54 * When a new task is submitted in method {@link #execute(Runnable)},
55 * if fewer than corePoolSize threads are running, a new thread is
56 * created to handle the request, even if other worker threads are
57 * idle. Else if fewer than maximumPoolSize threads are running, a
58 * new thread will be created to handle the request only if the queue
59 * is full. By setting corePoolSize and maximumPoolSize the same, you
60 * create a fixed-size thread pool. By setting maximumPoolSize to an
61 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
62 * allow the pool to accommodate an arbitrary number of concurrent
63 * tasks. Most typically, core and maximum pool sizes are set only
64 * upon construction, but they may also be changed dynamically using
65 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
66 *
67 * <dt>On-demand construction</dt>
68 *
69 * <dd>By default, even core threads are initially created and
70 * started only when new tasks arrive, but this can be overridden
71 * dynamically using method {@link #prestartCoreThread} or {@link
72 * #prestartAllCoreThreads}. You probably want to prestart threads if
73 * you construct the pool with a non-empty queue. </dd>
74 *
75 * <dt>Creating new threads</dt>
76 *
77 * <dd>New threads are created using a {@link ThreadFactory}. If not
78 * otherwise specified, an {@link Executors#defaultThreadFactory} is
79 * used, which creates threads that are all in the same {@link
80 * ThreadGroup} and have the same {@code NORM_PRIORITY} priority and
81 * non-daemon status. By supplying a different ThreadFactory, you can
82 * alter the thread's name, thread group, priority, daemon status,
83 * etc. If a {@code ThreadFactory} fails to create a thread when asked
84 * by returning null from {@code newThread}, the executor will
85 * continue, but might not be able to execute any tasks. Threads
86 * should possess the "modifyThread" {@code RuntimePermission}. If
87 * worker threads or other threads using the pool do not possess this
88 * permission, service may be degraded: configuration changes may not
89 * take effect in a timely manner, and a shutdown pool may remain in a
90 * state in which termination is possible but not completed.</dd>
91 *
92 * <dt>Keep-alive times</dt>
93 *
94 * <dd>If the pool currently has more than corePoolSize threads,
95 * excess threads will be terminated if they have been idle for more
96 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
97 * This provides a means of reducing resource consumption when the
98 * pool is not being actively used. If the pool becomes more active
99 * later, new threads will be constructed. This parameter can also be
100 * changed dynamically using method {@link #setKeepAliveTime(long,
101 * TimeUnit)}. Using a value of {@code Long.MAX_VALUE} {@link
102 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever
103 * terminating prior to shut down. By default, the keep-alive policy
104 * applies only when there are more than corePoolSize threads, but
105 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to
106 * apply this time-out policy to core threads as well, so long as the
107 * keepAliveTime value is non-zero. </dd>
108 *
109 * <dt>Queuing</dt>
110 *
111 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
112 * submitted tasks. The use of this queue interacts with pool sizing:
113 *
114 * <ul>
115 *
116 * <li>If fewer than corePoolSize threads are running, the Executor
117 * always prefers adding a new thread
118 * rather than queuing.
119 *
120 * <li>If corePoolSize or more threads are running, the Executor
121 * always prefers queuing a request rather than adding a new
122 * thread.
123 *
124 * <li>If a request cannot be queued, a new thread is created unless
125 * this would exceed maximumPoolSize, in which case, the task will be
126 * rejected.
127 *
128 * </ul>
129 *
130 * There are three general strategies for queuing:
131 * <ol>
132 *
133 * <li><em> Direct handoffs.</em> A good default choice for a work
134 * queue is a {@link SynchronousQueue} that hands off tasks to threads
135 * without otherwise holding them. Here, an attempt to queue a task
136 * will fail if no threads are immediately available to run it, so a
137 * new thread will be constructed. This policy avoids lockups when
138 * handling sets of requests that might have internal dependencies.
139 * Direct handoffs generally require unbounded maximumPoolSizes to
140 * avoid rejection of newly submitted tasks. This in turn admits the
141 * possibility of unbounded thread growth when commands continue to
142 * arrive on average faster than they can be processed.
143 *
144 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
145 * example a {@link LinkedBlockingQueue} without a predefined
146 * capacity) will cause new tasks to wait in the queue when all
147 * corePoolSize threads are busy. Thus, no more than corePoolSize
148 * threads will ever be created. (And the value of the maximumPoolSize
149 * therefore doesn't have any effect.) This may be appropriate when
150 * each task is completely independent of others, so tasks cannot
151 * affect each other's execution; for example, in a web page server.
152 * While this style of queuing can be useful in smoothing out
153 * transient bursts of requests, it admits the possibility of
154 * unbounded work queue growth when commands continue to arrive on
155 * average faster than they can be processed.
156 *
157 * <li><em>Bounded queues.</em> A bounded queue (for example, an
158 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
159 * used with finite maximumPoolSizes, but can be more difficult to
160 * tune and control. Queue sizes and maximum pool sizes may be traded
161 * off for each other: Using large queues and small pools minimizes
162 * CPU usage, OS resources, and context-switching overhead, but can
163 * lead to artificially low throughput. If tasks frequently block (for
164 * example if they are I/O bound), a system may be able to schedule
165 * time for more threads than you otherwise allow. Use of small queues
166 * generally requires larger pool sizes, which keeps CPUs busier but
167 * may encounter unacceptable scheduling overhead, which also
168 * decreases throughput.
169 *
170 * </ol>
171 *
172 * </dd>
173 *
174 * <dt>Rejected tasks</dt>
175 *
176 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be
177 * <em>rejected</em> when the Executor has been shut down, and also when
178 * the Executor uses finite bounds for both maximum threads and work queue
179 * capacity, and is saturated. In either case, the {@code execute} method
180 * invokes the {@link
181 * RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
182 * method of its {@link RejectedExecutionHandler}. Four predefined handler
183 * policies are provided:
184 *
185 * <ol>
186 *
187 * <li>In the default {@link ThreadPoolExecutor.AbortPolicy}, the handler
188 * throws a runtime {@link RejectedExecutionException} upon rejection.
189 *
190 * <li>In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
191 * that invokes {@code execute} itself runs the task. This provides a
192 * simple feedback control mechanism that will slow down the rate that
193 * new tasks are submitted.
194 *
195 * <li>In {@link ThreadPoolExecutor.DiscardPolicy}, a task that cannot
196 * be executed is simply dropped. This policy is designed only for
197 * those rare cases in which task completion is never relied upon.
198 *
199 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
200 * executor is not shut down, the task at the head of the work queue
201 * is dropped, and then execution is retried (which can fail again,
202 * causing this to be repeated). This policy is rarely acceptable. In
203 * nearly all cases, you should also cancel the task to cause an
204 * exception in any component waiting for its completion, and/or log
205 * the failure, as illustrated in {@link
206 * ThreadPoolExecutor.DiscardOldestPolicy} documentation.
207 *
208 * </ol>
209 *
210 * It is possible to define and use other kinds of {@link
211 * RejectedExecutionHandler} classes. Doing so requires some care
212 * especially when policies are designed to work only under particular
213 * capacity or queuing policies. </dd>
214 *
215 * <dt>Hook methods</dt>
216 *
217 * <dd>This class provides {@code protected} overridable
218 * {@link #beforeExecute(Thread, Runnable)} and
219 * {@link #afterExecute(Runnable, Throwable)} methods that are called
220 * before and after execution of each task. These can be used to
221 * manipulate the execution environment; for example, reinitializing
222 * ThreadLocals, gathering statistics, or adding log entries.
223 * Additionally, method {@link #terminated} can be overridden to perform
224 * any special processing that needs to be done once the Executor has
225 * fully terminated.
226 *
227 * <p>If hook, callback, or BlockingQueue methods throw exceptions,
228 * internal worker threads may in turn fail, abruptly terminate, and
229 * possibly be replaced.</dd>
230 *
231 * <dt>Queue maintenance</dt>
232 *
233 * <dd>Method {@link #getQueue()} allows access to the work queue
234 * for purposes of monitoring and debugging. Use of this method for
235 * any other purpose is strongly discouraged. Two supplied methods,
236 * {@link #remove(Runnable)} and {@link #purge} are available to
237 * assist in storage reclamation when large numbers of queued tasks
238 * become cancelled.</dd>
239 *
240 * <dt>Reclamation</dt>
241 *
242 * <dd>A pool that is no longer referenced in a program <em>AND</em>
243 * has no remaining threads may be reclaimed (garbage collected)
244 * without being explicitly shutdown. You can configure a pool to
245 * allow all unused threads to eventually die by setting appropriate
246 * keep-alive times, using a lower bound of zero core threads and/or
247 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
248 *
249 * </dl>
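 *
 * <p><b>Configuration example.</b> As one illustration of the guide
 * above (the parameter values here are arbitrary, not a
 * recommendation for any particular workload), a pool with two core
 * threads, a bounded work queue, and a caller-runs saturation policy
 * might be configured as:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2,                                     // corePoolSize
 *     4,                                     // maximumPoolSize
 *     60L, TimeUnit.SECONDS,                 // keep-alive for excess idle threads
 *     new ArrayBlockingQueue<Runnable>(100), // bounded work queue
 *     new ThreadPoolExecutor.CallerRunsPolicy());
 * pool.allowCoreThreadTimeOut(true); // let idle core threads time out too}</pre>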
250 *
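 * <p><b>Thread factory example.</b> A minimal sketch (the class and
 * thread names are illustrative only) of a {@link ThreadFactory} that
 * customizes thread names and daemon status, as discussed under
 * "Creating new threads" above:
 *
 * <pre> {@code
 * class NamedDaemonThreadFactory implements ThreadFactory {
 *   private final AtomicInteger count = new AtomicInteger();
 *   public Thread newThread(Runnable r) {
 *     Thread t = new Thread(r, "pool-worker-" + count.incrementAndGet());
 *     t.setDaemon(true);   // daemon threads do not prevent JVM exit
 *     return t;
 *   }
 * }}</pre>
 *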
251 * <p><b>Extension example.</b> Most extensions of this class
252 * override one or more of the protected hook methods. For example,
253 * here is a subclass that adds a simple pause/resume feature:
254 *
255 * <pre> {@code
256 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
257 * private boolean isPaused;
258 * private ReentrantLock pauseLock = new ReentrantLock();
259 * private Condition unpaused = pauseLock.newCondition();
260 *
261 * public PausableThreadPoolExecutor(...) { super(...); }
262 *
263 * protected void beforeExecute(Thread t, Runnable r) {
264 * super.beforeExecute(t, r);
265 * pauseLock.lock();
266 * try {
267 * while (isPaused) unpaused.await();
268 * } catch (InterruptedException ie) {
269 * t.interrupt();
270 * } finally {
271 * pauseLock.unlock();
272 * }
273 * }
274 *
275 * public void pause() {
276 * pauseLock.lock();
277 * try {
278 * isPaused = true;
279 * } finally {
280 * pauseLock.unlock();
281 * }
282 * }
283 *
284 * public void resume() {
285 * pauseLock.lock();
286 * try {
287 * isPaused = false;
288 * unpaused.signalAll();
289 * } finally {
290 * pauseLock.unlock();
291 * }
292 * }
293 * }}</pre>
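 *
 * <p>A possible use of the subclass above (constructor arguments are
 * elided, as in the example itself) is to quiesce task processing
 * temporarily:
 *
 * <pre> {@code
 * PausableThreadPoolExecutor exec = ...;
 * exec.pause();  // running tasks finish; new ones wait in beforeExecute
 * // perform maintenance while no new tasks start
 * exec.resume(); // signal waiting workers to continue}</pre>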
294 *
295 * @since 1.5
296 * @author Doug Lea
297 */
298 public class ThreadPoolExecutor extends AbstractExecutorService {
299 /**
300 * The main pool control state, ctl, is an atomic integer packing
301 * two conceptual fields
302 * workerCount, indicating the effective number of threads
303 * runState, indicating whether running, shutting down etc
304 *
305 * In order to pack them into one int, we limit workerCount to
306 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
307 * billion) otherwise representable. If this is ever an issue in
308 * the future, the variable can be changed to be an AtomicLong,
309 * and the shift/mask constants below adjusted. But until the need
310 * arises, this code is a bit faster and simpler using an int.
311 *
312 * The workerCount is the number of workers that have been
313 * permitted to start and not permitted to stop. The value may be
314 * transiently different from the actual number of live threads,
315 * for example when a ThreadFactory fails to create a thread when
316 * asked, and when exiting threads are still performing
317 * bookkeeping before terminating. The user-visible pool size is
318 * reported as the current size of the workers set.
319 *
320 * The runState provides the main lifecycle control, taking on values:
321 *
322 * RUNNING: Accept new tasks and process queued tasks
323 * SHUTDOWN: Don't accept new tasks, but process queued tasks
324 * STOP: Don't accept new tasks, don't process queued tasks,
325 * and interrupt in-progress tasks
326 * TIDYING: All tasks have terminated, workerCount is zero,
327 * the thread transitioning to state TIDYING
328 * will run the terminated() hook method
329 * TERMINATED: terminated() has completed
330 *
331 * The numerical order among these values matters, to allow
332 * ordered comparisons. The runState monotonically increases over
333 * time, but need not hit each state. The transitions are:
334 *
335 * RUNNING -> SHUTDOWN
336 * On invocation of shutdown()
337 * (RUNNING or SHUTDOWN) -> STOP
338 * On invocation of shutdownNow()
339 * SHUTDOWN -> TIDYING
340 * When both queue and pool are empty
341 * STOP -> TIDYING
342 * When pool is empty
343 * TIDYING -> TERMINATED
344 * When the terminated() hook method has completed
345 *
346 * Threads waiting in awaitTermination() will return when the
347 * state reaches TERMINATED.
348 *
349 * Detecting the transition from SHUTDOWN to TIDYING is less
350 * straightforward than you'd like because the queue may become
351 * empty after non-empty and vice versa during SHUTDOWN state, but
352 * we can only terminate if, after seeing that it is empty, we see
353 * that workerCount is 0 (which sometimes entails a recheck -- see
354 * below).
355 */
356 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
357 private static final int COUNT_BITS = Integer.SIZE - 3;
358 private static final int COUNT_MASK = (1 << COUNT_BITS) - 1;
359
360 // runState is stored in the high-order bits
361 private static final int RUNNING = -1 << COUNT_BITS;
362 private static final int SHUTDOWN = 0 << COUNT_BITS;
363 private static final int STOP = 1 << COUNT_BITS;
364 private static final int TIDYING = 2 << COUNT_BITS;
365 private static final int TERMINATED = 3 << COUNT_BITS;
366
367 // Packing and unpacking ctl
368 private static int runStateOf(int c) { return c & ~COUNT_MASK; }
369 private static int workerCountOf(int c) { return c & COUNT_MASK; }
370 private static int ctlOf(int rs, int wc) { return rs | wc; }
371
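// Worked example of the packing above (illustrative only; not used by
// the code): with COUNT_BITS == 29, RUNNING == 0xE0000000 (high three
// bits set), so ctlOf(RUNNING, 3) == 0xE0000003; workerCountOf then
// recovers 3 and runStateOf recovers RUNNING.
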
372 /*
373 * Bit field accessors that don't require unpacking ctl.
374 * These depend on the bit layout and on workerCount being never negative.
375 */
376
377 private static boolean runStateLessThan(int c, int s) {
378 return c < s;
379 }
380
381 private static boolean runStateAtLeast(int c, int s) {
382 return c >= s;
383 }
384
385 private static boolean isRunning(int c) {
386 return c < SHUTDOWN;
387 }
388
389 /**
390 * Attempts to CAS-increment the workerCount field of ctl.
391 */
392 private boolean compareAndIncrementWorkerCount(int expect) {
393 return ctl.compareAndSet(expect, expect + 1);
394 }
395
396 /**
397 * Attempts to CAS-decrement the workerCount field of ctl.
398 */
399 private boolean compareAndDecrementWorkerCount(int expect) {
400 return ctl.compareAndSet(expect, expect - 1);
401 }
402
403 /**
404 * Decrements the workerCount field of ctl. This is called only on
405 * abrupt termination of a thread (see processWorkerExit). Other
406 * decrements are performed within getTask.
407 */
408 private void decrementWorkerCount() {
409 ctl.addAndGet(-1);
410 }
411
412 /**
413 * The queue used for holding tasks and handing off to worker
414 * threads. We do not require that workQueue.poll() returning
415 * null necessarily means that workQueue.isEmpty(), so rely
416 * solely on isEmpty to see if the queue is empty (which we must
417 * do for example when deciding whether to transition from
418 * SHUTDOWN to TIDYING). This accommodates special-purpose
419 * queues such as DelayQueues for which poll() is allowed to
420 * return null even if it may later return non-null when delays
421 * expire.
422 */
423 private final BlockingQueue<Runnable> workQueue;
424
425 /**
426 * Lock held on access to workers set and related bookkeeping.
427 * While we could use a concurrent set of some sort, it turns out
428 * to be generally preferable to use a lock. Among the reasons is
429 * that this serializes interruptIdleWorkers, which avoids
430 * unnecessary interrupt storms, especially during shutdown.
431 * Otherwise exiting threads would concurrently interrupt those
432 * that have not yet interrupted. It also simplifies some of the
433 * associated statistics bookkeeping of largestPoolSize etc. We
434 * also hold mainLock on shutdown and shutdownNow, for the sake of
435 * ensuring workers set is stable while separately checking
436 * permission to interrupt and actually interrupting.
437 */
438 private final ReentrantLock mainLock = new ReentrantLock();
439
440 /**
441 * Set containing all worker threads in pool. Accessed only when
442 * holding mainLock.
443 */
444 private final HashSet<Worker> workers = new HashSet<>();
445
446 /**
447 * Wait condition to support awaitTermination.
448 */
449 private final Condition termination = mainLock.newCondition();
450
451 /**
452 * Tracks largest attained pool size. Accessed only under
453 * mainLock.
454 */
455 private int largestPoolSize;
456
457 /**
458 * Counter for completed tasks. Updated only on termination of
459 * worker threads. Accessed only under mainLock.
460 */
461 private long completedTaskCount;
462
463 /*
464 * All user control parameters are declared as volatiles so that
465 * ongoing actions are based on freshest values, but without need
466 * for locking, since no internal invariants depend on them
467 * changing synchronously with respect to other actions.
468 */
469
470 /**
471 * Factory for new threads. All threads are created using this
472 * factory (via method addWorker). All callers must be prepared
473 * for addWorker to fail, which may reflect a system or user's
474 * policy limiting the number of threads. Even though it is not
475 * treated as an error, failure to create threads may result in
476 * new tasks being rejected or existing ones remaining stuck in
477 * the queue.
478 *
479 * We go further and preserve pool invariants even in the face of
480 * errors such as OutOfMemoryError, that might be thrown while
481 * trying to create threads. Such errors are rather common due to
482 * the need to allocate a native stack in Thread.start, and users
483 * will want to perform clean pool shutdown to clean up. There
484 * will likely be enough memory available for the cleanup code to
485 * complete without encountering yet another OutOfMemoryError.
486 */
487 private volatile ThreadFactory threadFactory;
488
489 /**
490 * Handler called when saturated or shutdown in execute.
491 */
492 private volatile RejectedExecutionHandler handler;
493
494 /**
495 * Timeout in nanoseconds for idle threads waiting for work.
496 * Threads use this timeout when there are more than corePoolSize
497 * present or if allowCoreThreadTimeOut. Otherwise they wait
498 * forever for new work.
499 */
500 private volatile long keepAliveTime;
501
502 /**
503 * If false (default), core threads stay alive even when idle.
504 * If true, core threads use keepAliveTime to time out waiting
505 * for work.
506 */
507 private volatile boolean allowCoreThreadTimeOut;
508
509 /**
510 * Core pool size is the minimum number of workers to keep alive
511 * (and not allow to time out etc) unless allowCoreThreadTimeOut
512 * is set, in which case the minimum is zero.
513 *
514 * Since the worker count is actually stored in COUNT_BITS bits,
515 * the effective limit is {@code corePoolSize & COUNT_MASK}.
516 */
517 private volatile int corePoolSize;
518
519 /**
520 * Maximum pool size.
521 *
522 * Since the worker count is actually stored in COUNT_BITS bits,
523 * the effective limit is {@code maximumPoolSize & COUNT_MASK}.
524 */
525 private volatile int maximumPoolSize;
526
527 /**
528 * The default rejected execution handler.
529 */
530 private static final RejectedExecutionHandler defaultHandler =
531 new AbortPolicy();
532
533 /**
534 * Permission required for callers of shutdown and shutdownNow.
535 * We additionally require (see checkShutdownAccess) that callers
536 * have permission to actually interrupt threads in the worker set
537 * (as governed by Thread.interrupt, which relies on
538 * ThreadGroup.checkAccess, which in turn relies on
539 * SecurityManager.checkAccess). Shutdowns are attempted only if
540 * these checks pass.
541 *
542 * All actual invocations of Thread.interrupt (see
543 * interruptIdleWorkers and interruptWorkers) ignore
544 * SecurityExceptions, meaning that the attempted interrupts
545 * silently fail. In the case of shutdown, they should not fail
546 * unless the SecurityManager has inconsistent policies, sometimes
547 * allowing access to a thread and sometimes not. In such cases,
548 * failure to actually interrupt threads may disable or delay full
549 * termination. Other uses of interruptIdleWorkers are advisory,
550 * and failure to actually interrupt will merely delay response to
551 * configuration changes so is not handled exceptionally.
552 */
553 private static final RuntimePermission shutdownPerm =
554 new RuntimePermission("modifyThread");
555
556 /**
557 * Class Worker mainly maintains interrupt control state for
558 * threads running tasks, along with other minor bookkeeping.
559 * This class opportunistically extends AbstractQueuedSynchronizer
560 * to simplify acquiring and releasing a lock surrounding each
561 * task execution. This protects against interrupts that are
562 * intended to wake up a worker thread waiting for a task from
563 * instead interrupting a task being run. We implement a simple
564 * non-reentrant mutual exclusion lock rather than use
565 * ReentrantLock because we do not want worker tasks to be able to
566 * reacquire the lock when they invoke pool control methods like
567 * setCorePoolSize. Additionally, to suppress interrupts until
568 * the thread actually starts running tasks, we initialize lock
569 * state to a negative value, and clear it upon start (in
570 * runWorker).
571 */
572 private final class Worker
573 extends AbstractQueuedSynchronizer
574 implements Runnable
575 {
576 /**
577 * This class will never be serialized, but we provide a
578 * serialVersionUID to suppress a javac warning.
579 */
580 private static final long serialVersionUID = 6138294804551838833L;
581
582 /** Thread this worker is running in. Null if factory fails. */
583 @SuppressWarnings("serial") // Unlikely to be serializable
584 final Thread thread;
585 /** Initial task to run. Possibly null. */
586 @SuppressWarnings("serial") // Not statically typed as Serializable
587 Runnable firstTask;
588 /** Per-thread task counter */
589 volatile long completedTasks;
590
591 // TODO: switch to AbstractQueuedLongSynchronizer and move
592 // completedTasks into the lock word.
593
594 /**
595 * Creates with given first task and thread from ThreadFactory.
596 * @param firstTask the first task (null if none)
597 */
598 Worker(Runnable firstTask) {
599 setState(-1); // inhibit interrupts until runWorker
600 this.firstTask = firstTask;
601 this.thread = getThreadFactory().newThread(this);
602 }
603
604 /** Delegates main run loop to outer runWorker. */
605 public void run() {
606 runWorker(this);
607 }
608
609 // Lock methods
610 //
611 // The value 0 represents the unlocked state.
612 // The value 1 represents the locked state.
613
614 protected boolean isHeldExclusively() {
615 return getState() != 0;
616 }
617
618 protected boolean tryAcquire(int unused) {
619 if (compareAndSetState(0, 1)) {
620 setExclusiveOwnerThread(Thread.currentThread());
621 return true;
622 }
623 return false;
624 }
625
626 protected boolean tryRelease(int unused) {
627 setExclusiveOwnerThread(null);
628 setState(0);
629 return true;
630 }
631
632 public void lock() { acquire(1); }
633 public boolean tryLock() { return tryAcquire(1); }
634 public void unlock() { release(1); }
635 public boolean isLocked() { return isHeldExclusively(); }
636
637 void interruptIfStarted() {
638 Thread t;
639 if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
640 try {
641 t.interrupt();
642 } catch (SecurityException ignore) {
643 }
644 }
645 }
646 }
647
648 /*
649 * Methods for setting control state
650 */
651
652 /**
653 * Transitions runState to given target, or leaves it alone if
654 * already at least the given target.
655 *
656 * @param targetState the desired state, either SHUTDOWN or STOP
657 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
658 */
659 private void advanceRunState(int targetState) {
660 // assert targetState == SHUTDOWN || targetState == STOP;
661 for (;;) {
662 int c = ctl.get();
663 if (runStateAtLeast(c, targetState) ||
664 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
665 break;
666 }
667 }
668
669 /**
670 * Transitions to TERMINATED state if either (SHUTDOWN and pool
671 * and queue empty) or (STOP and pool empty). If otherwise
672 * eligible to terminate but workerCount is nonzero, interrupts an
673 * idle worker to ensure that shutdown signals propagate. This
674 * method must be called following any action that might make
675 * termination possible -- reducing worker count or removing tasks
676 * from the queue during shutdown. The method is non-private to
677 * allow access from ScheduledThreadPoolExecutor.
678 */
679 final void tryTerminate() {
680 for (;;) {
681 int c = ctl.get();
682 if (isRunning(c) ||
683 runStateAtLeast(c, TIDYING) ||
684 (runStateLessThan(c, STOP) && ! workQueue.isEmpty()))
685 return;
686 if (workerCountOf(c) != 0) { // Eligible to terminate
687 interruptIdleWorkers(ONLY_ONE);
688 return;
689 }
690
691 final ReentrantLock mainLock = this.mainLock;
692 mainLock.lock();
693 try {
694 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
695 try {
696 terminated();
697 } finally {
698 ctl.set(ctlOf(TERMINATED, 0));
699 termination.signalAll();
700 }
701 return;
702 }
703 } finally {
704 mainLock.unlock();
705 }
706 // else retry on failed CAS
707 }
708 }
709
710 /*
711 * Methods for controlling interrupts to worker threads.
712 */
713
714 /**
715 * If there is a security manager, makes sure caller has
716 * permission to shut down threads in general (see shutdownPerm).
717 * If this passes, additionally makes sure the caller is allowed
718 * to interrupt each worker thread. This might not be true even if
719 * first check passed, if the SecurityManager treats some threads
720 * specially.
721 */
722 private void checkShutdownAccess() {
723 // assert mainLock.isHeldByCurrentThread();
724 SecurityManager security = System.getSecurityManager();
725 if (security != null) {
726 security.checkPermission(shutdownPerm);
727 for (Worker w : workers)
728 security.checkAccess(w.thread);
729 }
730 }
731
732 /**
733 * Interrupts all threads, even if active. Ignores SecurityExceptions
734 * (in which case some threads may remain uninterrupted).
735 */
736 private void interruptWorkers() {
737 // assert mainLock.isHeldByCurrentThread();
738 for (Worker w : workers)
739 w.interruptIfStarted();
740 }
741
742 /**
743 * Interrupts threads that might be waiting for tasks (as
744 * indicated by not being locked) so they can check for
745 * termination or configuration changes. Ignores
746 * SecurityExceptions (in which case some threads may remain
747 * uninterrupted).
748 *
749 * @param onlyOne If true, interrupt at most one worker. This is
750 * called only from tryTerminate when termination is otherwise
751 * enabled but there are still other workers. In this case, at
752 * most one waiting worker is interrupted to propagate shutdown
753 * signals in case all threads are currently waiting.
754 * Interrupting any arbitrary thread ensures that newly arriving
755 * workers since shutdown began will also eventually exit.
756 * To guarantee eventual termination, it suffices to always
757 * interrupt only one idle worker, but shutdown() interrupts all
758 * idle workers so that redundant workers exit promptly, not
759 * waiting for a straggler task to finish.
760 */
761 private void interruptIdleWorkers(boolean onlyOne) {
762 final ReentrantLock mainLock = this.mainLock;
763 mainLock.lock();
764 try {
765 for (Worker w : workers) {
766 Thread t = w.thread;
767 if (!t.isInterrupted() && w.tryLock()) {
768 try {
769 t.interrupt();
770 } catch (SecurityException ignore) {
771 } finally {
772 w.unlock();
773 }
774 }
775 if (onlyOne)
776 break;
777 }
778 } finally {
779 mainLock.unlock();
780 }
781 }
782
783 /**
784 * Common form of interruptIdleWorkers, to avoid having to
785 * remember what the boolean argument means.
786 */
787 private void interruptIdleWorkers() {
788 interruptIdleWorkers(false);
789 }
790
791 private static final boolean ONLY_ONE = true;
792
793 /*
794 * Misc utilities, most of which are also exported to
795 * ScheduledThreadPoolExecutor
796 */
797
798 /**
799 * Invokes the rejected execution handler for the given command.
800 * Package-protected for use by ScheduledThreadPoolExecutor.
801 */
802 final void reject(Runnable command) {
803 handler.rejectedExecution(command, this);
804 }
805
806 /**
807 * Performs any further cleanup following run state transition on
808 * invocation of shutdown. A no-op here, but used by
809 * ScheduledThreadPoolExecutor to cancel delayed tasks.
810 */
811 void onShutdown() {
812 }
813
814 /**
815 * Drains the task queue into a new list, normally using
816 * drainTo. But if the queue is a DelayQueue or any other kind of
817 * queue for which poll or drainTo may fail to remove some
818 * elements, it deletes them one by one.
819 */
820 private List<Runnable> drainQueue() {
821 BlockingQueue<Runnable> q = workQueue;
822 ArrayList<Runnable> taskList = new ArrayList<>();
823 q.drainTo(taskList);
824 if (!q.isEmpty()) {
825 for (Runnable r : q.toArray(new Runnable[0])) {
826 if (q.remove(r))
827 taskList.add(r);
828 }
829 }
830 return taskList;
831 }
832
833 /*
834 * Methods for creating, running and cleaning up after workers
835 */
836
837 /**
838 * Checks if a new worker can be added with respect to current
839 * pool state and the given bound (either core or maximum). If so,
840 * the worker count is adjusted accordingly, and, if possible, a
841 * new worker is created and started, running firstTask as its
842 * first task. This method returns false if the pool is stopped or
843 * eligible to shut down. It also returns false if the thread
844 * factory fails to create a thread when asked. If the thread
845 * creation fails, either due to the thread factory returning
846 * null, or due to an exception (typically OutOfMemoryError in
847 * Thread.start()), we roll back cleanly.
848 *
849 * @param firstTask the task the new thread should run first (or
850 * null if none). Workers are created with an initial first task
851 * (in method execute()) to bypass queuing when there are fewer
852 * than corePoolSize threads (in which case we always start one),
853 * or when the queue is full (in which case we must bypass the queue).
854 * Initially idle threads are usually created via
855 * prestartCoreThread or to replace other dying workers.
856 *
857 * @param core if true use corePoolSize as bound, else
858 * maximumPoolSize. (A boolean indicator is used here rather than a
859 * value to ensure reads of fresh values after checking other pool
860 * state).
861 * @return true if successful
862 */
863 private boolean addWorker(Runnable firstTask, boolean core) {
864 retry:
865 for (int c = ctl.get();;) {
866 // Check if queue empty only if necessary.
867 if (runStateAtLeast(c, SHUTDOWN)
868 && (runStateAtLeast(c, STOP)
869 || firstTask != null
870 || workQueue.isEmpty()))
871 return false;
872
873 for (;;) {
874 if (workerCountOf(c)
875 >= ((core ? corePoolSize : maximumPoolSize) & COUNT_MASK))
876 return false;
877 if (compareAndIncrementWorkerCount(c))
878 break retry;
879 c = ctl.get(); // Re-read ctl
880 if (runStateAtLeast(c, SHUTDOWN))
881 continue retry;
882 // else CAS failed due to workerCount change; retry inner loop
883 }
884 }
885
886 boolean workerStarted = false;
887 boolean workerAdded = false;
888 Worker w = null;
889 try {
890 w = new Worker(firstTask);
891 final Thread t = w.thread;
892 if (t != null) {
893 final ReentrantLock mainLock = this.mainLock;
894 mainLock.lock();
895 try {
896 // Recheck while holding lock.
897 // Back out on ThreadFactory failure or if
898 // shut down before lock acquired.
899 int c = ctl.get();
900
901 if (isRunning(c) ||
902 (runStateLessThan(c, STOP) && firstTask == null)) {
903 if (t.getState() != Thread.State.NEW)
904 throw new IllegalThreadStateException();
905 workers.add(w);
906 workerAdded = true;
907 int s = workers.size();
908 if (s > largestPoolSize)
909 largestPoolSize = s;
910 }
911 } finally {
912 mainLock.unlock();
913 }
914 if (workerAdded) {
915 t.start();
916 workerStarted = true;
917 }
918 }
919 } finally {
920 if (! workerStarted)
921 addWorkerFailed(w);
922 }
923 return workerStarted;
924 }
925
926 /**
927 * Rolls back the worker thread creation.
928 * - removes worker from workers, if present
929 * - decrements worker count
930 * - rechecks for termination, in case the existence of this
931 * worker was holding up termination
932 */
933 private void addWorkerFailed(Worker w) {
934 final ReentrantLock mainLock = this.mainLock;
935 mainLock.lock();
936 try {
937 if (w != null)
938 workers.remove(w);
939 decrementWorkerCount();
940 tryTerminate();
941 } finally {
942 mainLock.unlock();
943 }
944 }
945
946 /**
947 * Performs cleanup and bookkeeping for a dying worker. Called
948 * only from worker threads. Unless completedAbruptly is set,
949 * assumes that workerCount has already been adjusted to account
950 * for exit. This method removes thread from worker set, and
951 * possibly terminates the pool or replaces the worker if either
952 * it exited due to user task exception or if fewer than
953 * corePoolSize workers are running or queue is non-empty but
954 * there are no workers.
955 *
956 * @param w the worker
957 * @param completedAbruptly if the worker died due to user exception
958 */
959 private void processWorkerExit(Worker w, boolean completedAbruptly) {
960 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
961 decrementWorkerCount();
962
963 final ReentrantLock mainLock = this.mainLock;
964 mainLock.lock();
965 try {
966 completedTaskCount += w.completedTasks;
967 workers.remove(w);
968 } finally {
969 mainLock.unlock();
970 }
971
972 tryTerminate();
973
974 int c = ctl.get();
975 if (runStateLessThan(c, STOP)) {
976 if (!completedAbruptly) {
977 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
978 if (min == 0 && ! workQueue.isEmpty())
979 min = 1;
980 if (workerCountOf(c) >= min)
981 return; // replacement not needed
982 }
983 addWorker(null, false);
984 }
985 }
986
987 /**
988 * Performs blocking or timed wait for a task, depending on
989 * current configuration settings, or returns null if this worker
990 * must exit because of any of:
991 * 1. There are more than maximumPoolSize workers (due to
992 * a call to setMaximumPoolSize).
993 * 2. The pool is stopped.
994 * 3. The pool is shutdown and the queue is empty.
995 * 4. This worker timed out waiting for a task, and timed-out
996 * workers are subject to termination (that is,
997 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
998 * both before and after the timed wait, and if the queue is
999 * non-empty, this worker is not the last thread in the pool.
1000 *
1001 * @return task, or null if the worker must exit, in which case
1002 * workerCount is decremented
1003 */
1004 private Runnable getTask() {
1005 boolean timedOut = false; // Did the last poll() time out?
1006
1007 for (;;) {
1008 int c = ctl.get();
1009
1010 // Check if queue empty only if necessary.
1011 if (runStateAtLeast(c, SHUTDOWN)
1012 && (runStateAtLeast(c, STOP) || workQueue.isEmpty())) {
1013 decrementWorkerCount();
1014 return null;
1015 }
1016
1017 int wc = workerCountOf(c);
1018
1019 // Are workers subject to culling?
1020 boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
1021
1022 if ((wc > maximumPoolSize || (timed && timedOut))
1023 && (wc > 1 || workQueue.isEmpty())) {
1024 if (compareAndDecrementWorkerCount(c))
1025 return null;
1026 continue;
1027 }
1028
1029 try {
1030 Runnable r = timed ?
1031 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
1032 workQueue.take();
1033 if (r != null)
1034 return r;
1035 timedOut = true;
1036 } catch (InterruptedException retry) {
1037 timedOut = false;
1038 }
1039 }
1040 }
1041
1042 /**
1043 * Main worker run loop. Repeatedly gets tasks from queue and
1044 * executes them, while coping with a number of issues:
1045 *
1046 * 1. We may start out with an initial task, in which case we
1047 * don't need to get the first one. Otherwise, as long as pool is
1048 * running, we get tasks from getTask. If it returns null then the
1049 * worker exits due to changed pool state or configuration
1050 * parameters. Other exits result from exception throws in
1051 * external code, in which case completedAbruptly holds, which
1052 * usually leads processWorkerExit to replace this thread.
1053 *
1054 * 2. Before running any task, the lock is acquired to prevent
1055 * other pool interrupts while the task is executing, and then we
1056 * ensure that unless pool is stopping, this thread does not have
1057 * its interrupt set.
1058 *
1059 * 3. Each task run is preceded by a call to beforeExecute, which
1060 * might throw an exception, in which case we cause thread to die
1061 * (breaking loop with completedAbruptly true) without processing
1062 * the task.
1063 *
1064 * 4. Assuming beforeExecute completes normally, we run the task,
1065 * gathering any of its thrown exceptions to send to afterExecute.
1066 * We separately handle RuntimeException, Error (both of which the
1067 * specs guarantee that we trap) and arbitrary Throwables.
1068 * Because we cannot rethrow Throwables within Runnable.run, we
1069 * wrap them within Errors on the way out (to the thread's
1070 * UncaughtExceptionHandler). Any thrown exception also
1071 * conservatively causes thread to die.
1072 *
1073 * 5. After task.run completes, we call afterExecute, which may
1074 * also throw an exception, which will also cause thread to
1075 * die. According to JLS Sec 14.20, this exception is the one that
1076 * will be in effect even if task.run throws.
1077 *
1078 * The net effect of the exception mechanics is that afterExecute
1079 * and the thread's UncaughtExceptionHandler have as accurate
1080 * information as we can provide about any problems encountered by
1081 * user code.
1082 *
1083 * @param w the worker
1084 */
1085 final void runWorker(Worker w) {
1086 Thread wt = Thread.currentThread();
1087 Runnable task = w.firstTask;
1088 w.firstTask = null;
1089 w.unlock(); // allow interrupts
1090 boolean completedAbruptly = true;
1091 try {
1092 while (task != null || (task = getTask()) != null) {
1093 w.lock();
1094 // If pool is stopping, ensure thread is interrupted;
1095 // if not, ensure thread is not interrupted. This
1096 // requires a recheck in second case to deal with
1097 // shutdownNow race while clearing interrupt
1098 if ((runStateAtLeast(ctl.get(), STOP) ||
1099 (Thread.interrupted() &&
1100 runStateAtLeast(ctl.get(), STOP))) &&
1101 !wt.isInterrupted())
1102 wt.interrupt();
1103 try {
1104 beforeExecute(wt, task);
1105 try {
1106 task.run();
1107 afterExecute(task, null);
1108 } catch (Throwable ex) {
1109 afterExecute(task, ex);
1110 throw ex;
1111 }
1112 } finally {
1113 task = null;
1114 w.completedTasks++;
1115 w.unlock();
1116 }
1117 }
1118 completedAbruptly = false;
1119 } finally {
1120 processWorkerExit(w, completedAbruptly);
1121 }
1122 }
1123
1124 // Public constructors and methods
1125
1126 /**
1127 * Creates a new {@code ThreadPoolExecutor} with the given initial
1128 * parameters, the
1129 * {@linkplain Executors#defaultThreadFactory default thread factory}
1130 * and the {@linkplain ThreadPoolExecutor.AbortPolicy
1131 * default rejected execution handler}.
1132 *
1133 * <p>It may be more convenient to use one of the {@link Executors}
1134 * factory methods instead of this general purpose constructor.
1135 *
1136 * @param corePoolSize the number of threads to keep in the pool, even
1137 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1138 * @param maximumPoolSize the maximum number of threads to allow in the
1139 * pool
1140 * @param keepAliveTime when the number of threads is greater than
1141 * the core, this is the maximum time that excess idle threads
1142 * will wait for new tasks before terminating.
1143 * @param unit the time unit for the {@code keepAliveTime} argument
1144 * @param workQueue the queue to use for holding tasks before they are
1145 * executed. This queue will hold only the {@code Runnable}
1146 * tasks submitted by the {@code execute} method.
1147 * @throws IllegalArgumentException if one of the following holds:<br>
1148 * {@code corePoolSize < 0}<br>
1149 * {@code keepAliveTime < 0}<br>
1150 * {@code maximumPoolSize <= 0}<br>
1151 * {@code maximumPoolSize < corePoolSize}
1152 * @throws NullPointerException if {@code workQueue} is null
1153 */
1154 public ThreadPoolExecutor(int corePoolSize,
1155 int maximumPoolSize,
1156 long keepAliveTime,
1157 TimeUnit unit,
1158 BlockingQueue<Runnable> workQueue) {
1159 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1160 Executors.defaultThreadFactory(), defaultHandler);
1161 }
1162
1163 /**
1164 * Creates a new {@code ThreadPoolExecutor} with the given initial
1165 * parameters and the {@linkplain ThreadPoolExecutor.AbortPolicy
1166 * default rejected execution handler}.
1167 *
1168 * @param corePoolSize the number of threads to keep in the pool, even
1169 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1170 * @param maximumPoolSize the maximum number of threads to allow in the
1171 * pool
1172 * @param keepAliveTime when the number of threads is greater than
1173 * the core, this is the maximum time that excess idle threads
1174 * will wait for new tasks before terminating.
1175 * @param unit the time unit for the {@code keepAliveTime} argument
1176 * @param workQueue the queue to use for holding tasks before they are
1177 * executed. This queue will hold only the {@code Runnable}
1178 * tasks submitted by the {@code execute} method.
1179 * @param threadFactory the factory to use when the executor
1180 * creates a new thread
1181 * @throws IllegalArgumentException if one of the following holds:<br>
1182 * {@code corePoolSize < 0}<br>
1183 * {@code keepAliveTime < 0}<br>
1184 * {@code maximumPoolSize <= 0}<br>
1185 * {@code maximumPoolSize < corePoolSize}
1186 * @throws NullPointerException if {@code workQueue}
1187 * or {@code threadFactory} is null
1188 */
1189 public ThreadPoolExecutor(int corePoolSize,
1190 int maximumPoolSize,
1191 long keepAliveTime,
1192 TimeUnit unit,
1193 BlockingQueue<Runnable> workQueue,
1194 ThreadFactory threadFactory) {
1195 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1196 threadFactory, defaultHandler);
1197 }
1198
1199 /**
1200 * Creates a new {@code ThreadPoolExecutor} with the given initial
1201 * parameters and the
1202 * {@linkplain Executors#defaultThreadFactory default thread factory}.
1203 *
1204 * @param corePoolSize the number of threads to keep in the pool, even
1205 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1206 * @param maximumPoolSize the maximum number of threads to allow in the
1207 * pool
1208 * @param keepAliveTime when the number of threads is greater than
1209 * the core, this is the maximum time that excess idle threads
1210 * will wait for new tasks before terminating.
1211 * @param unit the time unit for the {@code keepAliveTime} argument
1212 * @param workQueue the queue to use for holding tasks before they are
1213 * executed. This queue will hold only the {@code Runnable}
1214 * tasks submitted by the {@code execute} method.
1215 * @param handler the handler to use when execution is blocked
1216 * because the thread bounds and queue capacities are reached
1217 * @throws IllegalArgumentException if one of the following holds:<br>
1218 * {@code corePoolSize < 0}<br>
1219 * {@code keepAliveTime < 0}<br>
1220 * {@code maximumPoolSize <= 0}<br>
1221 * {@code maximumPoolSize < corePoolSize}
1222 * @throws NullPointerException if {@code workQueue}
1223 * or {@code handler} is null
1224 */
1225 public ThreadPoolExecutor(int corePoolSize,
1226 int maximumPoolSize,
1227 long keepAliveTime,
1228 TimeUnit unit,
1229 BlockingQueue<Runnable> workQueue,
1230 RejectedExecutionHandler handler) {
1231 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1232 Executors.defaultThreadFactory(), handler);
1233 }
1234
1235 /**
1236 * Creates a new {@code ThreadPoolExecutor} with the given initial
1237 * parameters.
1238 *
1239 * @param corePoolSize the number of threads to keep in the pool, even
1240 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1241 * @param maximumPoolSize the maximum number of threads to allow in the
1242 * pool
1243 * @param keepAliveTime when the number of threads is greater than
1244 * the core, this is the maximum time that excess idle threads
1245 * will wait for new tasks before terminating.
1246 * @param unit the time unit for the {@code keepAliveTime} argument
1247 * @param workQueue the queue to use for holding tasks before they are
1248 * executed. This queue will hold only the {@code Runnable}
1249 * tasks submitted by the {@code execute} method.
1250 * @param threadFactory the factory to use when the executor
1251 * creates a new thread
1252 * @param handler the handler to use when execution is blocked
1253 * because the thread bounds and queue capacities are reached
1254 * @throws IllegalArgumentException if one of the following holds:<br>
1255 * {@code corePoolSize < 0}<br>
1256 * {@code keepAliveTime < 0}<br>
1257 * {@code maximumPoolSize <= 0}<br>
1258 * {@code maximumPoolSize < corePoolSize}
1259 * @throws NullPointerException if {@code workQueue}
1260 * or {@code threadFactory} or {@code handler} is null
1261 */
1262 public ThreadPoolExecutor(int corePoolSize,
1263 int maximumPoolSize,
1264 long keepAliveTime,
1265 TimeUnit unit,
1266 BlockingQueue<Runnable> workQueue,
1267 ThreadFactory threadFactory,
1268 RejectedExecutionHandler handler) {
1269 if (corePoolSize < 0 ||
1270 maximumPoolSize <= 0 ||
1271 maximumPoolSize < corePoolSize ||
1272 keepAliveTime < 0)
1273 throw new IllegalArgumentException();
1274 if (workQueue == null || threadFactory == null || handler == null)
1275 throw new NullPointerException();
1276 this.corePoolSize = corePoolSize;
1277 this.maximumPoolSize = maximumPoolSize;
1278 this.workQueue = workQueue;
1279 this.keepAliveTime = unit.toNanos(keepAliveTime);
1280 this.threadFactory = threadFactory;
1281 this.handler = handler;
1282 }
1283
1284 /**
1285 * Executes the given task sometime in the future. The task
1286 * may execute in a new thread or in an existing pooled thread.
1287 *
1288 * If the task cannot be submitted for execution, either because this
1289 * executor has been shutdown or because its capacity has been reached,
1290 * the task is handled by the current {@link RejectedExecutionHandler}.
1291 *
1292 * @param command the task to execute
1293 * @throws RejectedExecutionException at discretion of
1294 * {@code RejectedExecutionHandler}, if the task
1295 * cannot be accepted for execution
1296 * @throws NullPointerException if {@code command} is null
1297 */
1298 public void execute(Runnable command) {
1299 if (command == null)
1300 throw new NullPointerException();
1301 /*
1302 * Proceed in 3 steps:
1303 *
1304 * 1. If fewer than corePoolSize threads are running, try to
1305 * start a new thread with the given command as its first
1306 * task. The call to addWorker atomically checks runState and
1307 * workerCount, and so prevents false alarms that would add
1308 * threads when it shouldn't, by returning false.
1309 *
1310 * 2. If a task can be successfully queued, then we still need
1311 * to double-check whether we should have added a thread
1312 * (because existing ones died since last checking) or that
1313 * the pool shut down since entry into this method. So we
1314 * recheck state and if necessary roll back the enqueuing if
1315 * stopped, or start a new thread if there are none.
1316 *
1317 * 3. If we cannot queue task, then we try to add a new
1318 * thread. If it fails, we know we are shut down or saturated
1319 * and so reject the task.
1320 */
1321 int c = ctl.get();
1322 if (workerCountOf(c) < corePoolSize) {
1323 if (addWorker(command, true))
1324 return;
1325 c = ctl.get();
1326 }
1327 if (isRunning(c) && workQueue.offer(command)) {
1328 int recheck = ctl.get();
1329 if (! isRunning(recheck) && remove(command))
1330 reject(command);
1331 else if (workerCountOf(recheck) == 0)
1332 addWorker(null, false);
1333 }
1334 else if (!addWorker(command, false))
1335 reject(command);
1336 }
1337
1338 /**
1339 * Initiates an orderly shutdown in which previously submitted
1340 * tasks are executed, but no new tasks will be accepted.
1341 * Invocation has no additional effect if already shut down.
1342 *
1343 * <p>This method does not wait for previously submitted tasks to
1344 * complete execution. Use {@link #awaitTermination awaitTermination}
1345 * to do that.
1346 *
1347 * @throws SecurityException {@inheritDoc}
1348 */
1349 public void shutdown() {
1350 final ReentrantLock mainLock = this.mainLock;
1351 mainLock.lock();
1352 try {
1353 checkShutdownAccess();
1354 advanceRunState(SHUTDOWN);
1355 interruptIdleWorkers();
1356 onShutdown(); // hook for ScheduledThreadPoolExecutor
1357 } finally {
1358 mainLock.unlock();
1359 }
1360 tryTerminate();
1361 }
1362
1363 /**
1364 * Attempts to stop all actively executing tasks, halts the
1365 * processing of waiting tasks, and returns a list of the tasks
1366 * that were awaiting execution. These tasks are drained (removed)
1367 * from the task queue upon return from this method.
1368 *
1369 * <p>This method does not wait for actively executing tasks to
1370 * terminate. Use {@link #awaitTermination awaitTermination} to
1371 * do that.
1372 *
1373 * <p>There are no guarantees beyond best-effort attempts to stop
1374 * processing actively executing tasks. This implementation
1375 * interrupts tasks via {@link Thread#interrupt}; any task that
1376 * fails to respond to interrupts may never terminate.
1377 *
1378 * @throws SecurityException {@inheritDoc}
1379 */
1380 public List<Runnable> shutdownNow() {
1381 List<Runnable> tasks;
1382 final ReentrantLock mainLock = this.mainLock;
1383 mainLock.lock();
1384 try {
1385 checkShutdownAccess();
1386 advanceRunState(STOP);
1387 interruptWorkers();
1388 tasks = drainQueue();
1389 } finally {
1390 mainLock.unlock();
1391 }
1392 tryTerminate();
1393 return tasks;
1394 }
1395
1396 public boolean isShutdown() {
1397 return runStateAtLeast(ctl.get(), SHUTDOWN);
1398 }
1399
1400 /** Used by ScheduledThreadPoolExecutor. */
1401 boolean isStopped() {
1402 return runStateAtLeast(ctl.get(), STOP);
1403 }
1404
1405 /**
1406 * Returns true if this executor is in the process of terminating
1407 * after {@link #shutdown} or {@link #shutdownNow} but has not
1408 * completely terminated. This method may be useful for
1409 * debugging. A return of {@code true} reported a sufficient
1410 * period after shutdown may indicate that submitted tasks have
1411 * ignored or suppressed interruption, causing this executor not
1412 * to properly terminate.
1413 *
1414 * @return {@code true} if terminating but not yet terminated
1415 */
1416 public boolean isTerminating() {
1417 int c = ctl.get();
1418 return runStateAtLeast(c, SHUTDOWN) && runStateLessThan(c, TERMINATED);
1419 }
1420
1421 public boolean isTerminated() {
1422 return runStateAtLeast(ctl.get(), TERMINATED);
1423 }
1424
1425 public boolean awaitTermination(long timeout, TimeUnit unit)
1426 throws InterruptedException {
1427 long nanos = unit.toNanos(timeout);
1428 final ReentrantLock mainLock = this.mainLock;
1429 mainLock.lock();
1430 try {
1431 while (runStateLessThan(ctl.get(), TERMINATED)) {
1432 if (nanos <= 0L)
1433 return false;
1434 nanos = termination.awaitNanos(nanos);
1435 }
1436 return true;
1437 } finally {
1438 mainLock.unlock();
1439 }
1440 }
1441
1442 // Override without "throws Throwable" for compatibility with subclasses
1443 // whose finalize method invokes super.finalize() (as is recommended).
1444 // Before JDK 11, finalize() had a non-empty method body.
1445
1446 /**
1447 * @implNote Previous versions of this class had a finalize method
1448 * that shut down this executor, but in this version, finalize
1449 * does nothing.
1450 */
1451 @Deprecated(since="9")
1452 protected void finalize() {}
1453
1454 /**
1455 * Sets the thread factory used to create new threads.
1456 *
1457 * @param threadFactory the new thread factory
1458 * @throws NullPointerException if threadFactory is null
1459 * @see #getThreadFactory
1460 */
1461 public void setThreadFactory(ThreadFactory threadFactory) {
1462 if (threadFactory == null)
1463 throw new NullPointerException();
1464 this.threadFactory = threadFactory;
1465 }
1466
1467 /**
1468 * Returns the thread factory used to create new threads.
1469 *
1470 * @return the current thread factory
1471 * @see #setThreadFactory(ThreadFactory)
1472 */
1473 public ThreadFactory getThreadFactory() {
1474 return threadFactory;
1475 }
1476
1477 /**
1478 * Sets a new handler for unexecutable tasks.
1479 *
1480 * @param handler the new handler
1481 * @throws NullPointerException if handler is null
1482 * @see #getRejectedExecutionHandler
1483 */
1484 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1485 if (handler == null)
1486 throw new NullPointerException();
1487 this.handler = handler;
1488 }
1489
1490 /**
1491 * Returns the current handler for unexecutable tasks.
1492 *
1493 * @return the current handler
1494 * @see #setRejectedExecutionHandler(RejectedExecutionHandler)
1495 */
1496 public RejectedExecutionHandler getRejectedExecutionHandler() {
1497 return handler;
1498 }
1499
1500 /**
1501 * Sets the core number of threads. This overrides any value set
1502 * in the constructor. If the new value is smaller than the
1503 * current value, excess existing threads will be terminated when
1504 * they next become idle. If larger, new threads will, if needed,
1505 * be started to execute any queued tasks.
1506 *
1507 * @param corePoolSize the new core size
1508 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1509 * or {@code corePoolSize} is greater than the {@linkplain
1510 * #getMaximumPoolSize() maximum pool size}
1511 * @see #getCorePoolSize
1512 */
1513 public void setCorePoolSize(int corePoolSize) {
1514 if (corePoolSize < 0 || maximumPoolSize < corePoolSize)
1515 throw new IllegalArgumentException();
1516 int delta = corePoolSize - this.corePoolSize;
1517 this.corePoolSize = corePoolSize;
1518 if (workerCountOf(ctl.get()) > corePoolSize)
1519 interruptIdleWorkers();
1520 else if (delta > 0) {
1521 // We don't really know how many new threads are "needed".
1522 // As a heuristic, prestart enough new workers (up to new
1523 // core size) to handle the current number of tasks in
1524 // queue, but stop if queue becomes empty while doing so.
1525 int k = Math.min(delta, workQueue.size());
1526 while (k-- > 0 && addWorker(null, true)) {
1527 if (workQueue.isEmpty())
1528 break;
1529 }
1530 }
1531 }
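    /*
     * A sketch of resizing a live pool; the sizes are arbitrary
     * assumptions.  The maximum is raised before the core size so that
     * the larger core value stays within the permitted bound.
     *
     * <pre> {@code
     * static ThreadPoolExecutor resizedPool() {
     *   ThreadPoolExecutor pool = new ThreadPoolExecutor(
     *     2, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
     *   pool.setMaximumPoolSize(16); // raise the ceiling first
     *   pool.setCorePoolSize(12);    // now legal; queued tasks may prompt new workers
     *   return pool;
     * }}</pre>
     */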
1532
1533 /**
1534 * Returns the core number of threads.
1535 *
1536 * @return the core number of threads
1537 * @see #setCorePoolSize
1538 */
1539 public int getCorePoolSize() {
1540 return corePoolSize;
1541 }
1542
1543 /**
1544 * Starts a core thread, causing it to idly wait for work. This
1545 * overrides the default policy of starting core threads only when
1546 * new tasks are executed. This method will return {@code false}
1547 * if all core threads have already been started.
1548 *
1549 * @return {@code true} if a thread was started
1550 */
1551 public boolean prestartCoreThread() {
1552 return workerCountOf(ctl.get()) < corePoolSize &&
1553 addWorker(null, true);
1554 }
1555
1556 /**
1557 * Same as prestartCoreThread except arranges that at least one
1558 * thread is started even if corePoolSize is 0.
1559 */
1560 void ensurePrestart() {
1561 int wc = workerCountOf(ctl.get());
1562 if (wc < corePoolSize)
1563 addWorker(null, true);
1564 else if (wc == 0)
1565 addWorker(null, false);
1566 }
1567
1568 /**
1569 * Starts all core threads, causing them to idly wait for work. This
1570 * overrides the default policy of starting core threads only when
1571 * new tasks are executed.
1572 *
1573 * @return the number of threads started
1574 */
1575 public int prestartAllCoreThreads() {
1576 int n = 0;
1577 while (addWorker(null, true))
1578 ++n;
1579 return n;
1580 }
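    /*
     * A sketch of paying thread-creation cost up front, before the first
     * burst of tasks arrives; the pool geometry is an assumption.
     *
     * <pre> {@code
     * static ThreadPoolExecutor warmedPool(int coreSize) {
     *   ThreadPoolExecutor pool = new ThreadPoolExecutor(
     *     coreSize, coreSize, 0L, TimeUnit.MILLISECONDS,
     *     new LinkedBlockingQueue<Runnable>());
     *   int started = pool.prestartAllCoreThreads(); // coreSize for a fresh pool
     *   assert started == coreSize;
     *   return pool;
     * }}</pre>
     */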
1581
1582 /**
1583 * Returns true if this pool allows core threads to time out and
1584      * terminate if no tasks arrive within the keep-alive time, being
1585 * replaced if needed when new tasks arrive. When true, the same
1586 * keep-alive policy applying to non-core threads applies also to
1587 * core threads. When false (the default), core threads are never
1588 * terminated due to lack of incoming tasks.
1589 *
1590 * @return {@code true} if core threads are allowed to time out,
1591 * else {@code false}
1592 *
1593 * @since 1.6
1594 */
1595 public boolean allowsCoreThreadTimeOut() {
1596 return allowCoreThreadTimeOut;
1597 }
1598
1599 /**
1600 * Sets the policy governing whether core threads may time out and
1601 * terminate if no tasks arrive within the keep-alive time, being
1602 * replaced if needed when new tasks arrive. When false, core
1603 * threads are never terminated due to lack of incoming
1604 * tasks. When true, the same keep-alive policy applying to
1605 * non-core threads applies also to core threads. To avoid
1606 * continual thread replacement, the keep-alive time must be
1607 * greater than zero when setting {@code true}. This method
1608 * should in general be called before the pool is actively used.
1609 *
1610 * @param value {@code true} if should time out, else {@code false}
1611 * @throws IllegalArgumentException if value is {@code true}
1612 * and the current keep-alive time is not greater than zero
1613 *
1614 * @since 1.6
1615 */
1616 public void allowCoreThreadTimeOut(boolean value) {
1617 if (value && keepAliveTime <= 0)
1618 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1619 if (value != allowCoreThreadTimeOut) {
1620 allowCoreThreadTimeOut = value;
1621 if (value)
1622 interruptIdleWorkers();
1623 }
1624 }
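    /*
     * A sketch of enabling core-thread timeout; the sizes and keep-alive
     * are assumptions.  The keep-alive time is nonzero, so the call is
     * legal, and an idle pool can then shrink all the way to zero threads.
     *
     * <pre> {@code
     * static ThreadPoolExecutor elasticPool() {
     *   ThreadPoolExecutor pool = new ThreadPoolExecutor(
     *     4, 8, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
     *   pool.allowCoreThreadTimeOut(true);
     *   return pool;
     * }}</pre>
     */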
1625
1626 /**
1627 * Sets the maximum allowed number of threads. This overrides any
1628 * value set in the constructor. If the new value is smaller than
1629 * the current value, excess existing threads will be
1630 * terminated when they next become idle.
1631 *
1632 * @param maximumPoolSize the new maximum
1633 * @throws IllegalArgumentException if the new maximum is
1634 * less than or equal to zero, or
1635 * less than the {@linkplain #getCorePoolSize core pool size}
1636 * @see #getMaximumPoolSize
1637 */
1638 public void setMaximumPoolSize(int maximumPoolSize) {
1639 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1640 throw new IllegalArgumentException();
1641 this.maximumPoolSize = maximumPoolSize;
1642 if (workerCountOf(ctl.get()) > maximumPoolSize)
1643 interruptIdleWorkers();
1644 }
1645
1646 /**
1647 * Returns the maximum allowed number of threads.
1648 *
1649 * @return the maximum allowed number of threads
1650 * @see #setMaximumPoolSize
1651 */
1652 public int getMaximumPoolSize() {
1653 return maximumPoolSize;
1654 }
1655
1656 /**
1657 * Sets the thread keep-alive time, which is the amount of time
1658 * that threads may remain idle before being terminated.
1659 * Threads that wait this amount of time without processing a
1660 * task will be terminated if there are more than the core
1661 * number of threads currently in the pool, or if this pool
1662 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1663 * This overrides any value set in the constructor.
1664 *
1665 * @param time the time to wait. A time value of zero will cause
1666 * excess threads to terminate immediately after executing tasks.
1667 * @param unit the time unit of the {@code time} argument
1668      * @throws IllegalArgumentException if {@code time} is less than zero or
1669 * if {@code time} is zero and {@code allowsCoreThreadTimeOut}
1670 * @see #getKeepAliveTime(TimeUnit)
1671 */
1672 public void setKeepAliveTime(long time, TimeUnit unit) {
1673 if (time < 0)
1674 throw new IllegalArgumentException();
1675 if (time == 0 && allowsCoreThreadTimeOut())
1676 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1677 long keepAliveTime = unit.toNanos(time);
1678 long delta = keepAliveTime - this.keepAliveTime;
1679 this.keepAliveTime = keepAliveTime;
1680 if (delta < 0)
1681 interruptIdleWorkers();
1682 }
1683
1684 /**
1685 * Returns the thread keep-alive time, which is the amount of time
1686 * that threads may remain idle before being terminated.
1687 * Threads that wait this amount of time without processing a
1688 * task will be terminated if there are more than the core
1689 * number of threads currently in the pool, or if this pool
1690 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.
1691 *
1692 * @param unit the desired time unit of the result
1693 * @return the time limit
1694 * @see #setKeepAliveTime(long, TimeUnit)
1695 */
1696 public long getKeepAliveTime(TimeUnit unit) {
1697 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1698 }
1699
1700 /* User-level queue utilities */
1701
1702 /**
1703 * Returns the task queue used by this executor. Access to the
1704 * task queue is intended primarily for debugging and monitoring.
1705 * This queue may be in active use. Retrieving the task queue
1706 * does not prevent queued tasks from executing.
1707 *
1708 * @return the task queue
1709 */
1710 public BlockingQueue<Runnable> getQueue() {
1711 return workQueue;
1712 }
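    /*
     * A monitoring-only sketch; the helper name is an assumption.  The
     * returned size is a point-in-time estimate because the queue may be
     * mutated concurrently by worker threads.
     *
     * <pre> {@code
     * static int queuedTaskEstimate(ThreadPoolExecutor pool) {
     *   return pool.getQueue().size();
     * }}</pre>
     */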
1713
1714 /**
1715 * Removes this task from the executor's internal queue if it is
1716 * present, thus causing it not to be run if it has not already
1717 * started.
1718 *
1719 * <p>This method may be useful as one part of a cancellation
1720 * scheme. It may fail to remove tasks that have been converted
1721 * into other forms before being placed on the internal queue.
1722 * For example, a task entered using {@code submit} might be
1723 * converted into a form that maintains {@code Future} status.
1724 * However, in such cases, method {@link #purge} may be used to
1725 * remove those Futures that have been cancelled.
1726 *
1727 * @param task the task to remove
1728 * @return {@code true} if the task was removed
1729 */
1730 public boolean remove(Runnable task) {
1731 boolean removed = workQueue.remove(task);
1732 tryTerminate(); // In case SHUTDOWN and now empty
1733 return removed;
1734 }
1735
1736 /**
1737 * Tries to remove from the work queue all {@link Future}
1738 * tasks that have been cancelled. This method can be useful as a
1739      * storage reclamation operation that has no other impact on
1740 * functionality. Cancelled tasks are never executed, but may
1741 * accumulate in work queues until worker threads can actively
1742 * remove them. Invoking this method instead tries to remove them now.
1743 * However, this method may fail to remove tasks in
1744 * the presence of interference by other threads.
1745 */
1746 public void purge() {
1747 final BlockingQueue<Runnable> q = workQueue;
1748 try {
1749 Iterator<Runnable> it = q.iterator();
1750 while (it.hasNext()) {
1751 Runnable r = it.next();
1752 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1753 it.remove();
1754 }
1755 } catch (ConcurrentModificationException fallThrough) {
1756 // Take slow path if we encounter interference during traversal.
1757 // Make copy for traversal and call remove for cancelled entries.
1758 // The slow path is more likely to be O(N*N).
1759 for (Object r : q.toArray())
1760 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1761 q.remove(r);
1762 }
1763
1764 tryTerminate(); // In case SHUTDOWN and now empty
1765 }
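    /*
     * A sketch of one cancellation scheme combining {@code Future.cancel}
     * with {@code purge}; the helper name is an assumption.
     *
     * <pre> {@code
     * static void cancelAndReclaim(ThreadPoolExecutor pool, Future<?> future) {
     *   future.cancel(false); // will not run if not already started
     *   pool.purge();         // drop cancelled Futures from the work queue now
     * }}</pre>
     */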
1766
1767 /* Statistics */
1768
1769 /**
1770 * Returns the current number of threads in the pool.
1771 *
1772 * @return the number of threads
1773 */
1774 public int getPoolSize() {
1775 final ReentrantLock mainLock = this.mainLock;
1776 mainLock.lock();
1777 try {
1778 // Remove rare and surprising possibility of
1779 // isTerminated() && getPoolSize() > 0
1780 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1781 : workers.size();
1782 } finally {
1783 mainLock.unlock();
1784 }
1785 }
1786
1787 /**
1788 * Returns the approximate number of threads that are actively
1789 * executing tasks.
1790 *
1791 * @return the number of threads
1792 */
1793 public int getActiveCount() {
1794 final ReentrantLock mainLock = this.mainLock;
1795 mainLock.lock();
1796 try {
1797 int n = 0;
1798 for (Worker w : workers)
1799 if (w.isLocked())
1800 ++n;
1801 return n;
1802 } finally {
1803 mainLock.unlock();
1804 }
1805 }
1806
1807 /**
1808 * Returns the largest number of threads that have ever
1809 * simultaneously been in the pool.
1810 *
1811 * @return the number of threads
1812 */
1813 public int getLargestPoolSize() {
1814 final ReentrantLock mainLock = this.mainLock;
1815 mainLock.lock();
1816 try {
1817 return largestPoolSize;
1818 } finally {
1819 mainLock.unlock();
1820 }
1821 }
1822
1823 /**
1824 * Returns the approximate total number of tasks that have ever been
1825 * scheduled for execution. Because the states of tasks and
1826 * threads may change dynamically during computation, the returned
1827 * value is only an approximation.
1828 *
1829 * @return the number of tasks
1830 */
1831 public long getTaskCount() {
1832 final ReentrantLock mainLock = this.mainLock;
1833 mainLock.lock();
1834 try {
1835 long n = completedTaskCount;
1836 for (Worker w : workers) {
1837 n += w.completedTasks;
1838 if (w.isLocked())
1839 ++n;
1840 }
1841 return n + workQueue.size();
1842 } finally {
1843 mainLock.unlock();
1844 }
1845 }
1846
1847 /**
1848 * Returns the approximate total number of tasks that have
1849 * completed execution. Because the states of tasks and threads
1850 * may change dynamically during computation, the returned value
1851 * is only an approximation, but one that does not ever decrease
1852 * across successive calls.
1853 *
1854 * @return the number of tasks
1855 */
1856 public long getCompletedTaskCount() {
1857 final ReentrantLock mainLock = this.mainLock;
1858 mainLock.lock();
1859 try {
1860 long n = completedTaskCount;
1861 for (Worker w : workers)
1862 n += w.completedTasks;
1863 return n;
1864 } finally {
1865 mainLock.unlock();
1866 }
1867 }
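    /*
     * A sketch of periodic monitoring built on these statistics; the
     * format and helper name are assumptions.  All values are estimates
     * taken at slightly different instants.
     *
     * <pre> {@code
     * static String poolStats(ThreadPoolExecutor pool) {
     *   return String.format(
     *     "size=%d active=%d largest=%d tasks=%d completed=%d",
     *     pool.getPoolSize(), pool.getActiveCount(), pool.getLargestPoolSize(),
     *     pool.getTaskCount(), pool.getCompletedTaskCount());
     * }}</pre>
     */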
1868
1869 /**
1870 * Returns a string identifying this pool, as well as its state,
1871 * including indications of run state and estimated worker and
1872 * task counts.
1873 *
1874 * @return a string identifying this pool, as well as its state
1875 */
1876 public String toString() {
1877 long ncompleted;
1878 int nworkers, nactive;
1879 final ReentrantLock mainLock = this.mainLock;
1880 mainLock.lock();
1881 try {
1882 ncompleted = completedTaskCount;
1883 nactive = 0;
1884 nworkers = workers.size();
1885 for (Worker w : workers) {
1886 ncompleted += w.completedTasks;
1887 if (w.isLocked())
1888 ++nactive;
1889 }
1890 } finally {
1891 mainLock.unlock();
1892 }
1893 int c = ctl.get();
1894 String runState =
1895 isRunning(c) ? "Running" :
1896 runStateAtLeast(c, TERMINATED) ? "Terminated" :
1897 "Shutting down";
1898 return super.toString() +
1899 "[" + runState +
1900 ", pool size = " + nworkers +
1901 ", active threads = " + nactive +
1902 ", queued tasks = " + workQueue.size() +
1903 ", completed tasks = " + ncompleted +
1904 "]";
1905 }
1906
1907 /* Extension hooks */
1908
1909 /**
1910 * Method invoked prior to executing the given Runnable in the
1911 * given thread. This method is invoked by thread {@code t} that
1912 * will execute task {@code r}, and may be used to re-initialize
1913 * ThreadLocals, or to perform logging.
1914 *
1915 * <p>This implementation does nothing, but may be customized in
1916 * subclasses. Note: To properly nest multiple overridings, subclasses
1917 * should generally invoke {@code super.beforeExecute} at the end of
1918 * this method.
1919 *
1920 * @param t the thread that will run task {@code r}
1921 * @param r the task that will be executed
1922 */
1923 protected void beforeExecute(Thread t, Runnable r) { }
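    /*
     * A sketch of a timing subclass using both hooks; the class name is an
     * assumption.  Per the notes on these methods, {@code super.beforeExecute}
     * is invoked last and {@code super.afterExecute} first.
     *
     * <pre> {@code
     * class TimingThreadPool extends ThreadPoolExecutor {
     *   private final ThreadLocal<Long> startNanos = new ThreadLocal<>();
     *   TimingThreadPool(int core, int max, long keepAlive, TimeUnit unit,
     *                    BlockingQueue<Runnable> queue) {
     *     super(core, max, keepAlive, unit, queue);
     *   }
     *   protected void beforeExecute(Thread t, Runnable r) {
     *     startNanos.set(System.nanoTime());
     *     super.beforeExecute(t, r);
     *   }
     *   protected void afterExecute(Runnable r, Throwable t) {
     *     super.afterExecute(r, t);
     *     System.out.printf("%s ran for %d ns%n",
     *                       r, System.nanoTime() - startNanos.get());
     *   }
     * }}</pre>
     */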
1924
1925 /**
1926 * Method invoked upon completion of execution of the given Runnable.
1927 * This method is invoked by the thread that executed the task. If
1928 * non-null, the Throwable is the uncaught {@code RuntimeException}
1929 * or {@code Error} that caused execution to terminate abruptly.
1930 *
1931 * <p>This implementation does nothing, but may be customized in
1932 * subclasses. Note: To properly nest multiple overridings, subclasses
1933 * should generally invoke {@code super.afterExecute} at the
1934 * beginning of this method.
1935 *
1936 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1937 * {@link FutureTask}) either explicitly or via methods such as
1938 * {@code submit}, these task objects catch and maintain
1939 * computational exceptions, and so they do not cause abrupt
1940 * termination, and the internal exceptions are <em>not</em>
1941 * passed to this method. If you would like to trap both kinds of
1942 * failures in this method, you can further probe for such cases,
1943 * as in this sample subclass that prints either the direct cause
1944 * or the underlying exception if a task has been aborted:
1945 *
1946 * <pre> {@code
1947 * class ExtendedExecutor extends ThreadPoolExecutor {
1948 * // ...
1949 * protected void afterExecute(Runnable r, Throwable t) {
1950 * super.afterExecute(r, t);
1951 * if (t == null
1952 * && r instanceof Future<?>
1953 * && ((Future<?>)r).isDone()) {
1954 * try {
1955 * Object result = ((Future<?>) r).get();
1956 * } catch (CancellationException ce) {
1957 * t = ce;
1958 * } catch (ExecutionException ee) {
1959 * t = ee.getCause();
1960 * } catch (InterruptedException ie) {
1961 * // ignore/reset
1962 * Thread.currentThread().interrupt();
1963 * }
1964 * }
1965 * if (t != null)
1966 * System.out.println(t);
1967 * }
1968 * }}</pre>
1969 *
1970 * @param r the runnable that has completed
1971 * @param t the exception that caused termination, or null if
1972 * execution completed normally
1973 */
1974 protected void afterExecute(Runnable r, Throwable t) { }
1975
1976 /**
1977 * Method invoked when the Executor has terminated. Default
1978 * implementation does nothing. Note: To properly nest multiple
1979 * overridings, subclasses should generally invoke
1980 * {@code super.terminated} within this method.
1981 */
1982 protected void terminated() { }
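    /*
     * A sketch of a subclass that signals observers from {@code terminated};
     * the class name and latch are assumptions.
     *
     * <pre> {@code
     * class LatchedPool extends ThreadPoolExecutor {
     *   final CountDownLatch done = new CountDownLatch(1);
     *   LatchedPool(int n) {
     *     super(n, n, 0L, TimeUnit.MILLISECONDS,
     *           new LinkedBlockingQueue<Runnable>());
     *   }
     *   protected void terminated() {
     *     try {
     *       done.countDown();   // observers may await full termination
     *     } finally {
     *       super.terminated();
     *     }
     *   }
     * }}</pre>
     */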
1983
1984 /* Predefined RejectedExecutionHandlers */
1985
1986 /**
1987 * A handler for rejected tasks that runs the rejected task
1988 * directly in the calling thread of the {@code execute} method,
1989 * unless the executor has been shut down, in which case the task
1990 * is discarded.
1991 */
1992 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1993 /**
1994 * Creates a {@code CallerRunsPolicy}.
1995 */
1996 public CallerRunsPolicy() { }
1997
1998 /**
1999 * Executes task r in the caller's thread, unless the executor
2000 * has been shut down, in which case the task is discarded.
2001 *
2002 * @param r the runnable task requested to be executed
2003 * @param e the executor attempting to execute this task
2004 */
2005 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2006 if (!e.isShutdown()) {
2007 r.run();
2008 }
2009 }
2010 }
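    /*
     * A sketch of configuring this policy as a simple throttle: when the
     * bounded queue fills, submissions run in the submitting thread,
     * slowing producers down.  The sizes are assumptions.
     *
     * <pre> {@code
     * static ThreadPoolExecutor throttledPool() {
     *   return new ThreadPoolExecutor(
     *     2, 4, 60L, TimeUnit.SECONDS,
     *     new ArrayBlockingQueue<Runnable>(100),
     *     new ThreadPoolExecutor.CallerRunsPolicy());
     * }}</pre>
     */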
2011
2012 /**
2013 * A handler for rejected tasks that throws a
2014 * {@link RejectedExecutionException}.
2015 *
2016 * This is the default handler for {@link ThreadPoolExecutor} and
2017 * {@link ScheduledThreadPoolExecutor}.
2018 */
2019 public static class AbortPolicy implements RejectedExecutionHandler {
2020 /**
2021 * Creates an {@code AbortPolicy}.
2022 */
2023 public AbortPolicy() { }
2024
2025 /**
2026 * Always throws RejectedExecutionException.
2027 *
2028 * @param r the runnable task requested to be executed
2029 * @param e the executor attempting to execute this task
2030 * @throws RejectedExecutionException always
2031 */
2032 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2033 throw new RejectedExecutionException("Task " + r.toString() +
2034 " rejected from " +
2035 e.toString());
2036 }
2037 }
2038
2039 /**
2040 * A handler for rejected tasks that silently discards the
2041 * rejected task.
2042 */
2043 public static class DiscardPolicy implements RejectedExecutionHandler {
2044 /**
2045 * Creates a {@code DiscardPolicy}.
2046 */
2047 public DiscardPolicy() { }
2048
2049 /**
2050 * Does nothing, which has the effect of discarding task r.
2051 *
2052 * @param r the runnable task requested to be executed
2053 * @param e the executor attempting to execute this task
2054 */
2055 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2056 }
2057 }
2058
2059 /**
2060 * A handler for rejected tasks that discards the oldest unhandled
2061 * request and then retries {@code execute}, unless the executor
2062 * is shut down, in which case the task is discarded. This policy is
2063 * rarely useful in cases where other threads may be waiting for
2064      * tasks to terminate, or where failures must be recorded. Instead consider
2065 * using a handler of the form:
2066 * <pre> {@code
2067 * new RejectedExecutionHandler() {
2068 * public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2069 * Future<?> dropped = e.getQueue().poll();
2070 * if (dropped != null)
2071 * dropped.cancel(false); // also consider logging the failure
2072 * e.execute(r); // retry
2073 * }}}</pre>
2074 */
2075 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
2076 /**
2077          * Creates a {@code DiscardOldestPolicy}.
2078 */
2079 public DiscardOldestPolicy() { }
2080
2081 /**
2082 * Obtains and ignores the next task that the executor
2083 * would otherwise execute, if one is immediately available,
2084 * and then retries execution of task r, unless the executor
2085 * is shut down, in which case task r is instead discarded.
2086 *
2087 * @param r the runnable task requested to be executed
2088 * @param e the executor attempting to execute this task
2089 */
2090 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2091 if (!e.isShutdown()) {
2092 e.getQueue().poll();
2093 e.execute(r);
2094 }
2095 }
2096 }
2097 }