root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java
Revision: 1.37
Committed: Thu Dec 4 20:54:29 2003 UTC (20 years, 6 months ago) by dl
Branch: MAIN
CVS Tags: JSR166_DEC9_PRE_ES_SUBMIT
Changes since 1.36: +3 -3 lines
Log Message:
Revised tests for revised Future classes

File Contents

# Content
1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain. Use, modify, and
4 * redistribute this code in any way without acknowledgement.
5 */
6
7 package java.util.concurrent;
8 import java.util.concurrent.locks.*;
9 import java.util.concurrent.atomic.AtomicInteger;
10 import java.util.*;
11
12 /**
13 * An {@link ExecutorService} that executes each submitted task using
14 * one of possibly several pooled threads, normally configured
15 * using {@link Executors} factory methods.
16 *
17 * <p>Thread pools address two different problems: they usually
18 * provide improved performance when executing large numbers of
19 * asynchronous tasks, due to reduced per-task invocation overhead,
20 * and they provide a means of bounding and managing the resources,
21 * including threads, consumed when executing a collection of tasks.
22 * Each <tt>ThreadPoolExecutor</tt> also maintains some basic
23 * statistics, such as the number of completed tasks.
24 *
25 * <p>To be useful across a wide range of contexts, this class
26 * provides many adjustable parameters and extensibility
27 * hooks. However, programmers are urged to use the more convenient
28 * {@link Executors} factory methods {@link
29 * Executors#newCachedThreadPool} (unbounded thread pool, with
30 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
31 * (fixed size thread pool) and {@link
32 * Executors#newSingleThreadExecutor} (single background thread), that
33 * preconfigure settings for the most common usage
34 * scenarios. Otherwise, use the following guide when manually
35 * configuring and tuning this class:
36 *
37 * <dl>
38 *
39 * <dt>Core and maximum pool sizes</dt>
40 *
41 * <dd>A <tt>ThreadPoolExecutor</tt> will automatically adjust the
42 * pool size
43 * (see {@link ThreadPoolExecutor#getPoolSize})
44 * according to the bounds set by corePoolSize
45 * (see {@link ThreadPoolExecutor#getCorePoolSize})
46 * and
47 * maximumPoolSize
48 * (see {@link ThreadPoolExecutor#getMaximumPoolSize}).
49 * When a new task is submitted in method {@link
50 * ThreadPoolExecutor#execute}, and fewer than corePoolSize threads
51 * are running, a new thread is created to handle the request, even if
52 * other worker threads are idle. If there are more than
53 * corePoolSize but fewer than maximumPoolSize threads running, a new
54 * thread will be created only if the queue is full. By setting
55 * corePoolSize and maximumPoolSize the same, you create a fixed-size
56 * thread pool. By setting maximumPoolSize to an essentially unbounded
57 * value such as <tt>Integer.MAX_VALUE</tt>, you allow the pool to
58 * accommodate an arbitrary number of concurrent tasks. Most typically,
59 * core and maximum pool sizes are set only upon construction, but they
60 * may also be changed dynamically using {@link
61 * ThreadPoolExecutor#setCorePoolSize} and {@link
62 * ThreadPoolExecutor#setMaximumPoolSize}. </dd>
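 *
 * For example, an illustrative sketch of the two sizing styles just
 * described (the variable names and numeric values below are arbitrary,
 * not part of this API):
 *
 * <pre>
 * // Fixed-size pool: core == max, so exactly ten threads once warmed up.
 * ThreadPoolExecutor fixed = new ThreadPoolExecutor(
 *     10, 10, 0L, TimeUnit.MILLISECONDS,
 *     new LinkedBlockingQueue&lt;Runnable&gt;());
 *
 * // Elastic pool: grows beyond two threads only when the bounded queue fills.
 * ThreadPoolExecutor elastic = new ThreadPoolExecutor(
 *     2, 50, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue&lt;Runnable&gt;(100));
 * </pre>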
63 *
64 * <dt>On-demand construction</dt>
65 *
66 * <dd> By default, even core threads are initially created and
67 * started only when needed by new tasks, but this can be overridden
68 * dynamically using method {@link
69 * ThreadPoolExecutor#prestartCoreThread} or
70 * {@link ThreadPoolExecutor#prestartAllCoreThreads}. </dd>
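 *
 * For instance (an illustrative fragment; <tt>pool</tt> is assumed to be
 * an already-constructed <tt>ThreadPoolExecutor</tt>):
 *
 * <pre>
 * pool.prestartAllCoreThreads(); // start all core threads now; they idle until tasks arrive
 * </pre>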
71 *
72 * <dt>Creating new threads</dt>
73 *
74 * <dd>New threads are created using a {@link
75 * java.util.concurrent.ThreadFactory}. If not otherwise specified, a
76 * {@link Executors#defaultThreadFactory} is used, which creates threads that
77 * are all in the same {@link ThreadGroup} and have the same
78 * <tt>NORM_PRIORITY</tt> priority and non-daemon status. By supplying
79 * a different ThreadFactory, you can alter the thread's name, thread
80 * group, priority, daemon status, etc. </dd>
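 *
 * A minimal illustrative sketch of such a factory (the class and names
 * below are hypothetical, not part of this package):
 *
 * <pre>
 * class NamedThreadFactory implements ThreadFactory {
 *     private final AtomicInteger count = new AtomicInteger(1);
 *     public Thread newThread(Runnable r) {
 *         Thread t = new Thread(r, "worker-" + count.getAndIncrement());
 *         t.setDaemon(false);
 *         t.setPriority(Thread.NORM_PRIORITY);
 *         return t;
 *     }
 * }
 * </pre>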
81 *
82 * <dt>Keep-alive times</dt>
83 *
84 * <dd>If the pool currently has more than corePoolSize threads,
85 * excess threads will be terminated if they have been idle for more
86 * than the keepAliveTime (see {@link
87 * ThreadPoolExecutor#getKeepAliveTime}). This provides a means of
88 * reducing resource consumption when the pool is not being actively
89 * used. If the pool becomes more active later, new threads will be
90 * constructed. This parameter can also be changed dynamically
91 * using method {@link ThreadPoolExecutor#setKeepAliveTime}. Using
92 * a value of <tt>Long.MAX_VALUE</tt> {@link TimeUnit#NANOSECONDS}
93 * effectively prevents idle threads from ever terminating prior
94 * to shutdown.
95 * </dd>
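 *
 * For example (illustrative only; <tt>pool</tt> is assumed to be an
 * existing <tt>ThreadPoolExecutor</tt>):
 *
 * <pre>
 * pool.setKeepAliveTime(30, TimeUnit.SECONDS);   // idle non-core threads exit after 30s
 * long nanos = pool.getKeepAliveTime(TimeUnit.NANOSECONDS);
 * </pre>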
96 *
97 * <dt>Queueing</dt>
98 *
99 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
100 * submitted tasks. The use of this queue interacts with pool sizing:
101 *
102 * <ul>
103 *
104 * <li> If fewer than corePoolSize threads are running, the Executor
105 * always prefers adding a new thread
106 * rather than queueing.</li>
107 *
108 * <li> If corePoolSize or more threads are running, the Executor
109 * always prefers queuing a request rather than adding a new
110 * thread.</li>
111 *
112 * <li> If a request cannot be queued, a new thread is created unless
113 * this would exceed maximumPoolSize, in which case, the task will be
114 * rejected.</li>
115 *
116 * </ul>
117 *
118 * There are three general strategies for queuing:
119 * <ol>
120 *
121 * <li> <em> Direct handoffs.</em> A good default choice for a work
122 * queue is a {@link SynchronousQueue} that hands off tasks to threads
123 * without otherwise holding them. Here, an attempt to queue a task
124 * will fail if no threads are immediately available to run it, so a
125 * new thread will be constructed. This policy avoids lockups when
126 * handling sets of requests that might have internal dependencies.
127 * Direct handoffs generally require unbounded maximumPoolSizes to
128 * avoid rejection of newly submitted tasks. This in turn admits the
129 * possibility of unbounded thread growth when commands continue to
130 * arrive on average faster than they can be processed. </li>
131 *
132 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
133 * example a {@link LinkedBlockingQueue} without a predefined
134 * capacity) will cause new tasks to be queued in cases where all
135 * corePoolSize threads are busy. Thus, no more than corePoolSize
136 * threads will ever be created. (And the value of the maximumPoolSize
137 * therefore doesn't have any effect.) This may be appropriate when
138 * each task is completely independent of others, so tasks cannot
139 * affect each other's execution; for example, in a web page server.
140 * While this style of queuing can be useful in smoothing out
141 * transient bursts of requests, it admits the possibility of
142 * unbounded work queue growth when commands continue to arrive on
143 * average faster than they can be processed. </li>
144 *
145 * <li><em>Bounded queues.</em> A bounded queue (for example, an
146 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
147 * used with finite maximumPoolSizes, but can be more difficult to
148 * tune and control. Queue sizes and maximum pool sizes may be traded
149 * off for each other: Using large queues and small pools minimizes
150 * CPU usage, OS resources, and context-switching overhead, but can
151 * lead to artificially low throughput. If tasks frequently block (for
152 * example if they are I/O bound), a system may be able to schedule
153 * time for more threads than you otherwise allow. Use of small queues
154 * generally requires larger pool sizes, which keeps CPUs busier but
155 * may encounter unacceptable scheduling overhead, which also
156 * decreases throughput. </li>
157 *
158 * </ol>
159 *
160 * </dd>
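 *
 * As a rough illustration of the three strategies above (the numeric
 * sizes and capacities are arbitrary example values):
 *
 * <pre>
 * // 1. Direct handoff: every otherwise-unqueueable task gets a new thread.
 * new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
 *                        new SynchronousQueue&lt;Runnable&gt;());
 *
 * // 2. Unbounded queue: never more than corePoolSize threads.
 * new ThreadPoolExecutor(4, 4, 0L, TimeUnit.MILLISECONDS,
 *                        new LinkedBlockingQueue&lt;Runnable&gt;());
 *
 * // 3. Bounded queue: bounds both queued work and thread count.
 * new ThreadPoolExecutor(4, 16, 60L, TimeUnit.SECONDS,
 *                        new ArrayBlockingQueue&lt;Runnable&gt;(64));
 * </pre>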
161 *
162 * <dt>Rejected tasks</dt>
163 *
164 * <dd> New tasks submitted in method {@link
165 * ThreadPoolExecutor#execute} will be <em>rejected</em> when the
166 * Executor has been shut down, and also when the Executor uses finite
167 * bounds for both maximum threads and work queue capacity, and is
168 * saturated. In either case, the <tt>execute</tt> method invokes the
169 * {@link RejectedExecutionHandler#rejectedExecution} method of its
170 * {@link RejectedExecutionHandler}. Four predefined handler policies
171 * are provided:
172 *
173 * <ol>
174 *
175 * <li> In the
176 * default {@link ThreadPoolExecutor.AbortPolicy}, the handler throws a
177 * runtime {@link RejectedExecutionException} upon rejection. </li>
178 *
179 * <li> In {@link
180 * ThreadPoolExecutor.CallerRunsPolicy}, the thread that invokes
181 * <tt>execute</tt> itself runs the task. This provides a simple
182 * feedback control mechanism that will slow down the rate that new
183 * tasks are submitted. </li>
184 *
185 * <li> In {@link ThreadPoolExecutor.DiscardPolicy},
186 * a task that cannot be executed is simply dropped. </li>
187 *
188 * <li>In {@link
189 * ThreadPoolExecutor.DiscardOldestPolicy}, if the executor is not
190 * shut down, the task at the head of the work queue is dropped, and
191 * then execution is retried (which can fail again, causing this to be
192 * repeated.) </li>
193 *
194 * </ol>
195 *
196 * It is possible to define and use other kinds of {@link
197 * RejectedExecutionHandler} classes. Doing so requires some care
198 * especially when policies are designed to work only under particular
199 * capacity or queueing policies. </dd>
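 *
 * For example, a sketch of installing one of the predefined handlers
 * (the variable name <tt>pool</tt> is illustrative):
 *
 * <pre>
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2, 4, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue&lt;Runnable&gt;(10));
 * // When saturated, run the task in the submitting thread rather than failing.
 * pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
 * </pre>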
200 *
201 * <dt>Hook methods</dt>
202 *
203 * <dd>This class provides <tt>protected</tt> overridable {@link
204 * ThreadPoolExecutor#beforeExecute} and {@link
205 * ThreadPoolExecutor#afterExecute} methods that are called before and
206 * after execution of each task. These can be used to manipulate the
207 * execution environment, for example, reinitializing ThreadLocals,
208 * gathering statistics, or adding log entries. Additionally, method
209 * {@link ThreadPoolExecutor#terminated} can be overridden to perform
210 * any special processing that needs to be done once the Executor has
211 * fully terminated.</dd>
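 *
 * An illustrative sketch of a subclass that uses the hooks for simple
 * logging (the class name is hypothetical):
 *
 * <pre>
 * class LoggingPool extends ThreadPoolExecutor {
 *     LoggingPool(int core, int max, long keepAlive, TimeUnit unit,
 *                 BlockingQueue&lt;Runnable&gt; queue) {
 *         super(core, max, keepAlive, unit, queue);
 *     }
 *     protected void beforeExecute(Thread t, Runnable r) {
 *         System.out.println(t.getName() + " starting " + r);
 *         super.beforeExecute(t, r);   // invoke super last, as advised below
 *     }
 *     protected void afterExecute(Runnable r, Throwable t) {
 *         super.afterExecute(r, t);    // invoke super first, as advised below
 *         System.out.println("finished " + r + (t == null ? "" : " with " + t));
 *     }
 * }
 * </pre>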
212 *
213 * <dt>Queue maintenance</dt>
214 *
215 * <dd> Method {@link ThreadPoolExecutor#getQueue} allows access to
216 * the work queue for purposes of monitoring and debugging. Use of
217 * this method for any other purpose is strongly discouraged. Two
218 * supplied methods, {@link ThreadPoolExecutor#remove} and {@link
219 * ThreadPoolExecutor#purge}, are available to assist in storage
220 * reclamation when large numbers of queued tasks become
221 * cancelled.</dd> </dl>
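 *
 * For instance (an illustrative fragment; <tt>pool</tt> and
 * <tt>futures</tt> are assumed to exist elsewhere):
 *
 * <pre>
 * for (Future&lt;?&gt; f : futures)
 *     f.cancel(false);   // mark queued tasks as cancelled
 * pool.purge();          // then reclaim their queue slots immediately
 * </pre>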
222 *
223 * @since 1.5
224 * @author Doug Lea
225 */
226 public class ThreadPoolExecutor implements ExecutorService {
227 /**
228 * Queue used for holding tasks and handing off to worker threads.
229 */
230 private final BlockingQueue<Runnable> workQueue;
231
232 /**
233 * Lock held on updates to poolSize, corePoolSize, maximumPoolSize, and
234 * workers set.
235 */
236 private final ReentrantLock mainLock = new ReentrantLock();
237
238 /**
239 * Wait condition to support awaitTermination
240 */
241 private final ReentrantLock.ConditionObject termination = mainLock.newCondition();
242
243 /**
244 * Set containing all worker threads in pool.
245 */
246 private final HashSet<Worker> workers = new HashSet<Worker>();
247
248 /**
249 * Timeout in nanoseconds for idle threads waiting for work.
250 * Threads use this timeout only when there are more than
251 * corePoolSize threads present. Otherwise they wait forever for new work.
252 */
253 private volatile long keepAliveTime;
254
255 /**
256 * Core pool size, updated only while holding mainLock,
257 * but volatile to allow concurrent readability even
258 * during updates.
259 */
260 private volatile int corePoolSize;
261
262 /**
263 * Maximum pool size, updated only while holding mainLock
264 * but volatile to allow concurrent readability even
265 * during updates.
266 */
267 private volatile int maximumPoolSize;
268
269 /**
270 * Current pool size, updated only while holding mainLock
271 * but volatile to allow concurrent readability even
272 * during updates.
273 */
274 private volatile int poolSize;
275
276 /**
277 * Lifecycle state
278 */
279 private volatile int runState;
280
281 // Special values for runState
282 /** Normal, not-shutdown mode */
283 private static final int RUNNING = 0;
284 /** Controlled shutdown mode */
285 private static final int SHUTDOWN = 1;
286 /** Immediate shutdown mode */
287 private static final int STOP = 2;
288 /** Final state */
289 private static final int TERMINATED = 3;
290
291 /**
292 * Handler called when saturated or shutdown in execute.
293 */
294 private volatile RejectedExecutionHandler handler;
295
296 /**
297 * Factory for new threads.
298 */
299 private volatile ThreadFactory threadFactory;
300
301 /**
302 * Tracks largest attained pool size.
303 */
304 private int largestPoolSize;
305
306 /**
307 * Counter for completed tasks. Updated only on termination of
308 * worker threads.
309 */
310 private long completedTaskCount;
311
312 /**
313 * The default rejected execution handler
314 */
315 private static final RejectedExecutionHandler defaultHandler =
316 new AbortPolicy();
317
318 /**
319 * Invoke the rejected execution handler for the given command.
320 */
321 void reject(Runnable command) {
322 handler.rejectedExecution(command, this);
323 }
324
325
326
327 /**
328 * Create and return a new thread running firstTask as its first
329 * task. Call only while holding mainLock
330 * @param firstTask the task the new thread should run first (or
331 * null if none)
332 * @return the new thread
333 */
334 private Thread addThread(Runnable firstTask) {
335 Worker w = new Worker(firstTask);
336 Thread t = threadFactory.newThread(w);
337 w.thread = t;
338 workers.add(w);
339 int nt = ++poolSize;
340 if (nt > largestPoolSize)
341 largestPoolSize = nt;
342 return t;
343 }
344
345
346
347 /**
348 * Create and start a new thread running firstTask as its first
349 * task, only if fewer than corePoolSize threads are running.
350 * @param firstTask the task the new thread should run first (or
351 * null if none)
352 * @return true if successful.
353 */
354 private boolean addIfUnderCorePoolSize(Runnable firstTask) {
355 Thread t = null;
356 mainLock.lock();
357 try {
358 if (poolSize < corePoolSize)
359 t = addThread(firstTask);
360 } finally {
361 mainLock.unlock();
362 }
363 if (t == null)
364 return false;
365 t.start();
366 return true;
367 }
368
369 /**
370 * Create and start a new thread only if fewer than maximumPoolSize
371 * threads are running. The new thread runs as its first task the
372 * next task in queue, or if there is none, the given task.
373 * @param firstTask the task the new thread should run first (or
374 * null if none)
375 * @return null on failure, else the first task to be run by new thread.
376 */
377 private Runnable addIfUnderMaximumPoolSize(Runnable firstTask) {
378 Thread t = null;
379 Runnable next = null;
380 mainLock.lock();
381 try {
382 if (poolSize < maximumPoolSize) {
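                // The new thread preferentially takes an already-queued task;
                // only if the queue is empty does it take the caller's task.
                // When a queued task is taken instead, the caller (execute)
                // sees a different task returned and retries its submission.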
383 next = workQueue.poll();
384 if (next == null)
385 next = firstTask;
386 t = addThread(next);
387 }
388 } finally {
389 mainLock.unlock();
390 }
391 if (t == null)
392 return null;
393 t.start();
394 return next;
395 }
396
397
398 /**
399 * Get the next task for a worker thread to run.
400 * @return the task
401 * @throws InterruptedException if interrupted while waiting for task
402 */
403 private Runnable getTask() throws InterruptedException {
404 for (;;) {
405 switch(runState) {
406 case RUNNING: {
407 if (poolSize <= corePoolSize) // untimed wait if core
408 return workQueue.take();
409
410 long timeout = keepAliveTime;
411 if (timeout <= 0) // die immediately for 0 timeout
412 return null;
413 Runnable r = workQueue.poll(timeout, TimeUnit.NANOSECONDS);
414 if (r != null)
415 return r;
416 if (poolSize > corePoolSize) // timed out
417 return null;
418 // else, after timeout, pool shrank so shouldn't die, so retry
419 break;
420 }
421
422 case SHUTDOWN: {
423 // Help drain queue
424 Runnable r = workQueue.poll();
425 if (r != null)
426 return r;
427
428 // Check if can terminate
429 if (workQueue.isEmpty()) {
430 interruptIdleWorkers();
431 return null;
432 }
433
434 // There could still be delayed tasks in queue.
435 // Wait for one, re-checking state upon interruption
436 try {
437 return workQueue.take();
438 }
439 catch(InterruptedException ignore) {
440 }
441 break;
442 }
443
444 case STOP:
445 return null;
446 default:
447 assert false;
448 }
449 }
450 }
451
452 /**
453 * Wake up all threads that might be waiting for tasks.
454 */
455 void interruptIdleWorkers() {
456 mainLock.lock();
457 try {
458 for (Iterator<Worker> it = workers.iterator(); it.hasNext(); )
459 it.next().interruptIfIdle();
460 } finally {
461 mainLock.unlock();
462 }
463 }
464
465 /**
466 * Perform bookkeeping for a terminated worker thread.
467 * @param w the worker
468 */
469 private void workerDone(Worker w) {
470 mainLock.lock();
471 try {
472 completedTaskCount += w.completedTasks;
473 workers.remove(w);
474 if (--poolSize > 0)
475 return;
476
477 // Else, this is the last thread. Deal with potential shutdown.
478
479 int state = runState;
480 assert state != TERMINATED;
481
482 if (state != STOP) {
483 // If there are queued tasks but no threads, create
484 // replacement.
485 Runnable r = workQueue.poll();
486 if (r != null) {
487 addThread(r).start();
488 return;
489 }
490
491 // If there are some (presumably delayed) tasks but
492 // none pollable, create an idle replacement to wait.
493 if (!workQueue.isEmpty()) {
494 addThread(null).start();
495 return;
496 }
497
498 // Otherwise, we can exit without replacement
499 if (state == RUNNING)
500 return;
501 }
502
503 // Either state is STOP, or state is SHUTDOWN and there is
504 // no work to do. So we can terminate.
505 runState = TERMINATED;
506 termination.signalAll();
507 // fall through to call terminate() outside of lock.
508 } finally {
509 mainLock.unlock();
510 }
511
512 assert runState == TERMINATED;
513 terminated();
514 }
515
516 /**
517 * Worker threads
518 */
519 private class Worker implements Runnable {
520
521 /**
522 * The runLock is acquired and released surrounding each task
523 * execution. It mainly protects against interrupts that are
524 * intended to cancel the worker thread from instead
525 * interrupting the task being run.
526 */
527 private final ReentrantLock runLock = new ReentrantLock();
528
529 /**
530 * Initial task to run before entering run loop
531 */
532 private Runnable firstTask;
533
534 /**
535 * Per thread completed task counter; accumulated
536 * into completedTaskCount upon termination.
537 */
538 volatile long completedTasks;
539
540 /**
541 * Thread this worker is running in. Acts as a final field,
542 * but cannot be set until thread is created.
543 */
544 Thread thread;
545
546 Worker(Runnable firstTask) {
547 this.firstTask = firstTask;
548 }
549
550 boolean isActive() {
551 return runLock.isLocked();
552 }
553
554 /**
555 * Interrupt thread if not running a task
556 */
557 void interruptIfIdle() {
558 if (runLock.tryLock()) {
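                // tryLock succeeded, so this worker holds no task right now; the
                // interrupt below can only wake it from waiting for new work, and
                // runTask clears interrupt status before running the next task.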
559 try {
560 thread.interrupt();
561 } finally {
562 runLock.unlock();
563 }
564 }
565 }
566
567 /**
568 * Cause thread to die even if running a task.
569 */
570 void interruptNow() {
571 thread.interrupt();
572 }
573
574 /**
575 * Run a single task between before/after methods.
576 */
577 private void runTask(Runnable task) {
578 runLock.lock();
579 try {
580 // Abort now if immediate cancel. Otherwise, we have
581 // committed to run this task.
582 if (runState == STOP)
583 return;
584
585 Thread.interrupted(); // clear interrupt status on entry
586 boolean ran = false;
587 beforeExecute(thread, task);
588 try {
589 task.run();
590 ran = true;
591 afterExecute(task, null);
592 ++completedTasks;
593 } catch(RuntimeException ex) {
594 if (!ran)
595 afterExecute(task, ex);
596 // Else the exception occurred within
597 // afterExecute itself in which case we don't
598 // want to call it again.
599 throw ex;
600 }
601 } finally {
602 runLock.unlock();
603 }
604 }
605
606 /**
607 * Main run loop
608 */
609 public void run() {
610 try {
611 for (;;) {
612 Runnable task;
613 if (firstTask != null) {
614 task = firstTask;
615 firstTask = null;
616 } else {
617 task = getTask();
618 if (task == null)
619 break;
620 }
621 runTask(task);
622 task = null; // unnecessary but can help GC
623 }
624 } catch(InterruptedException ie) {
625 // fall through
626 } finally {
627 workerDone(this);
628 }
629 }
630 }
631
632 // Public methods
633
634 /**
635 * Creates a new <tt>ThreadPoolExecutor</tt> with the given
636 * initial parameters and default thread factory and handler. It
637 * may be more convenient to use one of the {@link Executors}
638 * factory methods instead of this general purpose constructor.
639 *
640 * @param corePoolSize the number of threads to keep in the
641 * pool, even if they are idle.
642 * @param maximumPoolSize the maximum number of threads to allow in the
643 * pool.
644 * @param keepAliveTime when the number of threads is greater than
645 * the core, this is the maximum time that excess idle threads
646 * will wait for new tasks before terminating.
647 * @param unit the time unit for the keepAliveTime
648 * argument.
649 * @param workQueue the queue to use for holding tasks before they
650 * are executed. This queue will hold only the <tt>Runnable</tt>
651 * tasks submitted by the <tt>execute</tt> method.
652 * @throws IllegalArgumentException if corePoolSize or keepAliveTime
653 * is less than zero, if maximumPoolSize is less than or
654 * equal to zero, or if corePoolSize is greater than maximumPoolSize.
655 * @throws NullPointerException if <tt>workQueue</tt> is null
656 */
657 public ThreadPoolExecutor(int corePoolSize,
658 int maximumPoolSize,
659 long keepAliveTime,
660 TimeUnit unit,
661 BlockingQueue<Runnable> workQueue) {
662 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
663 Executors.defaultThreadFactory(), defaultHandler);
664 }
665
666 /**
667 * Creates a new <tt>ThreadPoolExecutor</tt> with the given initial
668 * parameters.
669 *
670 * @param corePoolSize the number of threads to keep in the
671 * pool, even if they are idle.
672 * @param maximumPoolSize the maximum number of threads to allow in the
673 * pool.
674 * @param keepAliveTime when the number of threads is greater than
675 * the core, this is the maximum time that excess idle threads
676 * will wait for new tasks before terminating.
677 * @param unit the time unit for the keepAliveTime
678 * argument.
679 * @param workQueue the queue to use for holding tasks before they
680 * are executed. This queue will hold only the <tt>Runnable</tt>
681 * tasks submitted by the <tt>execute</tt> method.
682 * @param threadFactory the factory to use when the executor
683 * creates a new thread.
684 * @throws IllegalArgumentException if corePoolSize or keepAliveTime
685 * is less than zero, if maximumPoolSize is less than or
686 * equal to zero, or if corePoolSize is greater than maximumPoolSize.
687 * @throws NullPointerException if <tt>workQueue</tt>
688 * or <tt>threadFactory</tt> is null.
689 */
690 public ThreadPoolExecutor(int corePoolSize,
691 int maximumPoolSize,
692 long keepAliveTime,
693 TimeUnit unit,
694 BlockingQueue<Runnable> workQueue,
695 ThreadFactory threadFactory) {
696
697 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
698 threadFactory, defaultHandler);
699 }
700
701 /**
702 * Creates a new <tt>ThreadPoolExecutor</tt> with the given initial
703 * parameters.
704 *
705 * @param corePoolSize the number of threads to keep in the
706 * pool, even if they are idle.
707 * @param maximumPoolSize the maximum number of threads to allow in the
708 * pool.
709 * @param keepAliveTime when the number of threads is greater than
710 * the core, this is the maximum time that excess idle threads
711 * will wait for new tasks before terminating.
712 * @param unit the time unit for the keepAliveTime
713 * argument.
714 * @param workQueue the queue to use for holding tasks before they
715 * are executed. This queue will hold only the <tt>Runnable</tt>
716 * tasks submitted by the <tt>execute</tt> method.
717 * @param handler the handler to use when execution is blocked
718 * because the thread bounds and queue capacities are reached.
719 * @throws IllegalArgumentException if corePoolSize or keepAliveTime
720 * is less than zero, if maximumPoolSize is less than or
721 * equal to zero, or if corePoolSize is greater than maximumPoolSize.
722 * @throws NullPointerException if <tt>workQueue</tt>
723 * or <tt>handler</tt> is null.
724 */
725 public ThreadPoolExecutor(int corePoolSize,
726 int maximumPoolSize,
727 long keepAliveTime,
728 TimeUnit unit,
729 BlockingQueue<Runnable> workQueue,
730 RejectedExecutionHandler handler) {
731 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
732 Executors.defaultThreadFactory(), handler);
733 }
734
735 /**
736 * Creates a new <tt>ThreadPoolExecutor</tt> with the given initial
737 * parameters.
738 *
739 * @param corePoolSize the number of threads to keep in the
740 * pool, even if they are idle.
741 * @param maximumPoolSize the maximum number of threads to allow in the
742 * pool.
743 * @param keepAliveTime when the number of threads is greater than
744 * the core, this is the maximum time that excess idle threads
745 * will wait for new tasks before terminating.
746 * @param unit the time unit for the keepAliveTime
747 * argument.
748 * @param workQueue the queue to use for holding tasks before they
749 * are executed. This queue will hold only the <tt>Runnable</tt>
750 * tasks submitted by the <tt>execute</tt> method.
751 * @param threadFactory the factory to use when the executor
752 * creates a new thread.
753 * @param handler the handler to use when execution is blocked
754 * because the thread bounds and queue capacities are reached.
755 * @throws IllegalArgumentException if corePoolSize or keepAliveTime
756 * is less than zero, if maximumPoolSize is less than or
757 * equal to zero, or if corePoolSize is greater than maximumPoolSize.
758 * @throws NullPointerException if <tt>workQueue</tt>
759 * or <tt>threadFactory</tt> or <tt>handler</tt> is null.
760 */
761 public ThreadPoolExecutor(int corePoolSize,
762 int maximumPoolSize,
763 long keepAliveTime,
764 TimeUnit unit,
765 BlockingQueue<Runnable> workQueue,
766 ThreadFactory threadFactory,
767 RejectedExecutionHandler handler) {
768 if (corePoolSize < 0 ||
769 maximumPoolSize <= 0 ||
770 maximumPoolSize < corePoolSize ||
771 keepAliveTime < 0)
772 throw new IllegalArgumentException();
773 if (workQueue == null || threadFactory == null || handler == null)
774 throw new NullPointerException();
775 this.corePoolSize = corePoolSize;
776 this.maximumPoolSize = maximumPoolSize;
777 this.workQueue = workQueue;
778 this.keepAliveTime = unit.toNanos(keepAliveTime);
779 this.threadFactory = threadFactory;
780 this.handler = handler;
781 }
782
783
784 /**
785 * Executes the given task sometime in the future. The task
786 * may execute in a new thread or in an existing pooled thread.
787 *
788 * If the task cannot be submitted for execution, either because this
789 * executor has been shut down or because its capacity has been reached,
790 * the task is handled by the current <tt>RejectedExecutionHandler</tt>.
791 *
792 * @param command the task to execute
793 * @throws RejectedExecutionException at discretion of
794 * <tt>RejectedExecutionHandler</tt>, if task cannot be accepted
795 * for execution
796 * @throws NullPointerException if command is null
797 */
798 public void execute(Runnable command) {
799 if (command == null)
800 throw new NullPointerException();
801 for (;;) {
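            // Submission order: reject if not RUNNING; start a new thread while
            // below corePoolSize; otherwise try to queue the task; otherwise
            // start a thread while below maximumPoolSize; otherwise reject.
            // The loop repeats only when a newly added thread picked up an
            // already-queued task instead of this command, leaving the command
            // still unplaced.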
802 if (runState != RUNNING) {
803 reject(command);
804 return;
805 }
806 if (poolSize < corePoolSize && addIfUnderCorePoolSize(command))
807 return;
808 if (workQueue.offer(command))
809 return;
810 Runnable r = addIfUnderMaximumPoolSize(command);
811 if (r == command)
812 return;
813 if (r == null) {
814 reject(command);
815 return;
816 }
817 // else retry
818 }
819 }
820
821 public void shutdown() {
822 boolean fullyTerminated = false;
823 mainLock.lock();
824 try {
825 if (workers.size() > 0) {
826 if (runState == RUNNING) // don't override shutdownNow
827 runState = SHUTDOWN;
828 for (Iterator<Worker> it = workers.iterator(); it.hasNext(); )
829 it.next().interruptIfIdle();
830 }
831 else { // If no workers, trigger full termination now
832 fullyTerminated = true;
833 runState = TERMINATED;
834 termination.signalAll();
835 }
836 } finally {
837 mainLock.unlock();
838 }
839 if (fullyTerminated)
840 terminated();
841 }
842
843
844 public List shutdownNow() {
845 boolean fullyTerminated = false;
846 mainLock.lock();
847 try {
848 if (workers.size() > 0) {
849 if (runState != TERMINATED)
850 runState = STOP;
851 for (Iterator<Worker> it = workers.iterator(); it.hasNext(); )
852 it.next().interruptNow();
853 }
854 else { // If no workers, trigger full termination now
855 fullyTerminated = true;
856 runState = TERMINATED;
857 termination.signalAll();
858 }
859 } finally {
860 mainLock.unlock();
861 }
862 if (fullyTerminated)
863 terminated();
864 return Arrays.asList(workQueue.toArray());
865 }
866
867 public boolean isShutdown() {
868 return runState != RUNNING;
869 }
870
871 /**
872 * Returns <tt>true</tt> if this executor is in the process of terminating
873 * after <tt>shutdown</tt> or <tt>shutdownNow</tt> but has not
874 * completely terminated. This method may be useful for
875 * debugging. A return of <tt>true</tt> reported a sufficient
876 * period after shutdown may indicate that submitted tasks have
877 * ignored or suppressed interruption, causing this executor not
878 * to properly terminate.
879 * @return true if terminating but not yet terminated.
880 */
881 public boolean isTerminating() {
882 return runState == SHUTDOWN || runState == STOP;
883 }
884
885 public boolean isTerminated() {
886 return runState == TERMINATED;
887 }
888
889 public boolean awaitTermination(long timeout, TimeUnit unit)
890 throws InterruptedException {
891 mainLock.lock();
892 try {
893 long nanos = unit.toNanos(timeout);
894 for (;;) {
895 if (runState == TERMINATED)
896 return true;
897 if (nanos <= 0)
898 return false;
899 nanos = termination.awaitNanos(nanos);
900 }
901 } finally {
902 mainLock.unlock();
903 }
904 }
905
906 /**
907 * Invokes <tt>shutdown</tt> when this executor is no longer
908 * referenced.
909 */
910 protected void finalize() {
911 shutdown();
912 }
913
914 /**
915 * Sets the thread factory used to create new threads.
916 *
917 * @param threadFactory the new thread factory
918 * @throws NullPointerException if threadFactory is null
919 * @see #getThreadFactory
920 */
921 public void setThreadFactory(ThreadFactory threadFactory) {
922 if (threadFactory == null)
923 throw new NullPointerException();
924 this.threadFactory = threadFactory;
925 }
926
927 /**
928 * Returns the thread factory used to create new threads.
929 *
930 * @return the current thread factory
931 * @see #setThreadFactory
932 */
933 public ThreadFactory getThreadFactory() {
934 return threadFactory;
935 }
936
937 /**
938 * Sets a new handler for unexecutable tasks.
939 *
940 * @param handler the new handler
941 * @throws NullPointerException if handler is null
942 * @see #getRejectedExecutionHandler
943 */
944 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
945 if (handler == null)
946 throw new NullPointerException();
947 this.handler = handler;
948 }
949
950 /**
951 * Returns the current handler for unexecutable tasks.
952 *
953 * @return the current handler
954 * @see #setRejectedExecutionHandler
955 */
956 public RejectedExecutionHandler getRejectedExecutionHandler() {
957 return handler;
958 }
959
960 /**
961 * Returns the task queue used by this executor. Access to the
962 * task queue is intended primarily for debugging and monitoring.
963 * This queue may be in active use. Retrieving the task queue
964 * does not prevent queued tasks from executing.
965 *
966 * @return the task queue
967 */
968 public BlockingQueue<Runnable> getQueue() {
969 return workQueue;
970 }
971
972 /**
973 * Removes this task from the internal queue if it is present, thus
974 * causing it not to be run if it has not already started. This
975 * method may be useful as one part of a cancellation scheme.
976 *
977 * @param task the task to remove
978 * @return true if the task was removed
979 */
980 public boolean remove(Runnable task) {
981 return getQueue().remove(task);
982 }
983
984
985 /**
986 * Tries to remove from the work queue all {@link Future}
987 * tasks that have been cancelled. This method can be useful as a
988 * storage reclamation operation that has no other impact on
989 * functionality. Cancelled tasks are never executed, but may
990 * accumulate in work queues until worker threads can actively
991 * remove them. Invoking this method instead tries to remove them now.
992 * However, this method may fail to remove tasks in
993 * the presence of interference by other threads.
994 */
995
996 public void purge() {
997 // Fail if we encounter interference during traversal
998 try {
999 Iterator<Runnable> it = getQueue().iterator();
1000 while (it.hasNext()) {
1001 Runnable r = it.next();
1002 if (r instanceof Future<?>) {
1003 Future<?> c = (Future<?>)r;
1004 if (c.isCancelled())
1005 it.remove();
1006 }
1007 }
1008 }
1009 catch(ConcurrentModificationException ex) {
1010 return;
1011 }
1012 }
1013
1014 /**
1015 * Sets the core number of threads. This overrides any value set
1016 * in the constructor. If the new value is smaller than the
1017 * current value, excess existing threads will be terminated when
1018 * they next become idle. If larger, new threads will, if needed,
1019 * be started to execute any queued tasks.
1020 *
1021 * @param corePoolSize the new core size
1022 * @throws IllegalArgumentException if <tt>corePoolSize</tt>
1023 * is less than zero
1024 * @see #getCorePoolSize
1025 */
1026 public void setCorePoolSize(int corePoolSize) {
1027 if (corePoolSize < 0)
1028 throw new IllegalArgumentException();
1029 mainLock.lock();
1030 try {
1031 int extra = this.corePoolSize - corePoolSize;
1032 this.corePoolSize = corePoolSize;
1033 if (extra < 0) {
1034 Runnable r;
1035 while (extra++ < 0 && poolSize < corePoolSize &&
1036 (r = workQueue.poll()) != null)
1037 addThread(r).start();
1038 }
1039 else if (extra > 0 && poolSize > corePoolSize) {
1040 Iterator<Worker> it = workers.iterator();
1041 while (it.hasNext() &&
1042 extra-- > 0 &&
1043 poolSize > corePoolSize &&
1044 workQueue.remainingCapacity() == 0)
1045 it.next().interruptIfIdle();
1046 }
1047 } finally {
1048 mainLock.unlock();
1049 }
1050 }
1051
1052 /**
1053 * Returns the core number of threads.
1054 *
1055 * @return the core number of threads
1056 * @see #setCorePoolSize
1057 */
1058 public int getCorePoolSize() {
1059 return corePoolSize;
1060 }
1061
1062 /**
1063 * Starts a core thread, causing it to idly wait for work. This
1064 * overrides the default policy of starting core threads only when
1065 * new tasks are executed. This method will return <tt>false</tt>
1066 * if all core threads have already been started.
1067 * @return true if a thread was started
1068 */
1069 public boolean prestartCoreThread() {
1070 return addIfUnderCorePoolSize(null);
1071 }
1072
1073 /**
1074 * Starts all core threads, causing them to idly wait for work. This
1075 * overrides the default policy of starting core threads only when
1076 * new tasks are executed.
1077 * @return the number of threads started.
1078 */
1079 public int prestartAllCoreThreads() {
1080 int n = 0;
1081 while (addIfUnderCorePoolSize(null))
1082 ++n;
1083 return n;
1084 }
1085
1086 /**
1087 * Sets the maximum allowed number of threads. This overrides any
1088 * value set in the constructor. If the new value is smaller than
1089 * the current value, excess existing threads will be
1090 * terminated when they next become idle.
1091 *
1092 * @param maximumPoolSize the new maximum
1093 * @throws IllegalArgumentException if maximumPoolSize is less than or equal
1094 * to zero, or less than the {@link #getCorePoolSize core pool size}
1095 * @see #getMaximumPoolSize
1096 */
1097 public void setMaximumPoolSize(int maximumPoolSize) {
1098 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1099 throw new IllegalArgumentException();
1100 mainLock.lock();
1101 try {
1102 int extra = this.maximumPoolSize - maximumPoolSize;
1103 this.maximumPoolSize = maximumPoolSize;
1104 if (extra > 0 && poolSize > maximumPoolSize) {
1105 Iterator<Worker> it = workers.iterator();
1106 while (it.hasNext() &&
1107 extra > 0 &&
1108 poolSize > maximumPoolSize) {
1109 it.next().interruptIfIdle();
1110 --extra;
1111 }
1112 }
1113 } finally {
1114 mainLock.unlock();
1115 }
1116 }
1117
1118 /**
1119 * Returns the maximum allowed number of threads.
1120 *
1121 * @return the maximum allowed number of threads
1122 * @see #setMaximumPoolSize
1123 */
1124 public int getMaximumPoolSize() {
1125 return maximumPoolSize;
1126 }
1127
1128 /**
1129 * Sets the time limit for which threads may remain idle before
1130 * being terminated. If there are more than the core number of
1131 * threads currently in the pool, after waiting this amount of
1132 * time without processing a task, excess threads will be
1133 * terminated. This overrides any value set in the constructor.
1134 * @param time the time to wait. A time value of zero will cause
1135 * excess threads to terminate immediately after executing tasks.
1136 * @param unit the time unit of the time argument
1137 * @throws IllegalArgumentException if time is less than zero
1138 * @see #getKeepAliveTime
1139 */
1140 public void setKeepAliveTime(long time, TimeUnit unit) {
1141 if (time < 0)
1142 throw new IllegalArgumentException();
1143 this.keepAliveTime = unit.toNanos(time);
1144 }
1145
1146 /**
1147 * Returns the thread keep-alive time, which is the amount of time
1148 * that threads in excess of the core pool size may remain
1149 * idle before being terminated.
1150 *
1151 * @param unit the desired time unit of the result
1152 * @return the time limit
1153 * @see #setKeepAliveTime
1154 */
1155 public long getKeepAliveTime(TimeUnit unit) {
1156 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1157 }
1158
1159 /* Statistics */
1160
1161 /**
1162 * Returns the current number of threads in the pool.
1163 *
1164 * @return the number of threads
1165 */
1166 public int getPoolSize() {
1167 return poolSize;
1168 }
1169
1170 /**
1171 * Returns the approximate number of threads that are actively
1172 * executing tasks.
1173 *
1174 * @return the number of threads
1175 */
1176 public int getActiveCount() {
1177 mainLock.lock();
1178 try {
1179 int n = 0;
1180 for (Iterator<Worker> it = workers.iterator(); it.hasNext(); ) {
1181 if (it.next().isActive())
1182 ++n;
1183 }
1184 return n;
1185 } finally {
1186 mainLock.unlock();
1187 }
1188 }
1189
1190 /**
1191 * Returns the largest number of threads that have ever
1192 * simultaneously been in the pool.
1193 *
1194 * @return the number of threads
1195 */
1196 public int getLargestPoolSize() {
1197 mainLock.lock();
1198 try {
1199 return largestPoolSize;
1200 } finally {
1201 mainLock.unlock();
1202 }
1203 }
1204
1205 /**
1206 * Returns the approximate total number of tasks that have been
1207 * scheduled for execution. Because the states of tasks and
1208 * threads may change dynamically during computation, the returned
1209 * value is only an approximation, but one that does not ever
1210 * decrease across successive calls.
1211 *
1212 * @return the number of tasks
1213 */
1214 public long getTaskCount() {
1215 mainLock.lock();
1216 try {
1217 long n = completedTaskCount;
1218 for (Iterator<Worker> it = workers.iterator(); it.hasNext(); ) {
1219 Worker w = it.next();
1220 n += w.completedTasks;
1221 if (w.isActive())
1222 ++n;
1223 }
1224 return n + workQueue.size();
1225 } finally {
1226 mainLock.unlock();
1227 }
1228 }
1229
1230 /**
1231 * Returns the approximate total number of tasks that have
1232 * completed execution. Because the states of tasks and threads
1233 * may change dynamically during computation, the returned value
1234 * is only an approximation, but one that does not ever decrease
1235 * across successive calls.
1236 *
1237 * @return the number of tasks
1238 */
1239 public long getCompletedTaskCount() {
1240 mainLock.lock();
1241 try {
1242 long n = completedTaskCount;
1243 for (Iterator<Worker> it = workers.iterator(); it.hasNext(); )
1244 n += it.next().completedTasks;
1245 return n;
1246 } finally {
1247 mainLock.unlock();
1248 }
1249 }
1250
1251 /**
1252 * Method invoked prior to executing the given Runnable in the
1253 * given thread. This method may be used to re-initialize
1254 * ThreadLocals, or to perform logging. Note: To properly nest
1255 * multiple overridings, subclasses should generally invoke
1256 * <tt>super.beforeExecute</tt> at the end of this method.
1257 *
1258 * @param t the thread that will run task r.
1259 * @param r the task that will be executed.
1260 */
1261 protected void beforeExecute(Thread t, Runnable r) { }
1262
1263 /**
1264 * Method invoked upon completion of execution of the given
1265 * Runnable. If non-null, the Throwable is the uncaught exception
1266 * that caused execution to terminate abruptly. Note: To properly
1267 * nest multiple overridings, subclasses should generally invoke
1268 * <tt>super.afterExecute</tt> at the beginning of this method.
1269 *
1270 * @param r the runnable that has completed.
1271 * @param t the exception that caused termination, or null if
1272 * execution completed normally.
1273 */
1274 protected void afterExecute(Runnable r, Throwable t) { }
1275
1276 /**
1277 * Method invoked when the Executor has terminated. Default
1278 * implementation does nothing. Note: To properly nest multiple
1279 * overridings, subclasses should generally invoke
1280 * <tt>super.terminated</tt> within this method.
1281 */
1282 protected void terminated() { }
1283
1284 /**
1285 * A handler for rejected tasks that runs the rejected task
1286 * directly in the calling thread of the <tt>execute</tt> method,
1287 * unless the executor has been shut down, in which case the task
1288 * is discarded.
1289 */
1290 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1291
1292 /**
1293 * Creates a <tt>CallerRunsPolicy</tt>.
1294 */
1295 public CallerRunsPolicy() { }
1296
1297 /**
1298 * Executes task r in the caller's thread, unless the executor
1299 * has been shut down, in which case the task is discarded.
1300 * @param r the runnable task requested to be executed
1301 * @param e the executor attempting to execute this task
1302 */
1303 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1304 if (!e.isShutdown()) {
1305 r.run();
1306 }
1307 }
1308 }
1309
1310 /**
1311 * A handler for rejected tasks that throws a
1312 * <tt>RejectedExecutionException</tt>.
1313 */
1314 public static class AbortPolicy implements RejectedExecutionHandler {
1315
1316 /**
1317 * Creates an <tt>AbortPolicy</tt>.
1318 */
1319 public AbortPolicy() { }
1320
1321 /**
1322 * Always throws RejectedExecutionException
1323 * @param r the runnable task requested to be executed
1324 * @param e the executor attempting to execute this task
1325 * @throws RejectedExecutionException always.
1326 */
1327 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1328 throw new RejectedExecutionException();
1329 }
1330 }
1331
1332 /**
1333 * A handler for rejected tasks that silently discards the
1334 * rejected task.
1335 */
1336 public static class DiscardPolicy implements RejectedExecutionHandler {
1337
1338 /**
1339 * Creates a <tt>DiscardPolicy</tt>.
1340 */
1341 public DiscardPolicy() { }
1342
1343 /**
1344 * Does nothing, which has the effect of discarding task r.
1345 * @param r the runnable task requested to be executed
1346 * @param e the executor attempting to execute this task
1347 */
1348 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1349 }
1350 }
1351
1352 /**
1353 * A handler for rejected tasks that discards the oldest unhandled
1354 * request and then retries <tt>execute</tt>, unless the executor
1355 * is shut down, in which case the task is discarded.
1356 */
1357 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
1358 /**
1359 * Creates a <tt>DiscardOldestPolicy</tt>.
1360 */
1361 public DiscardOldestPolicy() { }
1362
1363 /**
1364 * Obtains and ignores the next task that the executor
1365 * would otherwise execute, if one is immediately available,
1366 * and then retries execution of task r, unless the executor
1367 * is shut down, in which case task r is instead discarded.
1368 * @param r the runnable task requested to be executed
1369 * @param e the executor attempting to execute this task
1370 */
1371 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1372 if (!e.isShutdown()) {
1373 e.getQueue().poll();
1374 e.execute(r);
1375 }
1376 }
1377 }
1378 }