root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java

Comparing jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java (file contents):
Revision 1.20 by dl, Mon Sep 1 14:27:11 2003 UTC vs.
Revision 1.21 by dl, Wed Sep 3 13:15:14 2003 UTC

# Line 32 | Line 32 | import java.util.*;
32   * automatic thread reclamation), {@link Executors#newFixedThreadPool}
33   * (fixed size thread pool) and {@link
34   * Executors#newSingleThreadExecutor} (single background thread), that
35 < * preconfigure settings for the most common usage scenarios.
35 > * preconfigure settings for the most common usage scenarios. Use the
36 > * following guide when manually configuring and tuning
37 > * <tt>ThreadPoolExecutors</tt>:
38   *
37 *
38 * <h3>Tuning guide</h3>
39   * <dl>
40   *
41 < * <dt>Core and maximum pool size</dt>
41 > * <dt>Core and maximum pool sizes</dt>
42   *
43   * <dd>A <tt>ThreadPoolExecutor</tt> will automatically adjust the
44 < * pool size according to the bounds set by corePoolSize and
45 < * maximumPoolSize.  When a new task is submitted, and fewer than
46 < * corePoolSize threads are running, a new thread is created to handle
47 < * the request, even if other worker threads are idle.  If there are
48 < * more than the corePoolSize but less than maximumPoolSize threads
49 < * running, a new thread will be created only if the queue is full.
50 < * By setting corePoolSize and maximumPoolSize the same, you create a
51 < * fixed-size thread pool. By default, even core threads are only
52 < * created and started when needed by new tasks, but this can be
53 < * overridden dynamically using method <tt>prestartCoreThread</tt>.
54 < * </dd>
44 > * pool size
45 > * (see {@link ThreadPoolExecutor#getPoolSize})
46 > * according to the bounds set by corePoolSize
47 > * (see {@link ThreadPoolExecutor#getCorePoolSize})
48 > * and
49 > * maximumPoolSize
50 > * (see {@link ThreadPoolExecutor#getMaximumPoolSize}).
51 > * When a new task is submitted in method {@link
52 > * ThreadPoolExecutor#execute}, and fewer than corePoolSize threads
53 > * are running, a new thread is created to handle the request, even if
54 > * other worker threads are idle.  If there are more than
55 > * corePoolSize but less than maximumPoolSize threads running, a new
56 > * thread will be created only if the queue is full.  By setting
57 > * corePoolSize and maximumPoolSize the same, you create a fixed-size
58 > * thread pool. By setting maximumPoolSize to an essentially unbounded
59 > * value such as <tt>Integer.MAX_VALUE</tt>, you allow the pool to
 60 > * accommodate an arbitrary number of concurrent tasks. Most typically,
61 > * core and maximum pool sizes are set only upon construction, but they
62 > * may also be changed dynamically using {@link
63 > * ThreadPoolExecutor#setCorePoolSize} and {@link
 64 > * ThreadPoolExecutor#setMaximumPoolSize}. </dd>
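The sizing rules described above can be sketched as follows. This is a minimal illustration; the class name and the particular parameter values (2 core threads, 4 maximum, 60-second keep-alive, queue capacity 10) are hypothetical choices, not values prescribed by this documentation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizing {
    // Illustrative configuration: 2 core threads, up to 4 total,
    // 60s keep-alive for excess threads, bounded work queue of 10 tasks.
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10));
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        // Core and maximum sizes may also be adjusted after construction.
        pool.setCorePoolSize(3);
        pool.setMaximumPoolSize(6);
        System.out.println(pool.getCorePoolSize());    // prints 3
        System.out.println(pool.getMaximumPoolSize()); // prints 6
        pool.shutdown();
    }
}
```

Passing the same value for both bounds yields a fixed-size pool, exactly as the paragraph above describes.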
65   *
66 < * <dt>Keep-alive</dt>
 66 > * <dt>On-demand construction</dt>
67   *
68 < * <dd>The keepAliveTime determines what happens to idle threads.  If
69 < * the pool currently has more than the core number of threads, excess
70 < * threads will be terminated if they have been idle for more than the
71 < * keepAliveTime.</dd>
72 < *
63 < * <dt>Queueing</dt>
64 < *
65 < * <dd>Any {@link BlockingQueue} may be used to transfer and hold
66 < * submitted tasks.  A good default is a {@link SynchronousQueue} that
67 < * hands off tasks to threads without otherwise holding them.  This
68 < * policy avoids lockups when handling sets of requests that might
69 < * have internal dependencies.  Using an unbounded queue (for example
70 < * a {@link LinkedBlockingQueue}) will cause new tasks to be queued in
71 < * cases where all corePoolSize threads are busy, so no more than
72 < * corePoolSize threads will be created.  This may be appropriate when
73 < * each task is completely independent of others, so tasks cannot
74 < * affect each others execution; for example, in a web page server.
75 < * When given a choice, a <tt>ThreadPoolExecutor</tt> always prefers
76 < * adding a new thread rather than queueing if there are currently
77 < * fewer than the current getCorePoolSize threads running, but
78 < * otherwise always prefers queuing a request rather than adding a new
79 < * thread.
80 < *
81 < * <p>While queuing can be useful in smoothing out transient bursts of
82 < * requests, especially in socket-based services, it is not very well
83 < * behaved when commands continue to arrive on average faster than
84 < * they can be processed.  Queue sizes and maximum pool sizes can
85 < * often be traded off for each other. Using large queues and small
86 < * pools minimizes CPU usage, OS resources, and context-switching
87 < * overhead, but can lead to artifically low throughput.  If tasks
88 < * frequently block (for example if they are I/O bound), a system may
89 < * be able to schedule time for more threads than you otherwise
90 < * allow. Use of small queues or queueless handoffs generally requires
91 < * larger pool sizes, which keeps CPUs busier but may encounter
92 < * unacceptable scheduling overhead, which also decreases throughput.
93 < * </dd>
68 > * <dd> By default, even core threads are initially created and
69 > * started only when needed by new tasks, but this can be overridden
70 > * dynamically using method {@link
71 > * ThreadPoolExecutor#prestartCoreThread} or
72 > * {@link ThreadPoolExecutor#prestartAllCoreThreads}.  </dd>
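Prestarting can be observed directly through the pool-size accessors. A small sketch (pool parameters are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Prestart {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        System.out.println(pool.getPoolSize());      // 0: no threads created yet
        int started = pool.prestartAllCoreThreads(); // eagerly starts all core threads
        System.out.println(started);                 // 4
        System.out.println(pool.getPoolSize());      // 4
        pool.shutdown();
    }
}
```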
73   *
74   * <dt>Creating new threads</dt>
75   *
# Line 100 | Line 79 | import java.util.*;
79   * ThreadFactory, you can alter the thread's name, thread group,
80   * priority, daemon status, etc.  </dd>
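One way to customize thread creation is a small <tt>ThreadFactory</tt> like the following sketch (the class name and the "worker-" naming scheme are hypothetical examples):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedDaemonFactory implements ThreadFactory {
    final AtomicInteger count = new AtomicInteger();

    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "worker-" + count.incrementAndGet());
        t.setDaemon(true);               // daemon threads won't keep the JVM alive
        t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }

    public static void main(String[] args) {
        Thread t = new NamedDaemonFactory().newThread(new Runnable() {
            public void run() { }
        });
        System.out.println(t.getName());  // worker-1
        System.out.println(t.isDaemon()); // true
    }
}
```

An instance of such a factory can be passed to any <tt>ThreadPoolExecutor</tt> constructor that accepts a <tt>ThreadFactory</tt>.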
81   *
82 < * <dt>Before and after intercepts</dt>
82 > * <dt>Keep-alive times</dt>
83   *
84 < * <dd>This class has overridable methods that are called before and
85 < * after execution of each task.  These can be used to manipulate the
86 < * execution environment, for example, reinitializing ThreadLocals,
87 < * gathering statistics, or adding log entries.  </dd>
84 > * <dd>If the pool currently has more than corePoolSize threads,
85 > * excess threads will be terminated if they have been idle for more
86 > * than the keepAliveTime (see {@link
87 > * ThreadPoolExecutor#getKeepAliveTime}). This provides a means of
88 > * reducing resource consumption when the pool is not being actively
89 > * used. If the pool becomes more active later, new threads will be
90 > * constructed. This parameter can also be changed dynamically
91 > * using method {@link ThreadPoolExecutor#setKeepAliveTime}. Using
92 > * a value of <tt>Long.MAX_VALUE</tt> {@link TimeUnit#NANOSECONDS}
 93 > * effectively prevents idle threads from terminating prior to
 94 > * shutdown.
95 > * </dd>
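The keep-alive setting can be exercised as follows (pool parameters here are illustrative):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAlive {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 8, 30L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());

        // The keep-alive time can be changed dynamically...
        pool.setKeepAliveTime(5L, TimeUnit.SECONDS);
        System.out.println(pool.getKeepAliveTime(TimeUnit.SECONDS)); // 5

        // ...and Long.MAX_VALUE nanoseconds effectively disables the
        // idle timeout for excess threads.
        pool.setKeepAliveTime(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
        pool.shutdown();
    }
}
```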
96 > *
97 > * <dt>Queueing</dt>
98 > *
99 > * <dd>Any {@link BlockingQueue} may be used to transfer and hold
100 > * submitted tasks.  The use of this queue interacts with pool sizing:
101 > *
102 > * <ul>
103 > *
104 > * <li> If fewer than corePoolSize threads are running, a
105 > * <tt>ThreadPoolExecutor</tt> always prefers adding a new thread
106 > * rather than queueing.</li>
107 > *
108 > * <li> If corePoolSize or more threads are running, a
109 > * <tt>ThreadPoolExecutor</tt>
110 > * always prefers queuing a request rather than adding a new thread.</li>
111 > *
112 > * <li> If a request cannot be queued, a new thread is created unless
113 > * this would exceed maximumPoolSize, in which case, the task will be
114 > * rejected.</li>
115 > *
116 > * </ul>
117 > *
118 > * There are three general strategies for queuing:
119 > * <ol>
120 > *
121 > * <li> <em> Direct handoffs.</em> A good default choice for a work
122 > * queue is a {@link SynchronousQueue} that hands off tasks to threads
123 > * without otherwise holding them. Here, an attempt to queue a task
124 > * will fail if no threads are immediately available to run it, so a
125 > * new thread will be constructed. This policy avoids lockups when
126 > * handling sets of requests that might have internal dependencies.
127 > * Direct handoffs generally require unbounded maximumPoolSizes to
 128 > * avoid rejection of newly submitted tasks, which in turn admit the
129 > * possibility of unbounded thread growth when commands continue to
130 > * arrive on average faster than they can be processed.  </li>
131 > *
132 > * <li><em> Unbounded queues.</em> Using an unbounded queue (for
133 > * example a {@link LinkedBlockingQueue} without a predefined
134 > * capacity) will cause new tasks to be queued in cases where all
135 > * corePoolSize threads are busy, so no more than corePoolSize threads
136 > * will be created.  This may be appropriate when each task is
137 > * completely independent of others, so tasks cannot affect each
 138 > * other's execution; for example, in a web page server.  While this
139 > * style of queuing can be useful in smoothing out transient bursts of
140 > * requests, it admits the possibility of unbounded work queue growth
141 > * when commands continue to arrive on average faster than they can be
142 > * processed.  </li>
143 > *
144 > * <li><em>Bounded queues.</em> A bounded queue (for example, an
145 > * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
146 > * used with finite maximumPoolSizes, but can be more difficult to
147 > * tune and control.  Queue sizes and maximum pool sizes may be traded
148 > * off for each other: Using large queues and small pools minimizes
149 > * CPU usage, OS resources, and context-switching overhead, but can
 150 > * lead to artificially low throughput.  If tasks frequently block (for
151 > * example if they are I/O bound), a system may be able to schedule
152 > * time for more threads than you otherwise allow. Use of small queues
153 > * or queueless handoffs generally requires larger pool sizes, which
154 > * keeps CPUs busier but may encounter unacceptable scheduling
155 > * overhead, which also decreases throughput.  </li>
156 > *
157 > * </ol>
158 > *
159 > * </dd>
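The three queuing strategies correspond to three constructor configurations. A sketch, with illustrative core/maximum sizes and capacities:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueChoices {
    // 1. Direct handoff: pair a SynchronousQueue with an effectively
    //    unbounded maximum, so a new thread is created whenever no
    //    worker is immediately free.
    static ThreadPoolExecutor directHandoff() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    }

    // 2. Unbounded queue: at most corePoolSize threads are ever created;
    //    excess work accumulates in the queue.
    static ThreadPoolExecutor unboundedQueue() {
        return new ThreadPoolExecutor(4, 4,
                0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    }

    // 3. Bounded queue: caps both queue growth and thread count,
    //    trading queue size against pool size.
    static ThreadPoolExecutor boundedQueue() {
        return new ThreadPoolExecutor(2, 8,
                30L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(100));
    }

    public static void main(String[] args) {
        System.out.println(
            boundedQueue().getQueue().remainingCapacity()); // 100
    }
}
```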
160   *
161   * <dt>Rejected tasks</dt>
162   *
163 < * <dd>There are a number of factors which can bound the number of
164 < * tasks which can execute at once, including the maximum pool size
165 < * and the queuing mechanism used.  If the executor determines that a
166 < * task cannot be executed because it has been refused by the queue
167 < * and no threads are available, or because the executor has been shut
168 < * down, the {@link RejectedExecutionHandler}
169 < * <tt>rejectedExecution</tt> method is invoked. The default
170 < * (<tt>AbortPolicy</tt>) handler throws a runtime {@link
120 < * RejectedExecutionException} upon rejection.  </dd>
163 > * <dd> New tasks submitted in method {@link
164 > * ThreadPoolExecutor#execute} will be <em>rejected</em> when the
165 > * Executor has been shut down, and also when the Executor uses finite
166 > * bounds for both maximum threads and work queue capacity, and is
167 > * saturated.  In both cases, the <tt>execute</tt> method invokes its
168 > * {@link RejectedExecutionHandler} {@link
169 > * RejectedExecutionHandler#rejectedExecution} method.  Four
170 > * predefined handler policies are provided:
171   *
172 + * <ol>
173 + *
174 + * <li> In the
175 + * default {@link ThreadPoolExecutor.AbortPolicy}, the handler throws a
176 + * runtime {@link RejectedExecutionException} upon rejection. </li>
177 + *
178 + * <li> In {@link
179 + * ThreadPoolExecutor.CallerRunsPolicy}, the thread that invokes
180 + * <tt>execute</tt> itself runs the task. This provides a simple
181 + * feedback control mechanism that will slow down the rate that new
182 + * tasks are submitted. </li>
183 + *
184 + * <li> In {@link ThreadPoolExecutor.DiscardPolicy},
185 + * a task that cannot be executed is simply dropped.  </li>
186 + *
187 + * <li>In {@link
188 + * ThreadPoolExecutor.DiscardOldestPolicy}, if the executor is not
189 + * shut down, the task at the head of the work queue is dropped, and
190 + * then execution is retried (which can fail again, causing this to be
 191 + * repeated). </li>
192 + *
193 + * </ol>
194 + *
195 + * It is possible to define and use other kinds of {@link
 196 + * RejectedExecutionHandler} classes. Doing so requires some care,
197 + * especially when policies are designed to work only under particular
198 + * capacity or queueing policies. </dd>
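A user-defined handler is just a <tt>RejectedExecutionHandler</tt> implementation. The sketch below, with a hypothetical class name and a deliberately tiny pool, counts rejections and otherwise behaves like <tt>DiscardPolicy</tt>:

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CountingRejectionHandler implements RejectedExecutionHandler {
    final AtomicInteger rejections = new AtomicInteger();

    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        // Count the rejection and drop the task (a DiscardPolicy variant).
        rejections.incrementAndGet();
    }

    public static void main(String[] args) {
        CountingRejectionHandler handler = new CountingRejectionHandler();
        // One thread, a queue that refuses handoffs when no worker waits.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<Runnable>(), handler);

        Runnable sleeper = new Runnable() {
            public void run() {
                try { Thread.sleep(200); } catch (InterruptedException ie) { }
            }
        };
        pool.execute(sleeper); // accepted: starts the single worker thread
        pool.execute(sleeper); // rejected: worker busy, handoff refused
        System.out.println(handler.rejections.get()); // 1
        pool.shutdown();
    }
}
```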
199 + *
200 + * <dt>Hook methods</dt>
201 + *
202 + * <dd>This class has <tt>protected</tt> overridable {@link
203 + * ThreadPoolExecutor#beforeExecute} and {@link
204 + * ThreadPoolExecutor#afterExecute} methods that are called before and
205 + * after execution of each task.  These can be used to manipulate the
206 + * execution environment, for example, reinitializing ThreadLocals,
207 + * gathering statistics, or adding log entries. Additionally, method
208 + * {@link ThreadPoolExecutor#terminated} can be overridden to perform
209 + * any special processing that needs to be done once the Executor has
210 + * fully terminated.</dd>
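A subclass overriding these hooks might look like this sketch (the class name and the completed-task counter are illustrative, not part of the API):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoggingPool extends ThreadPoolExecutor {
    final AtomicInteger completed = new AtomicInteger();

    public LoggingPool(int size) {
        super(size, size, 0L, TimeUnit.MILLISECONDS,
              new LinkedBlockingQueue<Runnable>());
    }

    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        // e.g. reinitialize ThreadLocals or record a start timestamp here
    }

    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        completed.incrementAndGet(); // simple per-task statistic
    }

    protected void terminated() {
        super.terminated();
        // one-time cleanup once the Executor has fully terminated
    }

    public static void main(String[] args) throws InterruptedException {
        LoggingPool pool = new LoggingPool(2);
        for (int i = 0; i < 5; i++)
            pool.execute(new Runnable() { public void run() { } });
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(pool.completed.get()); // 5
    }
}
```

Calling the <tt>super</tt> implementations first preserves any bookkeeping done by the base class.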
211 + *
212 + * <dt>Queue maintenance</dt>
213 + *
214 + * <dd> Method {@link ThreadPoolExecutor#getQueue} allows access
215 + * to the work queue for purposes of monitoring and debugging.
216 + * Use of this method for any other purpose is strongly discouraged.
217 + * Two supplied methods, {@link ThreadPoolExecutor#remove} and
218 + * {@link ThreadPoolExecutor#purge} are available to assist in
219 + * storage reclamation when large numbers of not-yet-executed
220 + * tasks become cancelled.</dd>
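The effect of <tt>remove</tt> on a queued, not-yet-started task can be demonstrated as follows (the single-thread pool and sleep duration are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueMaintenance {
    public static void main(String[] args) {
        // A single slow task occupies the only worker thread...
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        pool.execute(new Runnable() {
            public void run() {
                try { Thread.sleep(500); } catch (InterruptedException ie) { }
            }
        });

        // ...so this task waits in the queue and can still be removed.
        Runnable queued = new Runnable() { public void run() { } };
        pool.execute(queued);
        System.out.println(pool.getQueue().size()); // 1
        pool.remove(queued);                        // returns true on success
        System.out.println(pool.getQueue().size()); // 0
        pool.shutdown();
    }
}
```

<tt>purge</tt> complements <tt>remove</tt> by sweeping out queued tasks that are already-cancelled <tt>Future</tt>s.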
221   * </dl>
222   *
223   * @since 1.5
# Line 1151 | Line 1250 | public class ThreadPoolExecutor implemen
1250      protected void terminated() { }
1251  
1252      /**
1253 <     * A handler for unexecutable tasks that runs these tasks directly
1254 <     * in the calling thread of the <tt>execute</tt> method.  This is
1255 <     * the default <tt>RejectedExecutionHandler</tt>.
1253 >     * A handler for rejected tasks that runs the rejected task
1254 >     * directly in the calling thread of the <tt>execute</tt> method,
1255 >     * unless the executor has been shut down, in which case the task
1256 >     * is discarded.
1257       */
1258     public static class CallerRunsPolicy implements RejectedExecutionHandler {
1259  
# Line 1170 | Line 1270 | public class ThreadPoolExecutor implemen
1270      }
1271  
1272      /**
1273 <     * A handler for unexecutable tasks that throws a
1273 >     * A handler for rejected tasks that throws a
1274       * <tt>RejectedExecutionException</tt>.
1275       */
1276      public static class AbortPolicy implements RejectedExecutionHandler {
# Line 1186 | Line 1286 | public class ThreadPoolExecutor implemen
1286      }
1287  
1288      /**
1289 <     * A handler for unexecutable tasks that waits until the task can be
1290 <     * submitted for execution.
1191 <     */
1192 <    public static class WaitPolicy implements RejectedExecutionHandler {
1193 <        /**
1194 <         * Constructs a <tt>WaitPolicy</tt>.
1195 <         */
1196 <        public WaitPolicy() { }
1197 <
1198 <        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1199 <            if (!e.isShutdown()) {
1200 <                try {
1201 <                    e.getQueue().put(r);
1202 <                } catch (InterruptedException ie) {
1203 <                    Thread.currentThread().interrupt();
1204 <                    throw new RejectedExecutionException(ie);
1205 <                }
1206 <            }
1207 <        }
1208 <    }
1209 <
1210 <    /**
1211 <     * A handler for unexecutable tasks that silently discards these tasks.
1289 >     * A handler for rejected tasks that silently discards the
1290 >     * rejected task.
1291       */
1292      public static class DiscardPolicy implements RejectedExecutionHandler {
1293  
# Line 1222 | Line 1301 | public class ThreadPoolExecutor implemen
1301      }
1302  
1303      /**
1304 <     * A handler for unexecutable tasks that discards the oldest
1305 <     * unhandled request.
1304 >     * A handler for rejected tasks that discards the oldest unhandled
1305 >     * request and then retries <tt>execute</tt>, unless the executor
1306 >     * is shut down, in which case the task is discarded.
1307       */
1308      public static class DiscardOldestPolicy implements RejectedExecutionHandler {
1309          /**

Diff Legend

Removed lines
+ Added lines
< Changed lines (old revision)
> Changed lines (new revision)