
Comparing jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java (file contents):
Revision 1.161 by jsr166, Mon Jul 6 06:26:14 2015 UTC vs.
Revision 1.162 by jsr166, Sun Sep 13 16:28:14 2015 UTC

# Line 114 | Line 114 | import java.util.concurrent.locks.Reentr
114   *
115   * <ul>
116   *
117 < * <li> If fewer than corePoolSize threads are running, the Executor
117 > * <li>If fewer than corePoolSize threads are running, the Executor
118   * always prefers adding a new thread
119 < * rather than queuing.</li>
119 > * rather than queuing.
120   *
121 < * <li> If corePoolSize or more threads are running, the Executor
121 > * <li>If corePoolSize or more threads are running, the Executor
122   * always prefers queuing a request rather than adding a new
123 < * thread.</li>
123 > * thread.
124   *
125 < * <li> If a request cannot be queued, a new thread is created unless
125 > * <li>If a request cannot be queued, a new thread is created unless
126   * this would exceed maximumPoolSize, in which case, the task will be
127 < * rejected.</li>
127 > * rejected.
128   *
129   * </ul>
130   *
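As a rough illustration of the three rules listed above — this sketch is not part of either revision; the core size, maximum size, queue capacity, and task count are arbitrary values chosen for the example, assuming tasks are submitted faster than they complete:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizingSketch {
    public static void main(String[] args) throws InterruptedException {
        // Illustrative sizes: 2 core threads, 4 max threads, queue capacity 10.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10));

        // Submitting 15 slow tasks back-to-back exercises all three rules:
        //   tasks 1-2   : fewer than corePoolSize workers exist, so threads are created;
        //   tasks 3-12  : corePoolSize workers are busy, so tasks are queued;
        //   tasks 13-14 : queue is full, so threads are added up to maximumPoolSize;
        //   task 15     : queue full and maximumPoolSize reached, so the task is
        //                 rejected (default AbortPolicy throws RejectedExecutionException).
        for (int i = 1; i <= 15; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("task " + id + " ran on "
                            + Thread.currentThread().getName());
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " rejected");
            }
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}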
131   * There are three general strategies for queuing:
132   * <ol>
133   *
134 < * <li> <em> Direct handoffs.</em> A good default choice for a work
134 > * <li><em> Direct handoffs.</em> A good default choice for a work
135   * queue is a {@link SynchronousQueue} that hands off tasks to threads
136   * without otherwise holding them. Here, an attempt to queue a task
137   * will fail if no threads are immediately available to run it, so a
# Line 140 | Line 140 | import java.util.concurrent.locks.Reentr
140   * Direct handoffs generally require unbounded maximumPoolSizes to
141   * avoid rejection of new submitted tasks. This in turn admits the
142   * possibility of unbounded thread growth when commands continue to
143 < * arrive on average faster than they can be processed.  </li>
143 > * arrive on average faster than they can be processed.
144   *
145   * <li><em> Unbounded queues.</em> Using an unbounded queue (for
146   * example a {@link LinkedBlockingQueue} without a predefined
# Line 153 | Line 153 | import java.util.concurrent.locks.Reentr
153   * While this style of queuing can be useful in smoothing out
154   * transient bursts of requests, it admits the possibility of
155   * unbounded work queue growth when commands continue to arrive on
156 < * average faster than they can be processed.  </li>
156 > * average faster than they can be processed.
157   *
158   * <li><em>Bounded queues.</em> A bounded queue (for example, an
159   * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
# Line 166 | Line 166 | import java.util.concurrent.locks.Reentr
166   * time for more threads than you otherwise allow. Use of small queues
167   * generally requires larger pool sizes, which keeps CPUs busier but
168   * may encounter unacceptable scheduling overhead, which also
169 < * decreases throughput.  </li>
169 > * decreases throughput.
170   *
171   * </ol>
172   *
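The three queuing strategies above correspond to three common constructor configurations. The following sketch is illustrative only — the specific pool sizes and capacities are assumed values, not taken from the file under comparison — though the direct-handoff configuration does match what Executors.newCachedThreadPool uses:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueingStrategies {
    public static void main(String[] args) {
        // 1. Direct handoff: the SynchronousQueue holds no tasks, so the maximum
        //    pool size is left effectively unbounded to avoid rejecting bursts.
        //    (This is the configuration used by Executors.newCachedThreadPool.)
        ThreadPoolExecutor handoff = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());

        // 2. Unbounded queue: tasks wait in the queue whenever all corePoolSize
        //    threads are busy; maximumPoolSize has no effect, and the queue itself
        //    can grow without bound.
        ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        // 3. Bounded queue: trades queue capacity against pool size; when the queue
        //    fills, threads are added up to maximumPoolSize before tasks are rejected.
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                4, 8, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100));

        handoff.shutdown();
        unbounded.shutdown();
        bounded.shutdown();
    }
}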
# Line 185 | Line 185 | import java.util.concurrent.locks.Reentr
185   *
186   * <ol>
187   *
188 < * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
188 > * <li>In the default {@link ThreadPoolExecutor.AbortPolicy}, the
189   * handler throws a runtime {@link RejectedExecutionException} upon
190 < * rejection. </li>
190 > * rejection.
191   *
192 < * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
192 > * <li>In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
193   * that invokes {@code execute} itself runs the task. This provides a
194   * simple feedback control mechanism that will slow down the rate that
195 < * new tasks are submitted. </li>
195 > * new tasks are submitted.
196   *
197 < * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
198 < * cannot be executed is simply dropped.  </li>
197 > * <li>In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
198 > * cannot be executed is simply dropped.
199   *
200   * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
201   * executor is not shut down, the task at the head of the work queue
202   * is dropped, and then execution is retried (which can fail again,
203 < * causing this to be repeated.) </li>
203 > * causing this to be repeated.)
204   *
205   * </ol>
206   *
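A small sketch of how one of the saturation policies above is supplied; here CallerRunsPolicy is passed to the constructor, and the deliberately tiny pool and queue sizes are illustrative choices so that saturation is actually reached:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SaturationPolicySketch {
    public static void main(String[] args) throws InterruptedException {
        // One worker thread and one queue slot, with CallerRunsPolicy as the handler.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        // Once the single worker and the single queue slot are occupied, execute()
        // runs the overflow task in the submitting thread (printed as "main"),
        // throttling the submission rate instead of throwing.
        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("task " + id + " on "
                        + Thread.currentThread().getName());
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}

The handler can also be replaced after construction with setRejectedExecutionHandler; the other three policies are supplied the same way.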

Diff Legend

  Removed lines
+ Added lines
< Changed lines (revision 1.161)
> Changed lines (revision 1.162)