root/jsr166/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java

Comparing jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java (file contents):
Revision 1.5 by dl, Tue Jun 3 16:44:36 2003 UTC vs.
Revision 1.6 by dl, Fri Jun 6 18:42:18 2003 UTC

# Line 64 | Line 64 | import java.util.*;
64   * <dt>Queueing</dt>
65   *
66   * <dd>You are free to specify the queuing mechanism used to handle
67 < * submitted tasks.  The newCachedThreadPool factory method uses
68 < * queueless synchronous channels to to hand off work to threads.
69 < * This is a safe, conservative policy that avoids lockups when
70 < * handling sets of requests that might have internal dependencies.
71 < * The newFixedThreadPool factory method uses a LinkedBlockingQueue,
72 < * which will cause new tasks to be queued in cases where all
73 < * MaximumPoolSize threads are busy.  Queues are sometimes appropriate
74 < * when each task is completely independent of others, so tasks cannot
75 < * affect each others execution. For example, in an http server.  When
76 < * given a choice, this pool always prefers adding a new thread rather
77 < * than queueing if there are currently fewer than the current
78 < * getCorePoolSize threads running, but otherwise always prefers
79 < * queuing a request rather than adding a new thread.
  67 > * submitted tasks.  A good default is to use queueless synchronous
  68 > * channels to hand off work to threads.  This is a safe,
  69 > * conservative policy that avoids lockups when handling sets of
  70 > * requests that might have internal dependencies.  Using an
  71 > * unbounded queue (for example a LinkedBlockingQueue) causes new
  72 > * tasks to be queued whenever all corePoolSize threads are busy,
  73 > * so no more than corePoolSize threads will ever be created.  This
  74 > * may be appropriate when each task is completely independent of
  75 > * the others, so tasks cannot affect each other's execution; for
  76 > * example, in an HTTP server.  When given a choice, this pool
  77 > * always prefers adding a new thread rather than queueing if there
  78 > * are fewer than the current getCorePoolSize threads running, but
  79 > * otherwise always prefers queueing a request rather than adding a
  80 > * new thread.
81   *
82   * <p>While queuing can be useful in smoothing out transient bursts of
83   * requests, especially in socket-based services, it is not very well

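The revised paragraph above contrasts two queueing policies: a direct handoff through a queueless synchronous channel, and an unbounded queue that keeps the pool at no more than corePoolSize threads. The sketch below illustrates both using the released java.util.concurrent API; the class name QueueingDemo, the pool sizes, and the keep-alive values are illustrative assumptions, not part of the diffed source.

import java.util.concurrent.*;

public class QueueingDemo {
    public static void main(String[] args) throws Exception {
        // Direct handoff: a SynchronousQueue holds no tasks, so each submission
        // either goes straight to an idle thread or triggers creation of a new
        // one (up to maximumPoolSize).
        ThreadPoolExecutor handoffPool = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE,
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());

        // Unbounded queue: with a LinkedBlockingQueue, tasks wait whenever all
        // corePoolSize threads are busy, so the pool never grows past
        // corePoolSize (here 4) and maximumPoolSize is effectively irrelevant.
        ThreadPoolExecutor queueingPool = new ThreadPoolExecutor(
                4, 4,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        for (int i = 0; i < 10; i++) {
            final int id = i;
            handoffPool.execute(() -> System.out.println("handoff task " + id));
            queueingPool.execute(() -> System.out.println("queued task " + id));
        }

        handoffPool.shutdown();
        queueingPool.shutdown();
        handoffPool.awaitTermination(10, TimeUnit.SECONDS);
        queueingPool.awaitTermination(10, TimeUnit.SECONDS);
    }
}

The first configuration matches what Executors.newCachedThreadPool builds; the second matches Executors.newFixedThreadPool(4), where queued work never causes the thread count to exceed the core size.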