17     * asynchronous tasks, due to reduced per-task invocation overhead,
18     * and they provide a means of bounding and managing the resources,
19     * including threads, consumed when executing a collection of tasks.
20  +  * Each <tt>ThreadPoolExecutor</tt> also maintains some basic
21  +  * statistics, such as the number of completed tasks, that may be
22  +  * useful for monitoring and tuning.
23     *
24     * <p>To be useful across a wide range of contexts, this class
25     * provides many adjustable parameters and extensibility hooks. For
27     * or even to execute tasks sequentially in a single thread, in
28     * addition to its most common configuration, which reuses a pool of
29     * threads. However, programmers are urged to use the more convenient
30  <  * {@link Executors} factory methods <tt>newCachedThreadPool</tt>
31  <  * (unbounded thread pool, with automatic thread reclamation),
32  <  * <tt>newFixedThreadPool</tt> (fixed size thread pool),
33  <  * <tt>newSingleThreadPoolExecutor</tt> (single background thread for
34  <  * execution of tasks), and <tt>newThreadPerTaskExeceutor</tt>
35  <  * (execute each task in a new thread), that preconfigure settings for
36  <  * the most common usage scenarios.
30  >  * {@link Executors} factory methods {@link
31  >  * Executors#newCachedThreadPool} (unbounded thread pool, with
32  >  * automatic thread reclamation), {@link Executors#newFixedThreadPool}
33  >  * (fixed size thread pool) and {@link
34  >  * Executors#newSingleThreadExecutor} (single background thread), that
35  >  * preconfigure settings for the most common usage scenarios.
36     *
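The revised text enumerates the three surviving {@link Executors} factories. A minimal sketch of how each behaves (the fixed pool size of 4 is an arbitrary illustrative choice, not taken from the diff):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FactoryMethodsDemo {
    public static void main(String[] args) throws InterruptedException {
        // Cached pool: threads created on demand, reclaimed after idling.
        ExecutorService cached = Executors.newCachedThreadPool();
        // Fixed pool: at most four workers; excess tasks wait in the queue.
        ExecutorService fixed = Executors.newFixedThreadPool(4);
        // Single-thread executor: tasks run sequentially on one background thread.
        ExecutorService single = Executors.newSingleThreadExecutor();

        for (ExecutorService pool : new ExecutorService[] { cached, fixed, single }) {
            pool.execute(() -> System.out.println(
                "ran on " + Thread.currentThread().getName()));
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }
}
```

All three return an {@code ExecutorService} backed by a preconfigured {@code ThreadPoolExecutor}, so the manual constructor is needed only when these defaults do not fit.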
35  -  * <p>Each <tt>ThreadPoolExecutor</tt> also maintains some basic
36  -  * statistics, such as the number of completed tasks, that may be
37  -  * useful for monitoring and tuning executors.
37     *
38     * <h3>Tuning guide</h3>
39     * <dl>
69     * have internal dependencies. Using an unbounded queue (for example
70     * a {@link LinkedBlockingQueue}) will cause new tasks to be queued in
71     * cases where all corePoolSize threads are busy, so no more than
72  <  * corePoolSize threads will be craated. This may be appropriate when
72  >  * corePoolSize threads will be created. This may be appropriate when
73     * each task is completely independent of others, so tasks cannot
74     * affect each others execution; for example, in a web page server.
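The queuing behavior the tuning guide describes can be demonstrated directly: with an unbounded {@code LinkedBlockingQueue}, the pool never grows past {@code corePoolSize} even though a larger maximum is configured. The sizes here (core 2, max 8, 20 tasks) are illustrative values, not from the source:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // corePoolSize 2, maximumPoolSize 8: with an unbounded queue the
        // maximum is never reached, because offer() always succeeds.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 20; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // Both core threads are blocked on the latch, so the remaining
        // submissions queue up instead of spawning extra threads.
        System.out.println("pool size = " + pool.getPoolSize());     // prints 2
        System.out.println("queued    = " + pool.getQueue().size()); // prints 18

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

To allow the pool to grow toward its maximum under load, a bounded queue (e.g. an {@code ArrayBlockingQueue}) would be used instead.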
75     * When given a choice, a <tt>ThreadPoolExecutor</tt> always prefers