249 |      * element getAndSet/CAS/setVolatile appear in any order, using
250 |      * plain mode. But we must still preface some methods (mainly
251 |      * those that may be accessed externally) with an acquireFence to
252 <      * avoid unbounded staleness. We use explicit acquiring reads
253 <      * (getSlot) rather than plain array access when acquire mode is
254 <      * required but not otherwise ensured by context. To reduce stalls
255 <      * by other stealers, we encourage timely writes to the base index
256 <      * by immediately following updates with a write of a volatile
257 <      * field that must be updated anyway, or an Opaque-mode write if
258 <      * there is no such opportunity.
252 >      * avoid unbounded staleness. This is equivalent to acting as if
253 >      * callers use an acquiring read of the reference to the pool or
254 >      * queue when invoking the method, even when they do not. We use
255 >      * explicit acquiring reads (getSlot) rather than plain array
256 >      * access when acquire mode is required but not otherwise ensured
257 >      * by context. To reduce stalls by other stealers, we encourage
258 >      * timely writes to the base index by immediately following
259 >      * updates with a write of a volatile field that must be updated
260 >      * anyway, or an Opaque-mode write if there is no such
261 >      * opportunity.
262 |      *
263 |      * Because indices and slot contents cannot always be consistent,
264 |      * the emptiness check base == top is only quiescently accurate
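The access-mode discipline described above (acquiring reads of slots, timely volatile or opaque writes of the base index) can be sketched with VarHandle array access modes. This is a hypothetical minimal slot demo, not the actual WorkQueue implementation; the class and field names are illustrative:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class SlotDemo {
    // VarHandle over Object[] elements, giving per-slot access modes
    static final VarHandle SLOTS =
        MethodHandles.arrayElementVarHandle(Object[].class);

    final Object[] array = new Object[8];
    volatile int base, top;

    // acquire-mode read of a slot, analogous to getSlot in the comment,
    // used where plain array access would not ensure acquire ordering
    Object getSlot(int i) {
        return SLOTS.getAcquire(array, i & (array.length - 1));
    }

    Object poll() {
        int b = base;
        Object t = getSlot(b);                    // acquiring read, not plain
        int idx = b & (array.length - 1);
        if (t != null && SLOTS.compareAndSet(array, idx, t, null)) {
            base = b + 1;   // timely volatile write so stealers see progress
            return t;
        }
        return null;
    }

    void push(Object task) {
        int s = top;
        // release-mode store publishes the task before the index update
        SLOTS.setRelease(array, s & (array.length - 1), task);
        top = s + 1;
    }

    public static void main(String[] args) {
        SlotDemo q = new SlotDemo();
        q.push("task1");
        System.out.println(q.poll());  // task1
        System.out.println(q.poll());  // null (empty)
    }
}
```

When no volatile field happens to need updating after a slot change, the comment notes that an Opaque-mode write (`setOpaque`) suffices to guarantee the store eventually becomes visible without full volatile ordering cost.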
602 |      * present. These workers have no permissions set, do not belong
603 |      * to any user-defined ThreadGroup, and erase all ThreadLocals
604 |      * after executing any top-level task. The associated mechanics
605 <      * (mainly in ForkJoinWorkerThread) may be JVM-dependent and must
606 <      * access particular Thread class fields to achieve this effect.
605 >      * may be JVM-dependent and must access particular Thread class
606 >      * fields to achieve this effect.
607 |      *
608 |      * Memory placement
609 |      * ================
675 |      *   monitors and side tables.
676 |      * * Scans probe slots (vs compare indices), along with related
677 |      *   changes that reduce performance differences across most
678 <      *   garbage collectors, and reduces contention.
678 >      *   garbage collectors, and reduce contention.
679 |      * * Refactoring for better integration of special task types and
680 |      *   other capabilities that had been incrementally tacked on. Plus
681 |      *   many minor reworkings to improve consistency.
2659 |     public <T> T invokeAny(Collection<? extends Callable<T>> tasks,
2660 |                            long timeout, TimeUnit unit)
2661 |         throws InterruptedException, ExecutionException, TimeoutException {
2662 <         int par = mode & SMASK;
2662 |         BulkTask<T>[] fs; BulkTask<T> root;
2663 |         long deadline = unit.toNanos(timeout) + System.nanoTime();
2664 |         if ((fs = BulkTask.forkAll(tasks, true)) != null && fs.length > 0 &&
2665 |             (root = fs[0]) != null) {
2666 |             TimeoutException tex = null;
2667 |             try {
2669 <                 if (par == 0)   // if no workers, caller must execute
2670 <                     root.get();
2671 <                 else
2672 <                     root.get(deadline, TimeUnit.NANOSECONDS);
2668 >                 root.get(deadline, TimeUnit.NANOSECONDS);
2669 |             } catch (TimeoutException tx) {
2670 |                 tex = tx;
2671 |             } catch (Throwable ignore) {
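The timed `invokeAny` changed in this hunk is part of the public `ExecutorService` contract: it returns the result of one task that completed successfully within the timeout, or throws `TimeoutException`. A small usage sketch (the pool choice and task bodies are illustrative, not from the patch):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class InvokeAnyDemo {
    public static void main(String[] args) throws Exception {
        ForkJoinPool pool = ForkJoinPool.commonPool();
        List<Callable<String>> tasks = List.of(
            () -> { Thread.sleep(1000); return "slow"; },  // won't finish in time
            () -> "fast");                                 // completes immediately
        // Returns the first successful result within 500ms; remaining
        // tasks are cancelled once one completes.
        String result = pool.invokeAny(tasks, 500, TimeUnit.MILLISECONDS);
        System.out.println(result);  // prints "fast"
    }
}
```

Note that with the removal of the `par == 0` special case, the timed path always waits via `root.get(deadline, TimeUnit.NANOSECONDS)` rather than branching on whether the pool has workers.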