root/jsr166/jsr166/src/jdk8/java/util/concurrent/locks/AbstractQueuedSynchronizer.java
Revision: 1.5
Committed: Wed Jan 17 06:11:59 2018 UTC by jsr166
Branch: MAIN
CVS Tags: HEAD
Changes since 1.4: +20 -14 lines
Log Message:
backport 8191483: AbstractQueuedSynchronizer cancel/cancel race

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain, as explained at
4 * http://creativecommons.org/publicdomain/zero/1.0/
5 */
6
7 package java.util.concurrent.locks;
8
9 import java.util.ArrayList;
10 import java.util.Collection;
11 import java.util.Date;
12 import java.util.concurrent.TimeUnit;
13
14 /**
15 * Provides a framework for implementing blocking locks and related
16 * synchronizers (semaphores, events, etc) that rely on
17 * first-in-first-out (FIFO) wait queues. This class is designed to
18 * be a useful basis for most kinds of synchronizers that rely on a
19 * single atomic {@code int} value to represent state. Subclasses
20 * must define the protected methods that change this state, and which
21 * define what that state means in terms of this object being acquired
22 * or released. Given these, the other methods in this class carry
23 * out all queuing and blocking mechanics. Subclasses can maintain
24 * other state fields, but only the atomically updated {@code int}
25 * value manipulated using methods {@link #getState}, {@link
26 * #setState} and {@link #compareAndSetState} is tracked with respect
27 * to synchronization.
28 *
29 * <p>Subclasses should be defined as non-public internal helper
30 * classes that are used to implement the synchronization properties
31 * of their enclosing class. Class
32 * {@code AbstractQueuedSynchronizer} does not implement any
33 * synchronization interface. Instead it defines methods such as
34 * {@link #acquireInterruptibly} that can be invoked as
35 * appropriate by concrete locks and related synchronizers to
36 * implement their public methods.
37 *
38 * <p>This class supports either or both a default <em>exclusive</em>
39 * mode and a <em>shared</em> mode. When acquired in exclusive mode,
40 * attempted acquires by other threads cannot succeed. Shared mode
41 * acquires by multiple threads may (but need not) succeed. This class
42 * does not &quot;understand&quot; these differences except in the
43 * mechanical sense that when a shared mode acquire succeeds, the next
44 * waiting thread (if one exists) must also determine whether it can
45 * acquire as well. Threads waiting in the different modes share the
46 * same FIFO queue. Usually, implementation subclasses support only
47 * one of these modes, but both can come into play for example in a
48 * {@link ReadWriteLock}. Subclasses that support only exclusive or
49 * only shared modes need not define the methods supporting the unused mode.
50 *
51 * <p>This class defines a nested {@link ConditionObject} class that
52 * can be used as a {@link Condition} implementation by subclasses
53 * supporting exclusive mode for which method {@link
54 * #isHeldExclusively} reports whether synchronization is exclusively
55 * held with respect to the current thread, method {@link #release}
56 * invoked with the current {@link #getState} value fully releases
57 * this object, and {@link #acquire}, given this saved state value,
58 * eventually restores this object to its previous acquired state. No
59 * {@code AbstractQueuedSynchronizer} method otherwise creates such a
60 * condition, so if this constraint cannot be met, do not use it. The
61 * behavior of {@link ConditionObject} depends of course on the
62 * semantics of its synchronizer implementation.
63 *
64 * <p>This class provides inspection, instrumentation, and monitoring
65 * methods for the internal queue, as well as similar methods for
66 * condition objects. These can be exported as desired into classes
67 * using an {@code AbstractQueuedSynchronizer} for their
68 * synchronization mechanics.
69 *
70 * <p>Serialization of this class stores only the underlying atomic
71 * integer maintaining state, so deserialized objects have empty
72 * thread queues. Typical subclasses requiring serializability will
73 * define a {@code readObject} method that restores this to a known
74 * initial state upon deserialization.
75 *
76 * <h3>Usage</h3>
77 *
78 * <p>To use this class as the basis of a synchronizer, redefine the
79 * following methods, as applicable, by inspecting and/or modifying
80 * the synchronization state using {@link #getState}, {@link
81 * #setState} and/or {@link #compareAndSetState}:
82 *
83 * <ul>
84 * <li>{@link #tryAcquire}
85 * <li>{@link #tryRelease}
86 * <li>{@link #tryAcquireShared}
87 * <li>{@link #tryReleaseShared}
88 * <li>{@link #isHeldExclusively}
89 * </ul>
90 *
91 * Each of these methods by default throws {@link
92 * UnsupportedOperationException}. Implementations of these methods
93 * must be internally thread-safe, and should in general be short and
94 * not block. Defining these methods is the <em>only</em> supported
95 * means of using this class. All other methods are declared
96 * {@code final} because they cannot be independently varied.
97 *
98 * <p>You may also find the inherited methods from {@link
99 * AbstractOwnableSynchronizer} useful to keep track of the thread
100 * owning an exclusive synchronizer. You are encouraged to use them
101 * -- this enables monitoring and diagnostic tools to assist users in
102 * determining which threads hold locks.
103 *
104 * <p>Even though this class is based on an internal FIFO queue, it
105 * does not automatically enforce FIFO acquisition policies. The core
106 * of exclusive synchronization takes the form:
107 *
108 * <pre>
109 * Acquire:
110 * while (!tryAcquire(arg)) {
111 * <em>enqueue thread if it is not already queued</em>;
112 * <em>possibly block current thread</em>;
113 * }
114 *
115 * Release:
116 * if (tryRelease(arg))
117 * <em>unblock the first queued thread</em>;
118 * </pre>
119 *
120 * (Shared mode is similar but may involve cascading signals.)
121 *
122 * <p id="barging">Because checks in acquire are invoked before
123 * enqueuing, a newly acquiring thread may <em>barge</em> ahead of
124 * others that are blocked and queued. However, you can, if desired,
125 * define {@code tryAcquire} and/or {@code tryAcquireShared} to
126 * disable barging by internally invoking one or more of the inspection
127 * methods, thereby providing a <em>fair</em> FIFO acquisition order.
128 * In particular, most fair synchronizers can define {@code tryAcquire}
129 * to return {@code false} if {@link #hasQueuedPredecessors} (a method
130 * specifically designed to be used by fair synchronizers) returns
131 * {@code true}. Other variations are possible.
132 *
133 * <p>Throughput and scalability are generally highest for the
134 * default barging (also known as <em>greedy</em>,
135 * <em>renouncement</em>, and <em>convoy-avoidance</em>) strategy.
136 * While this is not guaranteed to be fair or starvation-free, earlier
137 * queued threads are allowed to recontend before later queued
138 * threads, and each recontention has an unbiased chance to succeed
139 * against incoming threads. Also, while acquires do not
140 * &quot;spin&quot; in the usual sense, they may perform multiple
141 * invocations of {@code tryAcquire} interspersed with other
142 * computations before blocking. This gives most of the benefits of
143 * spins when exclusive synchronization is only briefly held, without
144 * most of the liabilities when it isn't. If so desired, you can
145 * augment this by preceding calls to acquire methods with
146 * "fast-path" checks, possibly prechecking {@link #hasContended}
147 * and/or {@link #hasQueuedThreads} to only do so if the synchronizer
148 * is likely not to be contended.
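 *
 * <p>For example, a lock method layered over a {@code Sync} subclass
 * (such as the {@code Mutex} example below, whose {@code tryAcquire} is
 * public) might add such a fast path as follows; this is an illustrative
 * sketch only, not part of this class:
 *
 * <pre> {@code
 * public void lock() {
 *   // fast path: attempt a bare tryAcquire only while the queue appears empty
 *   if (sync.hasQueuedThreads() || !sync.tryAcquire(1))
 *     sync.acquire(1);  // fall back to the full queuing acquire
 * }}</pre>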
149 *
150 * <p>This class provides an efficient and scalable basis for
151 * synchronization in part by specializing its range of use to
152 * synchronizers that can rely on {@code int} state, acquire, and
153 * release parameters, and an internal FIFO wait queue. When this does
154 * not suffice, you can build synchronizers from a lower level using
155 * {@link java.util.concurrent.atomic atomic} classes, your own custom
156 * {@link java.util.Queue} classes, and {@link LockSupport} blocking
157 * support.
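 *
 * <p>As a rough illustration of that lower-level approach (a sketch only,
 * not part of this class; the idea follows the sample in the
 * {@link LockSupport} documentation), a first-in-first-out lock can be
 * built directly from an atomic flag, a queue, and park/unpark:
 *
 * <pre> {@code
 * class FIFOMutex {
 *   private final AtomicBoolean locked = new AtomicBoolean(false);
 *   private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>();
 *
 *   public void lock() {
 *     boolean wasInterrupted = false;
 *     Thread current = Thread.currentThread();
 *     waiters.add(current);
 *     // block while not first in queue or unable to acquire the flag
 *     while (waiters.peek() != current ||
 *            !locked.compareAndSet(false, true)) {
 *       LockSupport.park(this);
 *       if (Thread.interrupted()) // ignore interrupts while waiting
 *         wasInterrupted = true;
 *     }
 *     waiters.remove();
 *     if (wasInterrupted)         // reassert interrupt status on exit
 *       current.interrupt();
 *   }
 *
 *   public void unlock() {
 *     locked.set(false);
 *     LockSupport.unpark(waiters.peek());
 *   }
 * }}</pre>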
158 *
159 * <h3>Usage Examples</h3>
160 *
161 * <p>Here is a non-reentrant mutual exclusion lock class that uses
162 * the value zero to represent the unlocked state, and one to
163 * represent the locked state. While a non-reentrant lock
164 * does not strictly require recording of the current owner
165 * thread, this class does so anyway to make usage easier to monitor.
166 * It also supports conditions and exposes some instrumentation methods:
167 *
168 * <pre> {@code
169 * class Mutex implements Lock, java.io.Serializable {
170 *
171 * // Our internal helper class
172 * private static class Sync extends AbstractQueuedSynchronizer {
173 *
174 * // Acquires the lock if state is zero
175 * public boolean tryAcquire(int acquires) {
176 * assert acquires == 1; // Otherwise unused
177 * if (compareAndSetState(0, 1)) {
178 * setExclusiveOwnerThread(Thread.currentThread());
179 * return true;
180 * }
181 * return false;
182 * }
183 *
184 * // Releases the lock by setting state to zero
185 * protected boolean tryRelease(int releases) {
186 * assert releases == 1; // Otherwise unused
187 * if (!isHeldExclusively())
188 * throw new IllegalMonitorStateException();
189 * setExclusiveOwnerThread(null);
190 * setState(0);
191 * return true;
192 * }
193 *
194 * // Reports whether in locked state
195 * public boolean isLocked() {
196 * return getState() != 0;
197 * }
198 *
199 * public boolean isHeldExclusively() {
200 * // a data race, but safe due to out-of-thin-air guarantees
201 * return getExclusiveOwnerThread() == Thread.currentThread();
202 * }
203 *
204 * // Provides a Condition
205 * public Condition newCondition() {
206 * return new ConditionObject();
207 * }
208 *
209 * // Deserializes properly
210 * private void readObject(ObjectInputStream s)
211 * throws IOException, ClassNotFoundException {
212 * s.defaultReadObject();
213 * setState(0); // reset to unlocked state
214 * }
215 * }
216 *
217 * // The sync object does all the hard work. We just forward to it.
218 * private final Sync sync = new Sync();
219 *
220 * public void lock() { sync.acquire(1); }
221 * public boolean tryLock() { return sync.tryAcquire(1); }
222 * public void unlock() { sync.release(1); }
223 * public Condition newCondition() { return sync.newCondition(); }
224 * public boolean isLocked() { return sync.isLocked(); }
225 * public boolean isHeldByCurrentThread() {
226 * return sync.isHeldExclusively();
227 * }
228 * public boolean hasQueuedThreads() {
229 * return sync.hasQueuedThreads();
230 * }
231 * public void lockInterruptibly() throws InterruptedException {
232 * sync.acquireInterruptibly(1);
233 * }
234 * public boolean tryLock(long timeout, TimeUnit unit)
235 * throws InterruptedException {
236 * return sync.tryAcquireNanos(1, unit.toNanos(timeout));
237 * }
238 * }}</pre>
239 *
240 * <p>Here is a latch class that is like a
241 * {@link java.util.concurrent.CountDownLatch CountDownLatch}
242 * except that it only requires a single {@code signal} to
243 * fire. Because a latch is non-exclusive, it uses the {@code shared}
244 * acquire and release methods.
245 *
246 * <pre> {@code
247 * class BooleanLatch {
248 *
249 * private static class Sync extends AbstractQueuedSynchronizer {
250 * boolean isSignalled() { return getState() != 0; }
251 *
252 * protected int tryAcquireShared(int ignore) {
253 * return isSignalled() ? 1 : -1;
254 * }
255 *
256 * protected boolean tryReleaseShared(int ignore) {
257 * setState(1);
258 * return true;
259 * }
260 * }
261 *
262 * private final Sync sync = new Sync();
263 * public boolean isSignalled() { return sync.isSignalled(); }
264 * public void signal() { sync.releaseShared(1); }
265 * public void await() throws InterruptedException {
266 * sync.acquireSharedInterruptibly(1);
267 * }
268 * }}</pre>
269 *
270 * @since 1.5
271 * @author Doug Lea
272 */
273 public abstract class AbstractQueuedSynchronizer
274 extends AbstractOwnableSynchronizer
275 implements java.io.Serializable {
276
277 private static final long serialVersionUID = 7373984972572414691L;
278
279 /**
280 * Creates a new {@code AbstractQueuedSynchronizer} instance
281 * with initial synchronization state of zero.
282 */
283 protected AbstractQueuedSynchronizer() { }
284
285 /**
286 * Wait queue node class.
287 *
288 * <p>The wait queue is a variant of a "CLH" (Craig, Landin, and
289 * Hagersten) lock queue. CLH locks are normally used for
290 * spinlocks. We instead use them for blocking synchronizers, but
291 * use the same basic tactic of holding some of the control
292 * information about a thread in the predecessor of its node. A
293 * "status" field in each node keeps track of whether a thread
294 * should block. A node is signalled when its predecessor
295 * releases. Each node of the queue otherwise serves as a
296 * specific-notification-style monitor holding a single waiting
297 * thread. The status field does NOT control whether threads are
298 * granted locks etc though. A thread may try to acquire if it is
299 * first in the queue. But being first does not guarantee success;
300 * it only gives the right to contend. So the currently released
301 * contender thread may need to rewait.
302 *
303 * <p>To enqueue into a CLH lock, you atomically splice it in as new
304 * tail. To dequeue, you just set the head field.
305 * <pre>
306 * +------+ prev +-----+ +-----+
307 * head | | <---- | | <---- | | tail
308 * +------+ +-----+ +-----+
309 * </pre>
310 *
311 * <p>Insertion into a CLH queue requires only a single atomic
312 * operation on "tail", so there is a simple atomic point of
313 * demarcation from unqueued to queued. Similarly, dequeuing
314 * involves only updating the "head". However, it takes a bit
315 * more work for nodes to determine who their successors are,
316 * in part to deal with possible cancellation due to timeouts
317 * and interrupts.
318 *
319 * <p>The "prev" links (not used in original CLH locks), are mainly
320 * needed to handle cancellation. If a node is cancelled, its
321 * successor is (normally) relinked to a non-cancelled
322 * predecessor. For explanation of similar mechanics in the case
323 * of spin locks, see the papers by Scott and Scherer at
324 * http://www.cs.rochester.edu/u/scott/synchronization/
325 *
326 * <p>We also use "next" links to implement blocking mechanics.
327 * The thread id for each node is kept in its own node, so a
328 * predecessor signals the next node to wake up by traversing
329 * next link to determine which thread it is. Determination of
330 * successor must avoid races with newly queued nodes to set
331 * the "next" fields of their predecessors. This is solved
332 * when necessary by checking backwards from the atomically
333 * updated "tail" when a node's successor appears to be null.
334 * (Or, said differently, the next-links are an optimization
335 * so that we don't usually need a backward scan.)
336 *
337 * <p>Cancellation introduces some conservatism to the basic
338 * algorithms. Since we must poll for cancellation of other
339 * nodes, we can miss noticing whether a cancelled node is
340 * ahead or behind us. This is dealt with by always unparking
341 * successors upon cancellation, allowing them to stabilize on
342 * a new predecessor, unless we can identify an uncancelled
343 * predecessor who will carry this responsibility.
344 *
345 * <p>CLH queues need a dummy header node to get started. But
346 * we don't create them on construction, because it would be wasted
347 * effort if there is never contention. Instead, the node
348 * is constructed and head and tail pointers are set upon first
349 * contention.
350 *
351 * <p>Threads waiting on Conditions use the same nodes, but
352 * use an additional link. Conditions only need to link nodes
353 * in simple (non-concurrent) linked queues because they are
354 * only accessed when exclusively held. Upon await, a node is
355 * inserted into a condition queue. Upon signal, the node is
356 * transferred to the main queue. A special value of status
357 * field is used to mark which queue a node is on.
358 *
359 * <p>Thanks go to Dave Dice, Mark Moir, Victor Luchangco, Bill
360 * Scherer and Michael Scott, along with members of JSR-166
361 * expert group, for helpful ideas, discussions, and critiques
362 * on the design of this class.
363 */
364 static final class Node {
365 /** Marker to indicate a node is waiting in shared mode */
366 static final Node SHARED = new Node();
367 /** Marker to indicate a node is waiting in exclusive mode */
368 static final Node EXCLUSIVE = null;
369
370 /** waitStatus value to indicate thread has cancelled. */
371 static final int CANCELLED = 1;
372 /** waitStatus value to indicate successor's thread needs unparking. */
373 static final int SIGNAL = -1;
374 /** waitStatus value to indicate thread is waiting on condition. */
375 static final int CONDITION = -2;
376 /**
377 * waitStatus value to indicate the next acquireShared should
378 * unconditionally propagate.
379 */
380 static final int PROPAGATE = -3;
381
382 /**
383 * Status field, taking on only the values:
384 * SIGNAL: The successor of this node is (or will soon be)
385 * blocked (via park), so the current node must
386 * unpark its successor when it releases or
387 * cancels. To avoid races, acquire methods must
388 * first indicate they need a signal,
389 * then retry the atomic acquire, and then,
390 * on failure, block.
391 * CANCELLED: This node is cancelled due to timeout or interrupt.
392 * Nodes never leave this state. In particular,
393 * a thread with cancelled node never again blocks.
394 * CONDITION: This node is currently on a condition queue.
395 * It will not be used as a sync queue node
396 * until transferred, at which time the status
397 * will be set to 0. (Use of this value here has
398 * nothing to do with the other uses of the
399 * field, but simplifies mechanics.)
400 * PROPAGATE: A releaseShared should be propagated to other
401 * nodes. This is set (for head node only) in
402 * doReleaseShared to ensure propagation
403 * continues, even if other operations have
404 * since intervened.
405 * 0: None of the above
406 *
407 * The values are arranged numerically to simplify use.
408 * Non-negative values mean that a node doesn't need to
409 * signal. So, most code doesn't need to check for particular
410 * values, just for sign.
411 *
412 * The field is initialized to 0 for normal sync nodes, and
413 * CONDITION for condition nodes. It is modified using CAS
414 * (or when possible, unconditional volatile writes).
415 */
416 volatile int waitStatus;
417
418 /**
419 * Link to predecessor node that current node/thread relies on
420 * for checking waitStatus. Assigned during enqueuing, and nulled
421 * out (for sake of GC) only upon dequeuing. Also, upon
422 * cancellation of a predecessor, we short-circuit while
423 * finding a non-cancelled one, which will always exist
424 * because the head node is never cancelled: A node becomes
425 * head only as a result of successful acquire. A
426 * cancelled thread never succeeds in acquiring, and a thread only
427 * cancels itself, not any other node.
428 */
429 volatile Node prev;
430
431 /**
432 * Link to the successor node that the current node/thread
433 * unparks upon release. Assigned during enqueuing, adjusted
434 * when bypassing cancelled predecessors, and nulled out (for
435 * sake of GC) when dequeued. The enq operation does not
436 * assign next field of a predecessor until after attachment,
437 * so seeing a null next field does not necessarily mean that
438 * node is at end of queue. However, if a next field appears
439 * to be null, we can scan prev's from the tail to
440 * double-check. The next field of cancelled nodes is set to
441 * point to the node itself instead of null, to make life
442 * easier for isOnSyncQueue.
443 */
444 volatile Node next;
445
446 /**
447 * The thread that enqueued this node. Initialized on
448 * construction and nulled out after use.
449 */
450 volatile Thread thread;
451
452 /**
453 * Link to next node waiting on condition, or the special
454 * value SHARED. Because condition queues are accessed only
455 * when holding in exclusive mode, we just need a simple
456 * linked queue to hold nodes while they are waiting on
457 * conditions. They are then transferred to the queue to
458 * re-acquire. And because conditions can only be exclusive,
459 * we save a field by using special value to indicate shared
460 * mode.
461 */
462 Node nextWaiter;
463
464 /**
465 * Returns true if node is waiting in shared mode.
466 */
467 final boolean isShared() {
468 return nextWaiter == SHARED;
469 }
470
471 /**
472 * Returns previous node, or throws NullPointerException if null.
473 * Use when predecessor cannot be null. The null check could
474 * be elided, but is present to help the VM.
475 *
476 * @return the predecessor of this node
477 */
478 final Node predecessor() {
479 Node p = prev;
480 if (p == null)
481 throw new NullPointerException();
482 else
483 return p;
484 }
485
486 /** Establishes initial head or SHARED marker. */
487 Node() {}
488
489 /** Constructor used by addWaiter. */
490 Node(Node nextWaiter) {
491 this.nextWaiter = nextWaiter;
492 U.putObject(this, THREAD, Thread.currentThread());
493 }
494
495 /** Constructor used by addConditionWaiter. */
496 Node(int waitStatus) {
497 U.putInt(this, WAITSTATUS, waitStatus);
498 U.putObject(this, THREAD, Thread.currentThread());
499 }
500
501 /** CASes waitStatus field. */
502 final boolean compareAndSetWaitStatus(int expect, int update) {
503 return U.compareAndSwapInt(this, WAITSTATUS, expect, update);
504 }
505
506 /** CASes next field. */
507 final boolean compareAndSetNext(Node expect, Node update) {
508 return U.compareAndSwapObject(this, NEXT, expect, update);
509 }
510
511 private static final sun.misc.Unsafe U = sun.misc.Unsafe.getUnsafe();
512 private static final long NEXT;
513 static final long PREV;
514 private static final long THREAD;
515 private static final long WAITSTATUS;
516 static {
517 try {
518 NEXT = U.objectFieldOffset
519 (Node.class.getDeclaredField("next"));
520 PREV = U.objectFieldOffset
521 (Node.class.getDeclaredField("prev"));
522 THREAD = U.objectFieldOffset
523 (Node.class.getDeclaredField("thread"));
524 WAITSTATUS = U.objectFieldOffset
525 (Node.class.getDeclaredField("waitStatus"));
526 } catch (ReflectiveOperationException e) {
527 throw new Error(e);
528 }
529 }
530 }
531
532 /**
533 * Head of the wait queue, lazily initialized. Except for
534 * initialization, it is modified only via method setHead. Note:
535 * If head exists, its waitStatus is guaranteed not to be
536 * CANCELLED.
537 */
538 private transient volatile Node head;
539
540 /**
541 * Tail of the wait queue, lazily initialized. Modified only via
542 * method enq to add new wait node.
543 */
544 private transient volatile Node tail;
545
546 /**
547 * The synchronization state.
548 */
549 private volatile int state;
550
551 /**
552 * Returns the current value of synchronization state.
553 * This operation has memory semantics of a {@code volatile} read.
554 * @return current state value
555 */
556 protected final int getState() {
557 return state;
558 }
559
560 /**
561 * Sets the value of synchronization state.
562 * This operation has memory semantics of a {@code volatile} write.
563 * @param newState the new state value
564 */
565 protected final void setState(int newState) {
566 state = newState;
567 }
568
569 /**
570 * Atomically sets synchronization state to the given updated
571 * value if the current state value equals the expected value.
572 * This operation has memory semantics of a {@code volatile} read
573 * and write.
574 *
575 * @param expect the expected value
576 * @param update the new value
577 * @return {@code true} if successful. False return indicates that the actual
578 * value was not equal to the expected value.
579 */
580 protected final boolean compareAndSetState(int expect, int update) {
581 return U.compareAndSwapInt(this, STATE, expect, update);
582 }
583
584 // Queuing utilities
585
586 /**
587 * The number of nanoseconds for which it is faster to spin
588 * rather than to use timed park. A rough estimate suffices
589 * to improve responsiveness with very short timeouts.
590 */
591 static final long SPIN_FOR_TIMEOUT_THRESHOLD = 1000L;
592
593 /**
594 * Inserts node into queue, initializing if necessary. See picture above.
595 * @param node the node to insert
596 * @return node's predecessor
597 */
598 private Node enq(Node node) {
599 for (;;) {
600 Node oldTail = tail;
601 if (oldTail != null) {
602 U.putObject(node, Node.PREV, oldTail);
603 if (compareAndSetTail(oldTail, node)) {
604 oldTail.next = node;
605 return oldTail;
606 }
607 } else {
608 initializeSyncQueue();
609 }
610 }
611 }
612
613 /**
614 * Creates and enqueues node for current thread and given mode.
615 *
616 * @param mode Node.EXCLUSIVE for exclusive, Node.SHARED for shared
617 * @return the new node
618 */
619 private Node addWaiter(Node mode) {
620 Node node = new Node(mode);
621
622 for (;;) {
623 Node oldTail = tail;
624 if (oldTail != null) {
625 U.putObject(node, Node.PREV, oldTail);
626 if (compareAndSetTail(oldTail, node)) {
627 oldTail.next = node;
628 return node;
629 }
630 } else {
631 initializeSyncQueue();
632 }
633 }
634 }
635
636 /**
637 * Sets head of queue to be node, thus dequeuing. Called only by
638 * acquire methods. Also nulls out unused fields for sake of GC
639 * and to suppress unnecessary signals and traversals.
640 *
641 * @param node the node
642 */
643 private void setHead(Node node) {
644 head = node;
645 node.thread = null;
646 node.prev = null;
647 }
648
649 /**
650 * Wakes up node's successor, if one exists.
651 *
652 * @param node the node
653 */
654 private void unparkSuccessor(Node node) {
655 /*
656 * If status is negative (i.e., possibly needing signal) try
657 * to clear in anticipation of signalling. It is OK if this
658 * fails or if status is changed by waiting thread.
659 */
660 int ws = node.waitStatus;
661 if (ws < 0)
662 node.compareAndSetWaitStatus(ws, 0);
663
664 /*
665 * Thread to unpark is held in successor, which is normally
666 * just the next node. But if cancelled or apparently null,
667 * traverse backwards from tail to find the actual
668 * non-cancelled successor.
669 */
670 Node s = node.next;
671 if (s == null || s.waitStatus > 0) {
672 s = null;
673 for (Node p = tail; p != node && p != null; p = p.prev)
674 if (p.waitStatus <= 0)
675 s = p;
676 }
677 if (s != null)
678 LockSupport.unpark(s.thread);
679 }
680
681 /**
682 * Release action for shared mode -- signals successor and ensures
683 * propagation. (Note: For exclusive mode, release just amounts
684 * to calling unparkSuccessor of head if it needs signal.)
685 */
686 private void doReleaseShared() {
687 /*
688 * Ensure that a release propagates, even if there are other
689 * in-progress acquires/releases. This proceeds in the usual
690 * way of trying to unparkSuccessor of head if it needs
691 * signal. But if it does not, status is set to PROPAGATE to
692 * ensure that upon release, propagation continues.
693 * Additionally, we must loop in case a new node is added
694 * while we are doing this. Also, unlike other uses of
695 * unparkSuccessor, we need to know if CAS to reset status
696 * fails, if so rechecking.
697 */
698 for (;;) {
699 Node h = head;
700 if (h != null && h != tail) {
701 int ws = h.waitStatus;
702 if (ws == Node.SIGNAL) {
703 if (!h.compareAndSetWaitStatus(Node.SIGNAL, 0))
704 continue; // loop to recheck cases
705 unparkSuccessor(h);
706 }
707 else if (ws == 0 &&
708 !h.compareAndSetWaitStatus(0, Node.PROPAGATE))
709 continue; // loop on failed CAS
710 }
711 if (h == head) // loop if head changed
712 break;
713 }
714 }
715
716 /**
717 * Sets head of queue, and checks if successor may be waiting
718 * in shared mode, if so propagating if either propagate > 0 or
719 * PROPAGATE status was set.
720 *
721 * @param node the node
722 * @param propagate the return value from a tryAcquireShared
723 */
724 private void setHeadAndPropagate(Node node, int propagate) {
725 Node h = head; // Record old head for check below
726 setHead(node);
727 /*
728 * Try to signal next queued node if:
729 * Propagation was indicated by caller,
730 * or was recorded (as h.waitStatus either before
731 * or after setHead) by a previous operation
732 * (note: this uses sign-check of waitStatus because
733 * PROPAGATE status may transition to SIGNAL.)
734 * and
735 * The next node is waiting in shared mode,
736 * or we don't know, because it appears null
737 *
738 * The conservatism in both of these checks may cause
739 * unnecessary wake-ups, but only when there are multiple
740 * racing acquires/releases, so most need signals now or soon
741 * anyway.
742 */
743 if (propagate > 0 || h == null || h.waitStatus < 0 ||
744 (h = head) == null || h.waitStatus < 0) {
745 Node s = node.next;
746 if (s == null || s.isShared())
747 doReleaseShared();
748 }
749 }
750
751 // Utilities for various versions of acquire
752
753 /**
754 * Cancels an ongoing attempt to acquire.
755 *
756 * @param node the node
757 */
758 private void cancelAcquire(Node node) {
759 // Ignore if node doesn't exist
760 if (node == null)
761 return;
762
763 node.thread = null;
764
765 // Skip cancelled predecessors
766 Node pred = node.prev;
767 while (pred.waitStatus > 0)
768 node.prev = pred = pred.prev;
769
770 // predNext is the apparent node to unsplice. CASes below will
771 // fail if not, in which case, we lost race vs another cancel
772 // or signal, so no further action is necessary, although with
773 // a possibility that a cancelled node may transiently remain
774 // reachable.
775 Node predNext = pred.next;
776
777 // Can use unconditional write instead of CAS here.
778 // After this atomic step, other Nodes can skip past us.
779 // Before, we are free of interference from other threads.
780 node.waitStatus = Node.CANCELLED;
781
782 // If we are the tail, remove ourselves.
783 if (node == tail && compareAndSetTail(node, pred)) {
784 pred.compareAndSetNext(predNext, null);
785 } else {
786 // If successor needs signal, try to set pred's next-link
787 // so it will get one. Otherwise wake it up to propagate.
788 int ws;
789 if (pred != head &&
790 ((ws = pred.waitStatus) == Node.SIGNAL ||
791 (ws <= 0 && pred.compareAndSetWaitStatus(ws, Node.SIGNAL))) &&
792 pred.thread != null) {
793 Node next = node.next;
794 if (next != null && next.waitStatus <= 0)
795 pred.compareAndSetNext(predNext, next);
796 } else {
797 unparkSuccessor(node);
798 }
799
800 node.next = node; // help GC
801 }
802 }
803
804 /**
805 * Checks and updates status for a node that failed to acquire.
806 * Returns true if thread should block. This is the main signal
807 * control in all acquire loops. Requires that pred == node.prev.
808 *
809 * @param pred node's predecessor holding status
810 * @param node the node
811 * @return {@code true} if thread should block
812 */
813 private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
814 int ws = pred.waitStatus;
815 if (ws == Node.SIGNAL)
816 /*
817 * This node has already set status asking a release
818 * to signal it, so it can safely park.
819 */
820 return true;
821 if (ws > 0) {
822 /*
823 * Predecessor was cancelled. Skip over predecessors and
824 * indicate retry.
825 */
826 do {
827 node.prev = pred = pred.prev;
828 } while (pred.waitStatus > 0);
829 pred.next = node;
830 } else {
831 /*
832 * waitStatus must be 0 or PROPAGATE. Indicate that we
833 * need a signal, but don't park yet. Caller will need to
834 * retry to make sure it cannot acquire before parking.
835 */
836 pred.compareAndSetWaitStatus(ws, Node.SIGNAL);
837 }
838 return false;
839 }
840
841 /**
842 * Convenience method to interrupt current thread.
843 */
844 static void selfInterrupt() {
845 Thread.currentThread().interrupt();
846 }
847
848 /**
849 * Convenience method to park and then check if interrupted.
850 *
851 * @return {@code true} if interrupted
852 */
853 private final boolean parkAndCheckInterrupt() {
854 LockSupport.park(this);
855 return Thread.interrupted();
856 }
857
858 /*
859 * Various flavors of acquire, varying in exclusive/shared and
860 * control modes. Each is mostly the same, but annoyingly
861 * different. Only a little bit of factoring is possible due to
862 * interactions of exception mechanics (including ensuring that we
863 * cancel if tryAcquire throws exception) and other control, at
864 * least not without hurting performance too much.
865 */
866
867 /**
868 * Acquires in exclusive uninterruptible mode for thread already in
869 * queue. Used by condition wait methods as well as acquire.
870 *
871 * @param node the node
872 * @param arg the acquire argument
873 * @return {@code true} if interrupted while waiting
874 */
875 final boolean acquireQueued(final Node node, int arg) {
876 boolean interrupted = false;
877 try {
878 for (;;) {
879 final Node p = node.predecessor();
880 if (p == head && tryAcquire(arg)) {
881 setHead(node);
882 p.next = null; // help GC
883 return interrupted;
884 }
885 if (shouldParkAfterFailedAcquire(p, node))
886 interrupted |= parkAndCheckInterrupt();
887 }
888 } catch (Throwable t) {
889 cancelAcquire(node);
890 if (interrupted)
891 selfInterrupt();
892 throw t;
893 }
894 }
895
896 /**
897 * Acquires in exclusive interruptible mode.
898 * @param arg the acquire argument
899 */
900 private void doAcquireInterruptibly(int arg)
901 throws InterruptedException {
902 final Node node = addWaiter(Node.EXCLUSIVE);
903 try {
904 for (;;) {
905 final Node p = node.predecessor();
906 if (p == head && tryAcquire(arg)) {
907 setHead(node);
908 p.next = null; // help GC
909 return;
910 }
911 if (shouldParkAfterFailedAcquire(p, node) &&
912 parkAndCheckInterrupt())
913 throw new InterruptedException();
914 }
915 } catch (Throwable t) {
916 cancelAcquire(node);
917 throw t;
918 }
919 }
920
921 /**
922 * Acquires in exclusive timed mode.
923 *
924 * @param arg the acquire argument
925 * @param nanosTimeout max wait time
926 * @return {@code true} if acquired
927 */
928 private boolean doAcquireNanos(int arg, long nanosTimeout)
929 throws InterruptedException {
930 if (nanosTimeout <= 0L)
931 return false;
932 final long deadline = System.nanoTime() + nanosTimeout;
933 final Node node = addWaiter(Node.EXCLUSIVE);
934 try {
935 for (;;) {
936 final Node p = node.predecessor();
937 if (p == head && tryAcquire(arg)) {
938 setHead(node);
939 p.next = null; // help GC
940 return true;
941 }
942 nanosTimeout = deadline - System.nanoTime();
943 if (nanosTimeout <= 0L) {
944 cancelAcquire(node);
945 return false;
946 }
947 if (shouldParkAfterFailedAcquire(p, node) &&
948 nanosTimeout > SPIN_FOR_TIMEOUT_THRESHOLD)
949 LockSupport.parkNanos(this, nanosTimeout);
950 if (Thread.interrupted())
951 throw new InterruptedException();
952 }
953 } catch (Throwable t) {
954 cancelAcquire(node);
955 throw t;
956 }
957 }
958
959 /**
960 * Acquires in shared uninterruptible mode.
961 * @param arg the acquire argument
962 */
963 private void doAcquireShared(int arg) {
964 final Node node = addWaiter(Node.SHARED);
965 boolean interrupted = false;
966 try {
967 for (;;) {
968 final Node p = node.predecessor();
969 if (p == head) {
970 int r = tryAcquireShared(arg);
971 if (r >= 0) {
972 setHeadAndPropagate(node, r);
973 p.next = null; // help GC
974 return;
975 }
976 }
977 if (shouldParkAfterFailedAcquire(p, node))
978 interrupted |= parkAndCheckInterrupt();
979 }
980 } catch (Throwable t) {
981 cancelAcquire(node);
982 throw t;
983 } finally {
984 if (interrupted)
985 selfInterrupt();
986 }
987 }
988
989 /**
990 * Acquires in shared interruptible mode.
991 * @param arg the acquire argument
992 */
993 private void doAcquireSharedInterruptibly(int arg)
994 throws InterruptedException {
995 final Node node = addWaiter(Node.SHARED);
996 try {
997 for (;;) {
998 final Node p = node.predecessor();
999 if (p == head) {
1000 int r = tryAcquireShared(arg);
1001 if (r >= 0) {
1002 setHeadAndPropagate(node, r);
1003 p.next = null; // help GC
1004 return;
1005 }
1006 }
1007 if (shouldParkAfterFailedAcquire(p, node) &&
1008 parkAndCheckInterrupt())
1009 throw new InterruptedException();
1010 }
1011 } catch (Throwable t) {
1012 cancelAcquire(node);
1013 throw t;
1014 }
1015 }
1016
1017 /**
1018 * Acquires in shared timed mode.
1019 *
1020 * @param arg the acquire argument
1021 * @param nanosTimeout max wait time
1022 * @return {@code true} if acquired
1023 */
1024 private boolean doAcquireSharedNanos(int arg, long nanosTimeout)
1025 throws InterruptedException {
1026 if (nanosTimeout <= 0L)
1027 return false;
1028 final long deadline = System.nanoTime() + nanosTimeout;
1029 final Node node = addWaiter(Node.SHARED);
1030 try {
1031 for (;;) {
1032 final Node p = node.predecessor();
1033 if (p == head) {
1034 int r = tryAcquireShared(arg);
1035 if (r >= 0) {
1036 setHeadAndPropagate(node, r);
1037 p.next = null; // help GC
1038 return true;
1039 }
1040 }
1041 nanosTimeout = deadline - System.nanoTime();
1042 if (nanosTimeout <= 0L) {
1043 cancelAcquire(node);
1044 return false;
1045 }
1046 if (shouldParkAfterFailedAcquire(p, node) &&
1047 nanosTimeout > SPIN_FOR_TIMEOUT_THRESHOLD)
1048 LockSupport.parkNanos(this, nanosTimeout);
1049 if (Thread.interrupted())
1050 throw new InterruptedException();
1051 }
1052 } catch (Throwable t) {
1053 cancelAcquire(node);
1054 throw t;
1055 }
1056 }
1057
1058 // Main exported methods
1059
1060 /**
1061 * Attempts to acquire in exclusive mode. This method should query
1062 * if the state of the object permits it to be acquired in the
1063 * exclusive mode, and if so to acquire it.
1064 *
1065 * <p>This method is always invoked by the thread performing
1066 * acquire. If this method reports failure, the acquire method
1067 * may queue the thread, if it is not already queued, until it is
1068 * signalled by a release from some other thread. This can be used
1069 * to implement method {@link Lock#tryLock()}.
1070 *
1071 * <p>The default
1072 * implementation throws {@link UnsupportedOperationException}.
1073 *
1074 * @param arg the acquire argument. This value is always the one
1075 * passed to an acquire method, or is the value saved on entry
1076 * to a condition wait. The value is otherwise uninterpreted
1077 * and can represent anything you like.
1078 * @return {@code true} if successful. Upon success, this object has
1079 * been acquired.
1080 * @throws IllegalMonitorStateException if acquiring would place this
1081 * synchronizer in an illegal state. This exception must be
1082 * thrown in a consistent fashion for synchronization to work
1083 * correctly.
1084 * @throws UnsupportedOperationException if exclusive mode is not supported
1085 */
1086 protected boolean tryAcquire(int arg) {
1087 throw new UnsupportedOperationException();
1088 }
1089
1090 /**
1091 * Attempts to set the state to reflect a release in exclusive
1092 * mode.
1093 *
1094 * <p>This method is always invoked by the thread performing release.
1095 *
1096 * <p>The default implementation throws
1097 * {@link UnsupportedOperationException}.
1098 *
1099 * @param arg the release argument. This value is always the one
1100 * passed to a release method, or the current state value upon
1101 * entry to a condition wait. The value is otherwise
1102 * uninterpreted and can represent anything you like.
1103 * @return {@code true} if this object is now in a fully released
1104 * state, so that any waiting threads may attempt to acquire;
1105 * and {@code false} otherwise.
1106 * @throws IllegalMonitorStateException if releasing would place this
1107 * synchronizer in an illegal state. This exception must be
1108 * thrown in a consistent fashion for synchronization to work
1109 * correctly.
1110 * @throws UnsupportedOperationException if exclusive mode is not supported
1111 */
1112 protected boolean tryRelease(int arg) {
1113 throw new UnsupportedOperationException();
1114 }
1115
1116 /**
1117 * Attempts to acquire in shared mode. This method should query if
1118 * the state of the object permits it to be acquired in the shared
1119 * mode, and if so to acquire it.
1120 *
1121 * <p>This method is always invoked by the thread performing
1122 * acquire. If this method reports failure, the acquire method
1123 * may queue the thread, if it is not already queued, until it is
1124 * signalled by a release from some other thread.
1125 *
1126 * <p>The default implementation throws {@link
1127 * UnsupportedOperationException}.
1128 *
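 * <p>As an illustration of the return convention (a sketch only, modeled
 * on a counting-permit synchronizer; not a required implementation), an
 * override might return the number of permits remaining, so that zero
 * means acquired but with nothing left for subsequent acquirers:
 *
 * <pre> {@code
 * protected int tryAcquireShared(int acquires) {
 *   for (;;) {
 *     int available = getState();
 *     int remaining = available - acquires;
 *     if (remaining < 0 ||
 *         compareAndSetState(available, remaining))
 *       return remaining;
 *   }
 * }}</pre>
 *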
1129 * @param arg the acquire argument. This value is always the one
1130 * passed to an acquire method, or is the value saved on entry
1131 * to a condition wait. The value is otherwise uninterpreted
1132 * and can represent anything you like.
1133 * @return a negative value on failure; zero if acquisition in shared
1134 * mode succeeded but no subsequent shared-mode acquire can
1135 * succeed; and a positive value if acquisition in shared
1136 * mode succeeded and subsequent shared-mode acquires might
1137 * also succeed, in which case a subsequent waiting thread
1138 * must check availability. (Support for three different
1139 * return values enables this method to be used in contexts
1140 * where acquires only sometimes act exclusively.) Upon
1141 * success, this object has been acquired.
1142 * @throws IllegalMonitorStateException if acquiring would place this
1143 * synchronizer in an illegal state. This exception must be
1144 * thrown in a consistent fashion for synchronization to work
1145 * correctly.
1146 * @throws UnsupportedOperationException if shared mode is not supported
1147 */
1148 protected int tryAcquireShared(int arg) {
1149 throw new UnsupportedOperationException();
1150 }
1151
1152 /**
1153 * Attempts to set the state to reflect a release in shared mode.
1154 *
1155 * <p>This method is always invoked by the thread performing release.
1156 *
1157 * <p>The default implementation throws
1158 * {@link UnsupportedOperationException}.
1159 *
1160 * @param arg the release argument. This value is always the one
1161 * passed to a release method, or the current state value upon
1162 * entry to a condition wait. The value is otherwise
1163 * uninterpreted and can represent anything you like.
1164 * @return {@code true} if this release of shared mode may permit a
1165 * waiting acquire (shared or exclusive) to succeed; and
1166 * {@code false} otherwise
1167 * @throws IllegalMonitorStateException if releasing would place this
1168 * synchronizer in an illegal state. This exception must be
1169 * thrown in a consistent fashion for synchronization to work
1170 * correctly.
1171 * @throws UnsupportedOperationException if shared mode is not supported
1172 */
1173 protected boolean tryReleaseShared(int arg) {
1174 throw new UnsupportedOperationException();
1175 }
1176
1177 /**
1178 * Returns {@code true} if synchronization is held exclusively with
1179 * respect to the current (calling) thread. This method is invoked
1180 * upon each call to a {@link ConditionObject} method.
1181 *
1182 * <p>The default implementation throws {@link
1183 * UnsupportedOperationException}. This method is invoked
1184 * internally only within {@link ConditionObject} methods, so need
1185 * not be defined if conditions are not used.
1186 *
1187 * @return {@code true} if synchronization is held exclusively;
1188 * {@code false} otherwise
1189 * @throws UnsupportedOperationException if conditions are not supported
1190 */
1191 protected boolean isHeldExclusively() {
1192 throw new UnsupportedOperationException();
1193 }
1194
1195 /**
1196 * Acquires in exclusive mode, ignoring interrupts. Implemented
1197 * by invoking at least once {@link #tryAcquire},
1198 * returning on success. Otherwise the thread is queued, possibly
1199 * repeatedly blocking and unblocking, invoking {@link
1200 * #tryAcquire} until success. This method can be used
1201 * to implement method {@link Lock#lock}.
1202 *
1203 * @param arg the acquire argument. This value is conveyed to
1204 * {@link #tryAcquire} but is otherwise uninterpreted and
1205 * can represent anything you like.
1206 */
1207 public final void acquire(int arg) {
1208 if (!tryAcquire(arg) &&
1209 acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
1210 selfInterrupt();
1211 }
1212
1213 /**
1214 * Acquires in exclusive mode, aborting if interrupted.
1215 * Implemented by first checking interrupt status, then invoking
1216 * at least once {@link #tryAcquire}, returning on
1217 * success. Otherwise the thread is queued, possibly repeatedly
1218 * blocking and unblocking, invoking {@link #tryAcquire}
1219 * until success or the thread is interrupted. This method can be
1220 * used to implement method {@link Lock#lockInterruptibly}.
1221 *
1222 * @param arg the acquire argument. This value is conveyed to
1223 * {@link #tryAcquire} but is otherwise uninterpreted and
1224 * can represent anything you like.
1225 * @throws InterruptedException if the current thread is interrupted
1226 */
1227 public final void acquireInterruptibly(int arg)
1228 throws InterruptedException {
1229 if (Thread.interrupted())
1230 throw new InterruptedException();
1231 if (!tryAcquire(arg))
1232 doAcquireInterruptibly(arg);
1233 }
1234
1235 /**
1236 * Attempts to acquire in exclusive mode, aborting if interrupted,
1237 * and failing if the given timeout elapses. Implemented by first
1238 * checking interrupt status, then invoking at least once {@link
1239 * #tryAcquire}, returning on success. Otherwise, the thread is
1240 * queued, possibly repeatedly blocking and unblocking, invoking
1241 * {@link #tryAcquire} until success or the thread is interrupted
1242 * or the timeout elapses. This method can be used to implement
1243 * method {@link Lock#tryLock(long, TimeUnit)}.
1244 *
1245 * @param arg the acquire argument. This value is conveyed to
1246 * {@link #tryAcquire} but is otherwise uninterpreted and
1247 * can represent anything you like.
1248 * @param nanosTimeout the maximum number of nanoseconds to wait
1249 * @return {@code true} if acquired; {@code false} if timed out
1250 * @throws InterruptedException if the current thread is interrupted
1251 */
1252 public final boolean tryAcquireNanos(int arg, long nanosTimeout)
1253 throws InterruptedException {
1254 if (Thread.interrupted())
1255 throw new InterruptedException();
1256 return tryAcquire(arg) ||
1257 doAcquireNanos(arg, nanosTimeout);
1258 }
1259
1260 /**
1261 * Releases in exclusive mode. Implemented by unblocking one or
1262 * more threads if {@link #tryRelease} returns true.
1263 * This method can be used to implement method {@link Lock#unlock}.
1264 *
1265 * @param arg the release argument. This value is conveyed to
1266 * {@link #tryRelease} but is otherwise uninterpreted and
1267 * can represent anything you like.
1268 * @return the value returned from {@link #tryRelease}
1269 */
1270 public final boolean release(int arg) {
1271 if (tryRelease(arg)) {
1272 Node h = head;
1273 if (h != null && h.waitStatus != 0)
1274 unparkSuccessor(h);
1275 return true;
1276 }
1277 return false;
1278 }
1279
1280 /**
1281 * Acquires in shared mode, ignoring interrupts. Implemented by
1282 * first invoking at least once {@link #tryAcquireShared},
1283 * returning on success. Otherwise the thread is queued, possibly
1284 * repeatedly blocking and unblocking, invoking {@link
1285 * #tryAcquireShared} until success.
1286 *
1287 * @param arg the acquire argument. This value is conveyed to
1288 * {@link #tryAcquireShared} but is otherwise uninterpreted
1289 * and can represent anything you like.
1290 */
1291 public final void acquireShared(int arg) {
1292 if (tryAcquireShared(arg) < 0)
1293 doAcquireShared(arg);
1294 }
1295
1296 /**
1297 * Acquires in shared mode, aborting if interrupted. Implemented
1298 * by first checking interrupt status, then invoking at least once
1299 * {@link #tryAcquireShared}, returning on success. Otherwise the
1300 * thread is queued, possibly repeatedly blocking and unblocking,
1301 * invoking {@link #tryAcquireShared} until success or the thread
1302 * is interrupted.
1303 * @param arg the acquire argument.
1304 * This value is conveyed to {@link #tryAcquireShared} but is
1305 * otherwise uninterpreted and can represent anything
1306 * you like.
1307 * @throws InterruptedException if the current thread is interrupted
1308 */
1309 public final void acquireSharedInterruptibly(int arg)
1310 throws InterruptedException {
1311 if (Thread.interrupted())
1312 throw new InterruptedException();
1313 if (tryAcquireShared(arg) < 0)
1314 doAcquireSharedInterruptibly(arg);
1315 }
1316
1317 /**
1318 * Attempts to acquire in shared mode, aborting if interrupted, and
1319 * failing if the given timeout elapses. Implemented by first
1320 * checking interrupt status, then invoking at least once {@link
1321 * #tryAcquireShared}, returning on success. Otherwise, the
1322 * thread is queued, possibly repeatedly blocking and unblocking,
1323 * invoking {@link #tryAcquireShared} until success or the thread
1324 * is interrupted or the timeout elapses.
1325 *
1326 * @param arg the acquire argument. This value is conveyed to
1327 * {@link #tryAcquireShared} but is otherwise uninterpreted
1328 * and can represent anything you like.
1329 * @param nanosTimeout the maximum number of nanoseconds to wait
1330 * @return {@code true} if acquired; {@code false} if timed out
1331 * @throws InterruptedException if the current thread is interrupted
1332 */
1333 public final boolean tryAcquireSharedNanos(int arg, long nanosTimeout)
1334 throws InterruptedException {
1335 if (Thread.interrupted())
1336 throw new InterruptedException();
1337 return tryAcquireShared(arg) >= 0 ||
1338 doAcquireSharedNanos(arg, nanosTimeout);
1339 }
1340
1341 /**
1342 * Releases in shared mode. Implemented by unblocking one or more
1343 * threads if {@link #tryReleaseShared} returns true.
1344 *
1345 * @param arg the release argument. This value is conveyed to
1346 * {@link #tryReleaseShared} but is otherwise uninterpreted
1347 * and can represent anything you like.
1348 * @return the value returned from {@link #tryReleaseShared}
1349 */
1350 public final boolean releaseShared(int arg) {
1351 if (tryReleaseShared(arg)) {
1352 doReleaseShared();
1353 return true;
1354 }
1355 return false;
1356 }
1357
1358 // Queue inspection methods
1359
1360 /**
1361 * Queries whether any threads are waiting to acquire. Note that
1362 * because cancellations due to interrupts and timeouts may occur
1363 * at any time, a {@code true} return does not guarantee that any
1364 * other thread will ever acquire.
1365 *
1366 * @return {@code true} if there may be other threads waiting to acquire
1367 */
1368 public final boolean hasQueuedThreads() {
1369 for (Node p = tail, h = head; p != h && p != null; p = p.prev)
1370 if (p.waitStatus <= 0)
1371 return true;
1372 return false;
1373 }
1374
1375 /**
1376 * Queries whether any threads have ever contended to acquire this
1377 * synchronizer; that is, if an acquire method has ever blocked.
1378 *
1379 * <p>In this implementation, this operation returns in
1380 * constant time.
1381 *
1382 * @return {@code true} if there has ever been contention
1383 */
1384 public final boolean hasContended() {
1385 return head != null;
1386 }
1387
1388 /**
1389 * Returns the first (longest-waiting) thread in the queue, or
1390 * {@code null} if no threads are currently queued.
1391 *
1392 * <p>In this implementation, this operation normally returns in
1393 * constant time, but may iterate upon contention if other threads are
1394 * concurrently modifying the queue.
1395 *
1396 * @return the first (longest-waiting) thread in the queue, or
1397 * {@code null} if no threads are currently queued
1398 */
1399 public final Thread getFirstQueuedThread() {
1400 // handle only fast path, else relay
1401 return (head == tail) ? null : fullGetFirstQueuedThread();
1402 }
1403
1404 /**
1405 * Version of getFirstQueuedThread called when fastpath fails.
1406 */
1407 private Thread fullGetFirstQueuedThread() {
1408 /*
1409 * The first node is normally head.next. Try to get its
1410 * thread field, ensuring consistent reads: If thread
1411 * field is nulled out or s.prev is no longer head, then
1412 * some other thread(s) concurrently performed setHead in
1413 * between some of our reads. We try this twice before
1414 * resorting to traversal.
1415 */
1416 Node h, s;
1417 Thread st;
1418 if (((h = head) != null && (s = h.next) != null &&
1419 s.prev == head && (st = s.thread) != null) ||
1420 ((h = head) != null && (s = h.next) != null &&
1421 s.prev == head && (st = s.thread) != null))
1422 return st;
1423
1424 /*
1425 * Head's next field might not have been set yet, or may have
1426 * been unset after setHead. So we must check to see if tail
1427 * is actually first node. If not, we continue on, safely
1428 * traversing from tail back to head to find first,
1429 * guaranteeing termination.
1430 */
1431
1432 Thread firstThread = null;
1433 for (Node p = tail; p != null && p != head; p = p.prev) {
1434 Thread t = p.thread;
1435 if (t != null)
1436 firstThread = t;
1437 }
1438 return firstThread;
1439 }
1440
1441 /**
1442 * Returns true if the given thread is currently queued.
1443 *
1444 * <p>This implementation traverses the queue to determine
1445 * presence of the given thread.
1446 *
1447 * @param thread the thread
1448 * @return {@code true} if the given thread is on the queue
1449 * @throws NullPointerException if the thread is null
1450 */
1451 public final boolean isQueued(Thread thread) {
1452 if (thread == null)
1453 throw new NullPointerException();
1454 for (Node p = tail; p != null; p = p.prev)
1455 if (p.thread == thread)
1456 return true;
1457 return false;
1458 }
1459
1460 /**
1461 * Returns {@code true} if the apparent first queued thread, if one
1462 * exists, is waiting in exclusive mode. If this method returns
1463 * {@code true}, and the current thread is attempting to acquire in
1464 * shared mode (that is, this method is invoked from {@link
1465 * #tryAcquireShared}) then it is guaranteed that the current thread
1466 * is not the first queued thread. Used only as a heuristic in
1467 * ReentrantReadWriteLock.
1468 */
1469 final boolean apparentlyFirstQueuedIsExclusive() {
1470 Node h, s;
1471 return (h = head) != null &&
1472 (s = h.next) != null &&
1473 !s.isShared() &&
1474 s.thread != null;
1475 }
1476
1477 /**
1478 * Queries whether any threads have been waiting to acquire longer
1479 * than the current thread.
1480 *
1481 * <p>An invocation of this method is equivalent to (but may be
1482 * more efficient than):
1483 * <pre> {@code
1484 * getFirstQueuedThread() != Thread.currentThread()
1485 * && hasQueuedThreads()}</pre>
1486 *
1487 * <p>Note that because cancellations due to interrupts and
1488 * timeouts may occur at any time, a {@code true} return does not
1489 * guarantee that some other thread will acquire before the current
1490 * thread. Likewise, it is possible for another thread to win a
1491 * race to enqueue after this method has returned {@code false},
1492 * due to the queue being empty.
1493 *
1494 * <p>This method is designed to be used by a fair synchronizer to
1495 * avoid <a href="AbstractQueuedSynchronizer.html#barging">barging</a>.
1496 * Such a synchronizer's {@link #tryAcquire} method should return
1497 * {@code false}, and its {@link #tryAcquireShared} method should
1498 * return a negative value, if this method returns {@code true}
1499 * (unless this is a reentrant acquire). For example, the {@code
1500 * tryAcquire} method for a fair, reentrant, exclusive mode
1501 * synchronizer might look like this:
1502 *
1503 * <pre> {@code
1504 * protected boolean tryAcquire(int arg) {
1505 * if (isHeldExclusively()) {
1506 * // A reentrant acquire; increment hold count
1507 * return true;
1508 * } else if (hasQueuedPredecessors()) {
1509 * return false;
1510 * } else {
1511 * // try to acquire normally
1512 * }
1513 * }}</pre>
1514 *
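 * <p>A shared-mode synchronizer can apply the same check; for instance
 * (an illustrative sketch only), a fair single-permit variant might be:
 *
 * <pre> {@code
 * protected int tryAcquireShared(int acquires) {
 *   if (hasQueuedPredecessors())
 *     return -1;
 *   // single permit: zero signals success with nothing left to propagate
 *   return compareAndSetState(0, 1) ? 0 : -1;
 * }}</pre>
 *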
1515 * @return {@code true} if there is a queued thread preceding the
1516 * current thread, and {@code false} if the current thread
1517 * is at the head of the queue or the queue is empty
1518 * @since 1.7
1519 */
1520 public final boolean hasQueuedPredecessors() {
1521 Node h, s;
1522 if ((h = head) != null) {
1523 if ((s = h.next) == null || s.waitStatus > 0) {
1524 s = null; // traverse in case of concurrent cancellation
1525 for (Node p = tail; p != h && p != null; p = p.prev) {
1526 if (p.waitStatus <= 0)
1527 s = p;
1528 }
1529 }
1530 if (s != null && s.thread != Thread.currentThread())
1531 return true;
1532 }
1533 return false;
1534 }
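    /*
     * A minimal sketch (editor's illustration, not part of this class) of
     * the corresponding fair shared-mode usage described in the javadoc
     * above: a tryAcquireShared, in the spirit of Semaphore's fair sync,
     * that returns a negative value whenever a predecessor is queued.
     *
     *   protected int tryAcquireShared(int acquires) {
     *       for (;;) {
     *           if (hasQueuedPredecessors())
     *               return -1;                    // defer to earlier waiters
     *           int available = getState();
     *           int remaining = available - acquires;
     *           if (remaining < 0 ||
     *               compareAndSetState(available, remaining))
     *               return remaining;
     *       }
     *   }
     */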
1535
1536 // Instrumentation and monitoring methods
1537
1538 /**
1539 * Returns an estimate of the number of threads waiting to
1540 * acquire. The value is only an estimate because the number of
1541 * threads may change dynamically while this method traverses
1542 * internal data structures. This method is designed for use in
1543 * monitoring system state, not for synchronization control.
1544 *
1545 * @return the estimated number of threads waiting to acquire
1546 */
1547 public final int getQueueLength() {
1548 int n = 0;
1549 for (Node p = tail; p != null; p = p.prev) {
1550 if (p.thread != null)
1551 ++n;
1552 }
1553 return n;
1554 }
1555
1556 /**
1557 * Returns a collection containing threads that may be waiting to
1558 * acquire. Because the actual set of threads may change
1559 * dynamically while constructing this result, the returned
1560 * collection is only a best-effort estimate. The elements of the
1561 * returned collection are in no particular order. This method is
1562 * designed to facilitate construction of subclasses that provide
1563 * more extensive monitoring facilities.
1564 *
1565 * @return the collection of threads
1566 */
1567 public final Collection<Thread> getQueuedThreads() {
1568 ArrayList<Thread> list = new ArrayList<>();
1569 for (Node p = tail; p != null; p = p.prev) {
1570 Thread t = p.thread;
1571 if (t != null)
1572 list.add(t);
1573 }
1574 return list;
1575 }
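    /*
     * Editor's sketch of typical monitoring use (assumed enclosing class,
     * not part of this file): a lock built on this synchronizer usually
     * forwards these best-effort estimates for diagnostics, in the style
     * of ReentrantLock's getQueueLength and getQueuedThreads.
     *
     *   public class MyLock {
     *       private final Sync sync = new Sync();  // Sync extends AQS
     *       public int getQueueLength() { return sync.getQueueLength(); }
     *       protected Collection<Thread> getQueuedThreads() {
     *           return sync.getQueuedThreads();
     *       }
     *   }
     */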
1576
1577 /**
1578 * Returns a collection containing threads that may be waiting to
1579 * acquire in exclusive mode. This has the same properties
1580 * as {@link #getQueuedThreads} except that it only returns
1581 * those threads waiting due to an exclusive acquire.
1582 *
1583 * @return the collection of threads
1584 */
1585 public final Collection<Thread> getExclusiveQueuedThreads() {
1586 ArrayList<Thread> list = new ArrayList<>();
1587 for (Node p = tail; p != null; p = p.prev) {
1588 if (!p.isShared()) {
1589 Thread t = p.thread;
1590 if (t != null)
1591 list.add(t);
1592 }
1593 }
1594 return list;
1595 }
1596
1597 /**
1598 * Returns a collection containing threads that may be waiting to
1599 * acquire in shared mode. This has the same properties
1600 * as {@link #getQueuedThreads} except that it only returns
1601 * those threads waiting due to a shared acquire.
1602 *
1603 * @return the collection of threads
1604 */
1605 public final Collection<Thread> getSharedQueuedThreads() {
1606 ArrayList<Thread> list = new ArrayList<>();
1607 for (Node p = tail; p != null; p = p.prev) {
1608 if (p.isShared()) {
1609 Thread t = p.thread;
1610 if (t != null)
1611 list.add(t);
1612 }
1613 }
1614 return list;
1615 }
1616
1617 /**
1618 * Returns a string identifying this synchronizer, as well as its state.
1619 * The state, in brackets, includes the String {@code "State ="}
1620 * followed by the current value of {@link #getState}, and either
1621 * {@code "nonempty"} or {@code "empty"} depending on whether the
1622 * queue is empty.
1623 *
1624 * @return a string identifying this synchronizer, as well as its state
1625 */
1626 public String toString() {
1627 return super.toString()
1628 + "[State = " + getState() + ", "
1629 + (hasQueuedThreads() ? "non" : "") + "empty queue]";
1630 }
1631
1632
1633 // Internal support methods for Conditions
1634
1635 /**
1636 * Returns true if a node, always one that was initially placed on
1637 * a condition queue, is now waiting to reacquire on sync queue.
1638 * @param node the node
1639 * @return true if is reacquiring
1640 */
1641 final boolean isOnSyncQueue(Node node) {
1642 if (node.waitStatus == Node.CONDITION || node.prev == null)
1643 return false;
1644 if (node.next != null) // If has successor, it must be on queue
1645 return true;
1646 /*
1647 * node.prev can be non-null, but not yet on queue because
1648 * the CAS to place it on queue can fail. So we have to
1649 * traverse from tail to make sure it actually made it. It
1650 * will always be near the tail in calls to this method, and
1651 * unless the CAS failed (which is unlikely), it will be
1652 * there, so we hardly ever traverse much.
1653 */
1654 return findNodeFromTail(node);
1655 }
1656
1657 /**
1658 * Returns true if node is on sync queue by searching backwards from tail.
1659 * Called only when needed by isOnSyncQueue.
1660 * @return true if present
1661 */
1662 private boolean findNodeFromTail(Node node) {
1663 // We check for node first, since it's likely to be at or near tail.
1664 // tail is known to be non-null, so we could re-order to "save"
1665 // one null check, but we leave it this way to help the VM.
1666 for (Node p = tail;;) {
1667 if (p == node)
1668 return true;
1669 if (p == null)
1670 return false;
1671 p = p.prev;
1672 }
1673 }
1674
1675 /**
1676 * Transfers a node from a condition queue onto sync queue.
1677 * Returns true if successful.
1678 * @param node the node
1679 * @return true if successfully transferred (else the node was
1680 * cancelled before signal)
1681 */
1682 final boolean transferForSignal(Node node) {
1683 /*
1684 * If cannot change waitStatus, the node has been cancelled.
1685 */
1686 if (!node.compareAndSetWaitStatus(Node.CONDITION, 0))
1687 return false;
1688
1689 /*
1690 * Splice onto queue and try to set waitStatus of predecessor to
1691 * indicate that thread is (probably) waiting. If cancelled or
1692 * attempt to set waitStatus fails, wake up to resync (in which
1693 * case the waitStatus can be transiently and harmlessly wrong).
1694 */
1695 Node p = enq(node);
1696 int ws = p.waitStatus;
1697 if (ws > 0 || !p.compareAndSetWaitStatus(ws, Node.SIGNAL))
1698 LockSupport.unpark(node.thread);
1699 return true;
1700 }
1701
1702 /**
1703 * Transfers node, if necessary, to sync queue after a cancelled wait.
1704 * Returns true if thread was cancelled before being signalled.
1705 *
1706 * @param node the node
1707 * @return true if cancelled before the node was signalled
1708 */
1709 final boolean transferAfterCancelledWait(Node node) {
1710 if (node.compareAndSetWaitStatus(Node.CONDITION, 0)) {
1711 enq(node);
1712 return true;
1713 }
1714 /*
1715 * If we lost out to a signal(), then we can't proceed
1716 * until it finishes its enq(). Cancelling during an
1717 * incomplete transfer is both rare and transient, so just
1718 * spin.
1719 */
1720 while (!isOnSyncQueue(node))
1721 Thread.yield();
1722 return false;
1723 }
1724
1725 /**
1726 * Invokes release with current state value; returns saved state.
1727 * Cancels node and throws exception on failure.
1728 * @param node the condition node for this wait
1729 * @return previous sync state
1730 */
1731 final int fullyRelease(Node node) {
1732 try {
1733 int savedState = getState();
1734 if (release(savedState))
1735 return savedState;
1736 throw new IllegalMonitorStateException();
1737 } catch (Throwable t) {
1738 node.waitStatus = Node.CANCELLED;
1739 throw t;
1740 }
1741 }
1742
1743 // Instrumentation methods for conditions
1744
1745 /**
1746 * Queries whether the given ConditionObject
1747 * uses this synchronizer as its lock.
1748 *
1749 * @param condition the condition
1750 * @return {@code true} if owned
1751 * @throws NullPointerException if the condition is null
1752 */
1753 public final boolean owns(ConditionObject condition) {
1754 return condition.isOwnedBy(this);
1755 }
1756
1757 /**
1758 * Queries whether any threads are waiting on the given condition
1759 * associated with this synchronizer. Note that because timeouts
1760 * and interrupts may occur at any time, a {@code true} return
1761 * does not guarantee that a future {@code signal} will awaken
1762 * any threads. This method is designed primarily for use in
1763 * monitoring of the system state.
1764 *
1765 * @param condition the condition
1766 * @return {@code true} if there are any waiting threads
1767 * @throws IllegalMonitorStateException if exclusive synchronization
1768 * is not held
1769 * @throws IllegalArgumentException if the given condition is
1770 * not associated with this synchronizer
1771 * @throws NullPointerException if the condition is null
1772 */
1773 public final boolean hasWaiters(ConditionObject condition) {
1774 if (!owns(condition))
1775 throw new IllegalArgumentException("Not owner");
1776 return condition.hasWaiters();
1777 }
1778
1779 /**
1780 * Returns an estimate of the number of threads waiting on the
1781 * given condition associated with this synchronizer. Note that
1782 * because timeouts and interrupts may occur at any time, the
1783 * estimate serves only as an upper bound on the actual number of
1784 * waiters. This method is designed for use in monitoring system
1785 * state, not for synchronization control.
1786 *
1787 * @param condition the condition
1788 * @return the estimated number of waiting threads
1789 * @throws IllegalMonitorStateException if exclusive synchronization
1790 * is not held
1791 * @throws IllegalArgumentException if the given condition is
1792 * not associated with this synchronizer
1793 * @throws NullPointerException if the condition is null
1794 */
1795 public final int getWaitQueueLength(ConditionObject condition) {
1796 if (!owns(condition))
1797 throw new IllegalArgumentException("Not owner");
1798 return condition.getWaitQueueLength();
1799 }
1800
1801 /**
1802 * Returns a collection containing those threads that may be
1803 * waiting on the given condition associated with this
1804 * synchronizer. Because the actual set of threads may change
1805 * dynamically while constructing this result, the returned
1806 * collection is only a best-effort estimate. The elements of the
1807 * returned collection are in no particular order.
1808 *
1809 * @param condition the condition
1810 * @return the collection of threads
1811 * @throws IllegalMonitorStateException if exclusive synchronization
1812 * is not held
1813 * @throws IllegalArgumentException if the given condition is
1814 * not associated with this synchronizer
1815 * @throws NullPointerException if the condition is null
1816 */
1817 public final Collection<Thread> getWaitingThreads(ConditionObject condition) {
1818 if (!owns(condition))
1819 throw new IllegalArgumentException("Not owner");
1820 return condition.getWaitingThreads();
1821 }
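    /*
     * Editor's sketch (assumed enclosing class, not part of this file): a
     * lock implementation typically narrows and forwards these condition
     * queries, first checking that the Condition really is one of its own
     * ConditionObjects, in the style of ReentrantLock.hasWaiters.
     *
     *   public boolean hasWaiters(Condition condition) {
     *       if (condition == null)
     *           throw new NullPointerException();
     *       if (!(condition instanceof ConditionObject))
     *           throw new IllegalArgumentException("not owner");
     *       return sync.hasWaiters((ConditionObject) condition);
     *   }
     */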
1822
1823 /**
1824 * Condition implementation for an {@link AbstractQueuedSynchronizer}
1825 * serving as the basis of a {@link Lock} implementation.
1826 *
1827 * <p>Method documentation for this class describes mechanics,
1828 * not behavioral specifications from the point of view of Lock
1829 * and Condition users. Exported versions of this class will in
1830 * general need to be accompanied by documentation describing
1831 * condition semantics that rely on those of the associated
1832 * {@code AbstractQueuedSynchronizer}.
1833 *
1834 * <p>This class is Serializable, but all fields are transient,
1835 * so deserialized conditions have no waiters.
1836 */
1837 public class ConditionObject implements Condition, java.io.Serializable {
1838 private static final long serialVersionUID = 1173984872572414699L;
1839 /** First node of condition queue. */
1840 private transient Node firstWaiter;
1841 /** Last node of condition queue. */
1842 private transient Node lastWaiter;
1843
1844 /**
1845 * Creates a new {@code ConditionObject} instance.
1846 */
1847 public ConditionObject() { }
1848
1849 // Internal methods
1850
1851 /**
1852 * Adds a new waiter to wait queue.
1853 * @return its new wait node
1854 */
1855 private Node addConditionWaiter() {
1856 if (!isHeldExclusively())
1857 throw new IllegalMonitorStateException();
1858 Node t = lastWaiter;
1859 // If lastWaiter is cancelled, clean out.
1860 if (t != null && t.waitStatus != Node.CONDITION) {
1861 unlinkCancelledWaiters();
1862 t = lastWaiter;
1863 }
1864
1865 Node node = new Node(Node.CONDITION);
1866
1867 if (t == null)
1868 firstWaiter = node;
1869 else
1870 t.nextWaiter = node;
1871 lastWaiter = node;
1872 return node;
1873 }
1874
1875 /**
1876 * Removes and transfers nodes until hitting a non-cancelled one
1877 * or null. Split out from signal in part to encourage compilers
1878 * to inline the case of no waiters.
1879 * @param first (non-null) the first node on condition queue
1880 */
1881 private void doSignal(Node first) {
1882 do {
1883 if ( (firstWaiter = first.nextWaiter) == null)
1884 lastWaiter = null;
1885 first.nextWaiter = null;
1886 } while (!transferForSignal(first) &&
1887 (first = firstWaiter) != null);
1888 }
1889
1890 /**
1891 * Removes and transfers all nodes.
1892 * @param first (non-null) the first node on condition queue
1893 */
1894 private void doSignalAll(Node first) {
1895 lastWaiter = firstWaiter = null;
1896 do {
1897 Node next = first.nextWaiter;
1898 first.nextWaiter = null;
1899 transferForSignal(first);
1900 first = next;
1901 } while (first != null);
1902 }
1903
1904 /**
1905 * Unlinks cancelled waiter nodes from condition queue.
1906 * Called only while holding lock. This is called when
1907 * cancellation occurred during condition wait, and upon
1908 * insertion of a new waiter when lastWaiter is seen to have
1909 * been cancelled. This method is needed to avoid garbage
1910 * retention in the absence of signals. So even though it may
1911 * require a full traversal, it comes into play only when
1912 * timeouts or cancellations occur in the absence of
1913 * signals. It traverses all nodes rather than stopping at a
1914 * particular target to unlink all pointers to garbage nodes
1915 * without requiring many re-traversals during cancellation
1916 * storms.
1917 */
1918 private void unlinkCancelledWaiters() {
1919 Node t = firstWaiter;
1920 Node trail = null;
1921 while (t != null) {
1922 Node next = t.nextWaiter;
1923 if (t.waitStatus != Node.CONDITION) {
1924 t.nextWaiter = null;
1925 if (trail == null)
1926 firstWaiter = next;
1927 else
1928 trail.nextWaiter = next;
1929 if (next == null)
1930 lastWaiter = trail;
1931 }
1932 else
1933 trail = t;
1934 t = next;
1935 }
1936 }
1937
1938 // public methods
1939
1940 /**
1941 * Moves the longest-waiting thread, if one exists, from the
1942 * wait queue for this condition to the wait queue for the
1943 * owning lock.
1944 *
1945 * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
1946 * returns {@code false}
1947 */
1948 public final void signal() {
1949 if (!isHeldExclusively())
1950 throw new IllegalMonitorStateException();
1951 Node first = firstWaiter;
1952 if (first != null)
1953 doSignal(first);
1954 }
1955
1956 /**
1957 * Moves all threads from the wait queue for this condition to
1958 * the wait queue for the owning lock.
1959 *
1960 * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
1961 * returns {@code false}
1962 */
1963 public final void signalAll() {
1964 if (!isHeldExclusively())
1965 throw new IllegalMonitorStateException();
1966 Node first = firstWaiter;
1967 if (first != null)
1968 doSignalAll(first);
1969 }
1970
1971 /**
1972 * Implements uninterruptible condition wait.
1973 * <ol>
1974 * <li>Save lock state returned by {@link #getState}.
1975 * <li>Invoke {@link #release} with saved state as argument,
1976 * throwing IllegalMonitorStateException if it fails.
1977 * <li>Block until signalled.
1978 * <li>Reacquire by invoking specialized version of
1979 * {@link #acquire} with saved state as argument.
1980 * </ol>
1981 */
1982 public final void awaitUninterruptibly() {
1983 Node node = addConditionWaiter();
1984 int savedState = fullyRelease(node);
1985 boolean interrupted = false;
1986 while (!isOnSyncQueue(node)) {
1987 LockSupport.park(this);
1988 if (Thread.interrupted())
1989 interrupted = true;
1990 }
1991 if (acquireQueued(node, savedState) || interrupted)
1992 selfInterrupt();
1993 }
1994
1995 /*
1996 * For interruptible waits, we need to track whether to throw
1997 * InterruptedException, if interrupted while blocked on
1998 * condition, versus reinterrupt current thread, if
1999 * interrupted while blocked waiting to re-acquire.
2000 */
2001
2002 /** Mode meaning to reinterrupt on exit from wait */
2003 private static final int REINTERRUPT = 1;
2004 /** Mode meaning to throw InterruptedException on exit from wait */
2005 private static final int THROW_IE = -1;
2006
2007 /**
2008 * Checks for interrupt, returning THROW_IE if interrupted
2009 * before signalled, REINTERRUPT if after signalled, or
2010 * 0 if not interrupted.
2011 */
2012 private int checkInterruptWhileWaiting(Node node) {
2013 return Thread.interrupted() ?
2014 (transferAfterCancelledWait(node) ? THROW_IE : REINTERRUPT) :
2015 0;
2016 }
2017
2018 /**
2019 * Throws InterruptedException, reinterrupts current thread, or
2020 * does nothing, depending on mode.
2021 */
2022 private void reportInterruptAfterWait(int interruptMode)
2023 throws InterruptedException {
2024 if (interruptMode == THROW_IE)
2025 throw new InterruptedException();
2026 else if (interruptMode == REINTERRUPT)
2027 selfInterrupt();
2028 }
2029
2030 /**
2031 * Implements interruptible condition wait.
2032 * <ol>
2033 * <li>If current thread is interrupted, throw InterruptedException.
2034 * <li>Save lock state returned by {@link #getState}.
2035 * <li>Invoke {@link #release} with saved state as argument,
2036 * throwing IllegalMonitorStateException if it fails.
2037 * <li>Block until signalled or interrupted.
2038 * <li>Reacquire by invoking specialized version of
2039 * {@link #acquire} with saved state as argument.
2040 * <li>If interrupted while blocked in step 4, throw InterruptedException.
2041 * </ol>
2042 */
2043 public final void await() throws InterruptedException {
2044 if (Thread.interrupted())
2045 throw new InterruptedException();
2046 Node node = addConditionWaiter();
2047 int savedState = fullyRelease(node);
2048 int interruptMode = 0;
2049 while (!isOnSyncQueue(node)) {
2050 LockSupport.park(this);
2051 if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
2052 break;
2053 }
2054 if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
2055 interruptMode = REINTERRUPT;
2056 if (node.nextWaiter != null) // clean up if cancelled
2057 unlinkCancelledWaiters();
2058 if (interruptMode != 0)
2059 reportInterruptAfterWait(interruptMode);
2060 }
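        /*
         * Editor's note: from a client's point of view this method backs
         * the standard Condition protocol; a sketch of the usual calling
         * pattern (lock, notEmpty and conditionHolds are assumed names):
         *
         *   lock.lock();
         *   try {
         *       while (!conditionHolds())   // guard against spurious wakeup
         *           notEmpty.await();       // releases lock, blocks, reacquires
         *       // ... proceed while still holding the lock ...
         *   } finally {
         *       lock.unlock();
         *   }
         */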
2061
2062 /**
2063 * Implements timed condition wait.
2064 * <ol>
2065 * <li>If current thread is interrupted, throw InterruptedException.
2066 * <li>Save lock state returned by {@link #getState}.
2067 * <li>Invoke {@link #release} with saved state as argument,
2068 * throwing IllegalMonitorStateException if it fails.
2069 * <li>Block until signalled, interrupted, or timed out.
2070 * <li>Reacquire by invoking specialized version of
2071 * {@link #acquire} with saved state as argument.
2072 * <li>If interrupted while blocked in step 4, throw InterruptedException.
2073 * </ol>
2074 */
2075 public final long awaitNanos(long nanosTimeout)
2076 throws InterruptedException {
2077 if (Thread.interrupted())
2078 throw new InterruptedException();
2079 // We don't check for nanosTimeout <= 0L here, to allow
2080 // awaitNanos(0) as a way to "yield the lock".
2081 final long deadline = System.nanoTime() + nanosTimeout;
2082 long initialNanos = nanosTimeout;
2083 Node node = addConditionWaiter();
2084 int savedState = fullyRelease(node);
2085 int interruptMode = 0;
2086 while (!isOnSyncQueue(node)) {
2087 if (nanosTimeout <= 0L) {
2088 transferAfterCancelledWait(node);
2089 break;
2090 }
2091 if (nanosTimeout > SPIN_FOR_TIMEOUT_THRESHOLD)
2092 LockSupport.parkNanos(this, nanosTimeout);
2093 if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
2094 break;
2095 nanosTimeout = deadline - System.nanoTime();
2096 }
2097 if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
2098 interruptMode = REINTERRUPT;
2099 if (node.nextWaiter != null)
2100 unlinkCancelledWaiters();
2101 if (interruptMode != 0)
2102 reportInterruptAfterWait(interruptMode);
2103 long remaining = deadline - System.nanoTime(); // avoid overflow
2104 return (remaining <= initialNanos) ? remaining : Long.MIN_VALUE;
2105 }
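        /*
         * Editor's note: the return value follows Condition.awaitNanos --
         * an estimate of time remaining, or a value <= 0 on timeout; the
         * Long.MIN_VALUE case above compensates for possible overflow when
         * a very large timeout was requested. A typical timed-wait loop
         * (illustrative, names assumed):
         *
         *   long nanos = unit.toNanos(timeout);
         *   lock.lock();
         *   try {
         *       while (!conditionHolds()) {
         *           if (nanos <= 0L)
         *               return false;             // timed out
         *           nanos = notEmpty.awaitNanos(nanos);
         *       }
         *       return true;
         *   } finally {
         *       lock.unlock();
         *   }
         */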
2106
2107 /**
2108 * Implements absolute timed condition wait.
2109 * <ol>
2110 * <li>If current thread is interrupted, throw InterruptedException.
2111 * <li>Save lock state returned by {@link #getState}.
2112 * <li>Invoke {@link #release} with saved state as argument,
2113 * throwing IllegalMonitorStateException if it fails.
2114 * <li>Block until signalled, interrupted, or timed out.
2115 * <li>Reacquire by invoking specialized version of
2116 * {@link #acquire} with saved state as argument.
2117 * <li>If interrupted while blocked in step 4, throw InterruptedException.
2118 * <li>If timed out while blocked in step 4, return false, else true.
2119 * </ol>
2120 */
2121 public final boolean awaitUntil(Date deadline)
2122 throws InterruptedException {
2123 long abstime = deadline.getTime();
2124 if (Thread.interrupted())
2125 throw new InterruptedException();
2126 Node node = addConditionWaiter();
2127 int savedState = fullyRelease(node);
2128 boolean timedout = false;
2129 int interruptMode = 0;
2130 while (!isOnSyncQueue(node)) {
2131 if (System.currentTimeMillis() >= abstime) {
2132 timedout = transferAfterCancelledWait(node);
2133 break;
2134 }
2135 LockSupport.parkUntil(this, abstime);
2136 if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
2137 break;
2138 }
2139 if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
2140 interruptMode = REINTERRUPT;
2141 if (node.nextWaiter != null)
2142 unlinkCancelledWaiters();
2143 if (interruptMode != 0)
2144 reportInterruptAfterWait(interruptMode);
2145 return !timedout;
2146 }
2147
2148 /**
2149 * Implements timed condition wait.
2150 * <ol>
2151 * <li>If current thread is interrupted, throw InterruptedException.
2152 * <li>Save lock state returned by {@link #getState}.
2153 * <li>Invoke {@link #release} with saved state as argument,
2154 * throwing IllegalMonitorStateException if it fails.
2155 * <li>Block until signalled, interrupted, or timed out.
2156 * <li>Reacquire by invoking specialized version of
2157 * {@link #acquire} with saved state as argument.
2158 * <li>If interrupted while blocked in step 4, throw InterruptedException.
2159 * <li>If timed out while blocked in step 4, return false, else true.
2160 * </ol>
2161 */
2162 public final boolean await(long time, TimeUnit unit)
2163 throws InterruptedException {
2164 long nanosTimeout = unit.toNanos(time);
2165 if (Thread.interrupted())
2166 throw new InterruptedException();
2167 // We don't check for nanosTimeout <= 0L here, to allow
2168 // await(0, unit) as a way to "yield the lock".
2169 final long deadline = System.nanoTime() + nanosTimeout;
2170 Node node = addConditionWaiter();
2171 int savedState = fullyRelease(node);
2172 boolean timedout = false;
2173 int interruptMode = 0;
2174 while (!isOnSyncQueue(node)) {
2175 if (nanosTimeout <= 0L) {
2176 timedout = transferAfterCancelledWait(node);
2177 break;
2178 }
2179 if (nanosTimeout > SPIN_FOR_TIMEOUT_THRESHOLD)
2180 LockSupport.parkNanos(this, nanosTimeout);
2181 if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
2182 break;
2183 nanosTimeout = deadline - System.nanoTime();
2184 }
2185 if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
2186 interruptMode = REINTERRUPT;
2187 if (node.nextWaiter != null)
2188 unlinkCancelledWaiters();
2189 if (interruptMode != 0)
2190 reportInterruptAfterWait(interruptMode);
2191 return !timedout;
2192 }
2193
2194 // support for instrumentation
2195
2196 /**
2197 * Returns true if this condition was created by the given
2198 * synchronization object.
2199 *
2200 * @return {@code true} if owned
2201 */
2202 final boolean isOwnedBy(AbstractQueuedSynchronizer sync) {
2203 return sync == AbstractQueuedSynchronizer.this;
2204 }
2205
2206 /**
2207 * Queries whether any threads are waiting on this condition.
2208 * Implements {@link AbstractQueuedSynchronizer#hasWaiters(ConditionObject)}.
2209 *
2210 * @return {@code true} if there are any waiting threads
2211 * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
2212 * returns {@code false}
2213 */
2214 protected final boolean hasWaiters() {
2215 if (!isHeldExclusively())
2216 throw new IllegalMonitorStateException();
2217 for (Node w = firstWaiter; w != null; w = w.nextWaiter) {
2218 if (w.waitStatus == Node.CONDITION)
2219 return true;
2220 }
2221 return false;
2222 }
2223
2224 /**
2225 * Returns an estimate of the number of threads waiting on
2226 * this condition.
2227 * Implements {@link AbstractQueuedSynchronizer#getWaitQueueLength(ConditionObject)}.
2228 *
2229 * @return the estimated number of waiting threads
2230 * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
2231 * returns {@code false}
2232 */
2233 protected final int getWaitQueueLength() {
2234 if (!isHeldExclusively())
2235 throw new IllegalMonitorStateException();
2236 int n = 0;
2237 for (Node w = firstWaiter; w != null; w = w.nextWaiter) {
2238 if (w.waitStatus == Node.CONDITION)
2239 ++n;
2240 }
2241 return n;
2242 }
2243
2244 /**
2245 * Returns a collection containing those threads that may be
2246 * waiting on this Condition.
2247 * Implements {@link AbstractQueuedSynchronizer#getWaitingThreads(ConditionObject)}.
2248 *
2249 * @return the collection of threads
2250 * @throws IllegalMonitorStateException if {@link #isHeldExclusively}
2251 * returns {@code false}
2252 */
2253 protected final Collection<Thread> getWaitingThreads() {
2254 if (!isHeldExclusively())
2255 throw new IllegalMonitorStateException();
2256 ArrayList<Thread> list = new ArrayList<>();
2257 for (Node w = firstWaiter; w != null; w = w.nextWaiter) {
2258 if (w.waitStatus == Node.CONDITION) {
2259 Thread t = w.thread;
2260 if (t != null)
2261 list.add(t);
2262 }
2263 }
2264 return list;
2265 }
2266 }
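    /*
     * Editor's sketch (assumed usage, not part of this file): a typical
     * lock implementation exposes ConditionObject through a newCondition
     * method, in the manner of ReentrantLock.
     *
     *   static final class Sync extends AbstractQueuedSynchronizer {
     *       // tryAcquire / tryRelease / isHeldExclusively elided
     *       Condition newCondition() { return new ConditionObject(); }
     *   }
     *
     *   public Condition newCondition() { return sync.newCondition(); }
     */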
2267
2268 /**
2269 * Setup to support compareAndSet. We need to natively implement
2270 * this here: For the sake of permitting future enhancements, we
2271 * cannot explicitly subclass AtomicInteger, which would be
2272 * efficient and useful otherwise. So, as the lesser of evils, we
2273 * natively implement using hotspot intrinsics API. And while we
2274 * are at it, we do the same for other CASable fields (which could
2275 * otherwise be done with atomic field updaters).
2276 */
2277 private static final sun.misc.Unsafe U = sun.misc.Unsafe.getUnsafe();
2278 private static final long STATE;
2279 private static final long HEAD;
2280 private static final long TAIL;
2281
2282 static {
2283 try {
2284 STATE = U.objectFieldOffset
2285 (AbstractQueuedSynchronizer.class.getDeclaredField("state"));
2286 HEAD = U.objectFieldOffset
2287 (AbstractQueuedSynchronizer.class.getDeclaredField("head"));
2288 TAIL = U.objectFieldOffset
2289 (AbstractQueuedSynchronizer.class.getDeclaredField("tail"));
2290 } catch (ReflectiveOperationException e) {
2291 throw new Error(e);
2292 }
2293
2294 // Reduce the risk of rare disastrous classloading in first call to
2295 // LockSupport.park: https://bugs.openjdk.java.net/browse/JDK-8074773
2296 Class<?> ensureLoaded = LockSupport.class;
2297 }
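    /*
     * Editor's sketch of the alternative alluded to in the comment above
     * (not used here, for performance reasons): the CAS on state could
     * otherwise be expressed with a
     * java.util.concurrent.atomic.AtomicIntegerFieldUpdater over the
     * volatile int field named "state".
     *
     *   private static final AtomicIntegerFieldUpdater<AbstractQueuedSynchronizer>
     *       STATE_UPDATER = AtomicIntegerFieldUpdater.newUpdater(
     *           AbstractQueuedSynchronizer.class, "state");
     *
     *   // protected final boolean compareAndSetState(int expect, int update) {
     *   //     return STATE_UPDATER.compareAndSet(this, expect, update);
     *   // }
     */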
2298
2299 /**
2300 * Initializes head and tail fields on first contention.
2301 */
2302 private final void initializeSyncQueue() {
2303 Node h;
2304 if (U.compareAndSwapObject(this, HEAD, null, (h = new Node())))
2305 tail = h;
2306 }
2307
2308 /**
2309 * CASes tail field.
2310 */
2311 private final boolean compareAndSetTail(Node expect, Node update) {
2312 return U.compareAndSwapObject(this, TAIL, expect, update);
2313 }
2314 }