/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain. Use, modify, and
 * redistribute this code in any way without acknowledgement.
 */

package java.util.concurrent.locks;
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
import java.util.*;
import java.lang.reflect.*;
import sun.misc.*;

/**
 * Provides shared data structures, utility methods, base {@link
 * Condition} implementations, and instrumentation methods for the
 * reentrant lock classes defined in this package. This class is not
 * designed to be directly subclassed outside of this
 * package. However, subclasses {@link ReentrantLock} and {@link
 * ReentrantReadWriteLock} may in turn be usefully extended.
 *
 * @since 1.5
 * @author Doug Lea
 */
public abstract class AbstractReentrantLock implements java.io.Serializable {
    /*
     * General description and notes.
     *
     * The basic idea, ignoring all sorts of things
     * (reentrance, modes, cancellation, timeouts, error checking etc) is:
     *  Lock:
     *     if (atomically set lock status) // fastpath
     *        return;
     *     node = create and enq a wait node;
     *     for (;;) {
     *        if (node is first on queue) {
     *           if (atomically set lock status) {
     *              deq(node);
     *              return;
     *           }
     *        }
     *        park(currentThread);
     *     }
     *
     *  Unlock:
     *     atomically release lockStatus;
     *     h = first node on queue;
     *     if (h != null) unpark(h's successor's thread);
     *
     * * The particular atomic actions needed in the Reentrant
     *   vs ReentrantReadWrite subclasses differ, but both
     *   follow the same basic logic. This base class contains
     *   the code that doesn't need to vary with respect to
     *   these.
     *
     * * By default, contended locks use a kind of "greedy" /
     *   "renouncement" / barging / convoy-avoidance strategy:
     *   When a lock is released, a waiting thread is signalled
     *   so that it can (re)contend for the lock. It might lose
     *   and thus need to rewait. This strategy has much higher
     *   throughput than "directed handoff" because it reduces
     *   blocking of running threads, but poorer fairness. The
     *   wait queue is FIFO, but newly entering threads can barge
     *   ahead and grab the lock before woken waiters, so acquires
     *   are not strictly FIFO, and transfer is not
     *   deterministically fair. It is probabilistically fair in
     *   the sense that earlier queued threads are allowed to
     *   recontend before later queued threads, and each
     *   recontention has an unbiased chance to succeed against
     *   any incoming barging threads.
     *
     * * Even non-fair locks don't do a bare atomic CAS in the
     *   fast path (except in tryLock). Instead, if the wait queue
     *   appears to be non-empty, they use a test-and-test-and-set
     *   approach, which avoids most failed CASes.
     *
     * * The "fair" variant differs only in that barging is disabled
     *   when there is contention, so locks proceed FIFO. There can be
     *   some races in detecting contention, but it is still FIFO from
     *   a definable (although complicated to describe) single point,
     *   so qualifies as a FIFO lock.
     *
     * * While this lock never "spins" in the usual sense, it
     *   performs multiple test-and-test-and-sets (four in the most
     *   common case of a call from <tt>lock</tt>) interspersed with
     *   other computations before the first call to <tt>park</tt>.
     *   This gives most of the benefits of spins when locks are only
     *   briefly held without most of the liabilities when they
     *   aren't.
     *
     * * The wait queue is a variant of a "CLH" (Craig, Landin, and
     *   Hagersten) lock. CLH locks are normally used for spinlocks.
     *   We instead use them for blocking locks, but use the same
     *   basic tactic of holding some of the control information
     *   about a thread in the predecessor of its node. A "status"
     *   field in each node keeps track of whether a thread is/should
     *   block. A node is signalled when its predecessor releases
     *   the lock. Each node of the queue otherwise serves as a
     *   specific-notification-style monitor holding a single waiting
     *   thread. The status field does NOT control whether threads
     *   are granted locks though. A thread may try to acquire the
     *   lock if it is first in the queue. But being first does
     *   not guarantee the lock; it only gives the right to contend
     *   for it. So the currently released
     *   contender thread may need to rewait.
     *
     *   To enqueue into a CLH lock, you atomically splice it in as new
     *   tail. To dequeue, you just set the head field.
     *
     *        +------+  prev +-----+       +-----+
     *   head |      | <---- |     | <---- |     |  tail
     *        +------+       +-----+       +-----+
     *
     *   Insertion into a CLH queue requires only a single atomic
     *   operation on "tail", so there is a simple atomic point of
     *   demarcation from unqueued to queued. Similarly, dequeuing
     *   involves only updating the "head". However, it takes a bit
     *   more work for nodes to determine who their successors are,
     *   in part to deal with possible cancellation due to timeouts
     *   and interrupts.
     *
     *   The "prev" links (not used in original CLH locks) are mainly
     *   needed to handle cancellation. If a node is cancelled, its
     *   successor is (normally) relinked to a non-cancelled
     *   predecessor. For explanation of similar mechanics in the case
     *   of spin locks, see the papers by Scott & Scherer at
     *   http://www.cs.rochester.edu/u/scott/synchronization/
     *
     *   We also use "next" links to implement blocking mechanics.
     *   The thread id for each node is kept in its node, so a
     *   predecessor signals the next node to wake up by traversing
     *   the next link to determine which thread it is. Determination of
     *   successor must avoid races with newly queued nodes to set
     *   the "next" fields of their predecessors. This is solved
     *   when necessary by checking backwards from the atomically
     *   updated "tail" when a node's successor appears to be null.
     *   (Or, said differently, the next-links are an optimization
     *   so that we don't usually need a backward scan.)
     *
     *   Cancellation introduces some conservatism to the basic
     *   algorithms. Since we must poll for cancellation of other
     *   nodes, we can miss noticing whether a cancelled node is
     *   ahead or behind us. This is dealt with by always unparking
     *   successors upon cancellation, allowing them to stabilize on
     *   a new predecessor.
     *
     * * CLH queues need a dummy header node to get started. But
     *   we don't create them on construction, because it would be wasted
     *   effort if the lock is never contended. Instead, the node
     *   is constructed and head and tail pointers are set upon first
     *   contention.
     *
     * * Threads waiting on Conditions use the same nodes, but
     *   use an additional link. Conditions only need to link nodes
     *   in simple (non-concurrent) linked queues because they are
     *   only accessed when the lock is held. Upon await, a node is
     *   inserted into a condition queue. Upon signal, the node is
     *   transferred to the lock queue. A special value of the status
     *   field is used to mark which queue a node is on.
     *
     * * All suspension and resumption of threads uses the JSR166
     *   native park/unpark API. These are safe versions of
     *   suspend/resume (plus timeout support) that avoid the intrinsic
     *   race problems with suspend/resume: Park suspends if not
     *   preceded by an unpark. Unpark resumes if waiting, else causes
     *   the next park not to suspend.
     *
     * * Thanks go to Dave Dice, Mark Moir, Victor Luchangco, Bill
     *   Scherer and Michael Scott, along with members of the JSR-166
     *   expert group, for helpful ideas, discussions, and critiques.
     */
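
    /*
     * Illustration (not part of the implementation) of the park/unpark
     * semantics relied on above; see
     * java.util.concurrent.locks.LockSupport. Unpark grants a single
     * permit; park consumes a pending permit or blocks:
     *
     *   LockSupport.unpark(t);  // may be issued before t parks
     *   // ...later, in thread t:
     *   LockSupport.park();     // returns immediately, consuming the
     *                           // permit left by the earlier unpark,
     *                           // instead of blocking. A second park
     *                           // with no pending permit would block.
     */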

    /**
     * Serialization ID. Note that all fields are defined in a way so
     * that deserialized locks are in initial unlocked state, and
     * there is no explicit serialization code.
     */
    private static final long serialVersionUID = 7373984872572414691L;

    /** Node status value to indicate thread has cancelled */
    static final int CANCELLED =  1;
    /** Node status value to indicate thread needs unparking */
    static final int SIGNAL    = -1;
    /** Node status value to indicate thread is waiting on condition */
    static final int CONDITION = -2;

    /**
     * Node class for threads waiting for locks or conditions. Rather
     * than using special node subtypes for r/w locks and conditions,
     * fields are declared that are only used for these purposes,
     * and ignored when not needed.
     */
    static final class Node {
        /**
         * Status field, taking on only the values:
         *   SIGNAL:    The successor of this node is (or will soon be)
         *              blocked (via park), so the current node must
         *              unpark its successor when it releases the lock
         *              or cancels.
         *   CANCELLED: Node is cancelled due to timeout or interrupt.
         *              Nodes never leave this state. In particular,
         *              a thread with a cancelled node never again blocks.
         *   CONDITION: Node is currently on a condition queue.
         *              It will not be used as a lock queue node until
         *              transferred. (Use of this value here
         *              has nothing to do with the other uses
         *              of the field, but simplifies mechanics.)
         *   0:         None of the above
         *
         * The values are arranged numerically to simplify use.
         * Non-negative values mean that a node doesn't need to
         * signal. So, some code doesn't need to check for particular
         * values, just for sign.
         *
         * The field is initialized to 0 for normal lock nodes, and
         * CONDITION for condition nodes. It is modified only using
         * CAS, except for transitions to CANCELLED, which are
         * unconditionally, "finally" assigned.
         */
        volatile int status;

        /**
         * Link to predecessor node that current node/thread relies on
         * for checking status. Assigned during enqueuing, and nulled
         * out (for sake of GC) only upon dequeuing. Also, upon
         * cancellation of a predecessor, we short-circuit while
         * finding a non-cancelled one, which will always exist
         * because the head node is never cancelled: A node becomes
         * head only as a result of a thread getting the lock. A
         * cancelled thread never gets the lock, and a thread only
         * cancels itself, not any other node.
         */
        volatile Node prev;

        /**
         * Link to the successor node that the current node/thread
         * unparks upon lock release. Assigned once during enqueuing,
         * and nulled out (for sake of GC) when no longer needed.
         * Upon cancellation, we do NOT adjust this field, but simply
         * traverse through next's until we hit a non-cancelled node
         * (or null if at end of queue). The enq operation does not
         * assign the next field of a predecessor until after attachment,
         * so seeing a null next field does not necessarily mean that
         * the node is at the end of the queue. However, if a next
         * field appears to be null, we can scan prev's from the tail
         * to double-check.
         */
        volatile Node next;

        /**
         * Type of lock, used to distinguish readers from writers
         * in read-write locks
         */
        final int mode;

        /**
         * Link to next node waiting on condition. Because condition
         * queues are accessed only when locks are already held, we
         * just need a simple linked queue to hold nodes while they
         * are waiting on conditions. They are then transferred to the
         * lock queue to re-acquire locks.
         */
        Node nextWaiter;

        /**
         * The thread that enqueued this node. Initialized on
         * construction and nulled out after use. Note that this need
         * not be declared volatile since it is always accessed after
         * traversing volatile links, and written before writing
         * links.
         */
        Thread thread;

        Node(Thread thread) {
            this.thread = thread;
            this.mode = 0;
        }

        Node(Thread thread, int mode) {
            this.thread = thread;
            this.mode = mode;
        }

        Node(Thread thread, int mode, int status) {
            this.thread = thread;
            this.mode = mode;
            this.status = status;
        }
    }

    /** true if barging disabled */
    private final boolean fair;

    /**
     * Lock hold status is kept in a separate AtomicInteger. It is
     * logically divided into two shorts: The lower one representing
     * the exclusive (write) lock hold count, and the upper the shared
     * hold count.
     */
    private final AtomicInteger count = new AtomicInteger();

    // shared vs write count extraction constants and functions

    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = (1 << SHARED_SHIFT);
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    /**
     * Return true if count indicates lock is held in exclusive mode
     * @param c a lock status count
     * @return true if count indicates lock is held in exclusive mode
     */
    static final boolean isExclusive(int c) { return (c & EXCLUSIVE_MASK) != 0; }

    /**
     * Return the number of shared holds represented in count
     */
    static final int sharedCount(int c) { return c >>> SHARED_SHIFT; }

    /**
     * Return the number of exclusive holds represented in count
     */
    static final int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

    /** Return current shared count */
    final int getSharedCount() { return sharedCount(count.get()); }
    /** Return current exclusive count */
    final int getExclusiveCount() { return exclusiveCount(count.get()); }
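
    /*
     * Illustration (not part of the implementation) of how shared and
     * exclusive holds pack into the single count word. With
     * SHARED_SHIFT == 16, SHARED_UNIT == 65536:
     *
     *   int c = 3 * SHARED_UNIT + 2; // 3 shared holds, 2 exclusive holds
     *   sharedCount(c)    == 3       // c >>> 16
     *   exclusiveCount(c) == 2       // c & 0xFFFF
     *   isExclusive(c)    == true    // low 16 bits nonzero
     *
     * This is why each shared acquire adds SHARED_UNIT while each
     * exclusive (reentrant) acquire adds 1, and why overflow checks
     * compare against SHARED_UNIT.
     */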

    /** Current (exclusive) owner thread */
    private transient Thread owner;

    /**
     * Head of the wait queue, lazily initialized. Except for
     * initialization, it is modified only by a thread upon acquiring
     * the lock. If head exists, its node status is guaranteed not to
     * be CANCELLED.
     */
    private transient volatile Node head;

    /**
     * Tail of the wait queue, lazily initialized. Modified only via
     * method enq to add new wait node.
     */
    private transient volatile Node tail;

    // Atomics support

    private static final
        AtomicReferenceFieldUpdater<AbstractReentrantLock, Node> tailUpdater =
        AtomicReferenceFieldUpdater.<AbstractReentrantLock, Node>newUpdater
        (AbstractReentrantLock.class, Node.class, "tail");
    private static final
        AtomicReferenceFieldUpdater<AbstractReentrantLock, Node> headUpdater =
        AtomicReferenceFieldUpdater.<AbstractReentrantLock, Node>newUpdater
        (AbstractReentrantLock.class, Node.class, "head");
    static final
        AtomicIntegerFieldUpdater<Node> statusUpdater =
        AtomicIntegerFieldUpdater.newUpdater
        (Node.class, "status");

    /**
     * Creates an instance of <tt>AbstractReentrantLock</tt> with
     * non-fair fairness policy.
     */
    protected AbstractReentrantLock() {
        fair = false;
    }

    /**
     * Creates an instance of <tt>AbstractReentrantLock</tt> with the
     * given fairness policy.
     * @param fair true if this lock should use a fair ordering policy
     */
    protected AbstractReentrantLock(boolean fair) {
        this.fair = fair;
    }

    /*
     * Mode words are used to handle all of the combinations of r/w,
     * interrupt, timeout, etc for lock methods. These are OR'ed
     * together as appropriate for arguments, status fields, and
     * results.
     */

    /** As arg or node field, lock in exclusive mode */
    private static final int EXCLUSIVE      = 0;
    /** As arg or node field, lock in shared mode */
    private static final int SHARED         = 1;
    /** As arg, don't interrupt; as result, was not interrupted */
    private static final int UNINTERRUPTED  = 0;
    /** As arg, allow interrupt; as result, was interrupted */
    private static final int INTERRUPT      = 2;
    /** As arg, allow timeouts; as result, did time out */
    private static final int TIMEOUT        = 4;
    /** As arg, reset interrupt status upon return */
    private static final int REINTERRUPT    = 8;
    /** As arg, don't acquire unowned lock unless unfair or queue is empty */
    private static final int CHECK_FAIRNESS = 16;

    /**
     * Return true if mode denotes a shared-lock
     * @param mode mode
     * @return true if shared mode
     */
    static final boolean isSharedMode(int mode) { return (mode & SHARED) != 0; }
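
    /*
     * Illustration (not part of the implementation): mode words are
     * independent bit flags, so a timed, interruptible shared acquire
     * is requested as the OR of the relevant flags, and each property
     * is recovered with a mask:
     *
     *   int mode = SHARED | INTERRUPT | TIMEOUT;  // == 1 + 2 + 4 == 7
     *   isSharedMode(mode)         // true: (mode & SHARED) != 0
     *   (mode & TIMEOUT) != 0      // true: timeout requested
     *   (mode & REINTERRUPT) != 0  // false here; doLock ORs it in if
     *                              // interrupted while waiting in
     *                              // uninterruptible mode
     */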

    /**
     * Try to set lock status
     * @param mode lock mode
     * @param current current thread
     * @return true if successful
     */
    private boolean tryAcquire(int mode, Thread current) {
        final AtomicInteger count = this.count;
        boolean nobarge = (mode & CHECK_FAIRNESS) != 0 && fair;
        for (;;) {
            int c = count.get();
            int w = exclusiveCount(c);
            int r = sharedCount(c);
            if (isSharedMode(mode)) {
                if (w != 0 && current != owner)
                    return false;
                if (nobarge && head != tail)
                    return false;
                int nextc = c + SHARED_UNIT;
                if (nextc < c)
                    throw new Error("Maximum lock count exceeded");
                if (count.compareAndSet(c, nextc))
                    return true;
            }
            else {
                if (r != 0)
                    return false;
                if (w != 0) {
                    if (current != owner)
                        return false;
                    if (w + 1 >= SHARED_UNIT)
                        throw new Error("Maximum lock count exceeded");
                    if (count.compareAndSet(c, c + 1))
                        return true;
                }
                else {
                    if (nobarge && head != tail)
                        return false;
                    if (count.compareAndSet(c, c + 1)) {
                        owner = current;
                        return true;
                    }
                }
            }
            // Recheck count if lost any of the above CAS's
        }
    }

    // Queuing utilities

    /**
     * Initialize queue. Called on first contended lock attempt
     */
    private void initializeQueue() {
        Node t = tail;
        if (t == null) {
            Node h = new Node(null);
            while ((t = tail) == null) {
                if (headUpdater.compareAndSet(this, null, h))
                    tail = h;
                else
                    Thread.yield();
            }
        }
    }

    /**
     * Insert node into queue, initializing head and tail if necessary.
     * @param node the node to insert
     */
    private void enq(Node node) {
        Node t = tail;
        if (t == null) {         // Must initialize first
            initializeQueue();
            t = tail;
        }
        for (;;) {
            node.prev = t;       // Prev field must be valid before/upon CAS
            if (tailUpdater.compareAndSet(this, t, node)) {
                t.next = node;   // Next field assignment lags CAS
                return;
            }
            t = tail;
        }
    }
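
    /*
     * Illustration (a sketch, not executable here) of the enq race the
     * comments above describe: between the tail CAS and the t.next
     * assignment, another thread reading t.next sees null even though
     * node is already queued. Readers needing the true successor
     * therefore fall back to a backward scan (findSuccessorFromTail):
     *
     *   // writer W:                     // reader R:
     *   node.prev = t;
     *   CAS(tail, t, node);              // node is now queued...
     *                                    s = t.next;  // ...but may read null
     *   t.next = node;                   if (s == null) scan prev's from tail
     */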

    /**
     * Wake up node's successor, if one exists.
     * @param node the node
     */
    private void unparkSuccessor(Node node) {
        /*
         * Reset status before unparking. This improves performance
         * when called from unlock to release next thread: A given
         * head-node can be in effect across multiple unlockers
         * that acquired by barging. When they do so, and later
         * unlock, the successor that lost a previous race and
         * re-parked must be re-unparked. But otherwise, we'd like to
         * minimize unnecessary calls to unpark, which may be
         * relatively expensive. We don't bother to loop on failed CAS
         * here though, since the reset is just for performance. Note
         * that the CAS will fail when this method is called from
         * cancellation code since status will be set to CANCELLED.
         * This doesn't occur frequently enough to bother avoiding.
         */
        statusUpdater.compareAndSet(node, SIGNAL, 0);

        /*
         * Successor is normally just the next node. But if cancelled
         * or apparently null, traverse backwards from tail to find
         * the actual non-cancelled successor.
         */
        Node s = node.next;
        if ((s != null && s.status != CANCELLED) ||
            (s = findSuccessorFromTail(node)) != null)
            LockSupport.unpark(s.thread);
    }

    /**
     * Find the successor of a node, working backwards from the tail
     * @param node the node
     * @return successor, or null if there isn't one.
     */
    private Node findSuccessorFromTail(Node node) {
        Node s = tail;
        if (s == null || s == node)
            return null;
        Node p = s.prev;
        for (;;) {
            if (p == null || p == node)
                return s;
            if (p.status != CANCELLED)
                s = p;
            p = p.prev;
        }
    }

    /**
     * Main locking code.
     * @param mode mode representing r/w, interrupt, timeout control
     * @param nanos timeout time
     * @return UNINTERRUPTED on success, INTERRUPT on interrupt,
     * TIMEOUT on timeout
     */
    private int doLock(int mode, long nanos) {
        final Thread current = Thread.currentThread();

        if ((mode & INTERRUPT) != 0 && Thread.interrupted())
            return INTERRUPT;

        // Try initial barging or recursive acquire
        if (tryAcquire(mode | CHECK_FAIRNESS, current))
            return UNINTERRUPTED;

        long lastTime = ((mode & TIMEOUT) == 0) ? 0 : System.nanoTime();

        final Node node = new Node(current, mode & SHARED);
        enq(node);

        /*
         * Repeatedly try to set lock status if first in queue; block
         * (park) on failure. If we are the first thread in queue, we
         * must try to get the lock, and we must not try to get the
         * lock if we are not first. We can park only if we are sure
         * that some other thread holds the lock and will signal us.
         * Along the way, make sure that the predecessor hasn't been
         * cancelled. If it has, relink to its predecessor. When
         * about to park, first try to set status enabling the
         * lock-holder to signal, and then recheck one final time
         * before actually blocking. This also has the effect of
         * retrying a failed status CAS due to contention.
         */

        for (;;) {
            Node p = node.prev;
            if (p == head && tryAcquire(mode, current)) {
                p.next = null;
                node.thread = null;
                node.prev = null;
                head = node;
                if (isSharedMode(mode) && node.status < 0) {
                    Node s = node.next; // wake up other readers
                    if (s == null || isSharedMode(s.mode))
                        unparkSuccessor(node);
                }

                if ((mode & REINTERRUPT) != 0)
                    current.interrupt();
                return UNINTERRUPTED;
            }

            int status = p.status;
            if (status == 0)
                statusUpdater.compareAndSet(p, 0, SIGNAL);
            else if (status == CANCELLED)
                node.prev = p.prev;
            else {
                if ((mode & TIMEOUT) != 0) {
                    if (nanos > 0) {
                        long now = System.nanoTime();
                        nanos -= now - lastTime;
                        lastTime = now;
                    }
                    if (nanos <= 0) {
                        node.thread = null;
                        node.status = CANCELLED;
                        unparkSuccessor(node);
                        return TIMEOUT;
                    }
                    else
                        LockSupport.parkNanos(nanos);
                }
                else
                    LockSupport.park();

                if (Thread.interrupted()) {
                    if ((mode & INTERRUPT) != 0) {
                        node.thread = null;
                        node.status = CANCELLED;
                        unparkSuccessor(node);
                        return INTERRUPT;
                    }
                    else
                        mode |= REINTERRUPT;
                }
            }
        }
    }

    /**
     * Re-acquire lock after a wait, resetting lock count.
     * Assumes (does not check) that lock was, and will be, held in
     * exclusive-mode.
     * @param current the waiting thread
     * @param node its node
     * @param holds number of holds on lock before entering wait
     * @return true if interrupted while re-acquiring lock
     */
    boolean relock(Thread current, Node node, int holds) {
        boolean interrupted = false;
        for (;;) {
            Node p = node.prev;
            if (p == head && tryAcquire(EXCLUSIVE, current)) {
                if (holds != 1)
                    count.set(holds);
                p.next = null;
                node.thread = null;
                node.prev = null;
                head = node;
                return interrupted;
            }

            int status = p.status;
            if (status == 0)
                statusUpdater.compareAndSet(p, 0, SIGNAL);
            else if (status == CANCELLED)
                node.prev = p.prev;
            else {
                LockSupport.park();
                if (Thread.interrupted())
                    interrupted = true;
            }
        }
    }

    // Exportable versions of main lock/unlock methods

    /** Acquire shared lock */
    final void lockShared() {
        if (!fair || head == tail) { // fast path
            final AtomicInteger count = this.count;
            int c = count.get();
            if (!isExclusive(c) && count.compareAndSet(c, c + SHARED_UNIT))
                return;
        }
        doLock(SHARED | UNINTERRUPTED, 0);
    }

    /** Trylock for shared lock */
    final boolean tryLockShared() {
        final AtomicInteger count = this.count;
        int c = count.get();
        if (!isExclusive(c) && count.compareAndSet(c, c + SHARED_UNIT))
            return true;
        return tryAcquire(SHARED, Thread.currentThread());
    }

    /** Timed trylock for shared lock */
    final boolean tryLockShared(long timeout, TimeUnit unit) throws InterruptedException {
        int s = doLock(SHARED | INTERRUPT | TIMEOUT, unit.toNanos(timeout));
        if (s == UNINTERRUPTED)
            return true;
        if (s != INTERRUPT)
            return false;
        throw new InterruptedException();
    }

    /** Interruptibly lock shared lock */
    final void lockInterruptiblyShared() throws InterruptedException {
        if (doLock(SHARED | INTERRUPT, 0) == INTERRUPT)
            throw new InterruptedException();
    }

    /** Release shared lock */
    final void unlockShared() {
        final AtomicInteger count = this.count;
        for (;;) {
            int c = count.get();
            int nextc = c - SHARED_UNIT;
            if (nextc < 0)
                throw new IllegalMonitorStateException();
            if (count.compareAndSet(c, nextc)) {
                if (nextc == 0) {
                    Node h = head;
                    if (h != null && h.status < 0)
                        unparkSuccessor(h);
                }
                return;
            }
        }
    }

    /** Acquire exclusive (write) lock */
    final void lockExclusive() {
        if ((!fair || head == tail) && count.compareAndSet(0, 1))
            owner = Thread.currentThread();
        else
            doLock(UNINTERRUPTED, 0);
    }

    /** Trylock for exclusive (write) lock */
    final boolean tryLockExclusive() {
        if (count.compareAndSet(0, 1)) {
            owner = Thread.currentThread();
            return true;
        }
        return tryAcquire(EXCLUSIVE, Thread.currentThread());
    }

    /** Release exclusive (write) lock */
    final void unlockExclusive() {
        Node h;
        final AtomicInteger count = this.count;
        Thread current = Thread.currentThread();
        if (count.get() != 1 || current != owner)
            slowUnlockExclusive(current);
        else {
            owner = null;
            count.set(0);
            if ((h = head) != null && h.status < 0)
                unparkSuccessor(h);
        }
    }

    /**
     * Handle uncommon cases for unlockExclusive
     * @param current current Thread
     */
    private void slowUnlockExclusive(Thread current) {
        Node h;
        int c = count.get();
        int w = exclusiveCount(c) - 1;
        if (w < 0 || owner != current)
            throw new IllegalMonitorStateException();
        if (w == 0)
            owner = null;
        count.set(c - 1);
        if (w == 0 && (h = head) != null && h.status < 0)
            unparkSuccessor(h);
    }

    /** Timed trylock for exclusive (write) lock */
    final boolean tryLockExclusive(long timeout, TimeUnit unit) throws InterruptedException {
        int s = doLock(INTERRUPT | TIMEOUT, unit.toNanos(timeout));
        if (s == UNINTERRUPTED)
            return true;
        if (s != INTERRUPT)
            return false;
        throw new InterruptedException();
    }

    /** Interruptibly lock exclusive (write) lock */
    final void lockInterruptiblyExclusive() throws InterruptedException {
        if (doLock(INTERRUPT, 0) == INTERRUPT)
            throw new InterruptedException();
    }

    /**
     * Fully unlock the lock, setting lock holds to zero. Assumes
     * (does not check) that lock is held in exclusive-mode.
     * @return current hold count.
     */
    final int fullyUnlock() {
        int holds = count.get();
        owner = null;
        count.set(0);
        Node h = head;
        if (h != null && h.status < 0)
            unparkSuccessor(h);
        return holds;
    }

    // Instrumentation and status

    /**
     * Return true if this lock has fairness set true.
     * @return true if this lock has fairness set true.
     */
    public boolean isFair() {
        return fair;
    }

    /**
     * Returns an estimate of the number of threads waiting to acquire
     * this lock. The value is only an estimate because the number of
     * threads may change dynamically while this method traverses
     * internal data structures. This method is designed for use in
     * monitoring of the system state, not for synchronization
     * control.
     * @return the estimated number of threads waiting for this lock
     */
    public int getQueueLength() {
        int n = 0;
        for (Node p = tail; p != null && p != head; p = p.prev)
            ++n;
        return n;
    }

    /**
     * Returns a collection containing threads that may be waiting to
     * acquire this lock. Because the actual set of threads may
     * change dynamically while constructing this result, the returned
     * collection is only a best-effort estimate. The elements of the
     * returned collection are in no particular order. This method is
     * designed to facilitate construction of subclasses that provide
     * more extensive lock monitoring facilities.
     * @return the collection of threads
     */
    protected Collection<Thread> getQueuedThreads() {
        ArrayList<Thread> list = new ArrayList<Thread>();
        for (Node p = tail; p != null; p = p.prev) {
            Thread t = p.thread;
            if (t != null)
                list.add(t);
        }
        return list;
    }
857 |
|
858 |
/** |
859 |
* Returns a collection containing threads that may be waiting to |
860 |
* acquire lock with given mode. |
861 |
* @param shared true if shared mode, else exclusive |
862 |
* @return the collection of threads |
863 |
*/ |
864 |
Collection<Thread> getQueuedThreads(boolean shared) { |
865 |
int mode = shared? SHARED : EXCLUSIVE; |
866 |
ArrayList<Thread> list = new ArrayList<Thread>(); |
867 |
for (Node p = tail; p != null; p = p.prev) { |
868 |
if (p.mode == mode) { |
869 |
Thread t = p.thread; |
870 |
if (t != null) |
871 |
list.add(t); |
872 |
} |
873 |
} |
874 |
return list; |
875 |
} |
876 |
|
877 |
/** |
878 |
* Returns the thread that currently owns the exclusive lock, or |
879 |
* <tt>null</tt> if not owned. Note that the owner may be |
880 |
* momentarily <tt>null</tt> even if there are threads trying to |
881 |
* acquire the lock but have not yet done so. This method is |
882 |
* designed to facilitate construction of subclasses that provide |
883 |
* more extensive lock monitoring facilities. |
884 |
* @return the owner, or <tt>null</tt> if not owned. |
885 |
*/ |
886 |
protected Thread getOwner() { |
887 |
return (isExclusive(count.get()))? owner : null; |
888 |
} |
889 |
|
890 |
/** |
891 |
* Throw IllegalMonitorStateException if given thread is not owner |
892 |
* @param thread the thread |
893 |
* @throws IllegalMonitorStateException if thread not owner |
894 |
*/ |
895 |
final void checkOwner(Thread thread) { |
896 |
if (!isExclusive(count.get()) || owner != thread) |
897 |
throw new IllegalMonitorStateException(); |
898 |
} |
899 |
|
900 |
/** |
901 |
* Throw IllegalMonitorStateException if given thread cannot |
902 |
* wait on condition |
903 |
* @param thread the thread |
904 |
* @throws IllegalMonitorStateException if thread cannot wait |
905 |
*/ |
906 |
final void checkOwnerForWait(Thread thread) { |
907 |
// Waiters cannot hold shared locks |
908 |
if (count.get() != 1 || owner != thread) |
909 |
throw new IllegalMonitorStateException(); |
910 |
} |
911 |
|
912 |
/** |
913 |
* Return true if a node, always one that was initially placed on |
914 |
* a condition queue, is now on the lock queue. |
915 |
* @param node the node |
916 |
* @return true if on lock queue |
917 |
*/ |
918 |
final boolean isOnLockQueue(Node node) { |
919 |
if (node.status == CONDITION || node.prev == null) |
920 |
return false; |
921 |
// If node has successor, it must be on lock queue |
922 |
if (node.next != null) |
923 |
return true; |
924 |
/* |
925 |
* node.prev can be non-null, but not yet on lock queue because |
926 |
* the CAS to place it on queue can fail. So we have to |
927 |
* traverse from tail to make sure it actually made it. It |
928 |
* will always be near the tail in calls to this method, and |
929 |
* unless the CAS failed (which is unlikely), it will be |
930 |
* there, so we hardly ever traverse much. |
931 |
*/ |
932 |
Node t = tail; |
933 |
for (;;) { |
934 |
if (t == node) |
935 |
return true; |
936 |
if (t == null) |
937 |
return false; |
938 |
t = t.prev; |
939 |
} |
940 |
} |
941 |
|
942 |
/** |
943 |
* Transfer a node from a condition queue onto lock queue. |
944 |
* Return true if successful. |
945 |
* @param node the node |
946 |
* @return true if succesfully transferred (else the node was |
947 |
* cancelled before signal). |
948 |
*/ |
949 |
final boolean transferForSignal(Node node) { |
950 |
/* |
951 |
* If cannot change status, the node has been cancelled. |
952 |
*/ |
953 |
if (!statusUpdater.compareAndSet(node, CONDITION, 0)) |
954 |
return false; |
955 |
|
956 |
/* |
957 |
* Splice onto queue and try to set status of predecessor to |
958 |
* indicate that thread is (probably) waiting. If cancelled or |
959 |
* attempt to set status fails, wake up to resynch (in which |
960 |
* case the status can be transiently/harmlessly wrong). |
961 |
*/ |
962 |
enq(node); |
963 |
Node p = node.prev; |
964 |
int c = p.status; |
965 |
if (c == CANCELLED || !statusUpdater.compareAndSet(p, c, SIGNAL)) |
966 |
LockSupport.unpark(node.thread); |
967 |
|
968 |
return true; |
969 |
} |
970 |
|
971 |
/** |
972 |
* Transfer node, if necessary, to lock queue after a cancelled |
973 |
* wait. Return true if thread was cancelled before being |
974 |
* signalled. |
975 |
* @param current the waiting thread |
976 |
* @param node its node |
977 |
* @return true if cancelled before the node was signalled. |
978 |
*/ |
979 |
final boolean transferAfterCancelledWait(Thread current, Node node) { |
980 |
if (statusUpdater.compareAndSet(node, CONDITION, 0)) { |
981 |
enq(node); |
982 |
return true; |
983 |
} |
984 |
else { |
985 |
/* |
986 |
* If we lost out to a signal(), then we can't proceed |
987 |
* until it finishes its enq(). Cancelling during an |
988 |
* incomplete transfer is both rare and transient, so just |
989 |
* spin. |
990 |
*/ |
991 |
while (!isOnLockQueue(node)) |
992 |
Thread.yield(); |
993 |
return false; |
994 |
} |
995 |
} |
996 |
|
997 |
// Serialization support |
998 |
|
999 |
/** |
1000 |
* Reconstitute this lock instance from a stream (that is, |
1001 |
* deserialize it). |
1002 |
* @param s the stream |
1003 |
*/ |
1004 |
private void readObject(java.io.ObjectInputStream s) |
1005 |
throws java.io.IOException, ClassNotFoundException { |
1006 |
s.defaultReadObject(); |
1007 |
count.set(0); // reset to unlocked state |
1008 |
} |
1009 |
|
1010 |
/** |
1011 |
* Condition implementation for use with <tt>AbstractReentrantLock</tt>. |
1012 |
* Instances of this class can be constructed only by subclasses. |
1013 |
* |
1014 |
* <p>In addition to implementing the {@link Condition} interface, |
1015 |
* this class defines methods <tt>hasWaiters</tt> and |
1016 |
* <tt>getWaitQueueLength</tt>, as well as associated |
1017 |
* <tt>protected</tt> access methods that may be useful for |
1018 |
* instrumentation and monitoring. |
1019 |
*/ |
1020 |
protected static class AbstractConditionObject implements Condition, java.io.Serializable { |
1021 |
|
1022 |
private static final long serialVersionUID = 1173984872572414699L; |
1023 |
|
1024 |
/** The lock we are serving as a condition for. */ |
1025 |
private final AbstractReentrantLock lock; |
1026 |
/** First node of condition queue. */ |
1027 |
private transient Node firstWaiter; |
1028 |
/** Last node of condition queue. */ |
1029 |
private transient Node lastWaiter; |
1030 |
|
1031 |
/** |
1032 |
* Constructor for use by subclasses to create a |
1033 |
* AbstractConditionObject associated with given lock. |
1034 |
* @param lock the lock for this condition |
1035 |
* @throws NullPointerException if lock null |
1036 |
*/ |
1037 |
protected AbstractConditionObject(AbstractReentrantLock lock) { |
1038 |
if (lock == null) |
1039 |
throw new NullPointerException(); |
1040 |
this.lock = lock; |
1041 |
} |
1042 |
|
1043 |
// Internal methods |
1044 |
|
1045 |
/** |
1046 |
* Add a new waiter to wait queue |
1047 |
* @param current the thread that will be waiting |
1048 |
* @return its new wait node |
1049 |
*/ |
1050 |
private Node addConditionWaiter(Thread current) { |
1051 |
Node w = new Node(current, 0, CONDITION); |
1052 |
Node t = lastWaiter; |
1053 |
if (t == null) |
1054 |
firstWaiter = w; |
1055 |
else |
1056 |
t.nextWaiter = w; |
1057 |
lastWaiter = w; |
1058 |
return w; |
1059 |
} |
1060 |
|
1061 |
/** |
1062 |
* Remove and transfer nodes until hit non-cancelled one or |
1063 |
* null. Split out from signal in part to encourage compilers |
1064 |
* to inline the case of no waiters. |
1065 |
* @param first (non-null) the first node on condition queue |
1066 |
*/ |
1067 |
private void doSignal(Node first) { |
1068 |
do { |
1069 |
if ( (firstWaiter = first.nextWaiter) == null) |
1070 |
lastWaiter = null; |
1071 |
first.nextWaiter = null; |
1072 |
} while (!lock.transferForSignal(first) && |
1073 |
(first = firstWaiter) != null); |
1074 |
} |
1075 |
|
1076 |
/** |
1077 |
* Remove and transfer all nodes. |
1078 |
* @param first (non-null) the first node on condition queue |
1079 |
*/ |
1080 |
private void doSignalAll(Node first) { |
1081 |
lastWaiter = firstWaiter = null; |
1082 |
do { |
1083 |
Node next = first.nextWaiter; |
1084 |
first.nextWaiter = null; |
1085 |
lock.transferForSignal(first); |
1086 |
first = next; |
1087 |
} while (first != null); |
1088 |
} |
1089 |
|
1090 |
// public methods |
1091 |
|
1092 |
/** |
1093 |
* Wakes up one waiting thread. |
1094 |
* |
1095 |
* <p>If any threads are waiting on this condition then one is |
1096 |
* selected for waking up. This implementation always chooses |
1097 |
* to wake up the longest-waiting thread whose wait has not |
1098 |
* been interrupted or timed out. That thread must then |
1099 |
* re-acquire the lock before it returns. The order in which |
1100 |
* it will do so is the same as that for threads initially |
1101 |
* acquiring the lock, which is in the default case not |
1102 |
* specified, but for <em>fair</em> locks favors those threads |
1103 |
* that have been waiting the longest. Note that an awakened |
1104 |
* thread can return, at the soonest, only after the current |
1105 |
* thread releases the lock associated with this Condition. |
1106 |
* |
1107 |
* @throws IllegalMonitorStateException if the lock associated |
1108 |
* with this Condition is not held by the current thread |
1109 |
**/ |
1110 |
public void signal() { |
1111 |
lock.checkOwner(Thread.currentThread()); |
1112 |
Node w = firstWaiter; |
1113 |
if (w != null) |
1114 |
doSignal(w); |
1115 |
} |
1116 |
|
1117 |
/** |
1118 |
* Wake up all waiting threads. |
1119 |
* |
1120 |
* <p>If any threads are waiting on this condition then they |
1121 |
* are all woken up. Each thread must re-acquire the lock |
1122 |
* before it returns. |
1123 |
* @throws IllegalMonitorStateException if the lock associated |
1124 |
* with this Condition is not held by the current thread |
1125 |
*/ |
1126 |
public void signalAll() { |
1127 |
lock.checkOwner(Thread.currentThread()); |
1128 |
Node w = firstWaiter; |
1129 |
if (w != null) |
1130 |
doSignalAll(w); |
1131 |
} |
1132 |
|
1133 |
/* |
1134 |
* Various flavors of wait. Each almost the same, but |
1135 |
* annoyingly different. |
1136 |
*/ |
1137 |
|
1138 |
|
1139 |
/** |
1140 |
* Causes the current thread to wait until it is signalled. |
1141 |
* |
1142 |
* <p>The lock associated with this condition is atomically |
1143 |
* released and the current thread becomes disabled for thread |
1144 |
* scheduling purposes and lies dormant until <em>one</em> of |
1145 |
* the following happens: |
1146 |
* |
1147 |
* <ul> |
1148 |
* |
1149 |
* <li>Some other thread invokes the {@link #signal} method |
1150 |
* for this <tt>Condition</tt> and the current thread |
1151 |
* has been waiting the longest of all waiting threads; or |
1152 |
* |
1153 |
* <li>Some other thread invokes the {@link #signalAll} method |
1154 |
* for this <tt>Condition</tt> |
1155 |
* |
1156 |
* </ul> |
1157 |
* |
1158 |
* <p>In all cases, before this method can return the current |
1159 |
* thread must re-acquire the lock associated with this |
1160 |
* condition. When the thread returns it is |
1161 |
* <em>guaranteed</em> to hold this lock. |
1162 |
* |
1163 |
* <p>If the current thread's interrupt status is set when it |
1164 |
* enters this method, or it is {@link Thread#interrupt |
1165 |
* interrupted} while waiting, it will continue to wait until |
1166 |
* signalled. When it finally returns from this method its |
1167 |
* <em>interrupted status</em> will still be set. |
1168 |
* |
1169 |
* @throws IllegalMonitorStateException if the lock associated |
1170 |
* with this Condition is not held by the current thread |
1171 |
*/ |
1172 |
public void awaitUninterruptibly() { |
1173 |
Thread current = Thread.currentThread(); |
1174 |
lock.checkOwnerForWait(current); |
1175 |
Node w = addConditionWaiter(current); |
1176 |
int holds = lock.fullyUnlock(); |
1177 |
boolean interrupted = false; |
1178 |
while (!lock.isOnLockQueue(w)) { |
1179 |
LockSupport.park(); |
1180 |
if (Thread.interrupted()) |
1181 |
interrupted = true; |
1182 |
} |
1183 |
if (lock.relock(current, w, holds)) |
1184 |
interrupted = true; |
1185 |
if (interrupted) |
1186 |
current.interrupt(); |
1187 |
} |
1188 |
|
1189 |
/** |
1190 |
* Causes the current thread to wait until it is signalled or |
1191 |
* {@link Thread#interrupt interrupted}. |
1192 |
* |
1193 |
* <p>The lock associated with this <tt>Condition</tt> is |
1194 |
* atomically released and the current thread becomes disabled |
1195 |
* for thread scheduling purposes and lies dormant until |
1196 |
* <em>one</em> of the following happens: |
1197 |
* |
1198 |
* <ul> |
1199 |
* |
1200 |
* <li>Some other thread invokes the {@link #signal} method |
1201 |
* for this <tt>Condition</tt> and the current thread |
1202 |
* has been waiting the longest of all waiting threads; or |
1203 |
* |
1204 |
* <li>Some other thread invokes the {@link #signalAll} method |
1205 |
* for this <tt>Condition</tt>; or |
1206 |
* |
1207 |
* <li>Some other thread {@link Thread#interrupt interrupts} |
1208 |
* the current thread |
1209 |
* |
1210 |
* </ul> |
1211 |
* |
1212 |
* <p>In all cases, before this method can return the current |
1213 |
* thread must re-acquire the lock associated with this |
1214 |
* condition. When the thread returns it is |
1215 |
* <em>guaranteed</em> to hold this lock. |
1216 |
* |
1217 |
* <p>If the current thread has its interrupted status set on |
1218 |
* entry to this method or is {@link Thread#interrupt |
1219 |
* interrupted} while waiting, then {@link |
1220 |
* InterruptedException} is thrown and the current thread's |
1221 |
* interrupted status is cleared. |
1222 |
* |
1223 |
* @throws InterruptedException if the current thread is |
1224 |
* interrupted |
1225 |
* @throws IllegalMonitorStateException if the lock associated |
1226 |
* with this Condition is not held by the current thread |
1227 |
**/ |
1228 |
public void await() throws InterruptedException { |
1229 |
Thread current = Thread.currentThread(); |
1230 |
lock.checkOwnerForWait(current); |
1231 |
if (Thread.interrupted()) |
1232 |
throw new InterruptedException(); |
1233 |
|
1234 |
Node w = addConditionWaiter(current); |
1235 |
int holds = lock.fullyUnlock(); |
1236 |
boolean throwIE = false; |
1237 |
boolean interrupted = false; |
1238 |
|
1239 |
for (;;) { |
1240 |
if (Thread.interrupted()) { |
1241 |
if (lock.transferAfterCancelledWait(current, w)) |
1242 |
throwIE = true; |
1243 |
else |
1244 |
interrupted = true; |
1245 |
break; |
1246 |
} |
1247 |
if (lock.isOnLockQueue(w)) |
1248 |
break; |
1249 |
LockSupport.park(); |
1250 |
} |
1251 |
|
1252 |
if (lock.relock(current, w, holds)) |
1253 |
interrupted = true; |
1254 |
if (throwIE) |
1255 |
throw new InterruptedException(); |
1256 |
if (interrupted) |
1257 |
current.interrupt(); |
1258 |
} |
1259 |
|
1260 |
/** |
1261 |
* Causes the current thread to wait until it is signalled or |
1262 |
* interrupted, or the specified waiting time elapses. |
1263 |
* |
1264 |
* <p>The lock associated with this condition is atomically |
1265 |
* released and the current thread becomes disabled for thread |
1266 |
* scheduling purposes and lies dormant until <em>one</em> of |
1267 |
* the following happens: |
1268 |
* |
1269 |
* <ul> |
1270 |
* |
1271 |
* <li>Some other thread invokes the {@link #signal} method |
1272 |
* for this <tt>Condition</tt> and the current thread |
1273 |
* has been waiting the longest of all waiting threads; or |
1274 |
* |
1275 |
* <li>Some other thread invokes the {@link #signalAll} method |
1276 |
* for this <tt>Condition</tt>; or |
1277 |
* |
1278 |
* <li>Some other thread {@link Thread#interrupt interrupts} |
1279 |
* the current thread; or |
1280 |
* |
1281 |
* <li>The specified waiting time elapses |
1282 |
* |
1283 |
* </ul> |
1284 |
* |
1285 |
* <p>In all cases, before this method can return the current |
1286 |
* thread must re-acquire the lock associated with this |
1287 |
* condition. When the thread returns it is |
1288 |
* <em>guaranteed</em> to hold this lock. |
1289 |
* |
1290 |
* <p>If the current thread has its interrupted status set on |
1291 |
* entry to this method or is {@link Thread#interrupt |
1292 |
* interrupted} while waiting, then {@link |
1293 |
* InterruptedException} is thrown and the current thread's |
1294 |
* interrupted status is cleared. |
1295 |
* |
1296 |
* <p>The method returns an estimate of the number of nanoseconds |
1297 |
* remaining to wait given the supplied <tt>nanosTimeout</tt> |
1298 |
* value upon return, or a value less than or equal to zero if it |
1299 |
* timed out. This value can be used to determine whether and how |
1300 |
* long to re-wait in cases where the wait returns but an awaited |
1301 |
* condition still does not hold. |
1302 |
* |
1303 |
* @param nanosTimeout the maximum time to wait, in nanoseconds |
1304 |
* @return A value less than or equal to zero if the wait has |
1305 |
* timed out; otherwise an estimate, that |
1306 |
* is strictly less than the <tt>nanosTimeout</tt> argument, |
1307 |
* of the time still remaining when this method returned. |
1308 |
* |
1309 |
* @throws InterruptedException if the current thread is |
1310 |
* interrupted. |
1311 |
* @throws IllegalMonitorStateException if the lock associated |
1312 |
* with this Condition is not held by the current thread |
1313 |
*/ |
1314 |
public long awaitNanos(long nanosTimeout) throws InterruptedException { |
1315 |
Thread current = Thread.currentThread(); |
1316 |
lock.checkOwnerForWait(current); |
1317 |
if (Thread.interrupted()) |
1318 |
throw new InterruptedException(); |
1319 |
|
1320 |
Node w = addConditionWaiter(current); |
1321 |
int holds = lock.fullyUnlock(); |
1322 |
long lastTime = System.nanoTime(); |
1323 |
boolean throwIE = false; |
1324 |
boolean interrupted = false; |
1325 |
|
1326 |
for (;;) { |
1327 |
if (Thread.interrupted()) { |
1328 |
if (lock.transferAfterCancelledWait(current, w)) |
1329 |
throwIE = true; |
1330 |
else |
1331 |
interrupted = true; |
1332 |
break; |
1333 |
} |
1334 |
if (nanosTimeout <= 0L) { |
1335 |
lock.transferAfterCancelledWait(current, w); |
1336 |
break; |
1337 |
} |
1338 |
if (lock.isOnLockQueue(w)) |
1339 |
break; |
1340 |
LockSupport.parkNanos(nanosTimeout); |
1341 |
long now = System.nanoTime(); |
1342 |
nanosTimeout -= now - lastTime; |
1343 |
lastTime = now; |
1344 |
} |
1345 |
|
1346 |
if (lock.relock(current, w, holds)) |
1347 |
interrupted = true; |
1348 |
if (throwIE) |
1349 |
throw new InterruptedException(); |
1350 |
if (interrupted) |
1351 |
current.interrupt(); |
1352 |
return nanosTimeout - (System.nanoTime() - lastTime); |
1353 |
} |
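        // The javadoc above suggests using the returned remaining-time
        // estimate to re-wait when the awaited condition still does not
        // hold. A minimal sketch of that idiom, written against the public
        // ReentrantLock subclass (class and field names here are
        // illustrative, not part of this file):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class AwaitNanosDemo {
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition ready = lock.newCondition();
    static boolean flag = false;

    // Waits up to the given timeout for flag to become true, re-waiting
    // with the remaining time after each early wakeup.
    static boolean awaitFlag(long timeout, TimeUnit unit)
            throws InterruptedException {
        lock.lock();
        try {
            long nanos = unit.toNanos(timeout);
            while (!flag) {
                if (nanos <= 0L)
                    return false;                // timed out
                nanos = ready.awaitNanos(nanos); // estimate of time left
            }
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(new Runnable() {
            public void run() {
                lock.lock();
                try {
                    flag = true;
                    ready.signal();
                } finally {
                    lock.unlock();
                }
            }
        }).start();
        System.out.println(awaitFlag(2, TimeUnit.SECONDS));
    }
}
```

        // The loop guards against the condition not holding on wakeup:
        // awaitNanos may return with time remaining even though flag is
        // still false, in which case the thread simply re-waits.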

        /**
         * Causes the current thread to wait until it is signalled or
         * interrupted, or the specified deadline elapses.
         *
         * <p>The lock associated with this condition is atomically
         * released and the current thread becomes disabled for thread
         * scheduling purposes and lies dormant until <em>one</em> of
         * the following happens:
         *
         * <ul>
         *
         * <li>Some other thread invokes the {@link #signal} method
         * for this <tt>Condition</tt> and the current thread
         * has been waiting the longest of all waiting threads; or
         *
         * <li>Some other thread invokes the {@link #signalAll} method
         * for this <tt>Condition</tt>; or
         *
         * <li>Some other thread {@link Thread#interrupt interrupts}
         * the current thread; or
         *
         * <li>The specified deadline elapses
         *
         * </ul>
         *
         * <p>In all cases, before this method can return the current
         * thread must re-acquire the lock associated with this
         * condition. When the thread returns it is
         * <em>guaranteed</em> to hold this lock.
         *
         * <p>If the current thread has its interrupted status set on
         * entry to this method or is {@link Thread#interrupt
         * interrupted} while waiting, then {@link
         * InterruptedException} is thrown and the current thread's
         * interrupted status is cleared.
         *
         * @param deadline the absolute time to wait until
         * @return <tt>false</tt> if the deadline has
         * elapsed upon return, else <tt>true</tt>.
         *
         * @throws InterruptedException if the current thread is interrupted
         * @throws IllegalMonitorStateException if the lock associated
         * with this Condition is not held by the current thread
         * @throws NullPointerException if deadline is null
         */
        public boolean awaitUntil(Date deadline) throws InterruptedException {
            if (deadline == null)
                throw new NullPointerException();
            Thread current = Thread.currentThread();
            lock.checkOwnerForWait(current);
            if (Thread.interrupted())
                throw new InterruptedException();

            Node w = addConditionWaiter(current);
            int holds = lock.fullyUnlock();
            long abstime = deadline.getTime();
            boolean timedout = false;
            boolean throwIE = false;
            boolean interrupted = false;

            for (;;) {
                if (Thread.interrupted()) {
                    if (lock.transferAfterCancelledWait(current, w))
                        throwIE = true;
                    else
                        interrupted = true;
                    break;
                }
                if (System.currentTimeMillis() > abstime) {
                    timedout = lock.transferAfterCancelledWait(current, w);
                    break;
                }
                if (lock.isOnLockQueue(w))
                    break;
                LockSupport.parkUntil(abstime);
            }

            if (lock.relock(current, w, holds))
                interrupted = true;
            if (throwIE)
                throw new InterruptedException();
            if (interrupted)
                current.interrupt();
            return !timedout;
        }

        /**
         * Causes the current thread to wait until it is signalled or
         * interrupted, or the specified waiting time elapses. This
         * method is behaviorally equivalent to:<br>
         *
         * <pre>
         *   awaitNanos(unit.toNanos(time)) &gt; 0
         * </pre>
         *
         * @param time the maximum time to wait
         * @param unit the time unit of the <tt>time</tt> argument.
         * @return <tt>false</tt> if the waiting time detectably
         * elapsed before return from the method, else <tt>true</tt>.
         * @throws InterruptedException if the current thread is
         * interrupted
         * @throws IllegalMonitorStateException if the lock associated
         * with this Condition is not held by the current thread
         * @throws NullPointerException if unit is null
         */
        public boolean await(long time, TimeUnit unit) throws InterruptedException {
            if (unit == null)
                throw new NullPointerException();

            long nanosTimeout = unit.toNanos(time);
            Thread current = Thread.currentThread();
            lock.checkOwnerForWait(current);
            if (Thread.interrupted())
                throw new InterruptedException();

            Node w = addConditionWaiter(current);
            int holds = lock.fullyUnlock();
            long lastTime = System.nanoTime();
            boolean timedout = false;
            boolean throwIE = false;
            boolean interrupted = false;

            for (;;) {
                if (Thread.interrupted()) {
                    if (lock.transferAfterCancelledWait(current, w))
                        throwIE = true;
                    else
                        interrupted = true;
                    break;
                }
                if (nanosTimeout <= 0L) {
                    timedout = lock.transferAfterCancelledWait(current, w);
                    break;
                }
                if (lock.isOnLockQueue(w))
                    break;
                LockSupport.parkNanos(nanosTimeout);
                long now = System.nanoTime();
                nanosTimeout -= now - lastTime;
                lastTime = now;
            }

            if (lock.relock(current, w, holds))
                interrupted = true;
            if (throwIE)
                throw new InterruptedException();
            if (interrupted)
                current.interrupt();
            return !timedout;
        }

        /**
         * Queries whether any threads are waiting on this
         * condition. Note that because timeouts and interrupts may
         * occur at any time, a <tt>true</tt> return does not
         * guarantee that a future <tt>signal</tt> will awaken any
         * threads. This method is designed primarily for use in
         * monitoring of the system state.
         * @return <tt>true</tt> if there are any waiting threads.
         * @throws IllegalMonitorStateException if the lock associated
         * with this Condition is not held by the current thread
         */
        public boolean hasWaiters() {
            lock.checkOwner(Thread.currentThread());
            for (Node w = firstWaiter; w != null; w = w.nextWaiter) {
                if (w.status == CONDITION)
                    return true;
            }
            return false;
        }

        /**
         * Returns an estimate of the number of threads waiting on
         * this condition. Note that because timeouts and interrupts
         * may occur at any time, the estimate serves only as an upper
         * bound on the actual number of waiters. This method is
         * designed for use in monitoring of the system state, not for
         * synchronization control.
         * @return the estimated number of waiting threads.
         * @throws IllegalMonitorStateException if the lock associated
         * with this Condition is not held by the current thread
         */
        public int getWaitQueueLength() {
            lock.checkOwner(Thread.currentThread());
            int n = 0;
            for (Node w = firstWaiter; w != null; w = w.nextWaiter) {
                if (w.status == CONDITION)
                    ++n;
            }
            return n;
        }
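        // Both monitoring queries above require the caller to hold the
        // lock. A minimal sketch, assuming the released java.util.concurrent
        // API in which ReentrantLock re-exports them as
        // hasWaiters(Condition) and getWaitQueueLength(Condition):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class WaiterStatsDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        Condition cond = lock.newCondition();
        lock.lock(); // both queries throw IllegalMonitorStateException otherwise
        try {
            System.out.println(lock.hasWaiters(cond));         // prints false
            System.out.println(lock.getWaitQueueLength(cond)); // prints 0
        } finally {
            lock.unlock();
        }
    }
}
```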

        /**
         * Returns a collection containing those threads that may be
         * waiting on this Condition. Because the actual set of
         * threads may change dynamically while constructing this
         * result, the returned collection is only a best-effort
         * estimate. The elements of the returned collection are in no
         * particular order. This method is designed to facilitate
         * construction of subclasses that provide more extensive
         * condition monitoring facilities.
         * @return the collection of threads
         * @throws IllegalMonitorStateException if the lock associated
         * with this Condition is not held by the current thread
         */
        protected Collection<Thread> getWaitingThreads() {
            lock.checkOwner(Thread.currentThread());
            ArrayList<Thread> list = new ArrayList<Thread>();
            for (Node w = firstWaiter; w != null; w = w.nextWaiter) {
                if (w.status == CONDITION) {
                    Thread t = w.thread;
                    if (t != null)
                        list.add(t);
                }
            }
            return list;
        }
    }
}