root/jsr166/jsr166/src/main/java/util/concurrent/ReentrantLock.java
Revision: 1.16
Committed: Thu Jun 26 10:47:35 2003 UTC by dl
Branch: MAIN
Changes since 1.15: +3 -3 lines
Log Message:
Added "Concurrent" prefixes

File Contents

1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain. Use, modify, and
4 * redistribute this code in any way without acknowledgement.
5 */
6
7 package java.util.concurrent;
8 import java.util.concurrent.atomic.*;
9 import java.util.Date;
10
11
12 /**
13 * A reentrant mutual exclusion {@link Lock} with the same basic
14 * behavior and semantics as the implicit monitor lock accessed by the
15 * use of <tt>synchronized</tt> methods and statements, but without
16 * the forced block-structured locking and unlocking that occurs with
17 * <tt>synchronized</tt> methods and statements.
18 *
19 * <p>The order in which blocked threads are granted the lock is not
20 * specified.
21 *
22 * <p>If you want a non-reentrant mutual exclusion lock then it is a simple
23 * matter to use a reentrant lock in a non-reentrant way by ensuring that
24 * the lock is not held by the current thread prior to locking.
25 * See {@link #getHoldCount} for a way to check this.
26 *
27 *
28 * <p><tt>ReentrantLock</tt> instances are intended to be used primarily
29 * in before/after constructions such as:
30 *
31 * <pre>
32 * class X {
33 *   ReentrantLock lock = new ReentrantLock();
34 *   // ...
35 *
36 *   public void m() {
37 *     lock.lock(); // block until condition holds
38 *     try {
39 *       // ... method body
40 *     } finally {
41 *       lock.unlock();
42 *     }
43 *   }
44 * }
45 * </pre>
46 *
47 * <p>Except where noted, passing a <tt>null</tt> value for any parameter
48 * will result in a {@link NullPointerException} being thrown.
49 *
50 * <h3>Implementation Notes</h3>
51 * <p>This implementation supports the interruption of lock acquisition and
52 * provides a
53 * {@link #newCondition Condition} implementation that supports the
54 * interruption of thread suspension.
55 * It also favors interruption over normal method return.
56 *
57 *
58 * @since 1.5
59 * @spec JSR-166
60 * @revised $Date: 2003/06/26 05:50:50 $
61 * @editor $Author: dholmes $
62 * @author Doug Lea
63 *
64 **/
65 public class ReentrantLock extends ReentrantLockQueueNode
66 implements Lock, java.io.Serializable {
67 /*
68 The basic fastpath/slowpath algorithm looks like this, ignoring
69 reentrance, cancellation, timeouts, error checking etc:
70       Lock:
71         if (!fair && casOwner(null, currentThread)) // fastpath
72           return;
73         node = create and enq a wait node;
74         for (;;) {
75           if (node is first on queue) {
76             if (casOwner(null, currentThread)) {
77               deq(node);
78               return;
79             }
80           }
81           park(currentThread);
82         }
83
84       Unlock:
85         owner = null; // volatile assignment
86         h = first node on queue;
87         if (h != null) unpark(h's successor's thread);
88
89 * The fast path uses one atomic CAS operation, plus one
90 StoreLoad barrier (i.e., volatile-write-barrier) per
91 lock/unlock pair. The "owner" field is handled as a simple
92 spinlock. To lock, the owner field is set to current thread
93 using conditional atomic update. To unlock, the owner field
94 is set to null, then we check whether anyone needs waking
95 up and, if so, wake them. Recursive locks/unlocks instead
96 increment/decrement the recursions field.
97
98 * By default, contended locks use a kind of "greedy" /
99 "renouncement" / barging / convoy-avoidance strategy: When a
100 lock is released, a waiting thread is signalled so that it can
101 (re)contend for the lock. It might lose and thus need to
102 rewait. This strategy has much higher throughput than
103 "directed handoff" because it reduces blocking of running
104 threads, but poorer fairness. The wait queue is FIFO, but
105 newly entering threads can barge ahead and grab lock before
106 woken waiters, so acquires are not strictly FIFO, and transfer
107 is not deterministically fair. It is probabilistically fair
108 though. Earlier queued threads are allowed to recontend before
109 later queued threads, and each recontention has an unbiased
110 chance to succeed.
111
112 * The base class also sets up support for the FairReentrantLock
113 subclass, which differs only in that barging is disabled when
114 there is contention, so locks proceed FIFO. There can be
115 some races in detecting contention, but it is still FIFO from
116 a definable (although complicated to describe) single point,
117 so qualifies as a FIFO lock.
118
119 * The wait queue is a variant of a "CLH" (Craig, Landin, and
120 Hagersten) lock. CLH locks are normally used for spinlocks.
121 We instead use them for blocking locks, but use the same basic
122 tactic of holding the information about whether a thread is
123 released (i.e., eligible to contend for lock ownership) in the
124 predecessor of each node. A node waits until its predecessor
125 says it is released. It is signalled when its predecessor
126 releases the lock. Each node of the queue otherwise serves as
127 a specific-notification-style monitor holding a single waiting
128 thread.
129
130 To enqueue a node into a CLH queue, you atomically splice it in as the
131 new tail. To dequeue, you just set the head field.
132
133      +------+  prev   +-----+         +-----+
134 head |      |  <----  |     |  <----  |     |  tail
135      +------+         +-----+         +-----+
136
137 The great thing about CLH Locks is that insertion into a CLH
138 queue requires only a single atomic operation on "tail", so
139 there is a simple atomic point of demarcation from unqueued to
140 queued. Similarly, dequeuing involves only updating the
141 "head". However, it takes a bit more work for nodes to
142 determine who their successors are, in part to deal with
143 possible cancellation due to timeouts and interrupts.
144
145 The "prev" links (not used in original CLH locks), are mainly
146 needed to handle cancellation. If a node is cancelled, its
147 successor must be relinked to a non-cancelled predecessor. For
148 explanation in the case of spin locks, see the papers by Scott
149 & Scherer at
150 http://www.cs.rochester.edu/u/scott/synchronization/
151
152 Being first in the queue does not mean that you have the lock,
153 only that you may contend for it (by CAS'ing owner field). So
154 the currently released contender thread may need to rewait.
155
156 We also use "next" links to implement blocking mechanics. The
157 thread id for each node is kept in its node, so a predecessor
158 signals the next node to wake up by traversing next link to
159 determine which thread it is. Determination of successor must
160 avoid races with newly queued nodes to set the "next" fields
161 of their predecessors. This is solved by checking backwards
162 from the atomically updated "tail" when a node's successor
163 appears to be null.
164
165 Cancellation introduces some conservatism to the basic
166 algorithms. Since we must poll for cancellation of other
167 nodes, we can miss noticing whether a cancelled node is ahead
168 or behind us. This is dealt with by always unparking
169 successors upon cancellation, and not letting them park again
170 (by saturating release counts) until they stabilize on a new
171 predecessor.
172
173 * Threads waiting on Conditions use the same kind of nodes, but
174 only need to link them in simple (non-concurrent) linked
175 queues because they are only accessed when lock is held. To
176 wait, a thread creates a node and inserts it into a condition queue.
177 Upon signal, the node is transferred to the lock queue.
178 Special values of releaseStatus fields are used to mark which
179 queue a node is on.
180
181 * All suspension and resumption of threads uses the internal
182 JSR166 native park/unpark API. These are safe versions of
183 suspend/resume (plus timeout support) that avoid the intrinsic
184 race problems with suspend/resume: Park suspends if not
185 preceded by an unpark. Unpark resumes if waiting, else causes
186 next park not to suspend. While safe and efficient, these are
187 not general-purpose public operations because we cannot allow
188 code outside this package to randomly call these methods --
189 parks and unparks should be matched up. (It is OK to have more
190 unparks than parks, but it causes threads to spuriously wake
191 up. So minimizing excessive unparks is a performance concern;
see the sketch just after this comment block.)
192
193 * The ReentrantLock class extends package-private
194 ReentrantLockQueueNode class as an expedient and efficient
195 (although slightly sleazy) solution to serialization and
196 initialization problems -- we need head and tail nodes to be
197 initialized to an otherwise useless dummy node, so use the
198 ReentrantLock itself as that node.
199 */
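
    /*
       Illustrative aside (not part of this class): the park/unpark
       discipline described above is also exposed publicly, in later
       releases, by java.util.concurrent.locks.LockSupport. The sketch
       below (names are placeholders) shows the key rule that an unpark
       delivered before the corresponding park is not lost:

         final Thread waiter = Thread.currentThread();
         Thread releaser = new Thread(new Runnable() {
             public void run() {
                 LockSupport.unpark(waiter); // may run before waiter parks
             }
         });
         releaser.start();
         // Returns immediately if the unpark already happened; real code
         // must still loop on its own condition, because park can also
         // return spuriously.
         LockSupport.park();
    */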
200
201 /*
202 Note that all fields are transient and defined in a way that
203 deserialized locks are in initial unlocked state.
204 */
205
206 /**
207 * Creates an instance of <tt>ReentrantLock</tt>.
208 */
209 public ReentrantLock() { }
210
211 /**
212 * Current owner of lock, or null iff the lock is free. Acquired
213 * only using CAS.
214 */
215 private transient volatile Thread owner;
216
217 /** Number of recursive acquires. Note: total holds = recursions+1 */
218 private transient int recursions;
219
220 /** Head of the wait queue, initialized to point to self as dummy node */
221 private transient volatile ReentrantLockQueueNode head = this;
222
223 /** Tail of the wait queue, initialized to point to self as dummy node */
224 private transient volatile ReentrantLockQueueNode tail = this;
225
226 // Atomics support
227
228 private final static AtomicReferenceFieldUpdater ownerUpdater = new AtomicReferenceFieldUpdater (new ReentrantLock[0], new Thread[0], "owner");
229 private final static AtomicReferenceFieldUpdater tailUpdater = new AtomicReferenceFieldUpdater (new ReentrantLock[0], new ReentrantLockQueueNode[0], "tail");
230 private final static AtomicReferenceFieldUpdater headUpdater = new AtomicReferenceFieldUpdater (new ReentrantLock[0], new ReentrantLockQueueNode[0], "head");
231 private final static AtomicIntegerFieldUpdater releaseStatusUpdater =
232 new AtomicIntegerFieldUpdater (new ReentrantLockQueueNode[0], "releaseStatus");
233
234 private boolean acquireOwner(Thread current) {
235 return ownerUpdater.compareAndSet(this, null, current);
236 }
237
238 private boolean casTail(ReentrantLockQueueNode cmp, ReentrantLockQueueNode val) {
239 return tailUpdater.compareAndSet(this, cmp, val);
240 }
241
242 private boolean casHead(ReentrantLockQueueNode cmp, ReentrantLockQueueNode val) {
243 return headUpdater.compareAndSet(this, cmp, val);
244 }
245
246 // casReleaseStatus non-private because also accessed by Conditions
247 final boolean casReleaseStatus(ReentrantLockQueueNode node, int cmp, int val) {
248 return releaseStatusUpdater.compareAndSet(node, cmp, val);
249 }
250
251 /**
252 * Special value for releaseStatus indicating that node is cancelled.
253 * Must be a large positive number.
254 */
255 private static final int CANCELLED = Integer.MAX_VALUE;
256
257 /**
258 * Special value for node releaseStatus indicating that node is on a
259 * condition queue. Must be large negative number.
260 */
261 private static final int ON_CONDITION_QUEUE = Integer.MIN_VALUE;
262
263 /**
264 * Special value for node releaseStatus indicating that node is in
265 * process of transfer. Must be negative and greater than
266 * ON_CONDITION_QUEUE.
267 */
268 private static final int TRANSFERRING = ON_CONDITION_QUEUE + 1;
269
270 /**
271 * Return whether lock wait queue is empty
272 * @return true if no threads are waiting for lock
273 */
274 final boolean queueEmpty() {
275 ReentrantLockQueueNode h = head; // force order of the volatile reads
276 return h == tail;
277 }
278
279 /**
280 * Insert node into queue. Return predecessor.
281 * @param node the node to insert
282 * @return node's predecessor
283 */
284 private ReentrantLockQueueNode enq(ReentrantLockQueueNode node) {
285 for (;;) {
286 ReentrantLockQueueNode p = tail;
287 node.prev = p; // prev must be valid before/upon CAS
288 if (casTail(p, node)) {
289 p.next = node; // Note: next field assignment lags CAS
290 return p;
291 }
292 }
293 }
294
295 /**
296 * Return true if it is OK to take fast path to lock.
297 * Overridden in FairReentrantLock.
298 * @return true (for non-fair locks)
299 */
300 boolean canBarge() {
301 return true;
302 }
303
304 /**
305 * Main locking code, parameterized across different policies.
306 * @param current current thread
307 * @param node its wait-node, if it already exists; else null in
308 * which case it is created.
309 * @param interruptible - true if can abort for interrupt or timeout
310 * @param nanos time to wait, or zero if untimed
311 * @return true if lock acquired (can be false only if interruptible)
312 */
313 final boolean doLock(Thread current,
314 ReentrantLockQueueNode node,
315 boolean interruptible,
316 long nanos) {
317 /*
318 * Bypass queueing if a recursive lock
319 */
320 if (owner == current) {
321 ++recursions;
322 return true;
323 }
324
325 long lastTime = 0; // for adjusting timeouts, below
326 boolean interrupted = false; // for restoring interrupt status on exit
327
328 /*
329 * p is our predecessor node, that holds releaseStatus giving
330 * permission to try to obtain lock if we are first in queue.
331 */
332 ReentrantLockQueueNode p;
333
334 /*
335 * Create and enqueue node if not already created. Nodes
336 * transferred from condition queues will already be created
337 * and queued.
338 */
339 if (node == null) {
340 node = new ReentrantLockQueueNode(current);
341 p = enq(node);
342 }
343 else
344 p = node.prev;
345
346 /*
347 * Repeatedly try to get ownership if first in queue, else
348 * block.
349 */
350 for (;;) {
351 /*
352 * If we are the first thread in queue, try to get the
353 * lock. (We must not try to get lock if we are not
354 * first.) Note that if we become first after p == head
355 * check, all is well -- we can be sure an unlocking
356 * thread will signal us.
357 */
358
359 if (p == head && acquireOwner(current)) {
360 if (interrupted) // re-interrupt on exit
361 current.interrupt();
362
363 p.next = null; // clear for GC and to suppress signals
364 node.thread = null;
365 node.prev = null;
366 head = node;
367 return true;
368 }
369
370 int releaseStatus = p.releaseStatus;
371
372 /*
373 * If our predecessor was cancelled, use its predecessor.
374 * There will always be a non-cancelled one somewhere
375 * because head node is never cancelled, so at worst we
376 * will hit it. (Note that because head is never
377 * cancelled, we can perform this check after trying to
378 * acquire ownership).
379 */
380 if (releaseStatus == CANCELLED) {
381 node.prev = p = p.prev;
382 }
383
384 /*
385 * Wait if we are not first in queue, or if we are
386 * first but have tried to acquire owner and failed since
387 * either entry or last release. (Note that releaseStatus can
388 * already be less than zero if we spuriously returned
389 * from a previous park or got a new predecessor due to
390 * cancellation.)
391 *
392 * We also don't wait if atomic decrement of releaseStatus
393 * fails. We just continue main loop on failure to
394 * atomically update releaseStatus because interference
395 * causing failure is almost surely due to someone
396 * releasing us anyway.
397 *
398 * Each wait consumes all available releases. Normally
399 * there is only one anyway because release doesn't bother
400 * incrementing if already positive.
401 *
402 */
403 else if (casReleaseStatus(p, releaseStatus,
404 (releaseStatus > 0) ? 0 : -1) &&
405 releaseStatus <= 0) {
406
407 // Update and check timeout value
408 if (nanos > 0) {
409 long now = TimeUnit.nanoTime();
410 if (lastTime != 0) {
411 nanos -= now - lastTime;
412 if (nanos == 0) // zero would mean untimed park; mark timed out
413 nanos = -1;
414 }
415 lastTime = now;
416 }
417
418 if (nanos >= 0)
419 JSR166Support.park(false, nanos);
420
421 if (!interruptible) {
422 if (Thread.interrupted()) // consume interrupt for now
423 interrupted = true;
424 }
425 else if (nanos < 0 || current.isInterrupted()) {
426 node.thread = null; // disable signals
427 // don't need CAS here:
428 releaseStatusUpdater.set(node, CANCELLED);
429 signalSuccessor(node);
430 return false;
431 }
432 }
433 }
434 }
435
436 /**
437 * Wake up node's successor, if one exists
438 * @param node the current node
439 */
440 private void signalSuccessor(ReentrantLockQueueNode node) {
441 /*
442 * Find successor -- normally just node.next.
443 */
444 ReentrantLockQueueNode s = node.next;
445
446 /*
447 * if s is cancelled, traverse through next's.
448 */
449
450 while (s != null && s.releaseStatus == CANCELLED) {
451 node = s;
452 s = s.next;
453 }
454
455 /*
456 * If successor appears to be null, check to see if a newly
457 * queued node is successor by starting at tail and working
458 backwards. If so, help out the enqueuing thread by setting
459 * next field. We don't expect this loop to trigger often,
460 * and hardly ever to iterate.
461 */
462
463 if (s == null) {
464 ReentrantLockQueueNode t = tail;
465 for (;;) {
466 /*
467 * If t == node, there is no successor.
468 */
469 if (t == node)
470 return;
471
472 ReentrantLockQueueNode tp = t.prev;
473
474 /*
475 * t's predecessor is null if we are lagging so far
476 * behind the actions of other nodes/threads that an
477 * intervening head.prev was nulled by some
478 non-cancelled successor of node, in which case
479 there's no live successor.
480 */
481
482 if (tp == null)
483 return;
484
485 /*
486 * If we find successor, we can do the assignment to
487 * next (don't even need CAS) on behalf of enqueuing
488 * thread. At worst we will stall now and lag behind
489 * both the setting and the later clearing of next
490 * field. But if so, we will reattach an internal link
491 in a soon-to-be-unreachable set of nodes, so no harm
492 * done.
493 */
494
495 if (tp == node) {
496 node.next = s = t;
497 break;
498 }
499
500 t = tp;
501
502 /*
503 * Before iterating, check to see if link has
504 * appeared.
505 */
506 ReentrantLockQueueNode n = node.next;
507 if (n != null) {
508 s = n;
509 break;
510 }
511 }
512 }
513
514 Thread thr = s.thread;
515 // don't bother signalling if has lock
516 if (thr != null && thr != owner)
517 JSR166Support.unpark(thr);
518 }
519
520
521 /**
522 * Increment releaseStatus and signal next thread in queue if one
523 * exists and is waiting. Called only by unlock. This code is split
524 * out from unlock to encourage inlining of non-contended cases.
525 */
526 private void releaseFirst() {
527 for (;;) {
528 ReentrantLockQueueNode h = head;
529 if (h == tail) // No successor
530 return;
531
532 int c = h.releaseStatus;
533 if (c > 0) // Don't need signal if already positive
534 return;
535 if (owner != null) // Don't bother if some thread got lock
536 return;
537
538 if (casReleaseStatus(h, c, (c < 0) ? 0 : 1)) { // saturate at 1
539 if (c < 0)
540 signalSuccessor(h);
541 return;
542 }
543 // else retry if CAS fails
544 }
545 }
546
547 /**
548 * Attempts to release this lock.
549 * <p>If the current thread is the
550 * holder of this lock then the hold count is decremented. If the
551 * hold count is now zero then the lock is released. If the
552 * current thread is not the holder of this lock then {@link
553 * IllegalMonitorStateException} is thrown.
554 * @throws IllegalMonitorStateException if the current thread does not
555 * hold this lock.
556 */
557 public void unlock() {
558 if (Thread.currentThread() != owner)
559 throw new IllegalMonitorStateException();
560
561 if (recursions > 0)
562 --recursions;
563 else {
564 ownerUpdater.set(this, null);
565 if (tail != this) // don't bother if never contended
566 releaseFirst();
567 }
568 }
569
570 /**
571 * Acquires the lock.
572 * <p>Acquires the lock if it is not held by another thread and returns
573 * immediately, setting the lock hold count to one.
574 * <p>If the current thread
575 * already holds the lock then the hold count is incremented by one and
576 * the method returns immediately.
577 * <p>If the lock is held by another thread then the
578 * current thread becomes disabled for thread scheduling
579 * purposes and lies dormant until the lock has been acquired,
580 * at which time the lock hold count is set to one.
581 */
582 public void lock() {
583 Thread current = Thread.currentThread();
584 if (!canBarge() || !acquireOwner(current))
585 doLock(current, null, false, 0);
586
587 }
588
589 /**
590 * Acquires the lock unless the current thread is
591 * {@link Thread#interrupt interrupted}.
592 * <p>Acquires the lock if it is not held by another thread and returns
593 * immediately, setting the lock hold count to one.
594 * <p>If the current thread already holds this lock then the hold count
595 * is incremented by one and the method returns immediately.
596 * <p>If the lock is held by another thread then the
597 * current thread becomes disabled for thread scheduling
598 * purposes and lies dormant until one of two things happens:
599 * <ul>
600 * <li> The lock is acquired by the current thread; or
601 * <li> Some other thread {@link Thread#interrupt interrupts} the current
602 * thread.
603 * </ul>
604 * <p>If the lock is acquired by the current thread then the lock hold
605 * count is set to one.
606 * <p>If the current thread:
607 * <ul>
608 * <li>has its interrupted status set on entry to this method; or
609 * <li>is {@link Thread#interrupt interrupted} while acquiring
610 * the lock,
611 * </ul>
612 * then {@link InterruptedException} is thrown and the current thread's
613 * interrupted status is cleared.
614 * <p>In this implementation, as this method is an explicit interruption
615 * point, preference is
616 * given to responding to the interrupt over normal or reentrant
617 * acquisition of the lock.
618 *
619 * @throws InterruptedException if the current thread is interrupted
620 */
621 public void lockInterruptibly() throws InterruptedException {
622 Thread current = Thread.currentThread();
623 if (!Thread.interrupted()) {
624 if ((canBarge() && acquireOwner(current)) ||
625 doLock(current, null, true, 0))
626 return;
627 Thread.interrupted(); // clear interrupt status on failure
628 }
629 throw new InterruptedException();
630 }
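
    /*
     * Usage sketch (illustrative only, not part of this file): a typical
     * caller of lockInterruptibly propagates the interruption rather than
     * swallowing it. The method name doWork is a placeholder.
     *
     *   public void doWork() throws InterruptedException {
     *       lock.lockInterruptibly();
     *       try {
     *           // ... access state guarded by lock
     *       } finally {
     *           lock.unlock();
     *       }
     *   }
     */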
631
632 /**
633 * Acquires the lock only if it is not held by another thread at the time
634 * of invocation.
635 * <p>Acquires the lock if it is not held by another thread and returns
636 * immediately with the value <tt>true</tt>, setting the lock hold count
637 * to one.
638 * <p> If the current thread
639 * already holds this lock then the hold count is incremented by one and
640 * the method returns <tt>true</tt>.
641 * <p>If the lock is held by another thread then this method will return
642 * immediately with the value <tt>false</tt>.
643 *
644 * @return <tt>true</tt> if the lock was free and was acquired by the
645 * current thread, or the lock was already held by the current thread; and
646 * <tt>false</tt> otherwise.
647 */
648 public boolean tryLock() {
649 Thread current = Thread.currentThread();
650 if (acquireOwner(current))
651 return true;
652 if (owner == current) {
653 ++recursions;
654 return true;
655 }
656 return false;
657 }
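
    /*
     * Usage sketch (illustrative only): the common tryLock idiom. Note
     * that unlock is called only when the lock was actually acquired.
     *
     *   if (lock.tryLock()) {
     *       try {
     *           // ... access state guarded by lock
     *       } finally {
     *           lock.unlock();
     *       }
     *   } else {
     *       // ... could not acquire; take some alternative action
     *   }
     */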
658
659 /**
660 *
661 * Acquires the lock if it is not held by another thread within the given
662 * waiting time and the current thread has not been
663 * {@link Thread#interrupt interrupted}.
664 * <p>Acquires the lock if it is not held by another thread and returns
665 * immediately with the value <tt>true</tt>, setting the lock hold count
666 * to one.
667 * <p> If the current thread
668 * already holds this lock then the hold count is incremented by one and
669 * the method returns <tt>true</tt>.
670 * <p>If the lock is held by another thread then the
671 * current thread becomes disabled for thread scheduling
672 * purposes and lies dormant until one of three things happens:
673 * <ul>
674 * <li> The lock is acquired by the current thread; or
675 * <li> Some other thread {@link Thread#interrupt interrupts} the current
676 * thread; or
677 * <li> The specified waiting time elapses
678 * </ul>
679 * <p>If the lock is acquired then the value <tt>true</tt> is returned and
680 * the lock hold count is set to one.
681 * <p>If the current thread:
682 * <ul>
683 * <li>has its interrupted status set on entry to this method; or
684 * <li>is {@link Thread#interrupt interrupted} while acquiring
685 * the lock,
686 * </ul>
687 * then {@link InterruptedException} is thrown and the current thread's
688 * interrupted status is cleared.
689 * <p>If the specified waiting time elapses then the value <tt>false</tt>
690 * is returned.
691 * The given waiting time is a best-effort lower bound. If the time is
692 * less than or equal to zero, the method will not wait at all.
693 * <p>In this implementation, as this method is an explicit interruption
694 * point, preference is
695 * given to responding to the interrupt over normal or reentrant
696 * acquisition of the lock, and over reporting the elapse of the waiting
697 * time.
698 *
699 *
700 * @param timeout the time to wait for the lock
701 * @param unit the time unit of the timeout argument
702 *
703 * @return <tt>true</tt> if the lock was free and was acquired by the
704 * current thread, or the lock was already held by the current thread; and
705 * <tt>false</tt> if the waiting time elapsed before the lock could be
706 * acquired.
707 *
708 * @throws InterruptedException if the current thread is interrupted
709 *
710 */
711 public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {
712 if (unit == null)
713 throw new NullPointerException();
714 if (Thread.interrupted())
715 throw new InterruptedException();
716 Thread current = Thread.currentThread();
717 if (canBarge() && acquireOwner(current))
718 return true;
719 if (owner == current) { // check recursions before timeout
720 ++recursions;
721 return true;
722 }
723 if (timeout <= 0)
724 return false;
725 if (doLock(current, null, true, unit.toNanos(timeout)))
726 return true;
727 if (Thread.interrupted())
728 throw new InterruptedException();
729 return false; // timed out
730 }
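
    /*
     * Usage sketch (illustrative only): timed acquisition, with the time
     * unit supplied by the caller. The 50 millisecond bound is an
     * arbitrary example value, and the enclosing method must handle the
     * InterruptedException that the timed tryLock can throw.
     *
     *   if (lock.tryLock(50L, TimeUnit.MILLISECONDS)) {
     *       try {
     *           // ... got the lock within the time bound
     *       } finally {
     *           lock.unlock();
     *       }
     *   } else {
     *       // ... timed out; back off or report failure
     *   }
     */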
731
732
733 /**
734 * Queries the number of holds on this lock by the current thread.
735 * <p>A thread has a hold on a lock for each lock action that is not
736 * matched by an unlock action.
737 * <p>The hold count information is typically only used for testing and
738 * debugging purposes. For example, if a certain section of code should
739 * not be entered with the lock already held then we can assert that
740 * fact:
741 * <pre>
742 * class X {
743 *   ReentrantLock lock = new ReentrantLock();
744 *   // ...
745 *
746 *   public void m() {
747 *     assert lock.getHoldCount() == 0;
748 *     lock.lock();
749 *     try {
750 *       // ... method body
751 *     } finally {
752 *       lock.unlock();
753 *     }
754 *   }
755 * }
756 * </pre>
757 *
758 * @return the number of holds on this lock by the current thread,
759 * or zero if this lock is not held by the current thread.
760 **/
761 public int getHoldCount() {
762 return (owner == Thread.currentThread()) ? recursions + 1 : 0;
763 }
764
765 /**
766 * Queries if this lock is held by the current thread.
767 * <p>Analogous to the {@link Thread#holdsLock} method for built-in
768 * monitor locks, this method is typically used for debugging and
769 * testing. For example, a method that should only be called while
770 * a lock is held can assert that this is the case:
771 * <pre>
772 * class X {
773 *   ReentrantLock lock = new ReentrantLock();
774 *   // ...
775 *
776 *   public void m() {
777 *     assert lock.isHeldByCurrentThread();
778 *     // ... method body
779 *   }
780 * }
781 * </pre>
782 *
783 * @return <tt>true</tt> if current thread holds this lock and
784 * <tt>false</tt> otherwise.
785 **/
786 public boolean isHeldByCurrentThread() {
787 return (owner == Thread.currentThread());
788 }
789
790
791 /**
792 * Queries if this lock is held by any thread. This method is
793 * designed for use in monitoring, not for synchronization control.
794 * @return <tt>true</tt> if any thread holds this lock and
795 * <tt>false</tt> otherwise.
796 **/
797 public boolean isLocked() {
798 return owner != null;
799 }
800
801 /**
802 * Reconstitute by resetting head and tail to point back to the lock.
803 * @param s the stream
804 */
805 private void readObject(java.io.ObjectInputStream s)
806 throws java.io.IOException, ClassNotFoundException {
807 s.defaultReadObject();
808 head = tail = this;
809 }
810
811 /**
812 * Returns a {@link Condition} instance for use with this
813 * {@link Lock} instance.
814 *
815 * <p>The returned {@link Condition} instance has the same behavior and
816 * usage
817 * restrictions with this lock as the {@link Object} monitor methods
818 * ({@link Object#wait() wait}, {@link Object#notify notify}, and
819 * {@link Object#notifyAll notifyAll}) have with the built-in monitor
820 * lock:
821 * <ul>
822 * <li>If this lock is not held when any of the {@link Condition}
823 * {@link Condition#await() waiting} or {@link Condition#signal signalling}
824 * methods are called, then an {@link IllegalMonitorStateException} is
825 * thrown.
826 * <li>When the condition {@link Condition#await() waiting} methods are
827 * called the lock is released and before they return the lock is
828 * reacquired and the lock hold count restored to what it was when the
829 * method was called.
830 * <li>If a thread is {@link Thread#interrupt interrupted} while waiting
831 * then the wait will terminate, an {@link InterruptedException} will be
832 * thrown, and the thread's interrupted status will be cleared.
833 * <li>The order in which waiting threads are signalled is not specified.
834 * <li>The order in which threads returning from a wait, and threads trying
835 * to acquire the lock, are granted the lock, is not specified.
836 * </ul>
837 * @return the Condition object
838 */
839 public Condition newCondition() {
840 return new ReentrantLockConditionObject();
841 }
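
    /*
     * Usage sketch (illustrative only, class and field names are
     * placeholders): the standard guarded-wait pattern with a Condition
     * obtained from this lock. The predicate is always rechecked in a
     * loop around await.
     *
     *   class Latch {
     *       private final ReentrantLock lock = new ReentrantLock();
     *       private final Condition opened = lock.newCondition();
     *       private boolean open;
     *
     *       public void awaitOpen() throws InterruptedException {
     *           lock.lock();
     *           try {
     *               while (!open)
     *                   opened.await();
     *           } finally {
     *               lock.unlock();
     *           }
     *       }
     *
     *       public void open() {
     *           lock.lock();
     *           try {
     *               open = true;
     *               opened.signalAll();
     *           } finally {
     *               lock.unlock();
     *           }
     *       }
     *   }
     */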
842
843 // Helper methods for Conditions
844
845 /**
846 * Return true if a node, always one that was initially placed on
847 * a condition queue, is off the condition queue (and thus
848 * is normally now on the lock queue).
849 */
850 boolean isOffConditionQueue(ReentrantLockQueueNode w) {
851 return w.releaseStatus > TRANSFERRING;
852 }
853
854 /**
855 * Transfer a node from a condition queue onto lock queue.
856 * Return true if successful (i.e., node not cancelled)
857 */
858 final boolean transferToLockQueue(ReentrantLockQueueNode node) {
859 /*
860 * Atomically change status to TRANSFERRING to avoid races
861 * with cancelling waiters. We use a special value that causes
862 * any waiters spuriously waking up to re-park until the node
863 * has been placed on lock queue.
864 */
865 if (!casReleaseStatus(node, ON_CONDITION_QUEUE, TRANSFERRING))
866 return false;
867
868 /*
869 * Splice onto queue
870 */
871 ReentrantLockQueueNode p = enq(node);
872
873 /*
874 * Establish normal lock-queue releaseStatus for node. The
875 * CAS can fail if node already was involved in a cancellation
876 * on lock-queue, in which case we signal to be sure.
877 */
878 if (!casReleaseStatus(node, TRANSFERRING, 0))
879 signalSuccessor(node);
880
881 /*
882 * Ensure releaseStatus of predecessor is negative to indicate
883 * that thread is (probably) waiting. If attempt to set releaseStatus
884 * fails or if pred is/becomes cancelled, wake up successor
885 * (which will ordinarily be "node") to resynch.
886 */
887
888 for (;;) {
889 int c = p.releaseStatus;
890 if (c < 0 || (c != CANCELLED && casReleaseStatus(p, c, -1)))
891 break;
892 signalSuccessor(p);
893 if (c == CANCELLED)
894 break;
895 }
896
897 return true;
898 }
899
900 /**
901 * Hook method used by ReentrantReadWriteLock. Called
902 * before unlocking lock to enter wait.
903 */
904 void beforeWait() { }
905
906
907 /**
908 * Hook method used by ReentrantReadWriteLock. Called
909 * after locking lock after exiting wait.
910 */
911 void afterWait() { }
912
913 private class ReentrantLockConditionObject implements Condition, java.io.Serializable {
914 /*
915 * Because condition queues are accessed only when locks are
916 * already held, we just need a simple linked queue to hold
917 * nodes while they are waiting on conditions. They are then
918 * transferred to the lock queue to re-acquire locks.
919 */
920
921 /**
922 * First node of condition queue.
923 */
924 private transient ReentrantLockQueueNode firstWaiter;
925
926 /**
927 * Last node of condition queue.
928 */
929 private transient ReentrantLockQueueNode lastWaiter;
930
931 /**
932 * Basic linked queue insertion.
933 */
934 private ReentrantLockQueueNode addWaiter(Thread current) {
935 ReentrantLockQueueNode w = new ReentrantLockQueueNode(current);
936 w.releaseStatus = ON_CONDITION_QUEUE;
937 if (lastWaiter == null)
938 firstWaiter = lastWaiter = w;
939 else {
940 ReentrantLockQueueNode t = lastWaiter;
941 lastWaiter = w;
942 t.next = w;
943 }
944 return w;
945 }
946
947 /**
948 * Main code for signal. Dequeue and transfer nodes until hit
949 * non-cancelled one or null. Split out from signal to
950 * encourage compilers to inline the case of no waiters.
951 * @param first the first node on condition queue
952 */
953 private void doSignal(ReentrantLockQueueNode first) {
954 do {
955 if ( (firstWaiter = first.next) == null)
956 lastWaiter = null;
957 first.next = null;
958 if (transferToLockQueue(first))
959 return;
960 first = firstWaiter;
961 } while (first != null);
962 }
963
964 public void signal() {
965 if (Thread.currentThread() != owner)
966 throw new IllegalMonitorStateException();
967 ReentrantLockQueueNode w = firstWaiter;
968 if (w != null)
969 doSignal(w);
970 }
971
972 public void signalAll() {
973 if (Thread.currentThread() != owner)
974 throw new IllegalMonitorStateException();
975 // Pull off list all at once and traverse.
976 ReentrantLockQueueNode w = firstWaiter;
977 if (w != null) {
978 lastWaiter = firstWaiter = null;
979 do {
980 ReentrantLockQueueNode n = w.next;
981 w.next = null;
982 transferToLockQueue(w);
983 w = n;
984 } while (w != null);
985 }
986 }
987
988 /*
989 * Various flavors of wait. Each almost the same, but
990 * annoyingly different and no nice way to factor common code.
991 */
992
993 public void await() throws InterruptedException {
994 Thread current = Thread.currentThread();
995 if (current != owner) throw new IllegalMonitorStateException();
996
997 ReentrantLockQueueNode w = addWaiter(current);
998 beforeWait();
999 int recs = recursions;
1000 unlock();
1001
1002 boolean wasInterrupted = false;
1003
1004 while (!isOffConditionQueue(w)) {
1005 JSR166Support.park(false, 0);
1006 if (Thread.interrupted()) {
1007 wasInterrupted = true;
1008 if (casReleaseStatus(w, ON_CONDITION_QUEUE, CANCELLED)) {
1009 w.thread = null;
1010 w = null;
1011 }
1012 break;
1013 }
1014 }
1015
1016 /*
1017 * If we exited above loop due to cancellation, then w is
1018 * null, and doLock will make a new lock node for
1019 * us. Otherwise, upon exit, our node is already in the
1020 * lock queue when doLock is called.
1021 */
1022 doLock(current, w, false, 0);
1023
1024 recursions = recs;
1025 afterWait();
1026
1027 if (wasInterrupted || Thread.interrupted())
1028 throw new InterruptedException();
1029 }
1030
1031 public void awaitUninterruptibly() {
1032 Thread current = Thread.currentThread();
1033 if (current != owner) throw new IllegalMonitorStateException();
1034
1035
1036 ReentrantLockQueueNode w = addWaiter(current);
1037 beforeWait();
1038 int recs = recursions;
1039 unlock();
1040
1041 boolean wasInterrupted = false;
1042 while (!isOffConditionQueue(w)) {
1043 JSR166Support.park(false, 0);
1044 if (Thread.interrupted())
1045 wasInterrupted = true;
1046 }
1047
1048 doLock(current, w, false, 0);
1049 recursions = recs;
1050 afterWait();
1051 // avoid re-interrupts on exit
1052 if (wasInterrupted && !current.isInterrupted())
1053 current.interrupt();
1054 }
1055
1056
1057 public long awaitNanos(long nanos) throws InterruptedException {
1058 Thread current = Thread.currentThread();
1059 if (current != owner) throw new IllegalMonitorStateException();
1060
1061
1062 ReentrantLockQueueNode w = addWaiter(current);
1063 beforeWait();
1064 int recs = recursions;
1065 unlock();
1066
1067 if (nanos <= 0) nanos = 1; // park arg must be positive
1068 long timeLeft = nanos;
1069 long startTime = TimeUnit.nanoTime();
1070 boolean wasInterrupted = false;
1071 boolean cancelled = false;
1072
1073 if (!isOffConditionQueue(w)) {
1074 for (;;) {
1075 JSR166Support.park(false, timeLeft);
1076 if (Thread.interrupted())
1077 wasInterrupted = true;
1078 else if (isOffConditionQueue(w))
1079 break;
1080 else
1081 timeLeft = nanos - (TimeUnit.nanoTime() - startTime);
1082
1083 if (wasInterrupted || timeLeft <= 0) {
1084 if (casReleaseStatus(w, ON_CONDITION_QUEUE, CANCELLED)) {
1085 w.thread = null;
1086 w = null;
1087 }
1088 break;
1089 }
1090 }
1091 }
1092
1093 doLock(current, w, false, 0);
1094 recursions = recs;
1095 afterWait();
1096
1097 if (wasInterrupted || Thread.interrupted())
1098 throw new InterruptedException();
1099 else if (timeLeft <= 0)
1100 return timeLeft;
1101 else
1102 return nanos - (TimeUnit.nanoTime() - startTime);
1103 }
1104
1105 public boolean awaitUntil(Date deadline) throws InterruptedException {
1106 Thread current = Thread.currentThread();
1107 if (current != owner) throw new IllegalMonitorStateException();
1108
1109
1110 ReentrantLockQueueNode w = addWaiter(current);
1111 beforeWait();
1112 int recs = recursions;
1113 unlock();
1114
1115 boolean wasInterrupted = false;
1116 boolean cancelled = false;
1117 long abstime = deadline.getTime();
1118
1119 if (!isOffConditionQueue(w)) {
1120 for (;;) {
1121 JSR166Support.park(true, abstime);
1122
1123 boolean timedOut = false;
1124 if (Thread.interrupted())
1125 wasInterrupted = true;
1126 else if (isOffConditionQueue(w))
1127 break;
1128 else if (System.currentTimeMillis() >= abstime)
1129 timedOut = true;
1130
1131 if (wasInterrupted || timedOut) {
1132 if (casReleaseStatus(w, ON_CONDITION_QUEUE, CANCELLED)) {
1133 w.thread = null;
1134 w = null; cancelled = timedOut; // record timeout for return value
1135 }
1136 break;
1137 }
1138 }
1139 }
1140
1141 doLock(current, w, false, 0);
1142 recursions = recs;
1143 afterWait();
1144
1145 if (wasInterrupted || Thread.interrupted())
1146 throw new InterruptedException();
1147 return !cancelled;
1148 }
1149
1150 public boolean await(long time, TimeUnit unit) throws InterruptedException {
1151 return awaitNanos(unit.toNanos(time)) > 0;
1152 }
1153
1154 }
1155
1156 }
1157
1158 /**
1159 * Node class for threads waiting for locks. This cannot be nested
1160 * inside ReentrantLock because of Java inheritance circularity rules.
1161 */
1162 class ReentrantLockQueueNode {
1163 /**
1164 * Controls whether successor node is allowed to try to obtain
1165 * ownership. Acts as a saturating (in both directions) counting
1166 * semaphore: Upon each wait, the releaseStatus is reduced to zero
1167 * if positive, else reduced to negative, in which case the thread
1168 * will park. The releaseStatus is incremented on each unlock that
1169 * would enable successor thread to obtain lock (succeeding if
1170 * there is no contention). The special value of CANCELLED is used
1171 * to mean that the releaseStatus cannot be either incremented or
1172 * decremented. The special value of ON_CONDITION_QUEUE is used
1173 * when nodes are on conditions queues instead of lock queue, and
1174 * the special value TRANSFERRING is used while signals are in
1175 * progress.
1176 */
1177 transient volatile int releaseStatus;
1178
1179 /**
1180 * Link to predecessor node that current node/thread relies on
1181 for checking releaseStatus. Assigned once during enqueuing,
1182 * and nulled out (for sake of GC) only upon dequeuing. Upon
1183 * cancellation, we do NOT adjust this field, but simply
1184 * traverse through prev's until we hit a non-cancelled node.
1185 * A valid predecessor will always exist because the head node
1186 * is never cancelled.
1187 */
1188 transient volatile ReentrantLockQueueNode prev;
1189
1190 /**
1191 * Link to the successor node that the current node/thread
1192 unparks upon lock release. Assigned once during enqueuing.
1193 * Upon cancellation, we do NOT adjust this field, but simply
1194 * traverse through next's until we hit a non-cancelled node,
1195 * (or null if at end of queue). The enqueue operation does
1196 * not assign next field of a predecessor until after
1197 attachment, so seeing a null next field does not necessarily
1198 mean that the node is at the end of the queue. However, if a next field
1199 * appears to be null, we can scan prev's from the tail to
1200 * double-check.
1201 */
1202 transient volatile ReentrantLockQueueNode next;
1203
1204 /**
1205 * The thread that enqueued this node. Initialized on
1206 * construction and nulled out after use. Note that this need
1207 * not be declared volatile since it is always accessed after
1208 * traversing volatile links, and written before writing
1209 * links.
1210 */
1211 transient Thread thread;
1212
1213 ReentrantLockQueueNode() { }
1214 ReentrantLockQueueNode(Thread t) { thread = t; }
1215 }