root/jsr166/jsr166/src/main/java/util/concurrent/ReentrantLock.java
Revision: 1.5
Committed: Fri Jun 6 18:42:17 2003 UTC (21 years ago) by dl
Branch: MAIN
Changes since 1.4: +5 -11 lines
Log Message:
Added to emulation
Fixed some javadoc format errors

File Contents

# Content
1 /*
2 * Written by Doug Lea with assistance from members of JCP JSR-166
3 * Expert Group and released to the public domain. Use, modify, and
4 * redistribute this code in any way without acknowledgement.
5 */
6
7 package java.util.concurrent;
8 import java.util.concurrent.atomic.*;
9 import java.util.Date;
10
11
12 /**
13 * A reentrant mutual exclusion lock.
14 * <p><tt>ReentrantLock</tt> defines a stand-alone {@link Lock} class with
15 * the same basic behavior and semantics as the implicit
16 * monitor lock accessed by the use of <tt>synchronized</tt> methods and
17 * statements, but without the forced block-structured locking and unlocking
18 * that occurs with <tt>synchronized</tt> methods and
19 * statements.
20 *
21 * <p>The order in which blocked threads are granted the lock is not
22 * specified.
23 *
24 * <p>If you want a non-reentrant mutual exclusion lock then it is a simple
25 * matter to use a reentrant lock in a non-reentrant way by ensuring that
26 * the lock is not held by the current thread prior to locking.
27 * See {@link #getHoldCount} for a way to check this.
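 *
 * <p>For example, here is a sketch of one such non-reentrant use,
 * checking the hold count before locking (illustrative only; the class
 * and method names here are assumptions, not part of this API):
 * <pre>
 * class Mutex {
 *   ReentrantLock lock = new ReentrantLock();
 *
 *   public void acquire() {
 *     assert lock.getHoldCount() == 0; // reject reentrant use
 *     lock.lock();
 *   }
 *
 *   public void release() {
 *     lock.unlock();
 *   }
 * }
 * </pre>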
28 *
29 *
30 * <p><tt>ReentrantLock</tt> instances are intended to be used primarily
31 * in before/after constructions such as:
32 *
33 * <pre>
34 * class X {
35 ReentrantLock lock = new ReentrantLock();
36 * // ...
37 *
38 * public void m() {
39 * lock.lock(); // block until condition holds
40 * try {
41 * // ... method body
42 * } finally {
43 lock.unlock();
44 * }
45 * }
46 * }
47 * </pre>
48 *
49 * <p>This class supports the interruption of lock acquisition and provides a
50 * {@link #newCondition Condition} implementation that supports the
51 * interruption of thread suspension.
52 *
53 * <p>Except where noted, passing a <tt>null</tt> value for any parameter
54 * will result in a {@link NullPointerException} being thrown.
55 *
56 *
57 *
58 * @since 1.5
59 * @spec JSR-166
60 * @revised $Date: 2003/06/06 17:51:49 $
61 * @editor $Author: dl $
62 * @author Doug Lea
63 *
64 * @fixme (1) We need a non-nested example to motivate this
65 * @fixme (2) Describe performance aspects of interruptible locking for the RI
66 **/
67 public class ReentrantLock extends ReentrantLockQueueNode implements Lock, java.io.Serializable {
68 /*
69 This is a fastpath/slowpath algorithm.
70
71 * The basic algorithm looks like this, ignoring reentrance,
72 cancellation, timeouts, error checking etc:
73 Lock:
74 if (!fair && casOwner(null, currentThread)) // fastpath
75 return;
76 node = create and enq a wait node;
77 for (;;) {
78 if (node is first on queue) {
79 if (casOwner(null, currentThread)) {
80 deq(node);
81 return;
82 }
83 }
84 park(currentThread);
85 }
86
87 Unlock:
88 owner = null; // volatile assignment
89 h = first node on queue;
90 if (h != null) unpark(h's successor's thread);
91
92 * The fast path uses one atomic CAS operation, plus one
93 StoreLoad barrier per lock/unlock pair. The "owner" field is
94 handled as a simple spinlock. To lock, the owner field is set
95 to current thread using conditional atomic update. To unlock,
96 the owner field is set to null, checking if anyone needs
97 waking up, if so doing so. Recursive locks/unlocks instead
98 increment/decrement recursion field.
99
100 * By default, contended locks use a kind of "greedy" /
101 "renouncement" / barging / convoy-avoidance strategy: When a
102 lock is released, a waiting thread is signalled so that it can
103 (re)contend for the lock. It might lose and thus need to
104 rewait. This strategy has much higher throughput than
105 "directed handoff" because it reduces blocking of running
106 threads, but poorer fairness. The wait queue is FIFO, but
107 newly entering threads can barge ahead and grab lock before
108 woken waiters, so acquires are not strictly FIFO, and transfer
109 is not deterministically fair. It is probabilistically fair
110 though. Earlier queued threads are allowed to recontend before
111 later queued threads, and each recontention has an unbiased
112 chance to succeed.
113
114 * The base class also sets up support for the FairReentrantLock
115 subclass, which differs only in that barging is disabled when
116 there is contention, so locks proceed FIFO. There can be
117 some races in detecting contention, but it is still FIFO from
118 a definable (although complicated to describe) single point,
119 so it qualifies as a FIFO lock.
120
121 * The wait queue is a variant of a "CLH" (Craig, Landin, and
122 Hagersten) lock. CLH locks are normally used for spinlocks.
123 We instead use them for blocking locks, but use the same basic
124 tactic of holding the information about whether a thread is
125 released (i.e., eligible to contend for lock ownership) in the
126 predecessor of each node. A node waits until its predecessor
127 says it is released. It is signalled when its predecessor
128 releases the lock. Each node of the queue otherwise serves as
129 a specific-notification-style monitor holding a single waiting
130 thread.
131
132 To enqueue into a CLH lock, you atomically splice it in as new
133 tail. To dequeue, you just set the head field.
134
135 +------+ prev +-----+ +-----+
136 head | | <---- | | <---- | | tail
137 +------+ +-----+ +-----+
138
139 The great thing about CLH Locks is that insertion into a CLH
140 queue requires only a single atomic operation on "tail", so
141 there is a simple atomic point of demarcation from unqueued to
142 queued. Similarly, dequeuing involves only updating the
143 "head". However, it takes a bit more work for nodes to
144 determine who their successors are, in part to deal with
145 possible cancellation due to timeouts and interrupts.
146
147 The "prev" links (not used in original CLH locks) are mainly
148 needed to handle cancellation. If a node is cancelled, its
149 successor must be relinked to a non-cancelled predecessor. For
150 explanation in the case of spin locks, see the papers by Scott
151 & Scherer at
152 http://www.cs.rochester.edu/u/scott/synchronization/
153
154 Being first in the queue does not mean that you have the lock,
155 only that you may contend for it (by CAS'ing owner field). So
156 the currently released contender thread may need to rewait.
157
158 We also use "next" links to implement blocking mechanics. The
159 thread id for each node is kept in its node, so a predecessor
160 signals the next node to wake up by traversing next link to
161 determine which thread it is. Determination of successor must
162 avoid races with newly queued nodes to set the "next" fields
163 of their predecessors. This is solved by checking backwards
164 from the atomically updated "tail" when a node's successor
165 appears to be null.
166
167 Cancellation introduces some conservatism to the basic
168 algorithm. Since we must poll for cancellation of
169 other nodes, we can miss noticing whether a cancelled node is
170 ahead or behind us. This is dealt with by always unparking
171 successors upon cancellation, and not letting them park again
172 (by saturating release counts) until they stabilize on a new
173 predecessor.
174
175 * Threads waiting on Conditions use the same kind of nodes, but
176 only need to link them in simple (non-concurrent) linked
177 queues because they are only accessed when lock is held. To
178 wait, a thread creates a node and inserts it into a condition queue.
179 Upon signal, the node is transferred to the lock queue.
180 Special values of releaseStatus fields are used to mark which queue a
181 node is on.
182
183 * All suspension and resumption of threads uses the JSR166
184 native park/unpark API. These are safe versions of
185 suspend/resume (plus timeout support) that internally avoid
186 the intrinsic race problems with suspend/resume: Park suspends
187 if not preceded by an unpark. Unpark resumes if waiting, else
188 causes next park not to suspend. While safe and efficient,
189 these are not general-purpose public operations because we do
190 not want code outside this package to randomly call these
191 methods -- parks and unparks should be matched up. (It is OK
192 to have more unparks than parks, but it causes threads to
193 spuriously wake up. So minimizing excessive unparks is a
194 performance concern.)
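
       As a condensed sketch of the park/unpark contract assumed
       here (illustrative only, not the full public API):

         unpark(t); ... t calls park();  // park returns at once; the pending permit is consumed
         park();                         // with no pending permit, blocks until unpark(currentThread)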
195
196 * The ReentrantLock class extends package-private
197 ReentrantLockQueueNode class as an expedient and efficient
198 (although slightly sleazy) solution to serialization and
199 initialization problems.
200 */
201
202 /*
203 Note that all fields are transient and defined in a way that
204 deserialized locks are in initial unlocked state.
205 */
206
207 /**
208 * Creates an instance of <tt>ReentrantLock</tt>.
209 */
210 public ReentrantLock() { }
211
212 /**
213 * Current owner of lock, or null iff the lock is free. Acquired
214 * only using CAS.
215 */
216 private transient volatile Thread owner;
217
218 /** Number of recursive acquires. Note: total holds = recursions+1 */
219 private transient int recursions;
220
221 /** Head of the wait queue, initialized to point to self as dummy node */
222 private transient volatile ReentrantLockQueueNode head = this;
223
224 /** Tail of the wait queue, initialized to point to self as dummy node */
225 private transient volatile ReentrantLockQueueNode tail = this;
226
227 // Atomics support
228
229 private final static AtomicReferenceFieldUpdater<ReentrantLock, Thread> ownerUpdater = new AtomicReferenceFieldUpdater<ReentrantLock, Thread>(new ReentrantLock[0], new Thread[0], "owner");
230 private final static AtomicReferenceFieldUpdater<ReentrantLock, ReentrantLockQueueNode> tailUpdater = new AtomicReferenceFieldUpdater<ReentrantLock, ReentrantLockQueueNode>(new ReentrantLock[0], new ReentrantLockQueueNode[0], "tail");
231 private final static AtomicReferenceFieldUpdater<ReentrantLock, ReentrantLockQueueNode> headUpdater = new AtomicReferenceFieldUpdater<ReentrantLock, ReentrantLockQueueNode>(new ReentrantLock[0], new ReentrantLockQueueNode[0], "head");
232 private final static AtomicIntegerFieldUpdater<ReentrantLockQueueNode> releaseStatusUpdater =
233 new AtomicIntegerFieldUpdater<ReentrantLockQueueNode>(new ReentrantLockQueueNode[0], "releaseStatus");
234
235 private boolean acquireOwner(Thread current) {
236 return ownerUpdater.compareAndSet(this, null, current);
237 }
238
239 private boolean casTail(ReentrantLockQueueNode cmp, ReentrantLockQueueNode val) {
240 return tailUpdater.compareAndSet(this, cmp, val);
241 }
242
243 private boolean casHead(ReentrantLockQueueNode cmp, ReentrantLockQueueNode val) {
244 return headUpdater.compareAndSet(this, cmp, val);
245 }
246
247 // casReleaseStatus non-private because also accessed by Conditions
248 final boolean casReleaseStatus(ReentrantLockQueueNode node, int cmp, int val) {
249 return releaseStatusUpdater.compareAndSet(node, cmp, val);
250 }
251
252
253 /**
254 * Special value for releaseStatus indicating that node is cancelled.
255 * Must be a large positive number.
256 */
257 static private final int CANCELLED = Integer.MAX_VALUE;
258
259 /**
260 * Special value for node releaseStatus indicating that node is on a
261 * condition queue. Must be large negative number.
262 */
263 static private final int ON_CONDITION_QUEUE = Integer.MIN_VALUE;
264
265 /**
266 * Special value for node releaseStatus indicating that node is in
267 * process of transfer. Must be negative and greater than
268 * ON_CONDITION_QUEUE.
269 */
270 static private final int TRANSFERRING = ON_CONDITION_QUEUE+1;
271
272
273 /**
274 * Utility to throw an exception if lock not held by given thread.
275 */
276 final void checkOwner(Thread current) {
277 if (current != owner)
278 throw new IllegalMonitorStateException();
279 }
280
281 /**
282 * Return whether lock wait queue is empty
283 */
284 final boolean queueEmpty() {
285 ReentrantLockQueueNode h = head; // force order of the volatile reads
286 return h == tail;
287 }
288
289 /**
290 * Insert node into queue. Return predecessor.
291 */
292 private ReentrantLockQueueNode enq(ReentrantLockQueueNode node) {
293 for (;;) {
294 ReentrantLockQueueNode p = tail;
295 node.prev = p; // prev must be valid before/upon CAS
296 if (casTail(p, node)) {
297 p.next = node; // Note: next field assignment lags CAS
298 return p;
299 }
300 }
301 }
302
303 /**
304 * Return true if it is OK to take fast path to lock.
305 * Overridden in FairReentrantLock.
306 */
307 boolean canBarge() {
308 return true;
309 }
310
311 /*
312 * policy/status values for main lock routine. A bit ugly but
313 * worth it to factor the common lock code across the different
314 * policies for suppressing or throwing IE and indicating
315 * timeouts.
316 */
317
318 /**
319 * As param, suppress interrupts.
320 * As return, lock was successfully obtained.
321 */
322 static final int NO_INTERRUPT = 0;
323
324 /**
325 * As param, allow interrupts.
326 * As return, lock was not obtained because thread was interrupted.
327 */
328 static final int INTERRUPT = -1;
329
330
331 /**
332 * As param, use timed waits.
333 * As return, lock was not obtained because timeout was exceeded.
334 */
335 static final int TIMEOUT = -2;
336
337 /**
338 * Main locking code, parameterized across different policies.
339 * @param current current thread
340 * @param node its wait-node, if it already exists; else null, in
341 * which case one is created
342 * @param policy one of NO_INTERRUPT, INTERRUPT, or TIMEOUT,
343 * controlling how interruption and timeout are handled
344 * @param nanos time to wait, or zero if untimed
345 * @return NO_INTERRUPT on success, else INTERRUPT, or TIMEOUT
346 */
347 final int doLock(Thread current,
348 ReentrantLockQueueNode node,
349 int policy,
350 long nanos) {
351
352 int status = NO_INTERRUPT;
353
354 /*
355 * Bypass queueing if a recursive lock
356 */
357 if (owner == current) {
358 ++recursions;
359 return status;
360 }
361
362
363 long lastTime = 0; // for adjusting timeouts, below
364
365 /*
366 * p is our predecessor node, that holds releaseStatus giving
367 * permission to try to obtain lock if we are first in queue.
368 */
369 ReentrantLockQueueNode p;
370
371 /*
372 * Create and enqueue node if not already created. Nodes
373 * transferred from condition queues will already be created
374 * and queued.
375 */
376 if (node == null) {
377 node = new ReentrantLockQueueNode(current);
378 p = enq(node);
379 }
380 else
381 p = node.prev;
382
383 /*
384 * Repeatedly try to get ownership if first in queue, else
385 * block.
386 */
387 for (;;) {
388 /*
389 * If we are the first thread in queue, try to get the
390 * lock. (We must not try to get lock if we are not
391 * first.) Note that if we become first after p == head
392 * check, all is well -- we can be sure an unlocking
393 * thread will signal us.
394 */
395
396 if (p == head && acquireOwner(current)) {
397 /*
398 * status can be INTERRUPT here if policy is
399 * NO_INTERRUPT but thread was interrupted while
400 * waiting, in which case we must reset status upon
401 * return.
402 */
403 if (status == INTERRUPT)
404 current.interrupt();
405
406 p.next = null; // clear for GC and to suppress signals
407 node.thread = null;
408 node.prev = null;
409 head = node;
410 return NO_INTERRUPT;
411 }
412
413 int releaseStatus = p.releaseStatus;
414
415 /*
416 * If our predecessor was cancelled, use its predecessor.
417 * There will always be a non-cancelled one somewhere
418 * because head node is never cancelled, so at worst we
419 * will hit it. (Note that because head is never
420 * cancelled, we can perform this check after trying to
421 * acquire ownership).
422 */
423 if (releaseStatus == CANCELLED) {
424 node.prev = p = p.prev;
425 }
426
427 /*
428 * Wait if we are not first in queue, or if we are
429 * first but have tried to acquire owner and failed since
430 * either entry or last release. (Note that releaseStatus can
431 * already be less than zero if we spuriously returned
432 * from a previous park or got a new predecessor due to
433 * cancellation.)
434 *
435 * We also don't wait if atomic decrement of releaseStatus
436 * fails. We just continue main loop on failure to
437 * atomically update releaseStatus because interference
438 * causing failure is almost surely due to someone
439 * releasing us anyway.
440 *
441 * Each wait consumes all available releases. Normally
442 * there is only one anyway because release doesn't bother
443 * incrementing if already positive.
444 *
445 */
446 else if (casReleaseStatus(p, releaseStatus, (releaseStatus > 0)? 0 : -1) && releaseStatus <= 0) {
447
448 // Update and check timeout value
449 if (nanos > 0) {
450 long now = TimeUnit.nanoTime();
451 if (lastTime != 0) {
452 nanos -= now - lastTime;
453 if (nanos <= 0)
454 status = TIMEOUT;
455 }
456 lastTime = now;
457 }
458
459 if (status != TIMEOUT)
460 JSR166Support.park(false, nanos);
461
462 if (Thread.interrupted())
463 status = INTERRUPT;
464
465 if (status != NO_INTERRUPT && policy != NO_INTERRUPT) {
466 node.thread = null; // disable signals
467 releaseStatusUpdater.set(node, CANCELLED); // don't need CAS here
468 signalSuccessor(node);
469 return status;
470 }
471 }
472 }
473 }
474
475 /**
476 * Wake up node's successor, if one exists
477 */
478 private void signalSuccessor(ReentrantLockQueueNode node) {
479 /*
480 * Find successor -- normally just node.next.
481 */
482 ReentrantLockQueueNode s = node.next;
483
484 /*
485 * if s is cancelled, traverse through next's.
486 */
487
488 while (s != null && s.releaseStatus == CANCELLED) {
489 node = s;
490 s = s.next;
491 }
492
493 /*
494 * If successor appears to be null, check to see if a newly
495 * queued node is successor by starting at tail and working
496 * backwards. If so, help out the enqueuing thread by setting
497 * next field. We don't expect this loop to trigger often,
498 * and hardly ever to iterate.
499 */
500
501 if (s == null) {
502 ReentrantLockQueueNode t = tail;
503 for (;;) {
504 /*
505 * If t == node, there is no successor.
506 */
507 if (t == node)
508 return;
509
510 ReentrantLockQueueNode tp = t.prev;
511
512 /*
513 * t's predecessor is null if we are lagging so far
514 * behind the actions of other nodes/threads that an
515 * intervening head.prev was nulled by some
516 * non-cancelled successor of node, in which case
517 * there's no live successor.
518 */
519
520 if (tp == null)
521 return;
522
523 /*
524 * If we find successor, we can do the assignment to
525 * next (don't even need CAS) on behalf of enqueuing
526 * thread. At worst we will stall now and lag behind
527 * both the setting and the later clearing of next
528 * field. But if so, we will reattach an internal link
529 * in soon-to-be unreachable set of nodes, so no harm
530 * done.
531 */
532
533 if (tp == node) {
534 node.next = s = t;
535 break;
536 }
537
538 t = tp;
539
540 /*
541 * Before iterating, check to see if link has
542 * appeared.
543 */
544 ReentrantLockQueueNode n = node.next;
545 if (n != null) {
546 s = n;
547 break;
548 }
549 }
550 }
551
552 Thread thr = s.thread;
553 if (thr != null && thr != owner) // don't bother signalling if has lock
554 JSR166Support.unpark(thr);
555 }
556
557
558 /**
559 * Increment releaseStatus and signal next thread in queue if one
560 * exists and is waiting. Called only by unlock. This code is split
561 * out from unlock to encourage inlining of non-contended cases.
562 */
563 private void releaseFirst() {
564 for (;;) {
565 ReentrantLockQueueNode h = head;
566 if (h == tail) // No successor
567 return;
568
569 int c = h.releaseStatus;
570 if (c > 0) // Don't need signal if already positive
571 return;
572 if (owner != null) // Don't bother if some thread got lock
573 return;
574 if (casReleaseStatus(h, c, (c < 0)? 0 : 1)) { // saturate at 1
575 if (c < 0)
576 signalSuccessor(h);
577 return;
578 }
579 }
580 }
581
582 /**
583 * Attempts to release this lock. <p>If the current thread is the
584 * holder of this lock then the hold count is decremented. If the
585 * hold count is now zero then the lock is released. <p>If the
586 * current thread is not the holder of this lock then {@link
587 * IllegalMonitorStateException} is thrown.
588 * @throws IllegalMonitorStateException if the current thread does not
589 * hold this lock.
590 */
591 public void unlock() {
592 checkOwner(Thread.currentThread());
593
594 if (recursions > 0) {
595 --recursions;
596 return;
597 }
598
599 ownerUpdater.set(this, null);
600 if (tail != this) // don't bother if never contended
601 releaseFirst();
602 }
603
604 /**
605 * Acquire the lock.
606 * <p>Acquires the lock if it is not held by another thread and returns
607 * immediately, setting the lock hold count to one.
608 * <p>If the current thread
609 * already holds the lock then the hold count is incremented by one and
610 * the method returns immediately.
611 * <p>If the lock is held by another thread then
612 * the current thread becomes disabled for thread scheduling
613 * purposes and lies dormant until the lock has been acquired
614 * at which time the lock hold count is set to one.
615 */
616 public void lock() {
617 Thread current = Thread.currentThread();
618 if (canBarge() && acquireOwner(current))
619 return;
620 int stat = doLock(current, null, NO_INTERRUPT, 0);
621 // assert stat == NO_INTERRUPT;
622 }
623
624 /**
625 * Acquires the lock unless the current thread is
626 * {@link Thread#interrupt interrupted}.
627 * <p>Acquires the lock if it is not held by another thread and returns
628 * immediately, setting the lock hold count to one.
629 * <p>If the current thread already holds this lock then the hold count
630 * is incremented by one and the method returns immediately.
631 * <p>If the lock is held by another thread then
632 * the current thread becomes disabled for thread scheduling
633 * purposes and lies dormant until one of two things happens:
634 * <ul>
635 * <li> The lock is acquired by the current thread; or
636 * <li> Some other thread {@link Thread#interrupt interrupts} the current
637 * thread.
638 * </ul>
639 * <p>If the lock is acquired by the current thread then the lock hold
640 * count is set to one.
641 * <p>If the current thread:
642 * <ul>
643 * <li>has its interrupted status set on entry to this method; or
644 * <li>is {@link Thread#interrupt interrupted} while waiting to acquire
645 * the lock,
646 * </ul>
647 * then {@link InterruptedException} is thrown and the current thread's
648 * interrupted status is cleared. As this method is an explicit
649 * interruption point, preference is given to responding to the interrupt
650 * over reentrant acquisition of the lock.
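 *
 * <p>A sketch of typical use (illustrative only; the field name
 * <tt>lock</tt> and method name <tt>m</tt> follow the example in the
 * class documentation):
 * <pre>
 * public void m() throws InterruptedException {
 *   lock.lockInterruptibly(); // abandons the attempt if interrupted
 *   try {
 *     // ... method body
 *   } finally {
 *     lock.unlock();
 *   }
 * }
 * </pre>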
651 *
652 *
653 * @throws InterruptedException if the current thread is interrupted
654 */
655 public void lockInterruptibly() throws InterruptedException {
656 Thread current = Thread.currentThread();
657 if (Thread.interrupted())
658 throw new InterruptedException();
659 if (canBarge() && acquireOwner(current))
660 return;
661 if (doLock(current, null, INTERRUPT, 0) == INTERRUPT)
662 throw new InterruptedException();
663 }
664
665 /**
666 * Acquires the lock only if it is not held by another thread at the time
667 * of invocation.
668 * <p>Acquires the lock if it is not held by another thread and returns
669 * immediately with the value <tt>true</tt>, setting the lock hold count
670 * to one.
671 * <p> If the current thread
672 * already holds this lock then the hold count is incremented by one and
673 * the method returns <tt>true</tt>.
674 * <p>If the lock is held by another thread then this method will return
675 * immediately with the value <tt>false</tt>.
676 *
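 * <p>A typical use is to do alternative work rather than block when the
 * lock is unavailable (a sketch; <tt>lock</tt> and <tt>doOtherWork</tt>
 * are assumed names, not part of this API):
 * <pre>
 * if (lock.tryLock()) {
 *   try {
 *     // ... access the protected state
 *   } finally {
 *     lock.unlock();
 *   }
 * } else {
 *   doOtherWork(); // could not get the lock right now
 * }
 * </pre>
 *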
677 * @return <tt>true</tt> if the lock was free and was acquired by the
678 * current thread, or the lock was already held by the current thread; and
679 * <tt>false</tt> otherwise.
680 */
681 public boolean tryLock() {
682 Thread current = Thread.currentThread();
683 if (acquireOwner(current))
684 return true;
685 if (owner == current) {
686 ++recursions;
687 return true;
688 }
689 return false;
690 }
691
692 /**
693 *
694 * Acquires the lock if it is not held by another thread within the given
695 * waiting time and the current thread has not been interrupted.
696 * <p>Acquires the lock if it is not held by another thread and returns
697 * immediately with the value <tt>true</tt>, setting the lock hold count
698 * to one.
699 * <p> If the current thread
700 * already holds this lock then the hold count is incremented by one and
701 * the method returns <tt>true</tt>.
702 * <p>If the lock is held by another thread then
703 * the current thread becomes disabled for thread scheduling
704 * purposes and lies dormant until one of three things happens:
705 * <ul>
706 * <li> The lock is acquired by the current thread; or
707 * <li> Some other thread {@link Thread#interrupt interrupts} the current
708 * thread; or
709 * <li> The specified waiting time elapses
710 * </ul>
711 * <p>If the lock is acquired then the value <tt>true</tt> is returned and
712 * the lock hold count is set to one.
713 * <p>If the current thread:
714 * <ul>
715 * <li>has its interrupted status set on entry to this method; or
716 * <li>is {@link Thread#interrupt interrupted} before acquiring
717 * the lock,
718 * </ul>
719 * then {@link InterruptedException} is thrown and the current thread's
720 * interrupted status is cleared. As this method is an explicit
721 * interruption point, preference is given to responding to the interrupt
722 * over reentrant acquisition of the lock.
723 * <p>If the specified waiting time elapses then the value <tt>false</tt>
724 * is returned.
725 * <p>The given waiting time is a best-effort lower bound. If the time is
726 * less than or equal to zero, the method will not wait at all.
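 *
 * <p>A sketch of timed acquisition (illustrative only; the field name
 * <tt>lock</tt> is assumed):
 * <pre>
 * if (lock.tryLock(50L, TimeUnit.MILLISECONDS)) {
 *   try {
 *     // ... access the protected state
 *   } finally {
 *     lock.unlock();
 *   }
 * } else {
 *   // the waiting time elapsed without acquiring the lock
 * }
 * </pre>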
727 *
728 *
729 * @return <tt>true</tt> if the lock was free and was acquired by the
730 * current thread, or the lock was already held by the current thread; and
731 * <tt>false</tt> if the waiting time elapsed before the lock could be
732 * acquired.
733 *
734 * @throws InterruptedException if the current thread is interrupted
735 */
736 public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {
737 if (unit == null)
738 throw new NullPointerException();
739 if (Thread.interrupted())
740 throw new InterruptedException();
741 Thread current = Thread.currentThread();
742 if (canBarge() && acquireOwner(current))
743 return true;
744 if (owner == current) {
745 ++recursions;
746 return true;
747 }
748 if (timeout <= 0)
749 return false;
750 int stat = doLock(current, null, TIMEOUT, unit.toNanos(timeout));
751 if (stat == NO_INTERRUPT)
752 return true;
753 if (stat == INTERRUPT)
754 throw new InterruptedException();
755 assert stat == TIMEOUT;
756 return false;
757 }
758
759
760 /**
761 * Queries the number of holds on this lock by the current thread.
762 * <p>A thread has a hold on a lock for each lock action that is not
763 * matched by an unlock action.
764 * <p>The hold count information is typically only used for testing and
765 * debugging purposes. For example, if a certain section of code should
766 * not be entered with the lock already held then we can assert that
767 * fact:
768 * <pre>
769 * class X {
770 * ReentrantLock lock = new ReentrantLock();
771 * // ...
772 *
773 * public void m() {
774 * assert lock.getHoldCount() == 0;
775 * lock.lock();
776 * try {
777 * // ... method body
778 * } finally {
779 lock.unlock();
780 * }
781 * }
782 * }
783 * </pre>
784 *
785 * @return the number of holds on this lock by current thread,
786 * or zero if this lock is not held by current thread.
787 **/
788 public int getHoldCount() {
789 return (owner == Thread.currentThread()) ? recursions + 1 : 0;
790 }
791
792 /**
793 * Queries if this lock is held by the current thread.
794 * <p>Analogous to the {@link Thread#holdsLock} method for built-in
795 * monitor locks, this method is typically used for debugging and
796 * testing. For example, a method that should only be called while
797 * a lock is held can assert that this is the case:
798 * <pre>
799 * class X {
800 * ReentrantLock lock = new ReentrantLock();
801 * // ...
802 *
803 * public void m() {
804 * assert lock.isHeldByCurrentThread();
805 * // ... method body
806 * }
807 * }
808 * </pre>
809 *
810 * @return <tt>true</tt> if current thread holds this lock and
811 * <tt>false</tt> otherwise.
812 **/
813 public boolean isHeldByCurrentThread() {
814 return (owner == Thread.currentThread());
815 }
816
817
818 /**
819 * Queries if this lock is held by any thread. This method is
820 * designed for use in monitoring, not for synchronization control.
821 * @return <tt>true</tt> if any thread holds this lock and
822 * <tt>false</tt> otherwise.
823 **/
824 public boolean isLocked() {
825 return owner != null;
826 }
827
828 /**
829 * Reconstitute by resetting head and tail to point back to the lock.
830 */
831 private void readObject(java.io.ObjectInputStream s)
832 throws java.io.IOException, ClassNotFoundException {
833 s.defaultReadObject();
834 head = tail = this;
835 }
836
837 /**
838 * Returns a {@link Condition} instance for use with this
839 * {@link Lock} instance.
840 *
841 * <p>The returned {@link Condition} instance has the same behavior and
842 * usage
843 * restrictions with this lock as the {@link Object} monitor methods
844 * ({@link Object#wait() wait}, {@link Object#notify notify}, and
845 * {@link Object#notifyAll notifyAll}) have with the built-in monitor
846 * lock:
847 * <ul>
848 * <li>If this lock is not held when any of the {@link Condition}
849 * {@link Condition#await() waiting} or {@link Condition#signal signalling}
850 * methods are called, then an {@link IllegalMonitorStateException} is
851 * thrown.
852 * <li>When the condition {@link Condition#await() waiting} methods are
853 * called the lock is released and before they return the lock is
854 * reacquired and the lock hold count restored to what it was when the
855 * method was called.
856 * <li>If a thread is {@link Thread#interrupt interrupted} while waiting
857 * then the wait will terminate, an {@link InterruptedException} will be
858 * thrown, and the thread's interrupted status will be cleared.
859 * <li>The order in which waiting threads are signalled is not specified.
860 * <li>The order in which threads returning from a wait, and threads trying
861 * to acquire the lock, are granted the lock, is not specified.
862 * </ul>
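 *
 * <p>A sketch of typical use, waiting until a condition predicate holds
 * (illustrative only; the class, field, and method names here are
 * assumptions, not part of this API):
 * <pre>
 * class Counter {
 *   ReentrantLock lock = new ReentrantLock();
 *   Condition notZero = lock.newCondition();
 *   int count;
 *
 *   void decrement() throws InterruptedException {
 *     lock.lock();
 *     try {
 *       while (count == 0)
 *         notZero.await();   // releases the lock while waiting
 *       --count;
 *     } finally {
 *       lock.unlock();
 *     }
 *   }
 *
 *   void increment() {
 *     lock.lock();
 *     try {
 *       ++count;
 *       notZero.signal();    // must hold the lock to signal
 *     } finally {
 *       lock.unlock();
 *     }
 *   }
 * }
 * </pre>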
863 */
864 public Condition newCondition() {
865 return new ReentrantLockConditionObject();
866 }
867
868 // Helper methods for Conditions
869
870 /**
871 * Return true if a node, always one that was initially placed on
872 * a condition queue, is off the condition queue (and thus,
873 * normally is now on lock queue.)
874 */
875 boolean isOffConditionQueue(ReentrantLockQueueNode w) {
876 return w.releaseStatus > TRANSFERRING;
877 }
878
879 /**
880 * Transfer a node from condition queue onto lock queue.
881 * Return true if successful (i.e., node not cancelled)
882 */
883 final boolean transferToLockQueue(ReentrantLockQueueNode node) {
884 /*
885 * Atomically change status to TRANSFERRING to avoid races
886 * with cancelling waiters. We use a special value that causes
887 * any waiters spuriously waking up to re-park until the node
888 * has been placed on lock queue.
889 */
890 if (!casReleaseStatus(node, ON_CONDITION_QUEUE, TRANSFERRING))
891 return false;
892
893 /*
894 * Splice onto queue
895 */
896 ReentrantLockQueueNode p = enq(node);
897
898 /*
899 * Establish normal lock-queue releaseStatus for node. The
900 * CAS can fail if node already was involved in a cancellation
901 * on lock-queue, in which case we signal to be sure.
902 */
903 if (!casReleaseStatus(node, TRANSFERRING, 0))
904 signalSuccessor(node);
905
906 /*
907 * Ensure releaseStatus of predecessor is negative to indicate
908 * that thread is (probably) waiting. If attempt to set releaseStatus
909 * fails or if pred is/becomes cancelled, wake up successor
910 * (which will ordinarily be "node") to resynch.
911 */
912
913 for (;;) {
914 int c = p.releaseStatus;
915 if (c < 0 || (c != CANCELLED && casReleaseStatus(p, c, -1)))
916 break;
917 signalSuccessor(p);
918 if (c == CANCELLED)
919 break;
920 }
921
922 return true;
923 }
924
925 /**
926 * Hook method used by ReentrantReadWriteLock. Called
927 * before unlocking lock to enter wait.
928 */
929 void beforeWait() { }
930
931
932 /**
933 * Hook method used by ReentrantReadWriteLock. Called
934 * after locking lock after exiting wait.
935 */
936 void afterWait() { }
937
938 private class ReentrantLockConditionObject implements Condition, java.io.Serializable {
939 /*
940 * Because condition queues are accessed only when locks are
941 * already held, we just need a simple linked queue to hold
942 * nodes while they are waiting on conditions. They are then
943 * transferred to the lock queue to re-acquire locks.
944 */
945
946 /**
947 * First node of condition queue.
948 */
949 private transient ReentrantLockQueueNode firstWaiter;
950
951 /**
952 * Last node of condition queue.
953 */
954 private transient ReentrantLockQueueNode lastWaiter;
955
956 /**
957 * Basic linked queue insertion.
958 */
959 private ReentrantLockQueueNode addWaiter(Thread current) {
960 ReentrantLockQueueNode w = new ReentrantLockQueueNode(current);
961 w.releaseStatus = ON_CONDITION_QUEUE;
962 if (lastWaiter == null)
963 firstWaiter = lastWaiter = w;
964 else {
965 ReentrantLockQueueNode t = lastWaiter;
966 lastWaiter = w;
967 t.next = w;
968 }
969 return w;
970 }
971
972 /**
973 * Main code for signal. Dequeue and transfer nodes until hit
974 * non-cancelled one or null. Split out from signal to
975 * encourage compilers to inline the case of no waiters.
976 */
977 private void doSignal(ReentrantLockQueueNode first) {
978 do {
979 if ( (firstWaiter = first.next) == null)
980 lastWaiter = null;
981 first.next = null;
982 if (transferToLockQueue(first))
983 return;
984 first = firstWaiter;
985 } while (first != null);
986 }
987
988 public void signal() {
989 checkOwner(Thread.currentThread());
990 ReentrantLockQueueNode w = firstWaiter;
991 if (w != null)
992 doSignal(w);
993 }
994
995 public void signalAll() {
996 checkOwner(Thread.currentThread());
997 // Pull off list all at once and traverse.
998 ReentrantLockQueueNode w = firstWaiter;
999 if (w != null) {
1000 lastWaiter = firstWaiter = null;
1001 do {
1002 ReentrantLockQueueNode n = w.next;
1003 w.next = null;
1004 transferToLockQueue(w);
1005 w = n;
1006 } while (w != null);
1007 }
1008 }
1009
1010 /*
1011 * Various flavors of wait. Each almost the same, but
1012 * annoyingly different and no nice way to factor common code.
1013 */
1014
1015 public void await() throws InterruptedException {
1016 Thread current = Thread.currentThread();
1017 checkOwner(current);
1018
1019 ReentrantLockQueueNode w = addWaiter(current);
1020 beforeWait();
1021 int recs = recursions;
1022 unlock();
1023
1024 boolean wasInterrupted = false;
1025
1026 while (!isOffConditionQueue(w)) {
1027 JSR166Support.park(false, 0);
1028 if (Thread.interrupted()) {
1029 wasInterrupted = true;
1030 if (casReleaseStatus(w, ON_CONDITION_QUEUE, CANCELLED)) {
1031 w.thread = null;
1032 w = null;
1033 }
1034 break;
1035 }
1036 }
1037
1038 /*
1039 * If we exited above loop due to cancellation, then w is
1040 * null, and doLock will make a new lock node for
1041 * us. Otherwise, upon exit, our node is already in the
1042 * lock queue when doLock is called.
1043 */
1044 doLock(current, w, NO_INTERRUPT, 0);
1045
1046 recursions = recs;
1047 afterWait();
1048
1049 if (wasInterrupted)
1050 throw new InterruptedException();
1051 }
1052
1053 public void awaitUninterruptibly() {
1054 Thread current = Thread.currentThread();
1055 checkOwner(current);
1056
1057 ReentrantLockQueueNode w = addWaiter(current);
1058 beforeWait();
1059 int recs = recursions;
1060 unlock();
1061
1062 boolean wasInterrupted = false;
1063 while (!isOffConditionQueue(w)) {
1064 JSR166Support.park(false, 0);
1065 if (Thread.interrupted())
1066 wasInterrupted = true;
1067 }
1068
1069 doLock(current, w, NO_INTERRUPT, 0);
1070 recursions = recs;
1071 afterWait();
1072 // avoid re-interrupts on exit
1073 if (wasInterrupted && !current.isInterrupted())
1074 current.interrupt();
1075 }
1076
1077
1078 public long awaitNanos(long nanos) throws InterruptedException {
1079 Thread current = Thread.currentThread();
1080 checkOwner(current);
1081
1082 ReentrantLockQueueNode w = addWaiter(current);
1083 beforeWait();
1084 int recs = recursions;
1085 unlock();
1086
1087 if (nanos <= 0) nanos = 1; // park arg must be positive
1088 long timeLeft = nanos;
1089 long startTime = TimeUnit.nanoTime();
1090 boolean wasInterrupted = false;
1091 boolean cancelled = false;
1092
1093 if (!isOffConditionQueue(w)) {
1094 for (;;) {
1095 JSR166Support.park(false, timeLeft);
1096 if (Thread.interrupted())
1097 wasInterrupted = true;
1098 else if (isOffConditionQueue(w))
1099 break;
1100 else
1101 timeLeft = nanos - (TimeUnit.nanoTime() - startTime);
1102
1103 if (wasInterrupted || timeLeft <= 0) {
1104 if (casReleaseStatus(w, ON_CONDITION_QUEUE, CANCELLED)) {
1105 w.thread = null;
1106 w = null;
1107 }
1108 break;
1109 }
1110 }
1111 }
1112
1113 doLock(current, w, NO_INTERRUPT, 0);
1114 recursions = recs;
1115 afterWait();
1116
1117 if (wasInterrupted)
1118 throw new InterruptedException();
1119 else if (timeLeft <= 0)
1120 return timeLeft;
1121 else
1122 return nanos - (TimeUnit.nanoTime() - startTime);
1123 }
1124
1125 public boolean awaitUntil(Date deadline) throws InterruptedException {
1126 Thread current = Thread.currentThread();
1127 checkOwner(current);
1128
1129 ReentrantLockQueueNode w = addWaiter(current);
1130 beforeWait();
1131 int recs = recursions;
1132 unlock();
1133
1134 boolean wasInterrupted = false;
1135 boolean cancelled = false;
1136 long abstime = deadline.getTime();
1137
1138 if (!isOffConditionQueue(w)) {
1139 for (;;) {
1140 JSR166Support.park(true, abstime);
1141
1142 boolean timedOut = false;
1143 if (Thread.interrupted())
1144 wasInterrupted = true;
1145 else if (isOffConditionQueue(w))
1146 break;
1147 else if (System.currentTimeMillis() >= abstime)
1148 timedOut = true;
1149
1150 if (wasInterrupted || timedOut) {
1151 if (casReleaseStatus(w, ON_CONDITION_QUEUE, CANCELLED)) {
1152 w.thread = null;
1153 w = null;
1154 }
1155 break;
1156 }
1157 }
1158 }
1159
1160 doLock(current, w, NO_INTERRUPT, 0);
1161 recursions = recs;
1162 afterWait();
1163
1164 if (wasInterrupted)
1165 throw new InterruptedException();
1166 return !cancelled;
1167 }
1168
1169 public boolean await(long time, TimeUnit unit) throws InterruptedException {
1170 return awaitNanos(unit.toNanos(time)) > 0;
1171 }
1172
1173 }
1174
1175 }
1176
1177 /**
1178 * Node class for threads waiting for locks. This cannot be nested
1179 * inside ReentrantLock because of Java inheritance circularity rules.
1180 */
1181 class ReentrantLockQueueNode {
1182 /**
1183 * Controls whether successor node is allowed to try to obtain
1184 * ownership. Acts as a saturating (in both directions) counting
1185 * semaphore: Upon each wait, the releaseStatus is reduced to zero
1186 * if positive, else reduced to negative, in which case the thread
1187 * will park. The releaseStatus is incremented on each unlock that
1188 * would enable successor thread to obtain lock (succeeding if
1189 * there is no contention). The special value of CANCELLED is used
1190 * to mean that the releaseStatus cannot be either incremented or
1191 * decremented. The special value of ON_CONDITION_QUEUE is used
1192 * when nodes are on conditions queues instead of lock queue, and
1193 * the special value TRANSFERRING is used while signals are in
1194 * progress.
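 *
 * <p>A condensed sketch of the normal (non-special-value) transitions,
 * paraphrasing the doLock and releaseFirst code in ReentrantLock rather
 * than defining new behavior:
 * <pre>
 *   wait:    if releaseStatus was positive, set it to 0 (a release was
 *            pending, so retry); else set it to -1 and park.
 *   release: if releaseStatus was positive, nothing to do; if zero, set
 *            it to 1; if negative, set it to 0 and unpark the successor.
 * </pre>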
1195 */
1196 transient volatile int releaseStatus;
1197
1198 /**
1199 * Link to predecessor node that current node/thread relies on
1200 * for checking releaseStatus. Assigned once during enqueuing,
1201 * and nulled out (for sake of GC) only upon dequeuing. Upon
1202 * cancellation, we do NOT adjust this field, but simply
1203 * traverse through prev's until we hit a non-cancelled node.
1204 * A valid predecessor will always exist because the head node
1205 * is never cancelled.
1206 */
1207 transient volatile ReentrantLockQueueNode prev;
1208
1209 /**
1210 * Link to the successor node that the current node/thread
1211 * unparks upon lock release. Assigned once during enqueuing.
1212 * Upon cancellation, we do NOT adjust this field, but simply
1213 * traverse through next's until we hit a non-cancelled node,
1214 * (or null if at end of queue). The enqueue operation does
1215 * not assign next field of a predecessor until after
1216 * attachment, so seeing a null next field does not necessarily
1217 * mean that node is at end of queue. However, if a next field
1218 * appears to be null, we can scan prev's from the tail to
1219 * double-check.
1220 */
1221 transient volatile ReentrantLockQueueNode next;
1222
1223 /**
1224 * The thread that enqueued this node. Initialized on
1225 * construction and nulled out after use. Note that this need
1226 * not be declared volatile since it is always accessed after
1227 * traversing volatile links, and written before writing
1228 * links.
1229 */
1230 transient Thread thread;
1231
1232 ReentrantLockQueueNode() { }
1233 ReentrantLockQueueNode(Thread t) { thread = t; }
1234 }